The results of the biannual TOP500 list of supercomputers are in, and although competition isn’t as fierce amid the pandemic, there are plenty of ambitious new developments, especially from India, a relative newcomer to the supercomputing scene.
This year, India is present with two exciting entries: PARAM-Siddhi AI, its latest and fastest supercomputer, ranks #63 on the list, a considerable climb from the country’s #78 spot in November 2019.
Developed as part of the National Supercomputer Mission (NSM) in collaboration with Nvidia and Atos, the supercomputer will become part of the Centre for Development of Advanced Computing, where it will be used by scientists for research as well as by tech startups.
According to India’s Ministry of Electronics and Information Technology Secretary, PARAM-Siddhi AI will play a key role in the nation’s plan to become a powerhouse for AI development. The supercomputer is also expected to accelerate India’s progress in robotics and even help with disaster management.
India’s second entry in TOP500 is Mihir, ranking #146 on the list. With a capacity of 6.5 petaflops, the supercomputer is currently used to predict natural disasters such as cyclones, earthquakes, and floods, but it can also forecast air quality.
Meanwhile, Japan retains its supercomputer crown, and it’s not even close.
Fugaku from Japan – the world’s fastest supercomputer for the second list in a row
Named after Mount Fuji, Fugaku was developed in Japan by Fujitsu Ltd. at Riken’s facility in Kobe as a replacement for the K supercomputer, which was withdrawn from service. At its peak, the K supercomputer was capable of over 10 quadrillion computations per second (about 10 petaflops). In comparison, Fugaku now delivers 442,010 teraflops, a 6.4% increase over its previous benchmark run. For those who don’t know, a teraflop is a measure of a computer’s performance that indicates how many trillions of floating-point operations it can perform per second. If a computer has a power of 5 teraflops, it can do 5 trillion floating-point operations every second. So, at 442,010 teraflops, Fugaku is one powerful beast.
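The unit arithmetic above is easy to sanity-check in a few lines of Python (the helper function below is purely illustrative, not anything from the TOP500 benchmark):

```python
# 1 teraflop = 10**12 floating-point operations per second.
TERA = 10**12
EXA = 10**18

def teraflops_to_flops(tf: float) -> float:
    """Convert a teraflops rating to raw floating-point operations per second."""
    return tf * TERA

# The 5-teraflop example from the text: 5 trillion operations per second.
print(teraflops_to_flops(5))  # 5 trillion (5e12)

# Fugaku's score of 442,010 teraflops, expressed in exaflops:
# 442,010 * 10**12 / 10**18 = 0.44201, i.e. just under half an exaflop.
print(teraflops_to_flops(442_010) / EXA)
```

The same conversion also shows the scale of the gap to second place: 442,010 teraflops is roughly three times Summit’s 148,800.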
Fugaku won by a landslide; #2 on the list, IBM’s Summit, delivered only 148,800 teraflops, followed by Sierra, also from IBM, with a score of 94,600 teraflops. A former chart-topper, China’s Sunway TaihuLight, came in fourth with 93,000 teraflops.
In total, the combined performance of all the supercomputers on the list amounted to 2.43 exaflops, up from 2.22 exaflops on the previous list.
Is Moore’s Law reaching its limit?
In 1965, Intel co-founder Gordon Moore predicted that the number of transistors that can fit on a single chip doubles every two years. The chip industry has lived up to that prediction, and silicon chips can now hold billions of transistors, compared with a handful in the 1960s. As a result, it has become increasingly difficult for chip manufacturers to squeeze in more transistors every year. Even Intel, which is at the forefront of chip design, has encountered some barriers.
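Moore’s prediction is just exponential growth, so the trend is easy to sketch. The baseline figure below is an illustrative assumption, not a historical count:

```python
def projected_transistors(base: int, years: float, doubling_period: float = 2.0) -> float:
    """Transistor count after `years`, doubling every `doubling_period` years."""
    return base * 2 ** (years / doubling_period)

# Starting from an assumed ~64 transistors on a mid-1960s chip,
# 55 years of doubling every two years lands on the order of
# ten billion transistors, in line with today's largest chips.
print(projected_transistors(64, 55))  # roughly 1.2e10
```

One doubling period is easy to verify by hand: after two years the count simply doubles, which is why the exponent is `years / doubling_period`.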
However, that doesn’t mean there’s no longer room for innovation. There is plenty of room, for now at least. Manufacturers will continue to develop bleeding-edge chips, but we won’t see the same pace of improvement we’re used to. For reference, humans have had roughly the same number of neurons for hundreds of thousands of years. Our brain capacity plateaued long ago, but that doesn’t mean we stopped evolving.
In a sense, the limits of Moore’s Law were reached years ago. What manufacturers are doing now is getting creative within the transistor limit by thinking of new ways of packaging chips. And that’s exactly what Fugaku’s designers did. According to Satoshi Matsuoka, Fugaku’s impressive 6.4% bump in performance is due to it finally being able to use the entire machine, not just parts of it. To achieve these results, the manufacturer also had to rethink its approach and invent its own processors.
What’s also promising is that, despite COVID-19, research centers have continued to install new systems. In fact, the crisis has even raised interest in new architectures, and research institutions are now more open to innovation and more willing to try new technologies.
What are supercomputers used for?
Technology has come a long way since the first supercomputer, the UNIVAC, developed for the US Navy Research and Development Center. But what are the applications of these computers in the real world, and how can they push humanity forward?
Trading. Nowadays, API technologies in Forex trading help high-frequency traders meet highly specific needs, but the use of technology goes much further than that. Supercomputers could soon become a must for professionals, since they can process large numbers of orders in record time and enhance every aspect of online trading. They can do everything from analyzing market data to executing trades and maximizing the efficiency of trader-applied algorithms.
Scientific research. Supercomputers are capable of calculation-intensive tasks, which means researchers can use them to solve problems much faster without sacrificing accuracy. Supercomputers can be used for climate research, predicting weather patterns, molecular modeling, and quantum physics. So far, universities, research labs, and military agencies are some of the heaviest users of supercomputers. Among other tasks, supercomputers have been used to model the spread of swine flu, forecast hurricanes, map out the bloodstream, and even research the Big Bang. This year, Fugaku was used to help Japan’s fight against COVID-19.
Special-purpose supercomputers. Sometimes, supercomputers are built for a specific purpose. For example, the supercomputers Belle and Hydra were designed to play chess, Gravity Pipe was built for astrophysics, and the Riken Research Institute in Japan (where Fugaku is now housed) developed the MDGRAPE-3 supercomputer for molecular dynamics simulations of protein structure.
But that’s not all. Supercomputers have applications in just about any field, including but not limited to cryptocurrency, car safety ratings, and simulating nuclear explosions.