IBM, Nvidia land $325M supercomputer deal

US Energy Department funds two huge machines that combine IBM and Nvidia chips with Mellanox networking. A further $100 million goes toward next-gen supercomputers.



In a Department of Energy deal worth $325 million, IBM will build two massive supercomputers called Sierra and Summit that combine a new supercomputing approach from Big Blue with Nvidia processing accelerators and Mellanox high-speed networking.


The companies and US government agency announced the deal on Friday ahead of a twice-yearly supercomputing conference that begins Monday. The show focuses on the high-end systems -- sometimes as large as a basketball court -- that are used to calculate car aerodynamics, detect structural weaknesses in airplane designs and predict the performance of new drugs.


The funds will pay for two machines, one for civilian research at Oak Ridge National Laboratory and one for nuclear weapons simulation at Lawrence Livermore National Laboratory. Each will clock in with a peak performance surpassing 100 petaflops -- a petaflop is a quadrillion calculations per second -- as measured for the Top500 list that ranks the world's fastest machines. Matching that with modern laptops would take something like 3 million of them, Nvidia estimates.
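As a rough check on that comparison, the sketch below works out how many laptops 100 petaflops implies. The laptop figure is an assumption -- roughly 33 gigaflops of peak performance for a 2014-era machine -- since Nvidia's exact baseline isn't stated.

```python
# Back-of-the-envelope check on the laptop comparison (assumed numbers).
PETAFLOP = 1e15                      # a petaflop is a quadrillion calculations per second
machine_peak = 100 * PETAFLOP        # each system's peak exceeds 100 petaflops

laptop_peak = 33e9                   # assumed peak for a modern laptop, ~33 gigaflops

laptops_needed = machine_peak / laptop_peak
print(f"Roughly {laptops_needed / 1e6:.1f} million laptops")   # ~3.0 million
```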


In addition, the DOE will spend about $100 million on a program called FastForward2 to make next-generation, massive-scale supercomputers 20 to 40 times faster than today's high-end models, Energy Secretary Ernest Moniz was scheduled to announce Friday. It's all part of a project called Coral after the national labs involved: Oak Ridge, Argonne and Lawrence Livermore.


'We expect that critical supercomputing investments like Coral and FastForward2 will again lead to transformational advancements in basic science, national defense, environmental and energy research that rely on simulations of complex physical systems and analysis of massive amounts of data,' Moniz said in a statement.


Supercomputing progress faltering?

The deal is a lucrative feather in the cap for the companies. IBM will build the overall system using a design that marries main processors from its own Power family with Volta accelerators from Nvidia. IBM has decades of experience in high-performance computing, but Nvidia, most of whose revenue comes from graphics chips to speed up videogames, is a relative newcomer.


The world is accustomed to steady increases in computing power, but supercomputing progress has slowed in recent years. Processor clock speeds no longer ratchet up to higher gigahertz levels each year, and the constraints of funding, equipment cooling and electrical power consumption are formidable.


To tackle the problem, IBM is adopting a supercomputing approach it calls data-centric design. The general idea is to distribute processing power so it's close to data storage areas, reducing the performance and energy-consumption problems associated with moving data around a system.


'At the individual compute element level we continue the Von Neumann approach,' IBM said of its design, referring to the traditional computer architecture that combines a central processor and memory. 'At the level of the system, however, we are providing an additional way to compute, which is to move the compute to the data.'
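As a loose illustration of "moving the compute to the data" -- not IBM's actual implementation, and with node layout and function names invented for the example -- the sketch below contrasts pulling every record to a central node with shipping a small reduction to each storage node and collecting only the partial results.

```python
# Toy illustration of data-centric computing: ship the function to the data
# instead of shipping the data to the function. Names are hypothetical.

# Pretend each "storage node" holds a large local chunk of the dataset.
storage_nodes = [list(range(i, i + 100_000)) for i in range(0, 400_000, 100_000)]

def centralized_sum(nodes):
    # Traditional approach: move all the data to one place, then compute.
    all_data = [x for node in nodes for x in node]   # expensive data movement
    return sum(all_data)

def data_centric_sum(nodes):
    # Data-centric approach: run the reduction where the data lives,
    # then move only the small partial results.
    partials = [sum(node) for node in nodes]         # "compute at the data"
    return sum(partials)                             # tiny amount of movement

assert centralized_sum(storage_nodes) == data_centric_sum(storage_nodes)
```

The arithmetic is identical either way; what changes is how many bytes have to cross the system to get the answer.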


Modern architecture

The system encompasses relatively new computing trends, including flash-memory storage that's faster but more expensive than hard drives, and the graphics processing unit (GPU) boost from Nvidia. Such accelerators aren't as versatile as general-purpose central processing units, but they can solve particular types of math problems faster. That's why accelerators from Nvidia, AMD and Intel have found a place in supercomputing systems.
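To make that division of labor concrete, here is a minimal sketch of offloading a dense matrix multiply to a GPU using the CuPy library. CuPy is not part of the Sierra or Summit software stack; it is just a convenient stand-in for the pattern of a CPU staying in charge while the accelerator does the heavy math, and the matrix sizes are arbitrary.

```python
# Minimal sketch: a CPU "host" offloads dense linear algebra to a GPU.
# Assumes a CUDA-capable GPU and the CuPy library; sizes are arbitrary.
import numpy as np
import cupy as cp

a_host = np.random.rand(4096, 4096).astype(np.float32)
b_host = np.random.rand(4096, 4096).astype(np.float32)

a_gpu = cp.asarray(a_host)          # copy inputs into GPU memory
b_gpu = cp.asarray(b_host)
c_gpu = a_gpu @ b_gpu               # the matrix multiply runs on the accelerator

c_host = cp.asnumpy(c_gpu)          # copy the result back to CPU memory
```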


'This is a huge endorsement for the Tesla GPU accelerator platform,' said Sumit Gupta, general manager of Nvidia's Tesla accelerated computing business. 'To be able to build up these large systems, you need the energy efficiency that GPU accelerators provide.'


One big problem with systems that include both CPUs and GPUs is getting data where it belongs. CPUs generally run the show, offloading some work to GPUs, but to do so, they have to transfer data from CPU memory to GPU memory. To speed that up, Nvidia offers its NVLink interconnect, which IBM said moves data five to twelve times faster than today's technology.
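A rough way to see why the interconnect matters is to time the host-to-GPU copy separately from the math it feeds. The sketch below, again using CuPy as a stand-in with arbitrary sizes, measures both; on conventionally attached GPUs the copy is often a sizable fraction of the total, and a faster link like NVLink shrinks that first number.

```python
# Rough timing of the CPU->GPU transfer versus the GPU compute it feeds.
# CuPy stands in for the real system's software; sizes are arbitrary.
import time
import numpy as np
import cupy as cp

x_host = np.random.rand(8192, 8192).astype(np.float32)

t0 = time.perf_counter()
x_gpu = cp.asarray(x_host)          # CPU memory -> GPU memory over the interconnect
cp.cuda.Device().synchronize()      # wait for the copy to finish before timing
t1 = time.perf_counter()

y_gpu = x_gpu @ x_gpu               # compute that depends on the transferred data
cp.cuda.Device().synchronize()
t2 = time.perf_counter()

print(f"transfer: {t1 - t0:.3f}s  compute: {t2 - t1:.3f}s")
```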


Another key player in the system is Mellanox, which is supplying high-speed networking equipment using the InfiniBand standard to rapidly shuttle data around the system.

