KQED Warns Of Supercomputing ‘Energy Wall’ – NVIDIA Helps Scale It With GPUs

Category: Featured Graphics | Posted on July 02, 2011


Lauren Sommer wrote a great blog post over the weekend on KQED about how supercomputers have hit the "energy wall" – a very real supercomputing problem that NVIDIA's GPU technology can help overcome.

*This is what 1,000 homes looks like.*

The blog post mentions the Hopper supercomputer at Lawrence Berkeley National Laboratory (LBNL). The system consumes 3 megawatts of electricity (enough to power 2,000-3,000 homes) and delivers 1 petaflops of performance – a quadrillion floating-point operations per second, equivalent to about 68,000 laptops. It's hard to imagine these numbers scaling to exascale systems – the "energy wall" would simply be too high to surmount. In fact, I just got back from the International Supercomputing Conference in Hamburg, where the running joke was that power companies would soon be giving supercomputers away for free if you sign up for a five-year power contract with them.

Here at NVIDIA, we've been working on a solution to the supercomputing power crisis for several years. Supercomputers can use NVIDIA Tesla GPUs to dramatically accelerate supercomputing applications. Like a turbocharger on your car, GPUs kick in to boost your standard Intel or AMD CPUs when you need the extra oomph. Using GPUs is a much more energy-efficient way of supercomputing. You choose the right processor to do the right job. When I edit pictures of my kids, for example, my computer's sequential Intel or AMD x86 CPU is used to access the hard disk, retrieve the file, and open it. Once the picture is open, and I want to do red-eye reduction or remove the blur, the GPU kicks into gear to accelerate the job.

Three of the top five supercomputers in the world are accelerated by NVIDIA Tesla GPUs. One of these is the Tsubame 2.0 system at the Tokyo Institute of Technology. Like the Hopper system at LBNL, it delivers 1 petaflops of performance. But thanks to its GPUs, it consumes less than half the power of the Hopper system.
To be exact, Tsubame achieves 1.19 petaflops while sipping a "mere" 1.4 megawatts of electricity. Half the power for the same performance is a big leap forward. But we have a long road ahead, especially as we move toward exascale supercomputers that will be 1,000 times more powerful than today's petaflop machines. Otherwise, the power companies really will start giving supercomputers away for free!
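The efficiency gap is easy to check on the back of an envelope. Here's a quick sketch using the figures quoted above (the system names and numbers come from this post; gigaflops per watt is the standard yardstick used by the Green500 list):

```python
# Compare the energy efficiency of the two systems mentioned above.
# Performance in petaflops, power in megawatts (figures from the post).
systems = {
    "Hopper (LBNL)": {"petaflops": 1.0, "megawatts": 3.0},
    "Tsubame 2.0":   {"petaflops": 1.19, "megawatts": 1.4},
}

for name, s in systems.items():
    # 1 petaflops = 1e6 gigaflops and 1 megawatt = 1e6 watts,
    # so petaflops / megawatts gives gigaflops per watt directly.
    gflops_per_watt = s["petaflops"] / s["megawatts"]
    print(f"{name}: {gflops_per_watt:.2f} gigaflops/watt")
# Hopper works out to about 0.33 GF/W, Tsubame 2.0 to about 0.85 GF/W –
# roughly 2.5x better efficiency for the GPU-accelerated machine.
```

At that rate, a hypothetical exaflops machine built on Hopper-style efficiency would need around 3 gigawatts, which is exactly why the "energy wall" looms so large.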

original content by blogs.nvidia.com
