Posts Tagged ‘High Performance Computing’

IBM’s Deep Computing unit is pushing hard in the HPC space again!  A few questions come to mind: can the code scale?  How do you administer such a system?  And with 1.6 million CPUs, is there someone constantly walking around the machine swapping out failed processors and memory?

Code-named Sequoia, the supercomputer has been ordered up by the U.S. Department of Energy’s National Nuclear Security Administration, an agency that has long relied on IBM to provide the supercomputing muscle to safeguard the nation’s nuclear stockpile. Sequoia will include 1.6 million microprocessors – more than 10 times as many as were built into last summer’s Roadrunner.

Roadrunner, which also performs computer simulations to keep nuclear weapons safe, is deployed at the Department of Energy’s Los Alamos lab, where it consists of 288 refrigerator-sized racks that take up 6,000 square feet – more than twice the size of the average American home.

Sequoia will be almost half that size, housed in 96 refrigerator-sized racks occupying just 3,422 square feet.

But despite its smaller footprint, when it is put into action in 2012 its performance will exceed that of today’s top 500 supercomputers combined.

Sequoia is scheduled for delivery in 2011 and will go into service in 2012.

Full article:  http://www.lohud.com/article/20090203/BUSINESS01/902030335



Here are a few pictures and videos from the International Supercomputing Conference in Germany in June ’07. It was a great event, and Microsoft definitely threw one of the best parties I’ve ever been to.

Here is the view of the river Elbe from the conference centre:


Me manning the booth:

From the aisle:

The following pictures are from the Microsoft party, which had a Saxon-themed night that was quite interesting.

The view from the mansion:

YouTube videos of the party:
