
Posts Tagged ‘HPC’

Interesting article at HPCWire, ‘Parallel Programming: Some Fundamental Concepts’.

Read Full Post »

Really interesting read on what Google’s data centers consist of.

Google’s big surprise: each server has its own 12-volt battery to supply power if there’s a problem with the main source of electricity. The company also revealed for the first time that since 2005, its data centers have been composed of standard shipping containers–each with 1,160 servers and a power consumption that can reach 250 kilowatts.
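
Quick back-of-the-envelope math: if a container really draws its full 250 kilowatts across 1,160 servers, that works out to roughly 215 watts per server.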

Read Full Post »

IBM’s Deep Computing unit is pushing hard in the HPC space again! A few questions come to mind: can the code scale? How do you administer such a system? With 1.6 million CPUs, is there someone constantly walking around the system swapping out failed CPUs and memory?

Code-named Sequoia, the supercomputer has been ordered up by the U.S. Department of Energy’s National Nuclear Security Administration, an agency that has long relied on IBM to provide the supercomputing muscle to safeguard the nation’s nuclear stockpile. Sequoia will include 1.6 million microprocessors – more than 10 times as many as were built into last summer’s Roadrunner.

Roadrunner, which also performs computer simulations to keep nuclear weapons safe, is deployed at the Department of Energy’s Los Alamos lab, where it consists of 288 refrigerator-sized racks that take up 6,000 square feet – more than twice the size of the average American home.

Sequoia will be almost half that size, housed in 96 refrigerator-sized racks occupying just 3,422 square feet.

But despite its smaller footprint, when it is put into action in 2012 it will be more powerful than the combined performance of the top 500 supercomputers in the world today.

Sequoia is scheduled for delivery in 2011 and will go into service in 2012.

Full article:  http://www.lohud.com/article/20090203/BUSINESS01/902030335

Read Full Post »

I recently had the privilege of having my name published in Scientific Computing World.

Instead, when scheduling jobs, Moab can change the operating system on each core, depending on a user’s preferences. ‘Users are not tied down – they can change the operating system to suit the application,’ says Chris Vaughan, a systems engineer with Cluster Resources, which helped with the installation of Darwin.
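
To make the idea concrete, here is a toy sketch in Python. This is not Moab’s actual interface, just an illustration of the concept: each job carries the user’s preferred OS, and a node is only reprovisioned when its current image doesn’t match.

# Toy sketch: per-job OS preference drives provisioning.
# NOT Moab's real API -- the Job/Node classes here are made up
# purely to illustrate the idea described above.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    preferred_os: str   # e.g. "linux" or "windows"

@dataclass
class Node:
    name: str
    current_os: str

def schedule(job: Job, node: Node) -> None:
    if node.current_os != job.preferred_os:
        # A real scheduler would trigger dual boot, diskful/diskless
        # provisioning or a VM here rather than just flipping a field.
        print(f"Reprovisioning {node.name}: {node.current_os} -> {job.preferred_os}")
        node.current_os = job.preferred_os
    print(f"Starting {job.name} on {node.name} ({node.current_os})")

schedule(Job("cfd_run", "linux"), Node("node001", "windows"))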

PDF copy is available here: Putting the user first

Read Full Post »

Sun’s plan of zero data centres by 2015 is very ambitious but achievable: instead of having their data centres down the hall, they’ll be a few miles away. Sun has realised that outsourcing its data centre management is cheaper than managing it in-house. The biggest challenge they will face is migrating their current data and practices to this new platform, and developing the tools to do it. Those tools aren’t here yet, but they are taking a gamble that within seven years they will be. I’m sure virtualization will be standard by then and software as a service will be the norm. With virtual machines becoming very stable and reliable, they will have eliminated the problem of having to walk into the room and reset a machine, which, other than swapping out dead hardware, is probably the only physical reason to keep a machine within reach.

There are some other interesting write-ups on this at Nick Carr’s blog and at InsideHPC.com.

Read Full Post »

In High School I used to make the local paper for my athletic prowess. Now, 10 years later, it’s the work I’m involved in that gets picked up not just by regional but by worldwide news organizations. I’m very proud to have helped plan, install and facilitate the collaboration for this project.

“Hybrid Windows and Linux clusters increase the number of addressable users and improve cluster efficiency,” said Shawn Hansen, Director of HPC marketing at Microsoft. “With Moab, customers can increase their productivity and utilization and broaden their reach by tapping into the larger base of scientists and engineers who use Windows.”

The dynamic hybrid cluster hinges on Moab — an intelligence, scheduling and policy engine from Cluster Resources — which optimally determines when the OS should be modified based upon workload and defined policies. When conditions are met, Moab triggers the change via a site’s preferred OS-modification technology, such as diskful and diskless provisioning, dual boot or virtualization (i.e., via Hyper-V within Windows Server 2008, VMware, or Xen).
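
As a rough illustration of the workload-driven side of this (again, purely a sketch and not Cluster Resources’ implementation), you can picture a policy loop that compares queued demand per OS against the nodes currently running it, and retargets idle nodes when the gap passes a threshold:

# Illustrative only: compares queued demand per OS against current
# supply and switches idle nodes when the shortfall passes a threshold.
# The real engine applies far richer policies; names here are made up.

from collections import Counter

def rebalance(queued_jobs, nodes, threshold=2):
    """queued_jobs: list of OS names; nodes: dict name -> {'os': str, 'idle': bool}."""
    demand = Counter(queued_jobs)
    supply = Counter(n["os"] for n in nodes.values())
    for os_name, wanted in demand.items():
        shortfall = wanted - supply.get(os_name, 0)
        if shortfall < threshold:
            continue
        # Retarget idle nodes currently running some other OS.
        candidates = [name for name, n in nodes.items()
                      if n["idle"] and n["os"] != os_name][:shortfall]
        for name in candidates:
            print(f"Switching {name} to {os_name} (dual boot / reprovision / VM)")
            nodes[name]["os"] = os_name
            nodes[name]["idle"] = False   # reserved for the incoming work

rebalance(
    queued_jobs=["windows", "windows", "windows", "linux"],
    nodes={"n1": {"os": "linux", "idle": True},
           "n2": {"os": "linux", "idle": True},
           "n3": {"os": "windows", "idle": False}},
)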

This is what my High School friend Chris Brooks had to say about it all.

(13:32:15) piLLsdiGGah: Linux, HPC, ROI, Moab
(13:32:19) piLLsdiGGah: sounds like diseases
(13:32:35) piLLsdiGGah: I am afraid that you might get sick being around all of it

A few other sites picked up the story as well.

http://biz.yahoo.com/bw/071113/20071112006602.html?.v=1

http://www.pr-inside.com/moab-software-makes-windows-linux-hybrid-r297237.htm

http://www.hpcwire.com/hpc/1891078.html

Read Full Post »

Here are a few pictures and videos of International Supercomputing in Germany from June ’07. It was a great event and Microsoft definitely threw one of the best parties I’ve ever been to.

Here is the view of the river Elbe from the conference centre:

Conference:

Me manning the booth:

From the aisle:


The following pictures are from the Microsoft party; they had a Saxon-themed night that was quite interesting.

The view from the mansion:

YouTube videos of the party:

Read Full Post »
