It’s not every day that IBM decides to work on something big, and when they think big, they mean big. Turning the entire internet into an application is no easy task. This mega system relies on a re-tooled version of IBM’s Blue Gene supercomputers, so beloved by the high-performance computing crowd. IBM’s researchers have proposed tweaking the Blue Gene systems to run today’s most popular web software, such as Linux, Apache, MySQL and Ruby on Rails.

What IBM assumes is that both large SMP (symmetric multi-processing) systems and clusters have their merits for massive computing tasks, but most organizations looking to crunch through really big jobs have preferred clusters, which provide certain economic advantages. Customers can buy lots of general-purpose hardware and networking components at low cost and cobble the systems together to equal or surpass the performance of gigantic SMPs. Sun Microsystems, Google and Microsoft stand as just some of the companies using these clusters to offer software, processing power and storage to other businesses. Their customers tap into these larger systems and can “grow” their applications as needed by firing up more and more of the provided computing infrastructure. But there are a few problems with this approach, including the amount of space and energy the clusters require. So, IBM wants to angle Blue Gene boxes at web software jobs, believing it can run numerous applications on a single box at a lower cost than a cluster.

IBM’s unique Blue Gene design has attracted a lot of attention from national labs and other major HPC customers. In fact, four of the 10 fastest supercomputers on the planet rely on the Blue Gene architecture, including the world’s fastest machine: the Blue Gene/L at Lawrence Livermore National Laboratory.

The newer Blue Gene/P system combines hundreds of thousands of low-power processor cores into a single system. A typical configuration includes four 850MHz PowerPC cores arranged in a system-on-a-chip design with built-in memory and interconnect controllers. Thirty-two of these “nodes” go onto a card, 16 of those cards slot into a midplane, and each server rack holds two midplanes, leaving you with 1,024 nodes and 2TB of memory per rack. In theory, you can connect up to 16,384 racks, providing up to 67.1m cores with 32PB of memory. That’ll get some work done. Each rack boasts IO bandwidth of 640Gb/s, which puts our theoretical system at 10.4Pb/s.

The Blue Gene architecture gives IBM a so-called “hybrid” approach, according to the researchers, letting them get the best of both the SMP and cluster worlds. IBM is making heavy use of a Linux microkernel, network-based management, software appliances and a quasi-stateless approach. We just hope they don’t use it to control the internet, just help it run faster.
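For the curious, the rack arithmetic above checks out. Here’s a quick back-of-the-envelope sketch (the node, card, midplane and rack counts are the ones quoted in this article, not official IBM specs):

```python
# Back-of-the-envelope check of the Blue Gene/P scaling figures
# quoted above: 4 cores per node, 32 nodes per card, 16 cards per
# midplane, 2 midplanes per rack, up to 16,384 racks.

CORES_PER_NODE = 4        # 850MHz PowerPC cores per system-on-a-chip node
NODES_PER_CARD = 32
CARDS_PER_MIDPLANE = 16
MIDPLANES_PER_RACK = 2
MAX_RACKS = 16_384

nodes_per_rack = NODES_PER_CARD * CARDS_PER_MIDPLANE * MIDPLANES_PER_RACK
total_cores = nodes_per_rack * CORES_PER_NODE * MAX_RACKS

# 2TB of memory and 640Gb/s of IO per rack, per the article:
memory_pb = 2 * MAX_RACKS / 1024   # TB -> PB in binary steps
io_gbps = 640 * MAX_RACKS          # aggregate IO in Gb/s

print(nodes_per_rack)  # 1024 nodes per rack
print(total_cores)     # 67108864 -- the "67.1m cores"
print(memory_pb)       # 32.0 -- the "32PB of memory"
print(io_gbps)         # 10485760 Gb/s, i.e. roughly the 10.4Pb/s quoted
```

Everything lines up with the figures IBM’s researchers quote, give or take rounding on the aggregate IO number.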

Source :