Researchers at the University of Southampton have done something us mere mortals could only dream of: building a supercomputer from Raspberry Pis and Lego.
They've called it Iridis-Pi, a very small 64-node cluster of Raspberry Pis running the Debian Wheezy distribution, linked over Ethernet. On its own, a Raspberry Pi is not that powerful, but a 64-node cluster with 1TB of storage across its SD cards is another question.
Rackmounting the cluster was done in a very interesting way: team lead Simon Cox and his son James housed the entire array in two towers of Lego. LEGO!!! There are even instructions, so you can do this at home if you've got the money for some Raspberry Pis and some spare Lego lying around. The entire system cost less than $4,026 to make, which is not bad at all.
IBM has bragging rights at the moment, with the world's fastest server chip clocking in at an incredible 5.5GHz. IBM's new zEnterprise EC12 mainframe cost the company $1 billion to develop, and offers 25% more performance courtesy of its hexacore processors.
IBM's zEnterprise EC12 mainframes are available in multiple configurations, with as many as 120 cores available. All models will include transactional execution support, as well as Enhanced-DAT2, allowing 2GB page frames for more efficient utilization of huge quantities of RAM.
Another jewel of the newly-introduced zEnterprise EC12 mainframe is IBM's cryptographic co-processor, Crypto Express4S. It's quite special as it's tamper-proof, providing privacy when handling transactions and other similarly sensitive data. Crypto Express4S also offers multiple security configurations to support the requirements and needs of banks and other organizations handling sensitive data, including the information on smart passports and ID cards.
The U.S. Department of Energy has awarded NVIDIA a two-year, $12.4 million contract for the research and development of exascale computing technology. Scientists from the DoE and engineers from NVIDIA will work together to advance the field and produce an exascale computer that operates at a "reasonable" power level.
The joint effort will focus on developing processor architecture, circuits, memory architecture, high-speed signalling, and programming models. The work will involve thousands of throughput-optimized cores handling most of the heavy lifting, while a smaller number of latency-optimized cores take care of the residual serial computing. Seven DoE laboratories will guide NVIDIA as to what kinds of scientific workloads the exascale computer will need to handle.
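That division of labor can be sketched in miniature. The pattern below is purely an illustration of the idea, not NVIDIA's actual design: a thread pool stands in for the throughput-optimized cores doing the data-parallel heavy lifting, while a single serial path handles the residual reduction.

```python
# Illustrative sketch of the throughput/latency core split (not NVIDIA's design)
from multiprocessing.dummy import Pool  # thread pool stands in for throughput cores

def parallel_part(chunk):
    # data-parallel heavy lifting: each worker sums its own chunk
    return sum(chunk)

data = list(range(1_000_000))
chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]

with Pool(8) as pool:
    partials = pool.map(parallel_part, chunks)  # "throughput-optimized cores"

total = sum(partials)  # residual serial step on a "latency-optimized core"
print(total)  # 499999500000
```

The bulk of the operations happen in the parallel map; only a short serial tail remains, which is exactly why a few fast latency-oriented cores suffice alongside thousands of slower throughput-oriented ones.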
It was only last week that AMD was granted $12.6 million from the FastForward program for the same exascale research. The future is looking quite green indeed.
AMD has been granted $12.6 million under the FastForward program, and will use the funds to research next-generation supercomputing technology. FastForward is a joint effort between the National Nuclear Security Administration and the Department of Energy designed to advance research into exascale computing.
Exascale computers are going to open a can of whoop-ass on current supercomputers like Blue Waters, installed at the University of Illinois at Urbana-Champaign, which max out at around a thousand trillion operations per second, otherwise known as a petaflop. Exascale systems are set to process data up to a thousand times faster than current-generation petascale supercomputers. We're talking about some serious power here.
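The units are easier to grasp as plain powers of ten; a quick sanity check of the petaflop-to-exaflop jump described above:

```python
# FLOPS units used in the article, written as powers of ten
PETAFLOP = 10 ** 15  # a thousand trillion floating-point operations per second
EXAFLOP = 10 ** 18   # a thousand petaflops

# Exascale really is a thousand times petascale
print(EXAFLOP // PETAFLOP)  # 1000
```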
AMD will split the $12.6 million, using $9.6 million to fund processor research and the remaining $3 million for memory advancements. This can only be good news, as AMD has been struggling for quite a while now. AMD has also previously worked with the U.S. government on supercomputer projects, with Oak Ridge National Laboratory's Jaguar supercomputer being AMD-powered. Upgrades to that system, known as Titan, are already under way, with AMD providing nearly 20,000 Opteron processors, worth close to $300,000.
The switch has been flicked on the most powerful GPU supercomputer, Emerald, at the Science and Technology Facilities Council's Rutherford Appleton Laboratory (RAL) in Oxfordshire, U.K. The two systems working together "will give businesses and academics unprecedented access to their super-fast processing capability".
The insane amounts of power will allow researchers to run simulations ranging from health care to astrophysics. The supercomputer combo will be used to look at the antiviral drug Tamiflu's effect on swine flu, data from the Square Kilometre Array project, climate change modelling, and 3G/4G communications modelling. The official launch of the e-Infrastructure South Consortium took place at the same time, coinciding with Emerald's unveiling.
Liquid cooling has become more and more mainstream, thanks in part to closed-loop water cooling units. IBM didn't want supercomputers left out, so it designed a water cooling system for Europe's most powerful supercomputer. However, things get just a bit tougher when you're dealing with 18,000 processors instead of one or two.
The supercomputer sports 18,000 Xeon processors along with 324TB of memory, and both the processors and the memory are liquid cooled in the new system. The genius of the design is that it cuts cooling costs for the supercomputer while also cutting heating costs for the surrounding buildings.
It does this by heating the water to 45°C and then pumping it through an exchanger, which provides heat for the surrounding buildings. According to IBM, this water cooling system can cut power usage by 40%, worth up to 1 million euros. And this is just the start of liquid cooling for IBM, which wants to put coolant pathways directly into the chip itself.
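As a back-of-envelope check, and assuming the 1 million euro figure is the saving produced by the 40% cut (the article doesn't state the relationship explicitly):

```python
# Illustrative arithmetic only; treating the euro figure as the saving
# from the 40% reduction is an assumption, not an IBM-stated formula
savings_eur = 1_000_000  # claimed saving
reduction = 0.40         # claimed cut in power usage

implied_power_bill = savings_eur / reduction
print(implied_power_bill)  # 2500000.0 euros of power cost before the cut
```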
Just when you thought tape was dead, the National Center for Supercomputing Applications is getting ready to build a new storage infrastructure that will include 380 petabytes (PB) of magnetic tape capacity, backed by 25PB of online disk storage made up of 17,000 SATA drives.
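Those disk figures imply a per-drive capacity you can work out directly (decimal units; the actual drive model isn't given, so this is just arithmetic on the numbers above):

```python
# Per-drive capacity implied by 25PB spread across 17,000 SATA drives
online_disk_pb = 25
num_drives = 17_000

tb_per_drive = online_disk_pb * 1_000 / num_drives  # PB -> TB, decimal
print(round(tb_per_drive, 2))  # 1.47, i.e. roughly 1.5TB drives
```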
The new storage infrastructure is said to be built to support one of the world's most powerful supercomputers, Blue Waters. Blue Waters was commissioned by the National Science Foundation (NSF), and is expected to have a peak performance of 11.5 petaflops. The NCSA says that they're building the system to:
Predict the behavior of complex biological systems, understand how the cosmos evolved after the Big Bang, design new materials at the atomic level, predict the behavior of hurricanes and tornadoes, and simulate complex engineered systems like the power distribution system and airplanes and automobiles.
Microsoft has announced a victory in the MinuteSort test, claiming to have tripled the amount of data sorted by the previous record holder, a Yahoo team. MinuteSort is a test to see how much data can be sorted in just 60 seconds. As more data moves into the cloud, the ability to sort data quickly becomes a bigger and bigger issue.
According to Microsoft's post on TechNet, "In raw numbers, the team's system sorted 1401 gigabytes in just 60 seconds - using 1033 disks across 250 machines." Compared to what Yahoo ran, that is roughly "one-sixth of the hardware resources", yet it sorted around three times as much data, making Microsoft's solution far more efficient.
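Microsoft's raw numbers translate into throughput figures worth working out (decimal units; this is my arithmetic, not anything in the TechNet post):

```python
# Sort throughput implied by Microsoft's MinuteSort run
total_gb = 1401
seconds = 60
machines = 250

gb_per_second = total_gb / seconds              # ~23.4 GB sorted per second overall
mb_per_machine = gb_per_second / machines * 1_000  # ~93.4 MB/s per machine

print(round(gb_per_second, 1), round(mb_per_machine, 1))
```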
Additionally, it's interesting to note that Microsoft Research didn't use Hadoop as one might expect. Instead, the researchers at Microsoft created a new system called "Flat Datacenter Storage." The "flat" portion is the important part of the system. Microsoft explains:
[Microsoft Research's Jeremy] Elson compares FDS to an organizational chart. In a hierarchical company, employees report to a superior, then to another superior, and so on. In a "flat" organization, they basically report to everyone, and vice versa.
Google and green go hand in hand, and the company's next data center will be built with energy savings in mind. Google has done well here before, with other data centers that are energy-efficient and green. Its latest data center, to be built in Taiwan, will use thermal energy storage.
Thermal energy storage systems commonly use chilled liquid or ice to act as a thermal battery, enabling a data center operator to run air conditioning at night, when rates are cheaper, and then pump the chilled liquid around the facility for cooling during the day.
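A toy model shows why the time-shifting pays off. The rates and load below are invented purely for illustration; real tariffs and cooling loads will differ:

```python
# Toy cost comparison for one day of cooling (all figures invented)
night_rate_cents = 5    # cents per kWh, off-peak
day_rate_cents = 15     # cents per kWh, peak
cooling_kwh = 1_000     # daily cooling energy needed

cost_at_peak = cooling_kwh * day_rate_cents // 100    # dollars: chillers run by day
cost_shifted = cooling_kwh * night_rate_cents // 100  # dollars: chill overnight instead

print(cost_at_peak, cost_shifted)  # 150 50
```

The chilled liquid or ice is the "battery": the same cooling is delivered either way, but the electricity is bought when it's cheap.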
Increasing electricity rates in Taiwan are a big reason for Google to tap thermal storage: the company can shift its cooling load away from peak daytime rates, and chilled liquid or ice is also a cleaner, longer-lasting way to store energy than batteries. A Google exec cited the rising electricity rates in Taiwan as a reason for building the new system, and also noted that the Taiwan-based data center will use 50 percent less energy than typical facilities.
Google is planning to spend $700 million on three new data centers in Taiwan, which will form the company's third data center cluster in Asia, after its first two stops for construction in Hong Kong and Singapore. This will be the first time Google has used thermal energy storage in a data center.
Over the next five years, IBM will build a low-power, exascale computer for the largest-ever radio telescope, and promises it won't be Skynet
Over the next five years, IBM is set to work with the Netherlands Institute for Radio Astronomy (ASTRON) to develop a low-power, exascale supercomputer. Not impressed yet? Hold onto your chair, dear reader. According to IBM, this supercomputer would be millions of times faster than today's high-end desktop PCs, and possibly thousands of times faster than even the most recent supercomputers.
The exascale computer would be used to analyze data collected by the SKA (Square Kilometre Array), a cutting-edge radio telescope set to become the largest and most sensitive of its kind ever built. ASTRON hopes to have the telescope ready by 2024. While that's still a fair way off, the excitement will only build over time.
Now, this is where you don your math hat and get ready for your eyes to widen a little. To put it in terms of what we know and use now, exascale refers to a computing device so fast that the number of floating-point operations per second it performs isn't measured in gigaflops or even petaflops, but exaflops. Today's highest-end desktop CPUs clock in at around 20 gigaflops, which isn't that impressive next to this beast.
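IBM's "millions of times faster" claim checks out against that 20-gigaflop desktop figure:

```python
# Exaflop machine vs. a ~20-gigaflop desktop CPU (figures from the article)
GIGAFLOP = 10 ** 9
EXAFLOP = 10 ** 18
desktop_flops = 20 * GIGAFLOP

speedup = EXAFLOP // desktop_flops
print(speedup)  # 50000000 - tens of millions of times faster
```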