TweakTown News
Liquid cooling has become increasingly mainstream, thanks in part to closed-loop water cooling units. IBM didn't want supercomputers left out, so it designed a water cooling system for Europe's most powerful supercomputer. Things get quite a bit tougher, however, when you're dealing with 18,000 processors instead of one or two.
The supercomputer sports 18,000 Xeon processors along with 324TB of memory, and both the processors and the memory are liquid cooled in this new system. The clever part is that the system cuts cooling costs for the supercomputer while also cutting heating costs for the surrounding buildings.
It does this by heating the water to 45°C and pumping it through an exchanger that provides heat for the surrounding buildings. According to IBM, this water cooling system can cut power usage by 40%, worth up to 1 million euros. And this is just the start of liquid cooling for IBM: the company wants to put coolant pathways directly into the chip.
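To put the 40% figure in perspective, here is a purely illustrative calculation; the baseline power bill below is a hypothetical number, not one IBM has published:

```python
# Illustrative only: apply the article's 40% reduction figure to a
# hypothetical baseline power bill. The 1M-euro savings IBM quotes
# depends on the facility's real costs, which are not given here.
def annual_savings(baseline_cost_eur, reduction=0.40):
    """Yearly savings from a fractional power reduction."""
    return baseline_cost_eur * reduction

# A facility spending 2.5M EUR/year on power would save 1M EUR at 40%.
print(annual_savings(2_500_000))  # 1000000.0
```

At an assumed 2.5 million euros a year in power costs, a 40% cut lines up with the "up to 1 million euros" IBM mentions.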
Just when you thought tape was dead, the National Center for Supercomputing Applications (NCSA) is getting ready to build a new storage infrastructure that will include 380 petabytes (PB) of magnetic tape capacity, backed by 25PB of online disk storage spread across 17,000 SATA drives.
The new storage infrastructure is being built to support one of the world's most powerful supercomputers, Blue Waters. Blue Waters was commissioned by the National Science Foundation (NSF) and is expected to have a peak performance of 11.5 petaflops. The NCSA says it is building the system to:
Predict the behavior of complex biological systems, understand how the cosmos evolved after the Big Bang, design new materials at the atomic level, predict the behavior of hurricanes and tornadoes, and simulate complex engineered systems like the power distribution system and airplanes and automobiles.
Microsoft has announced a victory in the MinuteSort test, claiming to have tripled the amount of data sorted by the previous record holder, a Yahoo team. MinuteSort is a test of how much data can be sorted in just 60 seconds. As more data moves into the cloud, the ability to sort it quickly becomes a bigger and bigger issue.
According to Microsoft's post on TechNet, "In raw numbers, the team's system sorted 1401 gigabytes in just 60 seconds - using 1033 disks across 250 machines." That is roughly "one-sixth of the hardware resources" Yahoo used, yet it sorted around three times as much data, making the Microsoft solution far more efficient.
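The two ratios quoted in the article can be combined into a rough per-machine efficiency figure. This is back-of-the-envelope only: the Yahoo numbers below are derived from the "3x the data" and "one-sixth the hardware" claims, not from Yahoo's published run:

```python
# Figures from the article's Microsoft quote.
ms_data_gb = 1401
ms_machines = 250

# Derived, approximate figures for the Yahoo run, assuming the article's
# "3x the data on 1/6 of the hardware" comparison holds exactly.
yahoo_data_gb = ms_data_gb / 3
yahoo_machines = ms_machines * 6

ms_gb_per_machine = ms_data_gb / ms_machines
yahoo_gb_per_machine = yahoo_data_gb / yahoo_machines

# 3x the data on 1/6 the machines works out to roughly 18x per machine.
efficiency_ratio = ms_gb_per_machine / yahoo_gb_per_machine
print(round(efficiency_ratio, 1))
```

In other words, sorting three times the data on a sixth of the machines implies roughly an 18x improvement in per-machine throughput.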
Additionally, it's interesting to note that Microsoft Research didn't use Hadoop as one might expect. Instead, the researchers at Microsoft created a new system called "Flat Datacenter Storage." The "flat" portion is the important part of the system. Microsoft explains:
[Microsoft Research's Jeremy] Elson compares FDS to an organizational chart. In a hierarchical company, employees report to a superior, then to another superior, and so on. In a "flat" organization, they basically report to everyone, and vice versa.
Google and green go hand-in-hand, and the company's next data center will be built with energy savings in mind. Google has a good track record here, with several energy-efficient, green data centers already in operation. Its latest, to be built in Taiwan, will use thermal energy storage.
Thermal energy storage systems commonly use chilled liquid or ice to act as a thermal battery, enabling a data center operator to run air conditioning at night (when rates are cheaper) and then, during the day, pump the chilled liquid around the facility for cooling.
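The economics of that load shifting are simple to sketch. All of the numbers below are hypothetical, chosen only to show the shape of the calculation; the article gives no actual rates or loads:

```python
# Hypothetical numbers (not from the article): shifting a day's cooling
# load to off-peak hours with a chilled-water thermal battery.
cooling_kwh_per_day = 10_000   # assumed daily chiller energy
peak_rate = 0.15               # assumed $/kWh during daytime peak
off_peak_rate = 0.06           # assumed $/kWh overnight

daytime_cost = cooling_kwh_per_day * peak_rate
night_charge_cost = cooling_kwh_per_day * off_peak_rate
daily_saving = daytime_cost - night_charge_cost

print(f"daily saving: ${daily_saving:.2f}")
```

The same kilowatt-hours are consumed either way; the saving comes entirely from buying them at the cheaper overnight rate.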
Rising electricity rates in Taiwan are a big reason for Google to tap the thermal storage solution: chillers can run overnight on cheap off-peak power, letting the facility avoid peak daytime rates, and chilled liquid or ice is a cleaner, longer-lasting way to store energy than batteries. A Google exec cited those rising rates as a reason for building the new system, and also notes that the new Taiwan-based data center will use 50 percent less energy than typical facilities.
Google is planning to spend $700 million on three new data centers in Taiwan, which will form the company's third data center cluster in Asia, after Hong Kong and Singapore. This will be the first time Google has used thermal energy storage for a data center.
IBM will spend the next five years building a low-power, exascale computer for the largest-ever radio telescope, and promises it won't be Skynet
Over the next five years, IBM is set to work with the Netherlands Institute for Radio Astronomy (ASTRON) to develop a low-power, exascale supercomputer. Not impressed yet? Hold onto your chair, dear reader. According to IBM, this supercomputer would be millions of times faster than today's high-end desktop PCs, and possibly thousands of times faster than even the most recent supercomputers.
The exascale computer would be used to analyze data collected by the SKA (Square Kilometre Array), a cutting-edge radio telescope set to become the largest and most sensitive of its kind ever built. ASTRON hopes to have the telescope ready by 2024. While that's still a fair way off, the excitement will only build over time.
Now, don your math hat and get ready for your eyes to widen a little. Exascale refers to a computer so fast that its floating-point performance isn't measured in gigaflops or even petaflops, but in exaflops. Today's highest-end desktop CPUs manage around 20 gigaflops, which barely registers on this beast's scale.
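The "millions of times faster" claim checks out against those units. One exaflop is 10^18 floating-point operations per second, so compared with the article's 20-gigaflop desktop figure:

```python
# Scale comparison using the units and the 20-gigaflop desktop figure
# mentioned in the article.
exaflop = 1e18                 # FLOP/s in one exaflop
desktop_flops = 20 * 1e9       # ~20 gigaflops for a high-end desktop CPU

ratio = exaflop / desktop_flops
print(f"{ratio:.0e}")          # 5e+07, i.e. 50 million times faster
```

Fifty million is indeed "millions of times faster", as IBM puts it.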
Well not really. Or rather, not yet.
Columbia doctors want to use Watson to diagnose patients, so they've been testing "him" for almost a year to see how the trivia supercomputer stacks up in medical problem-solving.
The project is led by Herbert Chase, a professor of clinical medicine in the Department of Biomedical Informatics. Through a series of tests, questions, inquiries, and experiments, Chase hopes to retrofit the knowledge bot with an understanding of diagnostic medicine.
"It's been impossible for probably 20 or 30 years for a human to process the information required to practice medicine at the highest, evidence-based, guideline-based level,"Chase said in the Columbia news release.
Evidently, the minor "trouble" Watson had with some of the JEOPARDY! questions is a bonus for the researchers. During the popular game show, viewers got a live feed of Watson's logic processing for each posed question (answer?), which often showed two or three wrong answers with a probability factor attached to each possibility. The stakes are much higher in medicine, as is the sheer volume of information available: try entering "headache" as a symptom on any web-based diagnosis site and you'll get hits on everything from hangover to brain tumor.
In a result that should have surprised no one, IBM's supercomputer "Watson" soundly beat two of the best Jeopardy! contestants at their own game in a three-day competition held this week. While the humans were able to put up a battle here and there, Watson's 90 32-core IBM Power 750 Express servers and 16 terabytes of memory were too much for the mere human brainpower of Ken Jennings and Brad Rutter.
Rutter, Jeopardy's all-time leading money winner, and Jennings, who won 74 straight Jeopardy games, didn't even come close in the overall tally. Watson's $77,147 in earnings beat the combined totals for Jennings ($24,000) and Rutter ($21,600). Watson's win netted a $1 million prize, which IBM is donating to World Vision.
Wow, just wow. China has just surpassed the US and the rest of the world by revealing the world's fastest supercomputer.
Dubbed the Tianhe-1A, it is located at the National Supercomputer Center (where else would you build a supercomputer but the National Supercomputer Center?) in Tianjin. The Tianhe-1A scored 2.507 petaflops as measured by the LINPACK benchmark.
That beats the previous best, a Cray system at 2.3 petaflops.
Tianhe-1A achieved its record using 7,168 NVIDIA Tesla M2050 GPUs and 14,336 Intel Xeon CPUs, while consuming a relatively modest 4.04 megawatts.
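Those two figures give a quick measure of power efficiency, a common yardstick for supercomputers. Using only the numbers quoted above:

```python
# Power efficiency from the article's figures.
flops = 2.507e15   # 2.507 petaflops (LINPACK score)
watts = 4.04e6     # 4.04 megawatts of power draw

mflops_per_watt = flops / watts / 1e6
print(round(mflops_per_watt, 1))   # ~620.5 MFLOPS per watt
```

Around 620 megaflops per watt is what made the GPU-heavy design notable: piling on enough CPUs alone to hit 2.5 petaflops would have drawn considerably more power.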
Together with Taiwan's National Chiao Tung University and NVIDIA, ASUS has finished building a Xeon W3580-powered desktop supercomputer with three Tesla C1060 cards, giving it 960 processing cores and a whopping 1.1 teraflops of computing power.
Other specs of the ESC 1000 supercomputer include 24GB of RAM, a Quadro FX 5800 graphics card, 500GB HDD and 1100W power supply, housed in a chassis with dimensions of just 445 x 217.5 x 545 mm.
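The 960-core figure is easy to reconcile with the parts list, assuming ASUS counts the Quadro's CUDA cores alongside the Tesla cards; both boards are built on the same 240-core GT200-class GPU:

```python
# Plausible accounting for the quoted 960 cores, assuming the
# Quadro FX 5800's cores are counted along with the three Teslas.
tesla_c1060_cores = 240     # CUDA cores per Tesla C1060
quadro_fx5800_cores = 240   # the FX 5800 uses the same 240-core GPU

total_cores = 3 * tesla_c1060_cores + quadro_fx5800_cores
print(total_cores)  # 960
```

Three Teslas alone would be 720 cores, so the headline number only adds up if the Quadro is included.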
ASUS is aiming the system at specialised environments such as scientific research, image manipulation and engineering modeling. The system is apparently ready for mass production now, but ASUS has yet to give an actual availability date or the hefty price tag that will come with it.
You know a product is really here when you see offerings from companies like ASRock. Well today pictures of their new X58 Super Computer board popped up on the internet.
Tech Connect Magazine has a quick piece on this board and the feature set it offers to "the rest of the audience".
Joining the Bloomfield game, ASRock has designed an X58 motherboard of its own, and it can be seen just below. Bearing the 'SuperComputer' name just for the sake of it, the motherboard has one Socket 1366 ready to house any of the three Core i7 processors prepared by Intel, six DDR3 memory slots and four PCI-Express x16 slots, ensuring CrossFireX and SLI support.