IT/Datacenter & Super Computing News - Page 23

The latest and most important IT/Datacenter & Super Computing news - Page 23.


The fastest GPU supercomputer in the UK was just switched on, good morning, Emerald

Anthony Garreffa | Jul 4, 2012 11:27 PM CDT

The switch has been flicked on the UK's most powerful GPU supercomputer, Emerald, at the Science and Technology Facilities Council's Rutherford Appleton Laboratory (RAL) in Oxfordshire, U.K. The two systems working together "will give businesses and academics unprecedented access to their super-fast processing capability".

This insane amount of power will allow researchers to run simulations ranging from health care to astrophysics. The supercomputer combo will be used to study the antiviral drug Tamiflu's effect on swine flu, process Square Kilometre Array project data, and run climate change and 3G/4G communications modelling. The official launch of the e-Infrastructure South Consortium coincided with Emerald's unveiling.

The consortium comprises four U.K. universities (Bristol, Oxford, University College London and Southampton), which will collaborate with RAL and work with the supercomputers. The Engineering and Physical Sciences Research Council (EPSRC) funded the supercomputers with a £3.7 million grant. The EPSRC press release has a tonne of details and specifications for the supercomputers, and also states:

Continue reading: The fastest GPU supercomputer in the UK was just switched on, good morning, Emerald (full post)

SMART Storage Systems releases Optimus Ultra+ SSD

Paul Alcorn | Jun 26, 2012 9:55 AM CDT

SMART Storage Systems has announced the newest entry in its Enterprise SSD family, the Optimus Ultra+. The 'Ultra' part of the name comes from two central facets of performance: speed and endurance. This SAS 6Gb/s SSD sports some impressive numbers, with 100,000 random read IOPS and 60,000 random write IOPS. The SSD also supports dual-port SAS, which allows throughput to climb to an unheard-of 1GB/s.
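As a rough sanity check on that dual-port figure, here is a minimal sketch of the link math, assuming the standard 8b/10b encoding used by 6Gb/s SAS:

```python
# Why dual-port SAS reaches ~1GB/s: a 6Gb/s SAS lane carries roughly
# 600MB/s of payload once 8b/10b encoding overhead is accounted for.
lane_gbps = 6              # raw line rate per SAS port, in Gb/s
encoding_efficiency = 0.8  # 8b/10b encoding: 8 payload bits per 10 line bits
ports = 2                  # dual-port SAS

payload_mb_s = lane_gbps * 1000 / 8 * encoding_efficiency  # MB/s per port
print(f"{payload_mb_s:.0f} MB/s per port, "
      f"{payload_mb_s * ports:.0f} MB/s with both ports active")
# -> 600 MB/s per port, 1200 MB/s dual-port, comfortably above 1GB/s
```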

The real story here, however, is the endurance. The Optimus Ultra+ is rated for 50 Drive Writes Per Day (DWPD) for five years, meaning the drive's full capacity can be written 50 times every day for five years.
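To put that rating in concrete terms, here is a quick back-of-the-envelope calculation; the 400GB capacity below is an assumed example for illustration, not a confirmed spec:

```python
# Total rated writes implied by 50 DWPD over five years.
# The 400GB capacity is an assumption for illustration only.
capacity_gb = 400   # hypothetical drive capacity
dwpd = 50           # rated Drive Writes Per Day
years = 5

total_writes_pb = capacity_gb * dwpd * 365 * years / 1e6  # GB -> PB (decimal)
print(f"Total rated writes: {total_writes_pb:.1f} PB")
# -> 36.5 PB written over the drive's rated life
```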

The Optimus family of SSDs centers around one philosophy: providing SLC-like endurance at MLC pricing. The attraction of MLC over SLC is simple: SLC commands ridiculously high prices while MLC is becoming garden-variety. Even in the consumer market we are now seeing MLC drop below the dollar-per-GB threshold. This low price level will always be welcome in any market, but creating Enterprise-class MLC is not an easy task.

Continue reading: SMART Storage Systems releases Optimus Ultra+ SSD (full post)

NCSA is building a supercomputer with 380 petabytes of storage... of magnetic tape capacity

Anthony Garreffa | May 25, 2012 12:25 AM CDT

Just when you thought tape was dead, the National Center for Supercomputing Applications is getting ready to build a new storage infrastructure that will include 380 petabytes (PB) of magnetic tape capacity, backed by 25PB of online disk storage built from 17,000 SATA drives.
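A quick sanity check on that disk tier (using decimal units, so the per-drive figure is approximate):

```python
# 25PB of online disk across 17,000 SATA drives works out to
# roughly 1.5TB per drive (decimal units, 1PB = 1,000TB).
disk_pb = 25
drives = 17_000

tb_per_drive = disk_pb * 1_000 / drives
print(f"~{tb_per_drive:.2f} TB per drive")  # -> ~1.47 TB per drive
```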

The new storage infrastructure will be built to support one of the world's most powerful supercomputers, Blue Waters. Blue Waters was commissioned by the National Science Foundation (NSF) and is expected to have a peak performance of 11.5 petaflops. The NCSA says it is building the system to:

Predict the behavior of complex biological systems, understand how the cosmos evolved after the Big Bang, design new materials at the atomic level, predict the behavior of hurricanes and tornadoes, and simulate complex engineered systems like the power distribution system and airplanes and automobiles.

Continue reading: NCSA is building a supercomputer with 380 petabytes of storage... of magnetic tape capacity (full post)

Microsoft Research roughly triples amount of data sorted in 60 seconds while using less hardware

Trace Hagan | May 21, 2012 3:31 PM CDT

Microsoft has announced a victory in the MinuteSort test, claiming to have roughly tripled the amount of data sorted by the previous record holder, a Yahoo team. MinuteSort is a benchmark that measures how much data can be sorted in just 60 seconds. As more data moves into the cloud, the ability to sort data quickly becomes increasingly important.

According to Microsoft's post on TechNet, "In raw numbers, the team's system sorted 1401 gigabytes in just 60 seconds - using 1033 disks across 250 machines." Compared to what Yahoo ran, that is roughly "one-sixth of the hardware resources", yet it sorted around three times as much data, making the Microsoft solution far more efficient.
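Those quoted figures imply some impressive aggregate throughput; a quick sketch of the arithmetic:

```python
# Aggregate sort throughput implied by Microsoft's quoted numbers.
data_gb = 1401   # gigabytes sorted
seconds = 60
machines = 250
disks = 1033

aggregate_gb_s = data_gb / seconds                   # ~23.4 GB/s cluster-wide
per_machine_mb_s = aggregate_gb_s / machines * 1000  # ~93 MB/s per machine
print(f"{aggregate_gb_s:.1f} GB/s aggregate, "
      f"~{per_machine_mb_s:.0f} MB/s per machine "
      f"across ~{disks // machines} disks each")
```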

Additionally, it's interesting to note that Microsoft Research didn't use Hadoop, as one might expect. Instead, the researchers at Microsoft created a new system called "Flat Datacenter Storage." The "flat" part of the name is key to the system's design. Microsoft explains:

Continue reading: Microsoft Research roughly triples amount of data sorted in 60 seconds while using less hardware (full post)

Google's next data center to be more energy efficient, uses thermal energy storage

Anthony Garreffa | Apr 5, 2012 6:40 AM CDT

Google and green go hand-in-hand, and the company's next data center will be built with energy savings in mind. Google has a solid track record here, with other data centers that are energy-efficient and green. Its latest data center, to be built in Taiwan, will use thermal energy storage.

Thermal energy storage systems commonly use chilled liquid or ice as a thermal battery, enabling a data center operator to run air conditioning at night (when electricity rates are cheaper) and then, during the day, pump the chilled liquid around the facility for cooling.

Rising electricity rates in Taiwan are a big reason for Google to tap the thermal storage solution: chilling liquid or ice overnight lets the facility sidestep peak daytime power rates, and it's also a cleaner, longer-lasting way to store energy than batteries. A Google exec cited those rising rates as a reason for building the new system, and also notes that the new Taiwan-based data center will use 50 percent less energy than typical facilities.
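To illustrate why shifting the cooling load pays off, here is a toy calculation; the rates and cooling load are invented numbers for illustration, not Google's or Taiwan's actual figures:

```python
# Toy load-shifting example: all numbers below are assumptions,
# not Google's or Taiwan's actual rates or loads.
peak_rate = 0.15       # $/kWh during daytime peak (assumed)
offpeak_rate = 0.06    # $/kWh overnight (assumed)
cooling_kwh = 100_000  # daily cooling energy (assumed)

savings = cooling_kwh * (peak_rate - offpeak_rate)
print(f"Daily savings from chilling overnight: ${savings:,.0f}")  # -> $9,000
```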

Continue reading: Google's next data center to be more energy efficient, uses thermal energy storage (full post)

IBM over the next five years will build a low-power, exascale computer for largest-ever radio telescope, promises it won't be Skynet

Anthony Garreffa | Apr 2, 2012 9:26 PM CDT

Over the next five years, IBM is set to work with the Netherlands Institute for Radio Astronomy (ASTRON), where the two hope to develop a low-power, exascale supercomputer. Not impressed yet? Hold onto your chair, dear reader. According to IBM, this supercomputer would be millions of times faster than today's high-end desktop PCs, and possibly thousands of times faster than even the most recent supercomputers.

The exascale computer would be used to analyze data collected by the SKA (Square Kilometre Array), a cutting-edge radio telescope set to become the largest and most sensitive of its kind ever built. ASTRON hopes to have the telescope ready by 2024. While that's still a fair way off, the excitement will only build over time.

Now, this is where you don your math hat and get ready to have your eyes widen a little: exascale refers to a computing device so fast that the number of floating-point operations per second it can perform isn't measured in gigaflops or even petaflops, but in exaflops. Today's highest-end desktop CPUs manage around 20 gigaflops, which is not that impressive in terms of scale next to this beast.
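Putting those prefixes in perspective with a quick calculation (one exaflop is 10^18 floating-point operations per second):

```python
# How far 20 gigaflops is from exascale.
exaflops = 1e18        # 1 exaflop = 10^18 FLOPS
desktop_flops = 20e9   # ~20 gigaflops, per the article

speedup = exaflops / desktop_flops
print(f"An exascale machine is ~{speedup:,.0f}x a 20-GFLOPS desktop")
# -> ~50,000,000x, i.e. tens of millions of times faster
```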

Continue reading: IBM over the next five years will build a low-power, exascale computer for largest-ever radio telescope, promises it won't be Skynet (full post)