K, one of the world's fastest supercomputers, based in Japan, is capable of 8.162 petaflops of performance thanks to its insane 82,944 processors. That works out to close to 10^16 (ten quadrillion) operations per second, but even then it is hard pressed to compete with the brain in your head reading this article.
It took K around 40 minutes to simulate just a single second of human brain activity, even with all of its performance prowess. The simulation involved 1.73 billion virtual nerve cells connected by 10.4 trillion virtual synapses, with each virtual synapse taking up 24 bytes of memory.
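That synapse count alone implies a serious memory footprint; as a quick back-of-the-envelope check (pure arithmetic from the figures above):

```python
# Rough memory footprint of the synapses alone, from the reported figures.
synapses = 10.4e12         # 10.4 trillion virtual synapses
bytes_per_synapse = 24

total_bytes = synapses * bytes_per_synapse
print(total_bytes / 1e12)  # ~249.6 terabytes just to hold synapse state
```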
On the software side of things, the team used NEST, a simulator for spiking neural network models that focuses on the dynamics, size and structure of neural systems rather than the exact morphology of individual neurons.
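For a sense of what driving NEST looks like, here is a minimal sketch using its Python interface (PyNEST). To be clear, the model, population size and parameters below are illustrative placeholders, nowhere near the scale of the K experiment:

```python
# Minimal PyNEST sketch: a tiny population of leaky integrate-and-fire
# neurons driven by Poisson noise. All parameters are illustrative only.
import nest

nest.ResetKernel()

neurons = nest.Create("iaf_psc_alpha", 100)               # 100 point neurons
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
recorder = nest.Create("spike_detector")                  # "spike_recorder" in NEST 3.x

nest.Connect(noise, neurons, syn_spec={"weight": 10.0})   # excitatory drive
nest.Connect(neurons, recorder)

nest.Simulate(1000.0)                                     # one second of biological time
print(nest.GetStatus(recorder, "n_events"))               # total spikes recorded
```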
Just months ago the US government was shut down, hundreds of thousands of jobs were left up in the air and millions of US citizens were affected, but that's nothing compared to the blank cheques it signs to the National Security Agency for "research".
The US spy agency is reportedly working on a quantum computer that would break through any encryption thanks to its pure, insane amount of processing power. Edward Snowden is behind the leaks - come on, you're not surprised, are you? - which reveal a program worth some $79.7 million, dubbed "Penetrating Hard Targets".
The Washington Post is reporting the news, stating that the majority of the research is being done at the University of Maryland's Laboratory for Physical Sciences.
I don't know why it hasn't built its own yet, but the Pentagon has just dangled a carrot in front of hackers' eyes by offering up the Cyber Grand Challenge. The challenge will run for three years, with contestants needing to meet the Defense Advanced Research Projects Agency's (DARPA) requirements.
DARPA would like to see a fully automated system capable of protecting itself from hackers, with the ability to respond to attacks within hours or even seconds, versus the couple of days it takes now. The system, which I'm going to call Skynet, should be capable of updating its own code on-the-fly, and have reasoning abilities - based on vulnerability scanner signatures, intrusion detection signatures and security patches - that beat those of human experts.
There are three prizes on offer from the government: a grand prize of $2 million, with second and third place seeing a nice $1 million and $750,000, respectively. With all of its power, technology and secrecy, it's truly mind boggling that DARPA can't just build Skynet on its own.
It looks like the NSA's latest data center, based in Utah, is having all sorts of issues, with The Wall Street Journal reporting that it has suffered 10 meltdowns in the last 13 months alone thanks to electrical surges.
This means the NSA is using so much power trying to keep tabs on every human being on Earth that it is killing its own data centers. Hundreds of thousands of dollars, if not millions, in hardware has been destroyed, not to mention the man hours that will be pumped into the data center trying to fix things.
The WSJ somehow got its hands on project documents that detail the issues the NSA is facing at its Utah data center, with arc fault failures being the core issue. An official who spoke with the WSJ described one as "a flash of lightning inside a 2-foot box", causing huge explosions, melting metal and outright destroying circuits inside the data center.
With the NSA spending over $1 million of US taxpayer money a month on power alone - chewing through 65 megawatts of juice, enough to power a city of 20,000 people - this is a big issue. Backup generators have so far failed tests, the cooling systems remain untested, and with the government and its contractors disagreeing about "the adequacy of the electrical cooling systems", it truly is a laugh.
Something that doesn't mean much in mainstream news today is going to mean worlds more in the coming decades: quantum computing. Google and NASA announced their Quantum Artificial Intelligence Lab back in May, but now we get to check it out in the video below.
The video comes thanks to the two giants making a short film for the Imagine Science Film Festival, with Google and NASA explaining that the AI lab should eventually solve optimization problems that are quite simply beyond the scope of traditional computers. NASA could use quantum computing to help it look deeper into the dark beyond that is space, while Google could use it to improve medicine - especially given its latest announcement.
It's an interesting video, where we get to take a look at one of D-Wave's second-generation quantum computers, each of which requires a giant enclosure to keep the hardware cooled to near absolute zero. I'm guessing this system could most likely run Crysis (that joke is getting really old, but someone has to say it, right?).
A team of artificial and natural knowledge researchers from the University of Illinois at Chicago has IQ-tested one of the most advanced artificial intelligence systems in the world to see how smart it is.
The results? It is about as smart as the average four-year-old child. The UIC team will report its findings in detail at the US Artificial Intelligence Conference in Bellevue, Washington, tomorrow. The team put an artificial intelligence system developed at MIT, called 'ConceptNet4', through the verbal parts of the Wechsler Preschool and Primary Scale of Intelligence test, a standard IQ assessment for young children.
They found that ConceptNet4 had the IQ of a four-year-old child, but unlike most children, the machine's scores were quite uneven across the test. Robert Sloan, professor and head of computer science at UIC and lead author of the study, said: "If a child had scores that varied this much, it might be a symptom that something was wrong."
LSI Corporation have something kinda big to talk about this weekend: IBM are now offering versions of their High IOPS Modular Adapters based on LSI's Nytro WarpDrive technology. These models join a growing list of PCIe Flash cards designed to be used with IBM's System x server series.
IBM's System x server series are used by large clients that require insane speed for Big Data analytics. LSI's Nytro WarpDrive products provide ultra-low-latency, high-performance storage for data-intensive applications, all while helping cloud and enterprise datacenters reduce their storage footprint, as well as those ever-increasing energy costs. The IBM High IOPS capacity options range from 300GB to 800GB of SLC and MLC Flash memory for IBM System x servers.
You've probably never heard of Total, but what they're digging around the world to find, you most likely use: oil and gas. Total are one of the world's major oil and gas groups, and on March 25 they will announce they're working with SGI on their new supercomputer.
SGI will be helping Total out, with the SGI ICE X HPC system serving as the platform for Total's new supercomputer, named "Pangea." The companies will work together on Pangea, hoping to allow for more efficient upstream oil and gas exploration, as well as the discovery of reserves under challenging geological conditions. The new supercomputer will help scientists develop more complete visualizations of seismic landscapes over time, giving them a better idea of what is happening beneath the Earth's surface.
Pangea is quite capable, delivering performance of up to 2.3 PFLOPS - and considering the world's fastest supercomputers are only around 10x faster than that, Pangea is mighty fast. Pangea's architecture is built on more than 110,000 calculation cores, 7PB of storage, and an innovative cooling system intertwined with the processors. Power requirements sit at 2.8MW.
NVIDIA begins shipping GRID VCA, costs just $24,900 - features 8 GPUs, 16 threads of CPU and 192GB of RAM
NVIDIA have begun shipping their GRID Visual Computing Appliance (VCA), which designers, animators and visual production users can purchase for just $24,900. On top of this, there is a $2,400 yearly software license fee.
What do you get for $25,000? 8 GPUs, 16 threads of CPU and 192GB of RAM, capable of serving up to 8 users, with the 16-GPU model doubling this to 16 simultaneous users. If a studio has a decent number of designers or artists, this could definitely be an option worth looking into.
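Rough per-seat maths, assuming the 8-GPU appliance is fully loaded with users (first year, hardware plus the software license):

```python
# First-year cost per simultaneous user on the 8-GPU GRID VCA.
hardware = 24_900
license_per_year = 2_400
users = 8

print((hardware + license_per_year) / users)  # ~$3,412 per seat in year one
```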
Apple did have the datacenter-designed Xserve for quite a while, but when it went into retirement, Apple fans were left with no alternative. But someone has thought of a way around this.
Over at Steve's Blog, the anonymous author has come up with a solution to the Xserve's disappearance from the market. "Steve" has worked with vendors to develop custom 1U shelving, cooling built from car radiators, and four-in-one power cables that cram 160 Mac minis, as well as a managing Xserve, into a single enclosure.
This is quite the feat considering 160 machines would obviously all run hot, and we all know heat rises. Each machine sports a quad-core Core i7 processor and an SSD, giving the cluster 640 cores - double that of the competing Xserve cluster. There are power consumption savings, too, plus a 45-second, network-controlled reboot for the whole cluster.
The UK's University of Cambridge is looking to host a new center where experts will look into the possible dangers associated with advanced artificial intelligence (AI). It was founded by philosophy professor Huw Price, cosmology professor Martin Rees, and Skype co-founder Jaan Tallinn.
The University says that its Centre for the Study of Existential Risk is set to open on campus sometime next year, and while acknowledging the far-fetched nature of movie-style AI rebellions like HAL 9000's, Price has told the AP that "it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology", adding:
It tends to be regarded as a flakey concern, but given that we don't know how serious the risks are, that we don't know the time scale, dismissing the concerns is dangerous. What we're trying to do is to push it forward in the respectable scientific community.
Microsoft is looking to capture the energy in the gas produced by sewage and harness that otherwise wasted energy for data centers. Microsoft has just gotten approval to test out a new modular data center that will be powered by a biogas fuel cell. The fuel cell will be situated at a sewage plant in Wyoming.
"A person is consuming data and that person's waste is going to power the data center," Microsoft data center researcher Sean Parker told Technology Review. "It's been a mind shift...when we smell that methane at a water treatment plant, we realize we're smelling energy."
Supposedly this isn't just some project designed to make Microsoft appear "green" to the public. They actually seem genuinely interested in being able to co-locate smaller data centers around sources of biogas. This means data centers could find their way to your local sewage system, farm, or landfill.
The only issue I foresee with locating data centers at farms and other out-of-the-way locations is that getting a fast internet connection to them could be tough. They will definitely need a fiber line, which could delay the idea or increase its costs. If it proves successful, though, it will deal with all the excess methane being produced and might help fight global warming.
There's a new supercomputer on the block, Titan, and boy is it damn powerful. The Department of Energy's Oak Ridge National Laboratory flicked the 'on' switch on Titan, powering up the latest, fastest supercomputer.
Titan sports 299,008 CPU cores, 18,688 GPUs, and over 700 terabytes of memory - wow. Titan is capable of a peak speed of 27 quadrillion calculations per second (27 petaflops), ten times the power of its predecessor, and has now slotted into the spot of the world's fastest supercomputer.
Titan is based on the Cray XK7 system, with 18,688 compute nodes, each sporting an AMD Opteron 6274 processor and an NVIDIA Tesla K20 GPU accelerator. NVIDIA's GPUs do most of the computing for simulations, with the Opteron cores managing the GPUs.
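Those figures line up neatly with the one-CPU-one-GPU node design; a quick sanity check (straight arithmetic from the numbers above):

```python
# Quick sanity check on Titan's reported numbers.
cpu_cores, nodes, gpus = 299_008, 18_688, 18_688

print(cpu_cores // nodes)   # 16 Opteron cores per node
print(gpus // nodes)        # 1 Tesla K20 per node
print(27e15 / nodes)        # ~1.44 teraflops of peak per node
```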
Titan takes up 4,352 square feet of floorspace in ORNL's National Center for Computational Sciences. Someone has to say it, and I'll be the first - can it run Crysis? I know people are going to cringe at the thought of the old "can it run Crysis", but I still get a laugh out of it.
Researchers at the University of Southampton have done something us mere mortals could only dream of: build a supercomputer from Raspberry Pis and Lego.
They've called it Iridis-Pi - a very small 64-node cluster of Raspberry Pis running the Debian Wheezy distribution, linked through Ethernet. On their lonesome, Raspberry Pis are not that powerful, but as a cluster with 1TB of storage across its SD cards, that's another question.
Rackmounting the cluster was done in a very interesting way: team lead Simon Cox and his son James put the entire array into two towers of Lego. LEGO!!! There are even instructions so you can do this at home, if you've got the money for some Raspberry Pis and some spare Lego lying around. The entire system cost less than $4,026 to make, which is not too bad at all.
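Clusters like this are typically exercised with MPI, so as a hedged illustration, here is the kind of minimal message-passing "roll call" you could run across the Pis with mpi4py (assuming MPI and mpi4py are installed on every node; the hostfile name and 64-rank layout are illustrative):

```python
# Minimal mpi4py roll call across a small cluster. Run with something like:
#   mpiexec -n 64 -hostfile pi_nodes python rollcall.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's ID within the job
size = comm.Get_size()      # total number of processes

# Every rank reports its hostname; rank 0 gathers and prints the roster.
names = comm.gather(MPI.Get_processor_name(), root=0)
if rank == 0:
    print(f"{size} ranks online across: {sorted(set(names))}")
```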
IBM has bragging rights at the moment, with the world's fastest server chip clocking in at an incredible 5.5GHz. IBM's new zEnterprise EC12 mainframe cost the company $1 billion in development, and offers 25% more performance courtesy of their hexacore processors.
IBM's zEnterprise EC12 mainframes are available in multiple configurations, with as many as 120 cores available. All models will include transactional execution support, as well as Enhanced-DAT2, allowing 2GB page frames for more efficient utilization of huge quantities of RAM.
Another jewel of the newly-introduced zEnterprise EC12 mainframe is IBM's cryptographic co-processor, Crypto Express4S. It's quite special as it's tamper-proof, providing privacy when handling transactions and other similarly sensitive data. Crypto Express4S also offers multiple security configurations to support the requirements and needs of bankers and other organizations handling sensitive data, including the information on smart passports and ID cards.
The U.S. Department of Energy has granted NVIDIA a two-year, $12.4 million contract for the research and development of exascale computer technology. Scientists from the DoE and engineers from NVIDIA will work together to advance the field and produce an exascale computer that operates at a "reasonable" power level.
The focus of the joint effort will be on developing processor architecture, circuits, memory architecture, high-speed signalling, and programming models. The work done will involve thousands of throughput-optimized cores that will handle most of the heavy lifting, while some latency-optimized cores will do the residual serial computing. Seven DoE laboratories will guide NVIDIA as to what kind of scientific workloads the exascale computer will need to handle.
It was only last week that AMD were granted $12.6 million from the FastForward program for the same exascale research. The future is looking quite green indeed.
AMD has been awarded a $12.6 million grant under the FastForward program, where they'll use the funds to research next-generation supercomputing technology. FastForward is part of a joint effort between the National Nuclear Security Administration and the Department of Energy designed to advance research into exascale computers.
Exascale computers are going to open a can of whoop ass on current supercomputers like Blue Waters, installed at the University of Illinois at Urbana-Champaign, which maxes out at around a thousand trillion operations per second, otherwise known as a petaflop. Exascale systems are set to process data up to a thousand times faster than current-generation petascale supercomputers. We're talking about some serious power here.
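The prefixes make the gap easy to quantify (pure arithmetic):

```python
# Peta vs exa, in operations per second.
petaflop = 1e15
exaflop = 1e18

print(exaflop / petaflop)  # 1000.0 - exascale is a thousand petaflops
```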
AMD will split the $12.6 million, using $9.6 million to fund processor research and the remaining $3 million for memory advancements. This can only be good news, as AMD have been struggling for quite a while now. AMD have also previously worked with the U.S. government on supercomputer projects, with Oak Ridge National Laboratory's Jaguar supercomputer being AMD-powered. Upgrades for that system, known as Titan, are already under way. AMD have provided nearly 20,000 Opteron processors, worth close to $300,000.
The switch has been flicked on the most powerful GPU supercomputer, Emerald, at the Science and Technology Facilities Council's Rutherford Appleton Laboratory (RAL) in Oxfordshire, U.K. The two systems working together "will give businesses and academics unprecedented access to their super-fast processing capability".
The insane amount of power will allow researchers to run simulations ranging from health care to astrophysics. The supercomputer combo will be used to look at the Tamiflu drug's effect on swine flu, Square Kilometre Array project data, climate change modelling and 3G/4G communications modelling. The official launch of the e-Infrastructure South Consortium took place at the same time, coinciding with Emerald's unveiling.
Liquid cooling has been becoming more and more mainstream, thanks in part to closed-loop water cooling units. IBM didn't want supercomputers left out, so they designed a water cooling system for Europe's most powerful supercomputer. However, things become a bit tougher when you're dealing with 18,000 processors instead of one or two.
The supercomputer sports 18,000 Xeon processors along with 324TB of memory. Both the processors and the memory are liquid cooled in this new system. The genius of this system is that it cuts down on cooling costs for the supercomputer as well as on heating costs for the surrounding buildings.
It does this by heating the water to 45°C and then pumping it through an exchanger, which provides heat for the surrounding buildings. This water cooling system can, according to IBM, result in a 40% reduction in power usage, good for savings of up to 1 million euros. This is just the start of liquid cooling for IBM, as they want to put coolant pathways directly into the chip.
Just when you thought tape was dead, the National Center for Supercomputing Applications is getting ready to build a new storage infrastructure that will include 380 petabytes (PB) of magnetic tape capacity, backed by 25 petabytes of online disk storage made up of 17,000 SATA drives.
The new storage infrastructure is said to be built to support one of the world's most powerful supercomputers, Blue Waters. Blue Waters was commissioned by the National Science Foundation (NSF), and is expected to have a peak performance of 11.5 petaflops. The NCSA says that they're building the system to:
Predict the behavior of complex biological systems, understand how the cosmos evolved after the Big Bang, design new materials at the atomic level, predict the behavior of hurricanes and tornadoes, and simulate complex engineered systems like the power distribution system and airplanes and automobiles.
Microsoft has announced a victory in the MinuteSort test, claiming to have tripled the amount of data sorted by the previous record holder, a Yahoo team. MinuteSort is a test to see how much data can be sorted in a mere 60 seconds. As more data moves into the cloud, the ability to sort it quickly becomes a bigger and bigger issue.
According to Microsoft's post on TechNet, "In raw numbers, the team's system sorted 1401 gigabytes in just 60 seconds - using 1033 disks across 250 machines." Compared with what Yahoo ran, that is roughly "one-sixth of the hardware resources", yet it sorted around three times as much data - the Microsoft solution is far more efficient.
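Working those reported numbers through (straight arithmetic from the quoted figures) shows how much throughput that is per resource:

```python
# Per-resource throughput implied by Microsoft's MinuteSort figures.
data_gb, seconds = 1401, 60
machines, disks = 250, 1033

aggregate = data_gb / seconds          # ~23.4 GB/s aggregate sort rate
print(round(aggregate, 1))
print(round(aggregate / machines, 3))  # ~0.093 GB/s per machine
print(round(aggregate / disks, 3))     # ~0.023 GB/s per disk
```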
Additionally, it's interesting to note that Microsoft Research didn't use Hadoop as one might expect. Instead, the researchers at Microsoft created a new system called "Flat Datacenter Storage." The "flat" portion is the important part of the system. Microsoft explains:
[Microsoft Research's Jeremy] Elson compares FDS to an organizational chart. In a hierarchical company, employees report to a superior, then to another superior, and so on. In a "flat" organization, they basically report to everyone, and vice versa.