HP has unveiled its new supercomputer, simply called The Machine, which was first announced back in 2014. HP is aiming to smash all previous technology in existence with The Machine: rather than being built around traditional processors, the new supercomputer puts a massive pool of memory at the heart of the system for its brute speed.
HP Enterprise explains that The Machine is up to 8000x faster than traditional machines (you'd freakin' hope so), but it's still years away from being released. HP will be aiming at high-end servers for companies like Google and Facebook, with the architecture itself powered by memory-driven computing. I hope we see this style of memory-driven design trickle down to consumer PCs one day.
The Machine uses photonics to transmit data using light, and thanks to its massive, super-fast memory pool, it can really crank through large datasets. In a conventional system, things slow down whenever data has to be shuffled between processors, but HP has designed around that bottleneck by putting memory, rather than the processor, at the center of the architecture.
NVIDIA just continues to smash the GPU and supercomputing game, announcing its newest DGX SATURNV supercomputer, a system designed from the ground up for AI workloads like building smarter cars and designing next-generation GPUs.
The new DGX SATURNV is ranked 28th on the Top500 list of supercomputers, but thanks to its Tesla P100-powered DGX-1 units, it's the most efficient supercomputer in the world. Until now, the most efficient machine on the Top500 list sat at 6.67 GigaFlops/Watt, but the new NVIDIA DGX SATURNV is capable of a massive 9.46 GigaFlops/Watt, a huge 42% improvement.
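Those efficiency figures check out; here's a quick back-of-the-envelope calculation in Python, using only the numbers quoted above:

```python
# GigaFlops per Watt, as quoted for the Top500 efficiency comparison.
previous_best = 6.67  # most efficient machine on the list until now
saturnv = 9.46        # NVIDIA DGX SATURNV

improvement_pct = (saturnv / previous_best - 1) * 100
print(f"SATURNV is about {improvement_pct:.0f}% more efficient")
```

9.46 / 6.67 works out to roughly 1.42, which is where the 42% figure comes from.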
Inside of the NVIDIA DGX-1 we have:
- Up to 170 teraflops of half-precision (FP16) peak performance
- Eight Tesla P100 GPU accelerators, 16GB memory per GPU
- NVLink Hybrid Cube Mesh
- Dual 20-core Intel Xeon E5-2698 v4 CPUs (Broadwell-EP, 2.2GHz)
- 7TB SSD DL Cache
- Dual 10GbE, Quad InfiniBand 100Gb networking
- 3U - 3200W
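That 170 teraflops figure lines up with the Tesla P100's per-GPU half-precision peak (roughly 21.2 TFLOPS in SXM2 form, a figure from NVIDIA's spec sheet rather than the list above) multiplied across the eight accelerators:

```python
# Sanity check: DGX-1 FP16 peak = per-GPU peak x number of GPUs.
fp16_per_p100 = 21.2  # TFLOPS half-precision, Tesla P100 (SXM2)
gpus = 8
total_tflops = fp16_per_p100 * gpus
print(f"~{total_tflops:.0f} TFLOPS FP16")
```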
From now on, don't mess with Canadian PM Justin Trudeau when it comes to quantum computing - he's a total boss on the subject, smacking down a "sassy reporter" who didn't expect him to know much about it.
Well, he's actually quite knowledgeable when it comes to quantum computing. When the reporter teased, "I was going to ask you to explain quantum computing, but...", Trudeau was quick off the mark, replying with: "Very simple: normal computers work by...".
The crowd laughed, briefly interrupting him, but he then continued with a short explanation of quantum computing - covering more of the subject than the reporter expected him to know.
This week during GTC we saw NVIDIA shift its focus from primarily consumer GPUs to professional technology aimed squarely at the evolution of AI. Pascal, a vastly different and incredibly powerful architecture, is perfect for the ever-evolving HPC field. At the OpenPOWER Summit, held this week alongside GTC, IBM announced its newest server, which pairs Tesla P100 compute accelerators with POWER8 processors.
The big draw is the use of NVLink, the 40GB/s data link that connects the CPU directly to the GPU for quick communication and data transfer. It's this innovation that could fuel faster HPC applications and, even better, more nimble AI that can absorb vast amounts of information more quickly than before. The new server architecture will require applications to be ported over, but IBM and NVIDIA are both willing to assist in that regard to make the transition easier.
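To put that 40GB/s link in perspective, here's a rough, hypothetical comparison against PCIe 3.0 x16 (~16GB/s of usable bandwidth - an assumed figure for illustration, not taken from IBM's announcement):

```python
# Time to move a 1GB chunk of data from CPU to GPU over each interconnect.
payload_gb = 1.0
nvlink_gbps = 40.0   # NVLink CPU-to-GPU link, as announced
pcie_gbps = 16.0     # PCIe 3.0 x16, assumed for comparison

nvlink_ms = payload_gb / nvlink_gbps * 1000
pcie_ms = payload_gb / pcie_gbps * 1000
print(f"NVLink: {nvlink_ms:.0f} ms, PCIe 3.0 x16: {pcie_ms:.1f} ms")
```

For workloads that constantly stream data to the accelerators, shaving each transfer from ~62ms down to ~25ms per gigabyte adds up quickly.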
IBM's Watson division will also participate in the design and implementation of the new server platform, and might even end up incorporating the Tesla P100 into its own design for an upgraded Watson supercomputer. The initial specifications call for cramming four of those compute cards into the server, alongside four POWER8 12-core/96-thread CPUs running at 3-3.5GHz and up to 1TB of DDR4-2400 RAM. The implications for AI, let alone any other type of compute-heavy load, are tremendous. This could very well put the PPC architecture back on the map in a big way, especially with IBM and NVIDIA assisting with application porting. These second-generation POWER8 servers are just a stepping stone to the next-generation POWER9 architecture, which is just around the corner.
Tyan announced at the OpenPOWER Summit this past week that it will support IBM's OpenPOWER initiative by offering 1U POWER8-based servers for the HPC and in-memory application markets. POWER processors might not be as prolific as Xeons, but Tyan is of the mind that variety is the spice of life, and that there's a market for these processors that could well be untapped.
They're going to offer a total of three configurations of their new GT75-BP012 server platform. This particular platform is a single-CPU design that allows a massive amount of memory to be installed, albeit at slightly slower DDR3L speeds. Tyan is positioning these for niche markets that may not have heavy processing requirements but need the extra RAM capacity to keep more data persistent in memory, and thus run faster. It'll be difficult to compete with the price-performance ratio of typical, and even lower-cost, Xeons, but with far more DRAM on tap, these could be useful in some markets.
The maximum configuration will have a single 10-core/80-thread POWER8 CPU running at 2.095GHz with 1024GB of DDR3L-1600 RAM, four 10GbE ports, four GbE ports and one PCIe expansion slot that will actually support NVIDIA's forthcoming Pascal P100 GPU. These servers also support IBM's own Centaur memory buffer chips, which allow for even more in-memory buffer capacity at DDR3 speeds. The low-end configuration will have an 8-core/64-thread POWER8 CPU running at 2.328GHz with the same 1TB DDR3L RAM limit. A 750W PSU will power the servers.
There's no information on what these servers will cost, though Tyan expects them to be available sometime by the end of the month.
It looks like The Matrix and the Terminator movies weren't enough to scare us off building toward an AI takeover: Facebook has just announced plans to open source its Open Rack-compatible hardware design for AI computing, codenamed Big Sur.
Facebook's Kevin Lee and Serkan Piantino explained that Big Sur was built to use eight high-performance GPUs consuming 300W each, running on NVIDIA's Tesla Accelerated Computing Platform. They claim Big Sur is twice as fast as the previous generation, while still using off-the-shelf components and an open design.
The increased speed lets Facebook train neural networks twice as fast, as well as explore networks twice as large as before. On top of that, training can be distributed across the eight GPUs, scaling the size and speed of the networks by another factor of two.
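Distributing training across multiple GPUs like this is classic data parallelism: each GPU computes gradients on its own shard of the batch, and the shards are averaged before the weight update. Here's a minimal pure-Python sketch of the idea (illustrative only - not Facebook's actual training code, and the function names are made up):

```python
# Each worker computes a gradient on its shard; the update averages them.
def sharded_gradients(batch, n_workers, grad_fn):
    shard = len(batch) // n_workers
    return [grad_fn(batch[i * shard:(i + 1) * shard])
            for i in range(n_workers)]

def averaged_update(weight, grads, lr=0.1):
    return weight - lr * sum(grads) / len(grads)

# Toy model: loss (w - x)^2 / 2, so the gradient at w is (w - x).
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
w = 0.0
grads = sharded_gradients(
    data, n_workers=8,
    grad_fn=lambda shard: sum(w - x for x in shard) / len(shard))
w = averaged_update(w, grads)  # w moves toward the data mean
```

Because the averaged gradient equals the gradient over the full batch, adding workers grows the effective batch size without changing the math - which is how eight GPUs buy that extra factor of two.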
President Obama doesn't have much longer in office, but one of his last executive orders is that the United States build the world's fastest supercomputer by 2025.
The National Strategic Computing Initiative has been kicked off to get the US building an exascale-capable machine that would lead the world in the technological arms race. The new system will be developed by various arms of the federal government, then put to work accelerating research in a range of fields. One area would be helping NASA "better understand turbulence for aircraft design", reports Engadget.
As it stands, the US is behind both China and Japan when it comes to supercomputer speed. China's Tianhe-2 has been the world's fastest supercomputer for nearly two and a half years now, but with the federal government behind it, and I'm sure a boatload of taxpayers' money, the US will have Skynet online in 2025.
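For a sense of the jump involved: an exaflop is 1,000 petaflops, while Tianhe-2's Linpack score sits at 33.86 petaflops, so the 2025 target is roughly a 30x leap over today's champion:

```python
# One exaflop versus Tianhe-2's Linpack result, both in petaflops.
exaflop_pf = 1000.0
tianhe2_pf = 33.86
print(f"~{exaflop_pf / tianhe2_pf:.0f}x faster than Tianhe-2")
```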
This massive stockpile of components will all be slotted nicely together to cool the first of the NNSA's Advanced Simulation and Computing Program systems - the Trinity supercomputer.
All of this gear makes up a 'warm-water cooling' system, an energy-saving alternative built to handle some of the world's most advanced tech.
An explanation from the Los Alamos National Laboratory reads: "The Trinity supercomputer is the first of the NNSA's Advanced Simulation and Computing program's advanced technology systems. Once installed, Trinity will be the first platform large and fast enough to begin to accommodate finely resolved 3D calculations for full-scale, end-to-end weapons calculations. But the installation of such a powerful supercomputer is no small task." But wait, there's more! "In order to accommodate Trinity, the SCC first had to undergo a series of major mechanical and electrical infrastructure upgrades. Because energy conservation is a priority at Los Alamos, these upgrades included a shift to warm water cooling technology (which will result in a major energy savings), as well as a decrease in the use of city/well water for cooling towers."
OCZ Storage Solutions has just announced the release of their Vertex 460A. The original Vertex series has been a stellar product line with a history spanning back to the original version with the first-gen Indilinx Barefoot controller. The new version leverages Toshiba's latest A19 MLC NAND flash; the A19nm process geometry is the second generation of Toshiba's 19nm MLC. The new version also features the Barefoot 3 controller and sequential speeds of 545/525 MB/s read/write (480GB model). Random performance tops out at 95,000/90,000 read/write IOPS, respectively, though performance varies depending upon capacity, as noted in the graphic below.
The Vertex 460A features an endurance rating of 20GB of writes per day over the three-year warranty period. OCZ is providing their new ShieldPlus warranty, which provides advance shipping and covers return shipping costs if there is the need for an RMA. The new Vertex also features Acronis True Image for cloning an existing installation to the SSD, and a 3.5" desktop adaptor. OCZ recently launched a new online shop, and we expect units to be available there shortly.
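Assuming that 20GB figure is rated per day, the way SSD endurance is usually expressed, the total writes covered over the warranty work out like this:

```python
# Total host writes covered, assuming 20GB/day over the 3-year warranty.
gb_per_day = 20
years = 3
total_tb = gb_per_day * 365 * years / 1000  # decimal terabytes
print(f"~{total_tb:.1f} TB of total writes")
```

That's roughly 22TB of writes, a comfortable margin for a consumer drive of this class.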
K, one of the world's fastest supercomputers, based in Japan, is capable of 8.162 petaflops of performance thanks to its insane 82,944 processors. The supercomputer can drive nearly 10^16 operations per second, but even then, it is still hard pressed to compete with the brain in your head reading this article.
It took K around 40 minutes to simulate just a single second of human brain activity, even with all of its performance prowess. The simulation involved 1.73 billion virtual nerve cells connected by 10.4 trillion virtual synapses, with each virtual synapse taking up 24 bytes of memory.
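Those figures give a sense of scale on their own: the synapse state alone approaches a quarter petabyte, and 40 minutes per simulated second is a 2,400x slowdown versus real time:

```python
# Memory footprint and slowdown, from the figures quoted above.
synapses = 10.4e12        # virtual synapses
bytes_each = 24           # bytes of state per synapse
memory_tb = synapses * bytes_each / 1e12  # decimal terabytes

slowdown = 40 * 60        # seconds of wall time per simulated second
print(f"~{memory_tb:.0f} TB of synapse state, "
      f"{slowdown}x slower than real time")
```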
On the software side, the researchers used NEST, a simulator for spiking neural network models that focuses on the dynamics, size and structure of neural systems rather than the exact morphology of individual neurons.