I thought that Skynet would open up in the US, but it looks like the Terminators will want some bacon and maple syrup instead, with Google announcing its new DeepMind AI research lab is open for... well... business I guess, in Canada.
DeepMind has announced that its new AI research lab is opening up in Edmonton, Alberta later this month, with three University of Alberta computer science professors (Richard Sutton, Michael Bowling and Patrick Pilarski) leading the group. They will be joined by seven other AI leaders, too. The big question is: why aren't Google's new AI digs opening up on US soil? Recode reports that both familiarity and politics played a part: over a dozen University of Alberta grads already work at DeepMind, and Sutton was one of the first advisors to join the AI lab.
The Canadian government is also more willing to invest in AI research, cozying up to AI scientists to the tune of $125 million in new funding on top of existing programs. On US soil, the Trump administration is moving away from scientific research, proposing major funding cuts.
It appears that Skynet wants us all on Big Pharma drugs, with British pharmaceutical giant GlaxoSmithKline (GSK) looking to AI to design better, more efficient - and, I'm sure, more profitable - drugs.
GSK announced a new partnership with Exscientia, a British company that specializes in drug design. The two will work together to use Exscientia's AI-enabled platform to discover novel, high-quality drug candidate molecules. GSK has tasked Exscientia with working on 10 specific disease-related targets, and if it hits those targets, GSK will write a cheque for $43 million in research payments.
The partnership will see the companies tapping into the power of supercomputers and machine learning to predict how new compounds will behave, and speeding that process up with crazy amounts of AI-assisted computing power will save the company both time and money. Human researchers are nowhere near as efficient as AI and supercomputers crunching away every second of every day on a billion things at once, which could mark a very big change for medicine.
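The announcement doesn't describe how Exscientia's platform actually works, but the core idea - train a model on known compounds, then rank huge pools of virtual candidates by predicted activity instead of testing each one in the lab - can be sketched in a few lines. Everything below (the fingerprints, the activity scores, the model choice) is a made-up illustration, not Exscientia's method:

```python
# Toy illustration of ML-driven compound screening: learn from known
# molecules, then score a large pool of virtual candidates.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Hypothetical data: 1,000 known compounds, each described by a
# 128-bit molecular fingerprint, with a measured activity score.
known_fps = rng.integers(0, 2, size=(1000, 128))
known_activity = rng.random(1000)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(known_fps, known_activity)

# Score 100,000 virtual candidates in seconds - the kind of
# throughput no wet-lab screening effort can match.
candidates = rng.integers(0, 2, size=(100_000, 128))
scores = model.predict(candidates)
top_hits = np.argsort(scores)[-10:]  # the 10 most promising molecules
print("Best candidate indices:", top_hits)
```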
You'd think that the US would lead the supercomputer race, but it's China that is dominating right now. According to the latest TOP500 ranking of the world's most powerful supercomputers, the US has fallen further behind than at any point since 1996.
The list is updated twice a year and ranks supercomputers by overall computing power. China holds the top two spots, with its Sunway TaihuLight pushing an incredible 93 petaflops and the Tianhe-2 capable of 33.9 petaflops. The best the US can muster is the Department of Energy's Titan supercomputer, with just 17.6 petaflops in comparison.
The newly upgraded Swiss National Supercomputing Centre machine also packs some punch on the new list, doubling its previous 9.8 petaflops to 19.6 petaflops - enough to beat Titan. That knocks the US out of the top three supercomputer rankings and into fourth place for the first time in over 20 years.
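With just the numbers quoted above, you can rebuild the top of the table yourself - a rough sketch, since the real TOP500 ranks machines by measured LINPACK performance rather than the rounded figures here:

```python
# Rank the systems mentioned above by petaflops, the way the TOP500
# orders machines by measured LINPACK performance.
systems = {
    "Sunway TaihuLight (China)": 93.0,
    "Tianhe-2 (China)": 33.9,
    "Piz Daint (Switzerland, CSCS)": 19.6,
    "Titan (US, DOE)": 17.6,
}

ranked = sorted(systems.items(), key=lambda kv: kv[1], reverse=True)
for rank, (name, pflops) in enumerate(ranked, start=1):
    print(f"#{rank}: {name} - {pflops} petaflops")
# Titan lands at #4: the first time the US has been outside the
# top three in over 20 years.
```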
But don't worry, the US Department of Energy is building an all-new IBM machine dubbed Summit, which will push a mind-blowing 200 petaflops. Summit will be online next year, offering more than double the performance of the current fastest supercomputer in the world.
China is working on the next generation of supercomputers, with plans to have a prototype exascale computer by the end of 2017. The country wants to be the first to build a supercomputer capable of a billion billion calculations per second.
If it can pull this off, China would cement its spot at #1 in the world of supercomputing - leapfrogging its own Sunway TaihuLight machine, the current world's fastest, which came to life in June 2016.
China became a country to watch in supercomputing by building TaihuLight with locally made chips rather than products from US companies like AMD, Intel, or NVIDIA. Exascale computers, on the other hand, are orders of magnitude more powerful - capable of 1 quintillion calculations per second (a billion billion - so, like, really fast).
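To put "a billion billion" in perspective, the unit math is simple enough to do in a couple of lines:

```python
# How much faster would an exascale machine be than today's champion?
PETAFLOP = 1e15  # calculations per second
EXAFLOP = 1e18   # a quintillion (a billion billion) per second

taihulight = 93 * PETAFLOP  # current #1, Sunway TaihuLight
exascale = 1 * EXAFLOP      # China's next-generation goal

print(f"Exascale vs TaihuLight: {exascale / taihulight:.1f}x faster")
# -> roughly 10.8x the fastest supercomputer in the world today
```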
HP has unveiled its new supercomputer, simply called The Machine, something first announced back in 2014. HP is aiming to smash all previous technology in existence with The Machine, as its new supercomputer doesn't rely on traditional processors for its brute speed - instead, it utilizes memory.
HP Enterprise explains that The Machine is up to 8000x faster than traditional machines (you'd freakin' hope so), but it's still years away from being released. HP will be aiming at high-end servers for companies like Google and Facebook, with the architecture itself built around memory-driven computing. I hope we'll see that memory-driven approach trickle down to everyday PCs one day.
The Machine uses photonics to transmit data with light, and thanks to its massive, super-fast memory pool, The Machine can really crank through big datasets. Normally, things slow down when data needs to be transferred between processors - but HP has planned ahead, using that shared memory pool to sidestep the bottleneck entirely.
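HPE hasn't shared code for any of this, but the core idea - keep one huge dataset in a shared memory pool that every processor reads in place, instead of copying it between private memories - can be caricatured in ordinary Python. The sizes and worker count below are arbitrary, so treat this strictly as an illustration of where the time goes:

```python
# Caricature of memory-driven computing: many workers share one big
# in-memory dataset instead of each receiving its own private copy.
import time
import numpy as np

data = np.random.rand(50_000_000)  # ~400MB dataset, held once in RAM

# Copy-based approach: each of 4 workers gets its own copy before
# working on it (the processor-to-processor transfer The Machine
# is designed to avoid).
start = time.perf_counter()
copy_results = [np.sum(data.copy()) for _ in range(4)]
print(f"copy per worker: {time.perf_counter() - start:.2f}s")

# Memory-driven approach: every worker reads the same shared pool
# in place - no transfer step at all.
start = time.perf_counter()
shared_results = [np.sum(data) for _ in range(4)]
print(f"shared pool:     {time.perf_counter() - start:.2f}s")
```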
NVIDIA just continues to smash the GPU and supercomputing game, announcing its newest DGX SATURNV supercomputer, designed from the ground up for work on smarter cars and next-generation GPUs.
The new DGX SATURNV is ranked 28th on the Top500 list of supercomputers, but thanks to its Tesla P100-powered DGX-1 units, it's the most efficient supercomputer in the world. Until now, the most efficient machine on the Top500 list managed 6.67 gigaflops per watt, but the new NVIDIA DGX SATURNV is capable of a massive 9.46 gigaflops per watt - a huge 42% improvement.
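That 42% figure checks out with simple division:

```python
# Verify the efficiency jump quoted above.
previous_best = 6.67  # gigaflops per watt, prior Top500 efficiency leader
saturnv = 9.46        # gigaflops per watt, NVIDIA DGX SATURNV

improvement = (saturnv / previous_best - 1) * 100
print(f"SATURNV is {improvement:.0f}% more efficient")  # -> 42%
```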
Inside of the NVIDIA DGX-1 we have:
- Up to 170 teraflops of half-precision (FP16) peak performance (see the quick math after this list)
- Eight Tesla P100 GPU accelerators, 16GB memory per GPU
- NVLink Hybrid Cube Mesh
- Dual 20-core Broadwell-E "Xeon E5-2698 v4" CPUs (2.2GHz)
- 7TB SSD DL Cache
- Dual 10GbE, Quad InfiniBand 100Gb networking
- 3U - 3200W
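That 170 teraflops headline number isn't magic - it falls straight out of the GPU count, assuming the Tesla P100's roughly 21.2 teraflops of peak FP16 per card (a spec from NVIDIA's P100 materials, not from this list):

```python
# Sanity-check the DGX-1's headline FP16 figure.
P100_FP16_TFLOPS = 21.2  # peak half-precision per Tesla P100 (NVLink version)
gpus = 8

print(f"Peak FP16: {gpus * P100_FP16_TFLOPS:.0f} teraflops")  # -> ~170
```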
From now on, don't mess with Canadian PM Justin Trudeau - he's a total boss when it comes to quantum computing, smacking down a "sassy reporter" who didn't expect him to know much about the subject.
Well, he's actually quite knowledgeable on the topic. When the reporter quipped, "I was going to ask you to explain quantum computing, but...", Trudeau was quick off the mark, replying: "Very simple: normal computers work by...".
The crowd laughed, interrupting him briefly, but he then continued with a short explanation of quantum computing - covering more of the subject than the reporter thought he'd know.
This week during GTC we saw NVIDIA shift its focus from primarily consumer GPUs to professional technology aimed squarely at the evolution of AI. Pascal, a vastly different and incredibly powerful architecture, is perfect for the ever-evolving HPC field. At the OpenPOWER Summit, which ran this week alongside GTC, IBM announced its newest server, pairing Tesla P100 compute accelerators with POWER8 processors.
The big draw is the use of NVLink, the 40GBps data link running directly from the CPU to the GPU that allows for quick communication and transfer of data. It's this innovation that might help fuel faster HPC applications and even better, more nimble AI that can absorb vast amounts of information more quickly than before. Applications will need to be ported over to the new server architecture, but IBM and NVIDIA are both willing to assist in that regard, to make the transition easier.
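A quick back-of-the-envelope shows why that link matters. The 40GBps figure is from the announcement; the PCIe 3.0 x16 number used for comparison (roughly 16GBps) is my own assumption:

```python
# Time to move a working set from CPU memory to the GPU, comparing
# NVLink's quoted 40GB/s against a typical PCIe 3.0 x16 link.
dataset_gb = 512     # hypothetical in-memory working set

nvlink_gbps = 40     # from IBM/NVIDIA's announcement
pcie_gbps = 16       # assumed practical peak for PCIe 3.0 x16

print(f"NVLink: {dataset_gb / nvlink_gbps:.1f}s")  # -> 12.8s
print(f"PCIe:   {dataset_gb / pcie_gbps:.1f}s")    # -> 32.0s
```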
IBM's Watson division will also be participating in the design and implementation of the new server platform, and might even end up incorporating the Tesla P100 into its own design for an upgraded Watson supercomputer. The initial specifications call for cramming four of those compute cards into the server along with four POWER8 12-core/96-thread CPUs operating at 3-3.5GHz, combined with up to 1TB of DDR4-2400 RAM. The implications for AI, let alone any other type of compute-heavy load, are tremendous. This could very well put the PPC architecture back on the map in a big way, especially with IBM and NVIDIA assisting in porting applications over. These second-generation POWER8 servers are just a stepping stone to the next-generation POWER9 architecture, which is just around the corner.
Tyan announced at the OpenPOWER Summit this past week that it's going to start supporting IBM's OpenPOWER initiative by offering 1U POWER8-based servers for the HPC and in-memory application markets. POWER processors might not be as prolific as Xeons, but Tyan is of the mind that variety is the spice of life, and that there's a market for these processors that could well be untapped.
They're going to offer a total of three different configurations of their new GT75-BP012 server platform. This particular platform is a single-CPU design that allows for a massive amount of memory to be installed, though at slightly slower DDR3L speeds. Tyan is positioning these for niche markets that don't need heavy processing power but do need that extra RAM capacity to keep more data resident in memory, so workloads run faster as a result. It'll be difficult to compete with the price-performance ratio of typical, and even lower-cost, Xeons, but with far more DRAM on tap, these could be useful in some markets.
The maximum configuration will have a single 10-core/80-thread POWER8 CPU running at 2.095GHz with 1024GB of DDR3L-1600MHz RAM, four 10GbE ports, four GbE ports and one PCIe expansion slot that will actually support NVIDIA's forthcoming Pascal-based P100 GPU. These also support IBM's own Centaur memory buffer chips, which allow for even more in-memory buffer capacity at DDR3 speeds. The low-end model will have an 8-core/64-thread POWER8 CPU running at 2.328GHz with the same 1TB DDR3L RAM limit. A 750W PSU will power the servers.
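Those eyebrow-raising thread counts aren't typos - POWER8 runs eight hardware threads per core (SMT8), so every core shows up as eight threads:

```python
# POWER8's eight-way simultaneous multithreading (SMT8) explains the
# core/thread figures quoted in this roundup.
SMT = 8
for cores in (12, 10, 8):
    print(f"{cores}-core POWER8 -> {cores * SMT} threads")
# 12 -> 96 (IBM's server), 10 -> 80 (Tyan's max), 8 -> 64 (Tyan's low-end)
```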
There's no information on what these servers will cost, though Tyan expects them to be available by the end of the month.
It looks like The Matrix and the Terminator movies weren't enough to make us stop trying to create an AI takeover, with Facebook announcing plans to open source its Open Rack-compatible hardware design for AI computing - codenamed Big Sur.
Facebook's Kevin Lee and Serkan Piantino explained that Big Sur was built around 8 x high-performance GPUs consuming 300W each, using NVIDIA's Tesla Accelerated Computing Platform. They claim Big Sur is twice as fast as Facebook's previous generation of hardware, which was built from off-the-shelf components and designs.
The increased speed allows Facebook to train neural networks twice as fast, as well as explore networks that are twice as large as before. On top of that, training can be distributed across the 8 x GPUs, scaling the size and speed of the networks by another factor of two.
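Facebook's post doesn't include code, but "distributing training across the GPUs" is classic data parallelism: split each batch across the cards, compute gradients locally, average them, and have every GPU apply the same update. Here's a toy numpy version with eight simulated workers - the model, data, and learning rate are all made up for illustration:

```python
# Toy data-parallel training step across 8 simulated "GPUs": each
# worker computes gradients on its shard of the batch, then all
# workers apply the averaged gradient.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(16)  # tiny linear model
X = rng.standard_normal((256, 16))
y = rng.standard_normal(256)

def gradient(w, xb, yb):
    # Mean-squared-error gradient for the linear model.
    return 2 * xb.T @ (xb @ w - yb) / len(yb)

n_gpus, lr = 8, 0.01
for step in range(100):
    # Split the batch into 8 shards, one per "GPU".
    grads = [gradient(weights, xs, ys)
             for xs, ys in zip(np.array_split(X, n_gpus),
                               np.array_split(y, n_gpus))]
    weights -= lr * np.mean(grads, axis=0)  # everyone applies the average

print(f"final loss: {np.mean((X @ weights - y) ** 2):.3f}")
```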