Intel is hard at work on the research and development side of its upcoming Nervana Neural Network Processor, a new chip that Intel says will blow away any general-purpose processor for machine learning and AI applications.
Carey Kloss, Vice President of Hardware for Intel's Artificial Intelligence Products Group, has provided an update on the progress Intel has made on the NNP.
What does a neural network processor (NNP) have to do? Training a machine using neural networks requires a gigantic amount of memory and arithmetic operations in order to generate useful output. Then there's scaling capability, power consumption, and maximum utilization, which are the cornerstones of Intel's Nervana design.
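To give a rough sense of why training is so demanding, here's a back-of-the-envelope sketch (my own illustration, not Intel's numbers) of the memory and arithmetic cost of just one fully-connected layer:

```python
def dense_layer_cost(batch, n_in, n_out, bytes_per_param=4):
    """Rough memory and arithmetic cost of one fully-connected layer.

    Illustrative only: real training also stores activations, gradients,
    and optimizer state, which multiplies the memory bill several times.
    """
    params = n_in * n_out + n_out         # weights + biases
    mem_bytes = params * bytes_per_param  # FP32 weight storage
    flops = 2 * batch * n_in * n_out      # one multiply-accumulate = 2 ops
    return params, mem_bytes, flops

# A single modest 4096x4096 layer on a batch of 128 samples:
params, mem, flops = dense_layer_cost(128, 4096, 4096)
print(params)  # 16781312 -> ~16.8 million parameters
print(mem)     # 67125248 -> ~67 MB just for the weights
print(flops)   # 4294967296 -> ~4.3 billion FLOPs per forward pass
```

Multiply that across dozens of layers, millions of training steps, and the backward pass, and it's clear why memory bandwidth and utilization are the battleground.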
During the recent Amazon Web Services re:Invent conference in Las Vegas on Wednesday, AWS boss Andy Jassy announced that the company will be enabling a suite of AI-powered tools. Jassy told the audience of over 40,000 people: "We have to solve the problem of making [AI] accessible for everyday developers and scientists".
These new cloud-based AI tools will be capable of measuring sentiment, tracking people in live video feeds, translating languages, and much more. The list of new AI-enabled services AWS announced is scary good, so check them out:
- The new Amazon Rekognition Video tool is able to recognize and track people in real-time video feeds, giving it certain advantages over video recognition tools from cloud rivals Google and Microsoft.
- The Amazon Transcribe system can transcribe audio recordings of people speaking into clean text files.
- The Amazon Comprehend service can pick up on positive or negative sentiment and certain people, places and phrases in text.
- AWS also unveiled Amazon Translate, a service for translating text from one language into another, which Google has provided to developers for years. CNBC first reported that AWS was working on a translation tool in June.
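For a taste of what calling one of these services looks like, here's a minimal sketch of a Comprehend sentiment request. The helper function is my own wrapping for illustration; only the `detect_sentiment` call and its `Text`/`LanguageCode` parameters come from the AWS boto3 SDK:

```python
def build_sentiment_request(text, language_code="en"):
    """Builds the payload for Amazon Comprehend's DetectSentiment call.

    Hypothetical helper for illustration; with boto3 you'd pass these
    fields straight through, e.g.:
        client = boto3.client("comprehend")
        response = client.detect_sentiment(**payload)
        response["Sentiment"]  # "POSITIVE", "NEGATIVE", "NEUTRAL", or "MIXED"
    """
    if not text:
        raise ValueError("Comprehend needs non-empty input text")
    return {"Text": text, "LanguageCode": language_code}

payload = build_sentiment_request("The new AWS AI services are scary good.")
print(payload["LanguageCode"])  # en
```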
I thought that Skynet would open up in the US, but it looks like the Terminators will want some bacon and maple syrup instead, with Google announcing its new DeepMind AI research lab is open for... well... business I guess, in Canada.
DeepMind has announced that its new AI research lab is opening up in Edmonton, Alberta later this month, with three University of Alberta computer science professors (Richard Sutton, Michael Bowling and Patrick Pilarski) leading the group. They will be joined by seven other AI leaders, too. The big question is: why isn't Google's new AI digs opening up on US soil? Recode reports that familiarity and political considerations played a part: over a dozen University of Alberta grads work at DeepMind, and Sutton was one of the first advisors to join the AI lab.
The Canadian government is also more willing to invest in AI research, cozying up to AI scientists to the tune of $125 million in funding - on top of existing funding. On US soil, the Trump administration is shying away from scientific research, proposing major funding cuts.
It appears that Skynet wants us all on Big Pharma drugs, with British pharmaceutical giant GlaxoSmithKline (GSK) looking to AI to design better, more efficient - and I'm sure more profitable - drugs.
GSK announced a new partnership with Exscientia, a British company that specializes in drug design. The two will work together to use Exscientia's AI-enabled platform to discover new, high-quality drug candidate molecules. GSK has tasked Exscientia to work on 10 specific disease-related targets, and if they hit those targets, GSK will write a cheque for $43 million in research payments.
The partnership will see the companies tapping into the power of supercomputers and machine learning to predict how new compounds will behave. Speeding this process up with crazy amounts of AI-aided computing power will save the company both time and money. Human researchers are nowhere near as efficient as AI and supercomputers working every second of every day with a billion things going on at once, which could mark a very big change for medicine.
You'd think that the US would lead the supercomputer race, but it's China that is dominating right now. According to the latest TOP500 ranking of the world's most powerful supercomputers, the US presence on the list has fallen to levels not seen since 1996.
The list is updated twice a year and ranks supercomputers by overall computing power. China has the top two spots, with its Sunway TaihuLight pushing an incredible 93 petaflops, while the Tianhe-2 is capable of 33.9 petaflops. The fastest US entry is the Department of Energy's Titan supercomputer, with just 17.6 petaflops of computing power in comparison.
The newly upgraded Swiss National Supercomputing Centre also has some power on the new list, pushing out 19.6 petaflops - up from its previous 9.8 petaflops, and beating the US's Titan. The US is no longer in the top three supercomputer rankings, falling into fourth place for the first time in over 20 years.
But don't worry, the US Department of Energy is building an all-new IBM machine dubbed Summit, which will push a mind-blowing 200 petaflops. Summit will be online next year, offering double the performance of the current #1 fastest supercomputer in the world.
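The gap those petaflop figures describe is easy to check with a little arithmetic (peak figures as quoted above; a petaflop is 10^15 floating-point operations per second):

```python
# Peak performance figures from the TOP500 list discussed above, in petaflops
taihulight = 93.0  # Sunway TaihuLight, current #1
tianhe2 = 33.9     # Tianhe-2, #2
titan = 17.6       # Titan, fastest US entry
summit = 200.0     # planned IBM Summit

print(round(taihulight / titan, 1))   # 5.3 -> TaihuLight is over 5x Titan
print(round(summit / taihulight, 2))  # 2.15 -> Summit would roughly double the current #1
```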
China is working on the next generation of supercomputers, with plans to have a prototype exascale computer by the end of 2017. The country wants to be the first to build a supercomputer capable of a billion, billion calculations per second.
If it can pull this off, China would cement its place at #1 in the world of supercomputing - far beyond the current world's fastest supercomputer, its own Sunway TaihuLight machine, which came to life in June 2016.
China became the country to watch for supercomputers when it built TaihuLight using locally made chips - versus products from US companies like AMD, Intel, or NVIDIA. Exascale computers, on the other hand, are orders of magnitude more powerful - capable of 1 quintillion calculations per second (a billion, billion - so, like, really fast).
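To put "a billion, billion" in perspective, a quick sketch of the exascale jump:

```python
PETAFLOP = 10**15  # 93 of these is today's fastest machine
EXAFLOP = 10**18   # 1 quintillion operations per second

taihulight_flops = 93 * PETAFLOP
speedup = EXAFLOP / taihulight_flops
print(round(speedup, 1))  # 10.8 -> an exascale machine is ~11x TaihuLight
```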
HP has unveiled its new supercomputer, simply called The Machine, something first announced back in 2014. HP is aiming to smash all previous technology in existence with The Machine, as the new supercomputer doesn't rely on traditional processor-centric design - instead, it puts memory at the center of the architecture for its brute speed.
HP Enterprise explains that The Machine is up to 8000x faster than traditional machines (you'd freakin' hope so), but it's still years away from being released. HP will be aiming at high-end servers for companies like Google and Facebook, with the architecture itself powered by memory-driven computing. We should see this type of memory-driven design trickle down to the PC one day, I hope.
The Machine uses photonics to transmit data with light, and thanks to its massive, super-fast memory pool, The Machine can really crank through those datasets. Normally, things slow down when data needs to be transferred between processors - but HP has thought ahead, using that shared memory to super-speed the supercomputer.
NVIDIA just continues to smash the GPU and supercomputing game, with the announcement of its newest DGX SATURNV supercomputer, designed from the ground up for building smarter cars and next-generation GPUs.
The new DGX SATURNV is ranked 28th on the Top500 list of supercomputers, but thanks to its Tesla P100-powered DGX-1 units, it's the most efficient supercomputer in the world. Until now, the most efficient machine on the Top500 list sat at 6.67 GigaFlops/Watt, but the new NVIDIA DGX SATURNV is capable of a massive 9.46 GigaFlops/Watt, a huge 42% improvement.
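That 42% figure checks out from the numbers quoted above:

```python
previous_best = 6.67  # GigaFlops per Watt, prior Top500 efficiency leader
saturnv = 9.46        # GigaFlops per Watt, DGX SATURNV

improvement = (saturnv - previous_best) / previous_best * 100
print(round(improvement))  # 42 -> the "huge 42% improvement"
```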
Inside of the NVIDIA DGX-1 we have:
- Up to 170 teraflops of half-precision (FP16) peak performance
- Eight Tesla P100 GPU accelerators, 16GB memory per GPU
- NVLink Hybrid Cube Mesh
- Dual 20-core Intel Xeon E5-2698 v4 "Broadwell" CPUs (2.2GHz)
- 7TB SSD DL Cache
- Dual 10GbE, Quad InfiniBand 100Gb networking
- 3U - 3200W
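The headline 170 teraflops lines up with eight P100s; NVIDIA rates each Tesla P100 at roughly 21.2 teraflops of FP16 peak (that per-GPU figure is NVIDIA's spec, rounded):

```python
p100_fp16_tflops = 21.2  # NVIDIA's quoted FP16 peak per Tesla P100
gpus = 8                 # GPUs per DGX-1 node

print(p100_fp16_tflops * gpus)  # 169.6 -> "up to 170 teraflops" half-precision
```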
From now on, don't mess with Canadian PM Justin Trudeau - he's a total boss when it comes to quantum computing, smacking down a "sassy reporter" who didn't expect him to know much about the subject.
Well, Trudeau is actually quite knowledgeable when it comes to quantum computing. When the reporter said "I was going to ask you to explain quantum computing, but...", Trudeau was quick off the mark, replying: "Very simple: normal computers work by...".
The crowd laughed, interrupting him briefly, but he then continued with a brief explanation of quantum computing - covering more than the reporter thought he'd know on the subject.
This week during GTC we saw NVIDIA shift its focus from primarily consumer GPUs to professional technology aimed squarely at the evolution of AI. Pascal, a vastly different and incredibly powerful architecture, is perfect for the ever-evolving HPC field. At the OpenPOWER Summit, held this week alongside GTC, IBM announced its newest server, which pairs Tesla P100 compute accelerators with POWER8 processors.
The big draw is the use of NVLink, the 40GBps data link directly from the CPU to the GPU that allows for quick communication and transfer of data. It's this innovation that might help to fuel faster HPC applications and even better, more nimble AI that can absorb vast amounts of information more quickly than before. The new server architecture will require the porting over of applications, but IBM and NVIDIA are both willing to assist in that regard, to make the transition easier.
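As a rough illustration of what that link speed buys you (the PCIe figure is my own approximation for comparison, not from the article):

```python
def transfer_seconds(gigabytes, gb_per_sec):
    """Idealized transfer time, ignoring latency and protocol overhead."""
    return gigabytes / gb_per_sec

hbm2_gb = 16  # one Tesla P100's full memory, as in the DGX-1 specs above
print(transfer_seconds(hbm2_gb, 40))  # 0.4 s over the 40GB/s NVLink
print(transfer_seconds(hbm2_gb, 16))  # 1.0 s over ~16GB/s PCIe 3.0 x16
```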
IBM's Watson division will also be participating in the design and implementation of the new server platform, and might even end up incorporating the Tesla P100 into its own design for an upgraded Watson supercomputer. The initial specifications call for cramming four of those compute cards into the server alongside four POWER8 12-core/96-thread CPUs operating at 3-3.5GHz, combined with up to 1TB of DDR4-2400 RAM. The implications for AI, let alone any other type of compute-heavy load, are tremendous. This could very well put the PPC architecture back on the map in a big way, especially with the assistance from IBM and NVIDIA in porting over applications. These second-generation POWER8 servers are just a stepping stone to the next-generation POWER9 architecture, which is just around the corner.