NVIDIA has just announced that it is going all-in on artificial intelligence, pushing deeper into the world of deep neural networks (DNNs) with a raft of new tools and huge partnerships with some of the largest technology companies in China.
NVIDIA founder and CEO Jen-Hsun Huang took the stage as usual, saying: "At no time in the history of computing have such exciting developments been happening, and such incredible forces in computing been affecting our future. What technology increases in complexity by a factor of 350 in five years? We don't know any. What algorithm increases in complexity by a factor of 10? We don't know any. We are moving faster than Moore's Law".
Huang unveiled NVIDIA's new TensorRT 3, an inference platform that the company claims lets a previously trained DNN run in a production environment at a rate of 45,000 images per second. This is all powered by an HGX server packing 8 x Tesla V100 accelerators.
The big difference is power draw: NVIDIA's HGX server with its 8 x Tesla V100 accelerators consumes 3kW to do this, while a traditional CPU-based platform of 160 dual-CPU servers draws 65kW, making NVIDIA's GPU-based solution roughly 2,066% (about 21x) more power efficient. We can see why NVIDIA is pushing into the AI and DNN markets so hard.
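As a quick sanity check on that efficiency figure, here is a back-of-the-envelope calculation using the 3kW and 65kW numbers from NVIDIA's comparison (reading "X% more efficient" as the percentage increase in work done per watt at the same throughput):

```python
# Power draw at comparable inference throughput, per NVIDIA's comparison:
gpu_kw = 3.0    # HGX server with 8 x Tesla V100 accelerators
cpu_kw = 65.0   # 160 traditional dual-CPU servers

# Percentage increase in work per watt for the GPU solution.
percent_more_efficient = (cpu_kw - gpu_kw) / gpu_kw * 100
print(round(percent_more_efficient))  # ~2067, in line with the ~2,066% claim
```

The small discrepancy with the quoted 2,066% comes down to rounding; either way, the GPU platform does the same work on roughly one twenty-first of the power.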
NVIDIA also announced deals with the cloud platforms of massive companies like Alibaba, Baidu, and Tencent, while Huawei, Inspur, and Lenovo have all adopted NVIDIA's HGX server architecture. Beyond that, China's Big Five internet and AI companies, Alibaba, Tencent, Baidu, JD.com, and iFlytek, are already on board with the TensorRT 3 platform.
Huang finished by saying: "Our vision is to enable every researcher everywhere to enable AI for the goodness of mankind. We believe we now have the fundamental pillars in place to invent the next era of artificial intelligence, the era of autonomous machines".