Nvidia and Baidu announced that they will work together on bringing AI technology to the cloud, self-driving cars, and the home.
Over the past few years, Nvidia has increasingly focused on developing GPUs that are not just generally faster, but also optimized for machine learning. This focus has enabled Nvidia to build GPUs that are now orders of magnitude faster for machine learning tasks compared to their predecessors from only a few years ago.
This progress has also been spurred by competition from Google’s TPU, which has shown that building more specialized machine learning chips is worth it for the additional performance they bring. Nvidia seems to have learned from this, because its recent Volta-based Tesla GPUs also come with “Tensor Cores,” which deliver a 6-12x improvement over typical FP32/FP16 operations in the previous Tesla architecture.
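The key idea behind Tensor Cores is mixed precision: inputs are stored in FP16, but products are accumulated in FP32 to limit rounding error. A minimal CPU-side sketch of that pattern (illustrative only, using NumPy rather than actual GPU code) looks like this:

```python
import numpy as np

def mixed_precision_matmul(a, b):
    """Round inputs to FP16, then accumulate the products in FP32.

    This mimics the numerical behavior Tensor Cores target:
    FP16 operands, FP32 accumulation.
    """
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    # Upcast before the matmul so the summation happens in FP32,
    # which preserves far more accuracy than an all-FP16 product.
    return a16.astype(np.float32) @ b16.astype(np.float32)

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = rng.standard_normal((64, 64))

full = a @ b                          # full-precision reference
mixed = mixed_precision_matmul(a, b)  # error comes only from FP16 input rounding
max_err = np.abs(full - mixed).max()
```

The trade-off is that FP16 storage halves memory traffic while FP32 accumulation keeps the result close to the full-precision one, which is why mixed-precision training has become the standard way to exploit this hardware.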
AMD has recently entered the machine learning market, too, with its own Radeon Vega Frontier Edition. However, although the Frontier Edition seems to be competitive in FP32 performance with Nvidia’s V100 (12.5 vs 14 TFLOPS), it doesn’t seem to have any Tensor Core equivalent right now, which still puts the V100 significantly ahead (112 TFLOPS for mixed-precision operations).
Nvidia Deep Learning Platform In Baidu Cloud
The broad partnership between Nvidia and Baidu is not that surprising: Nvidia is the leading provider of machine learning chips and the software tools that accompany them, while Baidu is a major Chinese cloud services provider that naturally wants to enhance its AI capabilities with the latest deep learning-optimized hardware.
“NVIDIA and Baidu have pioneered significant advances in deep learning and AI,” said Ian Buck, NVIDIA vice president and general manager of accelerated computing.
“We believe AI is the most powerful technology force of our time, with the potential to revolutionize every industry. Our collaboration aligns our exceptional technical resources to create AI computing platforms for all developers — from academic research, startups creating breakthrough AI applications, and autonomous vehicles,” he added.
At a recent developer-focused conference in China, Baidu announced that it will deploy the Nvidia HGX architecture in its data centers, along with Volta-based Tesla V100 GPUs, which will be used for training neural networks, and Tesla P4 inference accelerators.
Baidu said it will also optimize its PaddlePaddle deep learning framework for Nvidia’s GPUs. The framework is already used in many of Baidu’s services, including its search engine, image classification, real-time speech recognition, and other AI-enhanced services developed by third parties.
Nvidia Drive PX 2 In Baidu’s Self-Driving Car Platform
Apollo, Baidu’s self-driving car initiative, will take advantage of Nvidia’s Drive PX 2 self-driving car platform. The autonomous car Baidu demoed earlier this year at CES Asia was already using the Drive PX 2 platform. Baidu said that several Chinese car makers, including Changan, Chery Automobile, FAW, and Great Wall Motor, have already joined the Apollo alliance.
Shield TV And DuerOS AI Integration
Nvidia will sell the Shield TV with DuerOS integration in the Chinese market. DuerOS is Baidu’s conversational AI system and a competitor to AI assistants such as Amazon’s Alexa and Google Assistant.