Nvidia CEO Jensen Huang announced today at the Consumer Electronics Show (CES) in the United States that the company will collaborate with Toyota, a carmaker that sells roughly 10 million vehicles a year, to develop autonomous driving technology built on Nvidia’s platform. The move puts Nvidia in direct competition with Tesla and, with the introduction of visual-computing AI, is expected to have a significant impact on the global automotive industry.
Huang stated that the success of Tesla and Waymo in the United States has confirmed the potential of visual AI in the field of autonomous driving and further demonstrated that the autonomous vehicle industry could reach trillions of dollars.
According to Huang, autonomous driving depends on three computers working in concert: an AI computer that supplies large amounts of computing power, a simulation computer that models and assesses the driving environment, and a data generation system. Depending on the business case, these may be combined into one or two devices.
“I am very excited to announce today that Toyota will be working with Nvidia to develop its next-generation autonomous vehicles,” Huang said. “With more than a billion cars on the road and over a trillion miles driven every year, autonomy will be the future of these vehicles. This will be a massive industry worth trillions of dollars, and it is expected to grow by about $5 billion a year.”
Nvidia’s autonomous driving platform, “AGX,” runs Nvidia DriveOS and is designed to function as a complete, safety-focused autonomous vehicle system. Alongside the core compute chip “Thor,” it is equipped with numerous cameras, radars, and ultrasonic sensors; the readings these sensors collect are converted within the platform into driving decisions and control outputs.
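To make that sensor-to-decision flow concrete, here is a minimal Python sketch of a fusion-and-decision loop. It is illustrative only: the class names, thresholds, and sensor list are assumptions made for this example, not Nvidia’s DriveOS API.

```python
# Minimal sketch of a sensor-fusion driving loop (hypothetical names;
# not Nvidia's DriveOS API). Each sensor reading is fused into a single
# environment estimate, which is then mapped to throttle/brake commands.
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float   # distance to nearest obstacle ahead
    source: str         # "camera", "radar", or "ultrasonic"

@dataclass
class Command:
    steering_deg: float
    throttle: float     # 0.0 (coast) .. 1.0 (full)
    brake: float        # 0.0 .. 1.0

def fuse(detections: list[Detection]) -> float:
    """Conservative fusion: trust the closest obstacle any sensor reports."""
    return min(d.distance_m for d in detections)

def decide(closest_obstacle_m: float) -> Command:
    """Map the fused environment estimate to a driving command."""
    if closest_obstacle_m < 10.0:
        return Command(steering_deg=0.0, throttle=0.0, brake=1.0)   # emergency stop
    if closest_obstacle_m < 30.0:
        return Command(steering_deg=0.0, throttle=0.2, brake=0.0)   # slow down
    return Command(steering_deg=0.0, throttle=0.6, brake=0.0)       # cruise

if __name__ == "__main__":
    frame = [Detection(42.0, "camera"), Detection(38.5, "radar"), Detection(55.0, "ultrasonic")]
    print(decide(fuse(frame)))   # Command(steering_deg=0.0, throttle=0.6, brake=0.0)
```

The “trust the closest obstacle” rule here simply stands in for whatever arbitration the real stack performs between camera, radar, and ultrasonic readings.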
Huang mentioned that the platform’s prototype vehicle has met ASIL-D, the highest automotive safety integrity level, a milestone reached after 70,000 lines of code, 15,000 engineer-hours of work, and 2 million real-world tests before release.
Huang also demonstrated a new system, “Omnimap,” which combines Omniverse with Nvidia Cosmos. The visuals shown on the AGX platform are built up through extensive remote computation on DGX machines; each AGX effectively behaves as if it had a supercomputer inside it, rendering the virtual Omniverse world.
Across different environments, the data accumulated by AGX can be processed on DGX to train neural networks and generate imagery, producing a more realistic and accurate Omnimap. DGX can also simulate how a scene might vary across conditions and times of day, supplying nearly unlimited variations that help the AGX driving stack mature. Through this computation, driving data that might otherwise amount to only a few hundred recorded drives can be expanded into billions of miles in the Omniverse, setting a higher bar for safety and autonomous driving.
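The amplification idea can be illustrated with a short sketch: a handful of logged drives is replayed under many combinations of weather, time of day, and traffic, multiplying the effective simulated mileage. The scenario names and figures below are hypothetical, not drawn from Nvidia’s Omniverse or Cosmos tooling.

```python
# Minimal sketch of the data-amplification idea (hypothetical data; not
# Nvidia's Omniverse/Cosmos API): a few real logged drives are replayed
# under many simulated scene and time variations, multiplying mileage.
from itertools import product

logged_drives = [            # hypothetical real-world recordings (miles each)
    ("highway_merge", 12.0),
    ("urban_rush_hour", 5.5),
    ("rainy_roundabout", 3.2),
]

weathers = ["clear", "rain", "fog", "snow", "night_glare"]
times_of_day = ["dawn", "noon", "dusk", "night"]
traffic_levels = ["light", "moderate", "heavy"]

# Every drive is replayed under every combination of conditions.
variants = list(product(logged_drives, weathers, times_of_day, traffic_levels))
real_miles = sum(miles for _, miles in logged_drives)
simulated_miles = sum(miles for (_, miles), *_ in variants)

print(f"{len(logged_drives)} real drives -> {len(variants)} simulated scenarios")
print(f"{real_miles:.1f} real miles -> {simulated_miles:.1f} simulated miles")
```

Three short drives become 180 scenarios in this toy example; scaling the same principle across fleets and condition spaces is how a few hundred recorded drives could, in principle, grow toward the billions of simulated miles Huang described.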
“Grounded in physics, we can generate a virtually unlimited amount of data in the background to train the AI. Because that AI is based on physical principles, it is very accurate and reasonable,” Huang said. “We are very excited about this because, just as the computer industry developed in the past, we will see similar progress in autonomous driving, and it will accelerate over the next few years.”
Although Tesla was mentioned only at the outset, Nvidia’s collaboration with traditional automakers such as Toyota puts it in clear competition with Tesla. The biggest difference between the vehicle platform Nvidia disclosed and Tesla’s current autonomous driving is that Nvidia’s platform is a general-purpose offering open to other manufacturers. The test vehicle does not appear to be a purpose-built autonomous vehicle but rather a conventional car retrofitted with computers and sensors.
This suggests that upgrading existing conventional cars to Nvidia’s autonomous driving stack may face a high threshold, and automakers may not be able to qualify every vehicle for the system. Much depends on the compatibility between Nvidia’s AGX and each manufacturer’s vehicles, and the most likely outcome is that automakers will launch new models designed for autonomous driving.
By comparison, Tesla’s vehicles, designed from the outset with autonomy in mind, may have a better chance of seamless upgrades, even though its current cars do not yet offer fully autonomous capability. In terms of solution logic, Tesla has switched entirely to a pure-vision approach and no longer relies on radar or other ranging sensors for full self-driving, whereas Nvidia’s AGX combines visual AI with a mixed sensor suite that includes lidar. These inherent differences in development pace and system design may lead the two to very different outcomes.
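The contrast between the two sensing philosophies can be summarized in a small sketch. The sensor counts below are illustrative assumptions, not either company’s published specification.

```python
# Illustrative contrast between a pure-vision sensor suite and a mixed
# (vision + radar + lidar + ultrasonic) suite. Counts are assumptions.
from dataclasses import dataclass

@dataclass
class SensorSuite:
    cameras: int
    radars: int = 0
    lidars: int = 0
    ultrasonics: int = 0

    def modalities(self) -> list[str]:
        """List which sensing modalities this suite actually relies on."""
        present = [("camera", self.cameras), ("radar", self.radars),
                   ("lidar", self.lidars), ("ultrasonic", self.ultrasonics)]
        return [name for name, count in present if count > 0]

# Pure-vision approach: perception relies on cameras alone.
vision_only = SensorSuite(cameras=8)

# Mixed approach: cameras combined with radar, lidar, and ultrasonic sensors,
# at the cost of extra hardware and cross-sensor calibration.
sensor_fusion = SensorSuite(cameras=8, radars=5, lidars=1, ultrasonics=12)

print("vision only :", vision_only.modalities())
print("fusion      :", sensor_fusion.modalities())
```

The trade-off the article points to falls out of the configuration itself: the pure-vision suite is cheaper and simpler to retrofit and train end to end, while the mixed suite buys redundancy at the cost of more hardware and integration work.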