Jensen Huang spoke about how AI is reaching an inflection point and how data center build-out is expected to exceed a trillion dollars in the near term. Software itself is transitioning from human-generated to AI-generated. He announced that GM is now partnering with Nvidia for AI and will use Nvidia DRIVE AGX for ADAS and in-cabin safety, as well as AI and digital twins in manufacturing and for robotics training; this likely marks a move away from Qualcomm. Nvidia has developed an OS for the data center to control and allocate all of the compute being deployed. Beyond Blackwell, he previewed Vera Rubin, the next-generation exaflop-class compute platform (the Vera CPU has 88 ARM cores, and a Rubin rack scales up to 576 GPUs). They are also developing 3D-stacked silicon photonics Ethernet switches with port speeds of 1.6 Tbps. Finally, he spoke about open-sourcing their humanoid robotics foundation model.
Dragomir Anguelov, VP and Head of Research at Waymo, spoke about LLMs for training and how increasing compute power and larger models are improving their autonomous driving. They are building the Waymo Foundation Model to improve on stand-alone visual models. AI is used to help build a simulator that can then test the Waymo Driver for safety, saving millions of miles of real-world driving. Predicting the behavior of other road users is one of the keys to improving the driving stack. 3D Gaussian Splatting is used for sensor simulation at a reasonable compute load compared with previous methods. They also have an editor to alter time of day and weather and to add objects into the scene, so they can take a scenario captured in the physical world and generate multiple permutations for training.
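As a rough illustration of the kind of scenario permutation such an editor enables, the sketch below expands one logged scene into many variants. The `Scenario` fields and values are hypothetical stand-ins, not Waymo's actual schema or tooling.

```python
from dataclasses import dataclass, replace
from itertools import product

# Hypothetical, simplified stand-in for a logged driving scenario;
# field names are illustrative only.
@dataclass(frozen=True)
class Scenario:
    source_log: str            # id of the real-world drive the scene came from
    time_of_day: str = "noon"
    weather: str = "clear"
    extra_objects: tuple = ()  # objects injected into the scene by the editor

def permute(base, times, weathers, injected):
    """Expand one logged scenario into many training/testing variants."""
    return [
        replace(base, time_of_day=t, weather=w, extra_objects=objs)
        for t, w, objs in product(times, weathers, injected)
    ]

base = Scenario(source_log="drive_0001")
variants = permute(
    base,
    times=["dawn", "noon", "night"],
    weathers=["clear", "rain"],
    injected=[(), ("jaywalking_pedestrian",)],
)
print(len(variants))  # 3 times of day x 2 weathers x 2 object sets = 12 variants
```

Each variant keeps the real-world geometry of the source log while varying only the conditions, which is what makes one physical drive yield many long-tail test cases.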
RJ Scaringe, Rivian CEO, spoke about how they are using AI. Rivian developed their own RTOS and software stack, which they have now also licensed to VW. Rivian's Gen 2 models have much higher levels of compute and a powerful perception stack; RJ estimates that 20% of miles driven are now autonomous and expects that to increase to 60-70%+ over the next few years as Urban Pilot comes to market. Rivian uses 3 (Nvidia) zonal computers, chosen to simplify the wiring harness. Self-driving has moved from a perception stack followed by rules-based driving (in their Gen 1 vehicles) to end-to-end AI – so instead of classifying each individual object, the system looks at the whole scene and the AI makes the control decisions. Unlike LLMs, models for self-driving do not have "an internet" of data – each manufacturer needs to generate that data and then train on it. Gen 2 has 240 TOPS (Nvidia Orin) for real-time inferencing. As the models get larger, they need more GPUs for off-line training.
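The architectural shift RJ describes can be caricatured in a few lines of code. Everything below – the object classes, the distance threshold, the single linear layer – is invented for illustration and bears no relation to Rivian's production stack:

```python
import numpy as np

rng = np.random.default_rng(0)

def rules_based(detections):
    """Gen 1 style (schematic): classify objects first, then apply hand-written rules."""
    if any(d["class"] == "pedestrian" and d["dist_m"] < 10 for d in detections):
        return {"throttle": 0.0, "brake": 1.0}
    return {"throttle": 0.3, "brake": 0.0}

def end_to_end(scene, weights):
    """Gen 2 style (schematic): one learned mapping from whole-scene input to controls."""
    logits = scene.flatten() @ weights                 # a real model is a deep network
    steer = float(np.tanh(logits[0]))                  # bounded to [-1, 1]
    throttle = float(1.0 / (1.0 + np.exp(-logits[1]))) # bounded to [0, 1]
    return {"steer": steer, "throttle": throttle}

scene = rng.normal(size=(4, 4))          # stand-in for fused sensor input
weights = rng.normal(size=(16, 2)) * 0.1
print(rules_based([{"class": "pedestrian", "dist_m": 5.0}]))
print(end_to_end(scene, weights))
```

The contrast is the point: the first function's behavior is only as good as its hand-written rules, while the second's behavior lives entirely in learned weights – which is why, as the paragraph notes, the limiting factor becomes training data and off-line GPU capacity rather than rule authoring.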
Alex Kendall, Co-Founder and CEO, Wayve spoke about their end-to-end, multimodal foundation model for autonomous driving. Training was done with reinforcement learning. The complexity of driving in London pushed Wayve to use an end-to-end AI approach. The foundation model understands different sensor configurations, driving domains and even different vehicle platforms, allowing the cars to perform complex tasks in scenarios they have not been specifically trained for. Once the model is driving, the next challenge is to "prove" that it is safe. Entering the ADAS market is critical to allow Wayve to collect data and scale into the robotaxi market. They are working to license the model to auto OEMs and fleet partners around the world, including a recent partnership with Uber, and it is flexible enough to work on different hardware. The demos were shown on a Ford Mustang Mach-E with multiple lidar sensors. They were able to adapt a UK-trained model to US driving with 100 hours of additional training, converging in performance at about 500 hours. A "learned" safety sub-system running on secondary hardware is used to verify the motion plan. Wayve's GAIA synthetic driving model allows them to generate synthetic scenes for testing long-tail scenarios. Kendall said LiDAR is the best way to get L4 to market in the near term. They are not publicly commenting on the timing for public roll-outs.
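The UK-to-US adaptation Wayve describes – starting from weights trained in one domain and continuing training on a modest amount of target-domain data – can be sketched schematically. The linear model, the synthetic "domains", and the step counts below are purely illustrative, not Wayve's foundation model or method:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_step(w, X, y, lr=0.1):
    """One gradient-descent step on mean-squared error for a linear model."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Two slightly different domains, represented here as two target mappings.
w_uk_true = np.array([1.0, -0.5])   # "UK" driving behavior
w_us_true = np.array([1.2, -0.3])   # "US" behavior differs, but not by much

X = rng.normal(size=(200, 2))       # stand-in for target-domain driving data
y_us = X @ w_us_true

w = w_uk_true.copy()                # pretend these weights were trained in the UK
for _ in range(50):                 # continued training on US data only
    w = fit_step(w, X, y_us)

print(np.allclose(w, w_us_true, atol=1e-3))  # adapted weights now match the US domain
```

The design point the toy example captures is that adaptation is cheap when the domains overlap heavily: most of what the UK model learned transfers, so only the residual difference has to be trained in, which is consistent with convergence after a few hundred hours rather than from scratch.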

Additional announcements from Nvidia partners around GTC included Gatik integrating DRIVE AGX into autonomous trucks, Torc collaborating with Flex and Nvidia on scalable compute, Volvo integrating DRIVE AGX into EVs, photo-realistic simulation advances from companies including Plus, Foretellix and CARLA, and collaborations with Magna, Lenovo and Nuro.
Magna is integrating the Nvidia DRIVE AGX Thor SoC into its next generation of advanced technology solutions and is developing and testing the latest L2+ through L4 software on the platform using Nvidia's DriveOS.