Jensen just laid out NVIDIA's entire autonomous driving stack — and it's a lot
NVIDIA's GTC summit just wrapped, and if you thought the AV world was crowded, you haven't seen anything yet. Here's what actually came out of it.
**Rubin is the new baseline.**
The Rubin platform — NVIDIA's six-chip successor to Blackwell — is in full production. It pairs Vera CPUs with Rubin GPUs, ties them together with NVLink 6, and runs on ConnectX-9 SuperNICs and BlueField-4 DPUs. The headline number that matters for AI inference: a 10x reduction in token generation cost versus Blackwell. For AV developers training on hours of driving footage, that changes the economics significantly, and training MoE models now takes a quarter as many GPUs.
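To make those two headline ratios concrete, here's a quick worked example. Every dollar figure and cluster size below is a made-up assumption for illustration, not NVIDIA pricing; only the 10x and 4x ratios come from the announcement.

```python
# Illustrative arithmetic only: what a 10x cheaper token and 4x fewer
# training GPUs imply. All absolute numbers are hypothetical.

BLACKWELL_COST_PER_M_TOKENS = 1.00   # hypothetical $ per million tokens
RUBIN_COST_PER_M_TOKENS = BLACKWELL_COST_PER_M_TOKENS / 10  # 10x cheaper

BLACKWELL_GPUS_FOR_MOE_RUN = 1024    # hypothetical cluster size
RUBIN_GPUS_FOR_MOE_RUN = BLACKWELL_GPUS_FOR_MOE_RUN // 4    # 4x fewer

monthly_tokens_m = 500_000  # hypothetical 500B inference tokens per month
savings = (BLACKWELL_COST_PER_M_TOKENS - RUBIN_COST_PER_M_TOKENS) * monthly_tokens_m

print(f"GPUs for the same MoE training run: {RUBIN_GPUS_FOR_MOE_RUN}")
print(f"monthly inference savings at assumed rates: ${savings:,.0f}")
```

At these assumed rates, the same MoE run drops from 1,024 to 256 GPUs, and a 500B-token monthly inference load gets 90% cheaper.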
Rubin also comes with a new AI factory reference design (DSX) and a digital twin blueprint (Omniverse DSX) so companies can simulate their entire AI factory in software before building it physically.
**Alpamayo is NVIDIA's answer to "why isn't anyone open-sourcing AV models?"**
This is the announcement that should have gotten more attention. Alpamayo is NVIDIA's open portfolio of reasoning VLA (vision-language-action) models for autonomous driving. The flagship, Alpamayo R1, is the first open chain-of-thought reasoning VLA model: instead of mapping sensor input directly to steering commands, it reasons through its decisions before acting. Alongside it, AlpaSim is a fully open-source simulation framework for high-fidelity AV testing, and NVIDIA released a physical AI open dataset covering 1,700+ hours of real-world driving across diverse conditions.
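The reason-before-act pattern can be sketched in a few lines. To be clear, every class and method name below is hypothetical, invented for illustration; this is the general shape of a reasoning VLA control loop, not the Alpamayo R1 API.

```python
# Conceptual sketch of a reason-then-act driving policy.
# All names here are hypothetical, not an actual Alpamayo interface.
from dataclasses import dataclass

@dataclass
class DrivingContext:
    camera_frames: list   # raw sensor input (placeholder)
    speed_mps: float      # current vehicle speed in m/s

@dataclass
class Decision:
    rationale: list       # chain-of-thought steps produced before acting
    steering_angle: float
    throttle: float

def reason_then_act(ctx: DrivingContext) -> Decision:
    """A reasoning VLA emits intermediate rationale first, then an
    action, rather than mapping sensors straight to controls."""
    rationale = [f"observed {len(ctx.camera_frames)} camera frames"]
    if ctx.speed_mps > 13.9:  # ~50 km/h, a hypothetical urban threshold
        rationale.append("speed above urban limit; plan gentle deceleration")
        throttle = -0.2
    else:
        rationale.append("speed within limits; hold lane")
        throttle = 0.0
    return Decision(rationale=rationale, steering_angle=0.0, throttle=throttle)
```

The point of the pattern is that the rationale is a first-class output: it can be logged, audited, and trained against, which is what separates a reasoning VLA from an end-to-end sensors-to-steering black box.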
The partners using Alpamayo: Lucid, JLR, Uber, and Berkeley DeepDrive.
Mercedes-Benz is the first automaker to ship NVIDIA DRIVE AV in a production vehicle. The new CLA runs the full NVIDIA AV stack and is expected on US roads by the end of this year.
**DRIVE Hyperion is becoming the industry reference architecture.**
The sensor-and-compute reference platform's partner ecosystem keeps expanding: BYD, Hyundai, Nissan, and Geely have newly signed on to DRIVE Hyperion, joining existing partners such as Mercedes-Benz. On the sensor side, the qualified ecosystem now includes Aeva, AUMOVIO, Astemo, Arbe, Bosch, Hesai, Magna, Omnivision, Quanta, Sony, and ZF Group.
DRIVE AGX Thor — built on Blackwell — delivers over 2,000 FP4 teraflops of real-time compute. That's the kind of raw number that lets transformer-based perception and VLA models actually run at the edge in production.
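A rough sanity check shows why that 2,000-teraflop figure matters for edge inference. Only the compute budget comes from the announcement; the model size, token counts, camera count, and frame rate below are illustrative assumptions, and the ~2 FLOPs per parameter per token rule is a standard transformer approximation.

```python
# Back-of-envelope: can a transformer perception stack run in real time
# on ~2,000 TFLOPS of FP4 compute? All workload numbers are assumptions.

EDGE_TFLOPS = 2000        # DRIVE AGX Thor headline figure (FP4)
MODEL_PARAMS = 1e9        # hypothetical 1B-parameter perception model
TOKENS_PER_FRAME = 2000   # hypothetical vision tokens per camera frame
CAMERAS = 8               # hypothetical sensor count
FPS_TARGET = 30           # hypothetical real-time frame rate

# Common transformer approximation: ~2 FLOPs per parameter per token.
flops_per_frame = 2 * MODEL_PARAMS * TOKENS_PER_FRAME * CAMERAS
flops_per_second = flops_per_frame * FPS_TARGET
utilization = flops_per_second / (EDGE_TFLOPS * 1e12)

print(f"required compute: {flops_per_second / 1e12:.0f} TFLOPS "
      f"({utilization:.0%} of budget)")
```

Under these assumptions the workload needs roughly 960 TFLOPS, about half the stated budget, which is the arithmetic behind claims that billion-parameter transformers can now run at the edge with headroom to spare.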
**NemoClaw — NVIDIA's play in the AI agent space**
This one's worth a separate mention. Jensen called OpenClaw "the most popular open source project in the history of humanity" and announced NVIDIA NemoClaw — a stack combining policy enforcement, network guardrails, and privacy routing for deploying AI agents securely inside enterprises. NVIDIA is also adding native OpenClaw support across its platform. The broader push here is the Nemotron Coalition: six open model families (Nemotron, Cosmos, Isaac GR00T, Alpamayo, BioNeMo, Earth-2) organized to give enterprises a structured path onto NVIDIA's ecosystem without being locked into a single proprietary model.
**What's the actual implication?**
NVIDIA isn't just selling chips anymore. They're building the full stack — training infrastructure (DGX), simulation (Omniverse + Cosmos), and in-vehicle compute (DRIVE AGX) — and increasingly opening it up through initiatives like Alpamayo and the Nemotron Coalition. The company's competitive advantage has shifted from hardware performance alone to something more durable: an integrated data loop where real-world miles train the models, synthetic simulation validates them, and the same hardware stack that powers development deploys at the edge. That kind of vertical integration, applied across both consumer vehicles and commercial fleets, puts NVIDIA in a position that's hard to displace regardless of which AI architecture ultimately wins in autonomous driving.
If you're building in AV, you're either on this platform or you're building against it.