Let's go talk about robotics, shall we?
Let's talk about robots.
Well, the time has come, the time has come for robots.
Robots have the benefit of being able to interact with the physical world and do things that digital information otherwise cannot.
We know very clearly that the world has a severe shortage of human laborers, human workers.
By the end of this decade, the world is going to be at least 50 million workers short.
We'd be more than delighted to pay them each $50,000 to come to work.
We're probably gonna have to pay robots $50,000 a year to come to work, and so this is going to be a very, very large industry.
There are all kinds of robotic systems.
Your infrastructure would be robotic.
Billions of cameras in warehouses and factories, 10, 20 million factories around the world.
Every car is already a robot, as I mentioned earlier, and then now we're building general robots.
Let me show you how we're doing that.
Everything that moves will be autonomous.
Physical AI will embody robots of every kind in every industry.
Three computers built by NVIDIA enable a continuous loop of robot AI simulation, training, testing, and real-world experience.
Training robots requires huge volumes of data.
Internet-scale data provides common sense and reasoning, but robots need action and control data, which is expensive to capture.
With blueprints built on NVIDIA Omniverse and Cosmos, developers can generate massive amounts of diverse synthetic data for training robot policies.
First, in Omniverse, developers aggregate real-world sensor or demonstration data according to their different domains, robots, and tasks, then use Omniverse to condition Cosmos, multiplying the original captures into large volumes of photoreal diverse data.
Developers use Isaac Lab to post-train the robot policies with the augmented dataset, letting the robots learn new skills by cloning behaviors through imitation learning, or through trial and error with reinforcement learning with AI feedback.
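To make the imitation-learning step concrete, here is a minimal behavior-cloning sketch: a small policy network is regressed onto expert actions from the augmented demonstration set. The network shape, dimensions, and training loop are illustrative assumptions, not Isaac Lab's actual API.

```python
# Minimal behavior-cloning sketch (imitation learning by cloning behaviors).
# Everything here is illustrative -- sizes, names, and data are assumptions,
# not Isaac Lab's real interfaces.
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

policy = Policy(obs_dim=48, act_dim=12)   # e.g. robot state in, joint targets out
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

obs = torch.randn(64, 48)                 # stand-in for the augmented dataset
expert_actions = torch.randn(64, 12)      # demonstrated actions to clone

loss = nn.functional.mse_loss(policy(obs), expert_actions)
opt.zero_grad()
loss.backward()
opt.step()
```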
Practicing in a lab is different from the real world.
New policies need to be field-tested.
Developers use Omniverse for software and hardware-in-the-loop testing, simulating the policies in a digital twin with real-world environmental dynamics, with domain randomization, physics feedback, and high-fidelity sensor simulation.
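Domain randomization, mentioned above, is simple to picture in code: before each simulated rollout, physical parameters are sampled from ranges around their nominal values so the policy cannot overfit to one exact world. A minimal sketch, with parameter names chosen for illustration rather than taken from Omniverse:

```python
# Domain randomization sketch: per-episode sampling of physics parameters.
# Parameter names and ranges are illustrative assumptions.
import random

def randomize_domain() -> dict:
    return {
        "friction":   random.uniform(0.5, 1.5),   # surface friction coefficient
        "mass_scale": random.uniform(0.8, 1.2),   # scale applied to link masses
        "motor_gain": random.uniform(0.9, 1.1),   # actuator strength multiplier
        "latency_s":  random.uniform(0.0, 0.03),  # simulated sensing/actuation delay
    }

for episode in range(1000):
    params = randomize_domain()   # apply these to the simulator, then roll out
```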
Real-world operations require multiple robots to work together.
Mega, an Omniverse blueprint, lets developers test fleets of post-trained policies at scale.
Here, Foxconn tests heterogeneous robots in a virtual NVIDIA Blackwell production facility.
As the robot brains execute their missions, they perceive the results of their actions through sensor simulation, then plan their next action.
Mega lets developers test many robot policies, enabling the robots to work as a system, whether for spatial reasoning, navigation, mobility, or dexterity.
Amazing things are born in simulation.
Today, we're introducing NVIDIA Isaac GR00T N1.
GR00T N1 is a generalist foundation model for humanoid robots.
It's built on the foundations of synthetic data generation and learning in simulation.
GR00T N1 features a dual-system architecture for thinking fast and slow, inspired by principles of human cognitive processing.
The slow thinking system lets the robot perceive and reason about its environment and instructions and plan the right actions to take.
The fast thinking system translates the plan into precise and continuous robot actions.
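A rough sketch of that dual-system split: a slow "System 2" reasons about the scene and instruction at a low rate, while a fast "System 1" turns the latest plan plus fresh state into continuous actions at control rate. All names and rates here are illustrative assumptions, not GR00T N1's actual interfaces.

```python
# "Thinking fast and slow" control-loop sketch. Toy stand-ins throughout.
import numpy as np

def system2_plan(state: np.ndarray, instruction: str) -> np.ndarray:
    """Slow path (a few Hz): perceive, reason, and emit a target, e.g. a waypoint."""
    return state[:3] + 0.1   # placeholder plan: move a little from the current pose

def system1_act(plan: np.ndarray, state: np.ndarray) -> np.ndarray:
    """Fast path (~100 Hz): translate the plan into a continuous motor command."""
    return 0.5 * (plan - state[:3])   # simple proportional tracking

state = np.zeros(7)   # toy robot state
plan = None
for step in range(1000):
    if plan is None or step % 50 == 0:              # re-plan every 50 control ticks
        plan = system2_plan(state, "pick up the cup")
    action = system1_act(plan, state)               # act on every tick
    state[:3] += 0.01 * action                      # toy state update
```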
GR00T N1's generalization lets robots manipulate common objects with ease and execute multi-step sequences collaboratively.
And with this entire pipeline of synthetic data generation and robot learning, humanoid robot developers can post-train GR00T N1 across multiple embodiments and tasks across many environments.
Around the world, in every industry, developers are using NVIDIA's three computers to build the next generation of embodied AI.
Physical AI and robotics are moving so fast.
Everybody pay attention to this space.
This could very well be, and very likely will be, the largest industry of all.
At its core, we have the same challenges.
As I mentioned before, there are three that we focus on.
They are rather systematic.
One, how do you solve the data problem?
How and where do you create the data necessary to train the AI?
Two, what's the model architecture?
And then three, what are the scaling laws?
How can we scale either the data, the compute, or both so that we can make AIs smarter and smarter and smarter?
How do we scale?
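For intuition on what a scaling law buys you: loss typically falls as a power law in compute or data, so you can fit the exponent on small runs and extrapolate to larger ones. A tiny sketch with made-up numbers:

```python
# Power-law scaling-law sketch: fit loss ~ a * C^b on small runs, extrapolate.
# The FLOP counts and losses below are invented purely for illustration.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21])   # training compute (FLOPs)
loss    = np.array([3.2, 2.6, 2.1, 1.7])       # hypothetical eval losses

b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)   # linear fit in log-log
print(f"loss ~ {np.exp(log_a):.2f} * C^{b:.3f}")
print("predicted at 1e22 FLOPs:", np.exp(log_a) * (1e22 ** b))
```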
And those three fundamental problems exist in robotics as well.
In robotics, we created a system called Omniverse.
It's our operating system for physical AIs.
You've heard me talk about Omniverse for a long time.
We added two technologies to it.
Today, I'm gonna show you two things.
One of them is so that we can scale AI with generative capabilities: a generative model that understands the physical world.
We call it Cosmos.
Using Omniverse to condition Cosmos, and using Cosmos to generate an infinite number of environments, allows us to create data that is grounded, controlled by us, and yet systematically infinite at the same time.
Okay, so you see Omniverse, we used candy colors to give you an example of us controlling the robot in the scenario perfectly, and yet Cosmos can create all these virtual environments.
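Conceptually, the Omniverse-conditions-Cosmos pattern looks like this: one physically grounded, fully controlled rollout from the simulator seeds many photoreal variations from the generative model. The functions below are placeholders showing the data-multiplication structure, not the real Omniverse or Cosmos APIs.

```python
# Grounded-generation sketch: one controlled capture -> many diverse variants.
# Placeholder functions only; not the actual Omniverse/Cosmos interfaces.
def render_grounded_rollout(scene: str, steps: int) -> list:
    """Omniverse side: simulate the exact robot trajectory we control."""
    return [f"{scene}_frame_{i}" for i in range(steps)]

def generate_variations(conditioning: list, prompt: str, n: int) -> list:
    """Cosmos side: keep the grounded geometry and motion, vary appearance."""
    return [(conditioning, f"{prompt} variant {k}") for k in range(n)]

rollout = render_grounded_rollout("warehouse", steps=120)
dataset = []
for prompt in ["rainy night", "bright daylight", "cluttered shelves"]:
    dataset += generate_variations(rollout, prompt, n=100)

print(len(dataset))   # 300 diverse clips, all grounded in one controlled capture
```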
The second thing, just as we were talking about earlier: one of the incredible scaling capabilities of language models today is reinforcement learning with verifiable rewards.
The question is what's the verifiable rewards in robotics?
And as we know very well, it's the laws of physics.
Verifiable physics rewards.
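What makes a physics reward "verifiable" is that the simulator's state lets you check success objectively instead of asking a learned reward model. A minimal sketch, with state fields, weights, and thresholds chosen purely for illustration:

```python
# Verifiable physics-reward sketch for, say, a humanoid walking toward a goal.
# State fields, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class SimState:
    torso_height: float        # meters
    position: np.ndarray       # (x, y) of the robot
    goal: np.ndarray           # (x, y) target
    joint_torques: np.ndarray  # actuator torques this step

def physics_reward(s: SimState) -> float:
    upright  = 1.0 if s.torso_height > 1.0 else -1.0          # fell over?
    progress = -float(np.linalg.norm(s.goal - s.position))    # closer is better
    effort   = -1e-3 * float(np.sum(s.joint_torques ** 2))    # penalize thrashing
    return upright + progress + effort

print(physics_reward(SimState(1.3, np.zeros(2), np.array([2.0, 0.0]), np.zeros(12))))
```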
And so we need an incredible physics engine.
Well, most physics engines have been designed for a particular purpose.
They might be designed for large machinery, or for virtual worlds and video games, but we need a physics engine that is designed for very fine-grained rigid and soft bodies, designed for training tactile feedback, fine motor skills, and actuator controls.
We need it to be GPU-accelerated so that these virtual worlds can run faster than real time and train these AI models incredibly fast.
And we need it to be integrated harmoniously into a framework that is used by roboticists all over the world: MuJoCo.
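For reference, the basic MuJoCo loop that roboticists already use looks like this: load a model, then step the physics. This uses MuJoCo's public Python bindings (pip install mujoco) and a toy pendulum; it illustrates the framework Newton integrates with, not Newton itself.

```python
# Minimal MuJoCo example: a one-joint pendulum stepped for two simulated seconds.
import mujoco

XML = """
<mujoco>
  <worldbody>
    <body pos="0 0 1">
      <joint type="hinge" axis="0 1 0"/>
      <geom type="capsule" fromto="0 0 0 0 0 -0.5" size="0.05"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)
data.qpos[0] = 0.5                 # start the pendulum off-center so it swings

for _ in range(1000):              # 1000 steps at the default 2 ms timestep = 2 s
    mujoco.mj_step(model, data)

print(data.qpos, data.qvel)        # joint angle and velocity after the rollout
```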
And so today we're announcing something really, really special.
It is a partnership of three companies, DeepMind, Disney Research, and NVIDIA, and we call it Newton.
Let's take a look at Newton.
Tell me that wasn't amazing.
Hey, Blue.
How are you doing?
How do you like your new physics engine?
You like it, huh?
Yeah, I bet.
I know.
Tactile feedback, rigid-body and soft-body simulation, faster than real time.
Can you believe that what you were just looking at was a complete real-time simulation?
This is how we're gonna train robots in the future.
Just so you know, Blue has two computers, two NVIDIA computers inside.
Look how smart you are.
Yes, you're smart.
Okay.
All right.
Hey, Blue, listen.
How about we take it home?
Let's finish this keynote.
It's lunchtime.
Are you ready?
Let's finish it up.
We have another announcement.
You're good, you're good.
Just stand right here.
Stand right here.
Stand right here.
All right, good.
Right there.
That's good.
All right, stand.
Okay.
We have more amazing news.
I told you our robotics effort has been making enormous progress.
And today we're announcing that GR00T N1 is open source.
GR00T N1 is open source.
Did you see that?
Were you surprised?
The first thing that came to my mind was: is Tesla dead?
We all know that Optimus is coming out.
Have you seen that robot?
It's great.
We all know that Tesla is not an electric car company.
Tesla is a robot company.
But as soon as GR00T N1 was open-sourced, the first thing that came to mind was: is Tesla in trouble?
First of all, let me give my own point of view.
NVIDIA was bound to open-source this.
NVIDIA builds the robot's brain.
It doesn't build the body.
And many of the companies building bodies can't build a brain.
So now that NVIDIA has open-sourced this robot brain model, what does that mean?
All these manufacturers that build bodies, and even brand-new companies, may now be able to build a robot too.
But they will have to rely on NVIDIA's open source.
It means NVIDIA gets the data from everyone experimenting with its model.
That's a formidable thing.
Will Tesla be finished?
I've already covered the development of Optimus.
In the last episode, I mentioned this Isaac GR00T N1.
Now that it's open-sourced, Tesla will have more competitors.
Manufacturers such as Huawei, Xiaomi, UBTech, Boston Dynamics, and so on will use it to compete with Tesla.
Tesla's stock price is very weak.
NVIDIA's stock price is weak too, but it hasn't broken below its earlier March low.
In other words, on this news, the market is clearly dissatisfied with Tesla.
That's fair for the moment.
But in the long run, I don't think Tesla is necessarily in such bad shape.
Tesla's FSD has real-world data.
NVIDIA isn't driving on the road.
What it collects is virtual-world data, produced by the Omniverse platform working together with Cosmos.
So these two approaches are completely different.
Which is better?
I honestly don't know yet.
Open source makes me think of Android and Apple back then.
I said a long time ago that I think Tesla is like Apple.
An open system won't necessarily kill a closed system; it depends on the closed system's supporters and the projects it is developing now.
So to a certain extent, Tesla still has an advantage.
But for the robot business itself, Tesla really needs to pay attention.
Especially now, innovation is moving very, very fast.
I'm not sure how Tesla will deal with it.
DeepSeek only just opened things up, and within two months all kinds of new competitors and partnerships have emerged.
Things can change very fast in two months.
We just have to remember one thing.
Taiwan's supply chain will definitely be affected.
The traditional US supply chain will be affected as well.
Tesla's part is uncertain.
For now, I still think Tesla's FSD is very good.
That's what I wanted to share with you after watching the GTC keynote.
I don't know what you took away from it.
You're also welcome to discuss it with me.
If you're interested in this robot series, you're welcome to join the membership channel.
My link is at the bottom.
We'll track the latest developments regularly to catch the companies that are really worth buying this time.
I am JG.