
  • Elon Musk appears to have found himself a new obsession: supercomputers.

  • Big ones.

  • The biggest in the world.

  • Elon has not been shy about telling the world that he is laying down fat stacks of cash to build out his computer army.

  • We are looking at tens of billions of dollars being invested in just the next year alone.

  • Now why is he doing all this?

  • Because well, that's where things get interesting.

  • Okay, first let's break this whole supercomputer situation into some manageable chunks and identify what goes where, because Elon is building multiple computer clusters for different projects that do different things. At some point he has seemingly referred to each of them as either the world's biggest or the world's most powerful, so it's a bit hard to follow.

  • Let's start with Tesla.

  • They're getting a brand new supercomputer.

  • This is actually under construction right now.

  • The Gigafactory in Austin, Texas, which was already the biggest car manufacturing plant in the world, is getting even bigger and most of that new real estate is being dedicated to supercomputer activity, which is why they are also building this whole dedicated cooling bunker.

  • Elon talked about this recently on X.

  • He wrote, sizing for about 130 megawatts of power and cooling this year but will increase to over 500 megawatts over the next 18 months or so.

  • So as for what exactly Tesla plans to do with all that power, we'll get into those details shortly but first let's talk about an even bigger supercomputer project that Elon is working on for his XAI venture.

  • This one will be built in Memphis, Tennessee and Elon has referred to it as the Gigafactory of Compute.

  • He then went on to tell officials, quote, my vision is to build the world's largest and most powerful supercomputer and I'm willing to put it in Memphis.

  • And then in addition to all of that, Tesla has been kind of secretly converting all of their vehicles into a mobile, decentralized supercomputer network of their own, which will eventually include all of their humanoid robots as well.

  • Okay, let's start breaking down how all of this stuff works.

  • So as many of you know, we are all Canadians here behind the scenes, which is generally pretty awesome except when it comes to the internet.

  • The Canadian government has a weird thing about trying to control what we can and can't see online.

  • It's even started to result in Meta and Google limiting the type of content that I'm able to access.

  • And if that wasn't enough, our Netflix selection sucks up here, too.

  • So in an effort to dodge all of that madness, I've started using a VPN.

  • More specifically, I've been using CyberGhost VPN, not only because they have a cool name, but also because they offer a very high quality product at an even more affordable price.

  • With CyberGhost VPN, all of your traffic goes through a secure VPN tunnel.

  • Your IP address is hidden and your data is encrypted.

  • Trust me, you want your data and history secured and encrypted.

  • So what you do online is strictly your business.

  • You can also change your online location in just three clicks and get access to geo-restricted content from dozens of streaming services like Netflix.

  • And that's not all.

  • With CyberGhost VPN, you can even find better online shopping deals or play games blocked in your region.

  • CyberGhost VPN is available for all platforms such as Windows, Mac OS, Android, iOS, and many others.

  • You can use one subscription to protect up to seven devices at a time, so you can easily share with your family and friends.

  • Plus, you get a 45-day money-back guarantee and 24-7 customer support, so everything is risk-free.

  • CyberGhost VPN is offering their best deal ever, just $2.03 per month, and you get four months free, which is 84% off.

  • So, join over 38 million happy users, including me, and click the link in our description to sign up today.

  • Okay, that brand-new GigaTexas data center that's under construction right now is going to be aimed specifically at developing Tesla's full self-driving software.

  • And to do that, Elon says that it will be loaded with a combination of NVIDIA GPUs along with Tesla's own AI-specific hardware.

  • Elon says that the south extension of the factory is custom-built for heavy power compute and cooling, hence the giant cooling bunker.

  • He said earlier this month that the initial system is 50,000 NVIDIA H100 GPUs alongside 20,000 of Tesla's own Hardware 4 AI computers and a massive network of hard drives for video storage.

  • When Elon says that Tesla is spending $10 billion this year on AI, this is where a large chunk of that cash is going right now.

  • The NVIDIA H100 is a current industry standard chip for AI training, which is an extraordinarily complicated process involving endless calculations in an attempt to teach a computer network the difference between right and wrong.

  • We provide the input and then train the network until it produces the desired output.
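That input-and-train loop can be sketched in a few lines. Below is a minimal toy version using gradient descent on a simple linear model; real FSD training involves enormous neural networks and video data, but the loop structure (predict, measure error, adjust, repeat until the output matches) is the same idea. All numbers here are made up for illustration.

```python
# Toy version of the "train until it produces the desired output" loop:
# we provide inputs and target outputs, then repeatedly nudge the model's
# parameters until its predictions match the targets.
inputs = [x / 10 for x in range(-10, 11)]      # training inputs
targets = [3.0 * x + 1.0 for x in inputs]      # desired outputs

w, b = 0.0, 0.0      # untrained model: predicts w*x + b
lr = 0.1             # learning rate
n = len(inputs)

for step in range(1000):
    # forward pass: current predictions and their errors
    preds = [w * x + b for x in inputs]
    errs = [p - t for p, t in zip(preds, targets)]
    # gradient of the mean squared error with respect to w and b
    grad_w = 2 * sum(e * x for e, x in zip(errs, inputs)) / n
    grad_b = 2 * sum(errs) / n
    # adjust the parameters toward the desired output
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 3), round(b, 3))   # converges toward the true 3.0 and 1.0
```

After enough iterations the model recovers the rule hidden in the data, which is the whole point of training: the desired behavior is never programmed in directly, only learned from examples.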

  • In the most common AI models that we use today, like ChatGPT or MidJourney, we input words or pictures and receive the same as an output.

  • In some more advanced new models, you can even input text and get video as an output.

  • In Tesla's AI models, the input is video and the output is driving a car.

  • So this is significantly more complex than what most people have grown accustomed to, and as a result, it requires a much more powerful and sophisticated hardware setup to make it all work.

  • That's where Tesla's AI hardware comes into play.

  • In addition to the training computer, Tesla also uses a powerful computer inside their vehicles.

  • This is called Inference Compute, and it allows the AI to operate in the real world in real time and make decisions at incredibly high speed.

  • You know when you ask ChatGPT a question and it has to kind of think for a second before generating an answer?

  • That's because pretty much every AI model that we use right now is just a cloud-based web app.

  • ChatGPT isn't operating on your computer or your phone.

  • The prompt that you write is sent off to an OpenAI data center, where it's processed through their own giant supercomputer cluster, and then the response is sent back to you.

  • This is fine if you just want a cookie recipe or a picture of a robot on Mars, but when AI is driving your car, it cannot be sending every decision back and forth through the internet.

  • It needs a brain that will do the thinking and decision-making on-site.

  • That is the Tesla AI hardware.
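To put rough numbers on why the round trip matters, here is a quick back-of-envelope comparison. The speed and latency figures below are assumptions chosen for illustration, not measured values from Tesla or any carrier:

```python
# Illustrative (assumed) numbers showing why driving decisions can't
# round-trip through the internet: distance traveled while waiting.
speed_mps = 30.0            # ~108 km/h highway speed (assumption)
cloud_round_trip_s = 0.200  # plausible mobile-network round trip (assumption)
local_inference_s = 0.010   # plausible on-board inference time (assumption)

blind_distance_cloud = speed_mps * cloud_round_trip_s
blind_distance_local = speed_mps * local_inference_s

print(f"cloud: {blind_distance_cloud:.1f} m traveled per decision")
print(f"local: {blind_distance_local:.1f} m traveled per decision")
```

Under these assumptions, a cloud-dependent car would travel several meters blind between decisions, while on-board inference cuts that to fractions of a meter, and it keeps working when the network doesn't.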

  • Not only is this inference hardware inside every car, it's also used in the Tesla data centers, alongside the NVIDIA hardware.

  • The way Elon explains it, the H100s, the Tesla hardware, and the video data are all part of one big training loop.

  • Currently, Tesla is using their AI hardware version 4, but Elon says that before the end of next year, there will be hardware version 5, which is called AI 5.

  • This will roll out in vehicles and data centers.

  • Elon says that the new chip will have 10 times more capability than hardware 4, so that would potentially equal 10 times better performance in the training loop and 10 times better performance running the AI model in the vehicle for autonomous driving.

  • But there's more.

  • The same AI5 chip is not only going into future Tesla vehicles, it's also going into the Tesla Bot. This is likely the motivating factor for Tesla building out such a massive new supercomputer cluster.

  • Obviously, full self-driving still needs some work, but Elon said a couple of months back that Tesla was not compute-constrained for FSD.

  • He said that was more about data.

  • Though when it comes to developing brand new models for the Tesla bot, those could prove to be even bigger than full self-driving.

  • There is so much that the bot will need to know in order to function in the real world and perform human tasks, so Tesla will not be done with AI training any time soon, even if we do get full robotaxis next year.

  • And if that wasn't enough, Elon already has some big ideas for what Tesla could do with all of these high-powered inference computers out there in the world.

  • At the recent Tesla shareholder meeting in June, Elon Musk put out this concept about how the company might be able to use future autonomous vehicle hardware in some unconventional ways.

  • The idea is that if there is a point down the road where tens of millions of Tesla vehicles are out there in the world, and many of them are equipped with this AI5 hardware, then you essentially have a giant, decentralized supercomputer made up of individual cars that are all connected by the Tesla network.

  • Elon speculated that by around the end of the decade, there might be a gigawatt of computing power when all of these Tesla vehicles are combined.

  • Going back to that GigaTexas computer cluster, there's up to 500 megawatts of power, which is half a gigawatt.

  • Anyway, the in-car computer of a robotaxi would be occupied with self-driving tasks most of the time, but for periods when the vehicle is either recharging or just not being used, that computing power might be harnessed to do other important jobs.

  • This is basically the same concept behind Amazon Web Services.

  • This is the business model that actually makes money at Amazon, and the concept was born when the company found that all of the data processing hardware that they required to deal with peak traffic situations like Christmas was just sitting around and not being utilized during the other times of the year.

  • So they started renting out that computer hardware to other companies that needed the processing but didn't have the capital to build their own data center.

  • In theory, Tesla's Robotaxi network could kind of do the same thing, even the consumer vehicles as well.

  • You could check a box on the app that would let Tesla access your in-car computer and maybe you get a cut of the revenue it earns.

  • It's kind of like a crypto mining pool if you've ever seen or tried one of those.
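As a purely hypothetical sketch of how such a pool could work: a coordinator would hand work units only to opted-in cars that are parked or charging, then credit each owner a share of the revenue, much like a mining pool splits payouts. None of the names, statuses, or figures below come from Tesla; they are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Car:
    owner: str
    status: str      # "driving", "parked", or "charging" (invented states)
    opted_in: bool   # the hypothetical "check a box on the app" flag

def dispatch(cars, work_units, revenue_per_unit=0.01):
    """Assign work only to idle, opted-in cars; return owner payouts."""
    idle = [c for c in cars if c.opted_in and c.status != "driving"]
    if not idle:
        return {}
    payouts = {}
    for i, _unit in enumerate(work_units):
        car = idle[i % len(idle)]   # round-robin, like a mining pool
        payouts[car.owner] = payouts.get(car.owner, 0.0) + revenue_per_unit
    return payouts

fleet = [
    Car("alice", "charging", True),
    Car("bob", "driving", True),    # busy self-driving, so skipped
    Car("carol", "parked", True),
]
print(dispatch(fleet, work_units=range(10)))
```

The design choice that matters is the filter: driving cars are never touched, so fleet compute only ever uses capacity that would otherwise sit idle, which is exactly the AWS insight described above.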

  • Okay, so we've talked about AI in cars and autonomous driving, that's all cool, but what about solving the mysteries of the universe?

  • Okay, let's go back to Musk's Gigafactory of Compute, which is not to be confused with his giant computer at the Tesla Gigafactory.

  • This is the new installation in Memphis that will be dedicated to XAI.

  • This is the one that could potentially become the world's largest and most powerful supercomputer by the end of 2025 if you're inclined to follow Elon timelines.

  • In the short term, Elon is targeting 100,000 of the NVIDIA H100 GPUs up and running before the end of this year.

  • That's what he believes the company needs to build out the next generation of their Grok AI language model.

  • This is a chatbot native to the X platform, available to premium subscribers.

  • It's like ChatGPT except Grok has real-time access to every post on X and Grok also has the freedom of language to write swear words and make cringe attempts at humor, which can be a lot of fun.

  • This is just basic-level stuff, though. XAI was able to build out Grok 1 at an incredibly fast pace: it only took about six months from the company being founded to the release of their first product.

  • That said, it still lags behind ChatGPT a little bit in terms of capability, and Grok definitely lacks the name recognition and popularity currently enjoyed by ChatGPT and OpenAI.

  • Grok 2 might be able to start turning the tide.

  • This is the product that XAI is working on currently.

  • The upgrade will allow Grok to both interpret and produce images and visual media so you can have it turn a spreadsheet into a graph or turn a graph into a spreadsheet or identify the content of a photograph or even explain a piece of art or a meme.

  • So far, XAI has been able to do all of this AI training in partnership with Oracle's cloud computing business.

  • Just like we were talking about earlier with renting out data processing, that's exactly what XAI has done here.

  • It looks like they have been renting the equivalent of about 20,000 H100 GPUs from Oracle.

  • Elon has said that the 100,000-H100 cluster will be necessary to train Grok 3. What that will be is still unknown, but we have to assume it would move in the direction of a generalized artificial intelligence: something that can deal with text, sound, images, and video as both input and output media. Probably not by coincidence, that is the exact kind of AI model the Tesla Bot would need to fully function as a productive member of society at some point in the future.

  • Now beyond that is where things start to get weird.

  • Elon's vision of his completed Gigafactory of Compute is now looking like 300,000 units of the B200 GPU, which is Nvidia's next big chip release.

  • It's the new most powerful chip in the world for AI training.

  • These are going to be several times more capable than the H100 chips and Elon wants 300,000 of them and he wants this up and running by 2026.

  • Ostensibly the reason that we've been given for XAI pursuing these massive amounts of computing power is simply the mission to understand the universe.

  • More specifically, XAI is a company working on building artificial intelligence to accelerate human scientific discovery.

  • We are guided by our mission to advance our collective understanding of the universe.

  • Now this is all going to be very expensive.

  • XAI recently completed a $6 billion funding round, which will be in addition to the startup's initial $1 billion seed fund.

  • This could theoretically be enough to cover the cost of their initial 100,000-GPU cluster of H100s, but pricing out the much bigger cluster of the far more powerful B200s, which Nvidia has said would run between $30,000 and $40,000 each, comes to $9 billion in GPU hardware alone even at the low end of that range.
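That hardware figure is easy to check. A quick back-of-envelope calculation, using Nvidia's quoted per-chip price range cited above:

```python
# Back-of-envelope check of the B200 cluster cost quoted above.
units = 300_000                           # planned B200 GPU count
price_low, price_high = 30_000, 40_000    # Nvidia's quoted per-chip range, USD

cost_low = units * price_low
cost_high = units * price_high
print(f"${cost_low / 1e9:.0f}B to ${cost_high / 1e9:.0f}B in GPUs alone")
# → $9B to $12B in GPUs alone
```

And that covers only the chips, not networking, storage, buildings, power, or cooling.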

  • So XAI still has a long way to go.

  • They're going to need to keep drawing massive amounts of funding from groups with very deep pockets and the competition is not sleeping either.

  • Microsoft and OpenAI are said to be considering spending up to $100 billion on a 5 gigawatt AI data center known as Stargate.

  • This would require its own nuclear power plant to operate at full capacity, which is probably why Amazon just recently purchased a Pennsylvania data center site that is literally right next to a nuclear power plant.

  • So even if Musk's Gigafactory of Compute does get built on time, it certainly has a chance of becoming the world's most powerful supercomputer, but it won't wear that crown for long.

  • In the immortal words of Fall Out Boy, this ain't a scene, it's a goddamn arms race.

  • Thanks again to CyberGhost for sponsoring this video.

  • They protect your data while you browse and give you full access to blocked online content for just over $2 a month.

  • Click the link in the video description to find their special offer with 84% off and a 45-day money-back guarantee.
