
  • Inside a nondescript building in the heart of San Francisco, one of the world's buzziest startups is making our AI-powered future feel more real than ever before.

  • They're behind two monster hits, ChatGPT and DALL-E, and somehow beat the biggest tech giants to market, kicking off a competitive race that's forced them all to show us what they've got.

  • But how did this under-the-radar startup pull it off?

  • We're inside OpenAI, and we're going to get some answers.

  • Is it magic?

  • Is it just algorithms?

  • Is it going to save us or destroy us?

  • Let's go find out.

  • I love the plants.

  • It feels so alive.

  • So amazing.

  • I love it here.

  • It's giving me very Westworld spa vibes.

  • It's almost like suspended in space and time a little bit.

  • Yeah, it is a little bit of futuristic feel.

  • This is one of the most introspective minds at OpenAI.

  • We all know Sam Altman, the CEO.

  • But Mira Murati is a chief architect behind OpenAI's strategy.

  • This looks like the OpenAI logo.

  • It is.

  • Ilya actually painted this.

  • Ilya, the chief scientist.

  • Yes.

  • What is the flower meant to symbolize?

  • My guess is that it's AI that loves humanity.

  • We're very focused on dealing with the challenges of hallucination, truthfulness, reliability, alignment of these models.

  • Has anyone left because they're like, you know what, I disagree?

  • There have been, over time, people that left to start other organizations because of disagreements on the strategy around deployment.

  • And how do you find common ground when disagreements do arise?

  • You want to be able to have this constant dialogue and figure out how to systematize these concerns.

  • What is the job of a CTO?

  • It's a combination of guiding the teams on the ground, thinking about longer-term strategy, figuring out our gaps, and making sure that the teams are well-supported to succeed.

  • Yeah.

  • Sounds like a big job.

  • Solving impossible problems.

  • Solving impossible problems, yeah.

  • When you were making the decision about releasing ChatGPT into the wild, I'm sure there was like a go or no-go moment.

  • Take me back to that day.

  • We had ChatGPT for a while, and we sort of hit a point where we could really benefit from having more feedback from how people are using it, what are the risks, what are the limitations, and learn more about this technology that we have created and start bringing it into the public consciousness.

  • It became the fastest-growing tech product in history.

  • It did.

  • Did that surprise you?

  • I mean, what was your reaction to the world's reaction?

  • We were surprised by how much it captured the imaginations of the general public and how much people just loved spending time talking to this AI system and interacting with it.

  • ChatGPT can now mimic a human.

  • It can write.

  • It can code.

  • At the most basic level, how does this all happen?

  • So, ChatGPT is a neural network that has been trained on a huge amount of data on a massive supercomputer, and the goal during this training process was to predict the next word in a sentence, and it turns out that as you train larger and larger models on more and more data, the capabilities of these models also increase.

  • They become more powerful, more helpful, and as you invest more on alignment and safety, they become more reliable and safe over time.
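The training objective she describes, predicting the next word, can be illustrated with a toy model. This is a minimal sketch, not OpenAI's actual training code; a simple bigram counter stands in for the neural network:

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count which word follows which: a toy stand-in for the
    'predict the next word' training objective described above."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed next word, or None."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # prints "cat": it follows "the" twice
```

Models like GPT-4 replace this counting table with a neural network trained over vast amounts of text, which is why, as she notes, scaling up the model and the data keeps increasing capability.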

  • OpenAI has kind of turbocharged this competitive frenzy.

  • Do you think you can beat Google at its own game?

  • Do you think you can take significant market share in search?

  • We didn't set out to dominate search.

  • What ChatGPT offers is a different way to understand information, and you could be, you know, searching, but you're searching in a much more intuitive way versus keyword-based.

  • I think the whole world is sort of now moving in this direction.

  • The air of confidence, obviously, that ChatGPT sometimes delivers an answer with.

  • Why not just sometimes say, I don't know?

  • The training goal is just to predict the next word, not to do so reliably or safely.

  • When you have such general capabilities, it's very difficult to handle some of the limitations, such as what is correct.

  • Some of these texts and some of the data is biased.

  • Some of it may be incorrect.

  • Isn't this going to accelerate the misinformation problem?

  • I mean, we haven't been able to crack it on social media for like a couple of decades.

  • Misinformation is a really complex, hard problem.

  • Right now, one of the things that I'm most worried about is the ability of models like GPT-4 to make up things.

  • We refer to this as hallucinations.

  • So they will convincingly make up things and it requires being aware and just really knowing that you cannot fully, blindly rely on what the technology is providing as an output.

  • I want to talk about this term hallucination because it's a very human term.

  • Why use such a human term for basically an AI that's just making mistakes?

  • A lot of these general capabilities are actually quite human-like.

  • Sometimes when we don't know the answer to something, we will just make up an answer.

  • We will rarely say, I don't know.

  • And so, there is a lot of human hallucination in a conversation and sometimes we don't do it on purpose.

  • Should we be worried about AI though that feels more and more human?

  • Like, should AI have to identify itself as artificial when it's interacting with us?

  • I think it's a different kind of intelligence.

  • It is important to distinguish output that's been provided by a machine versus another human.

  • But we are moving towards a world where we're collaborating with these machines more and more and so output will be hybrid.

  • All of the data that you're training this AI on, it's coming from writers, it's coming from artists.

  • How do you think about giving value back to those people when these are also people who are worried about their jobs going away?

  • I don't know exactly how it would work in practice that you can sort of account for information created by everyone on the internet.

  • I think there are definitely going to be jobs that will be lost and jobs that will be changed as AI continues to advance and integrate in the workforce.

  • Prompt engineering is a job today.

  • That's not something that we could have predicted.

  • Think of prompt engineers like AI whisperers.

  • They're highly skilled at selecting the right words to coax AI tools into generating the most accurate and illuminating responses.

  • It's a new job born from AI that's fetching hundreds of thousands of dollars a year.

  • What are some tips to being an ace prompt engineer?

  • You know, it's this ability to really develop an intuition for how to get the most out of the model.

  • How to prompt it in the right ways, give it enough context for what you're looking for.
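The tips above, clear instructions plus enough context, can be made concrete. The helper below is a hypothetical illustration, not an OpenAI tool; it just shows one common way prompt engineers structure a request:

```python
def build_prompt(role, context, task, output_format):
    """Assemble a structured prompt with labeled sections:
    who the model should act as, what it knows, what to do,
    and how to shape the answer."""
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Task: {task}\n\n"
        f"Format: {output_format}"
    )

prompt = build_prompt(
    role="a careful fact-checker",
    context="A transcript claims ChatGPT was the fastest-growing tech product in history.",
    task="Verify each claim and flag anything you cannot confirm.",
    output_format="a bulleted list, one claim per bullet",
)
print(prompt)
```

Sending such a prompt to a model is a separate step; the point is only that spelling out role, context, task, and format tends to produce more accurate and better-targeted answers than a bare question.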

  • One of the things that we talked about earlier was hallucinations and these large language models not having the ability to always be highly accurate.

  • So I'm asking the model with a browsing plugin to fact check this information.

  • And it's now browsing the web.

  • So there's this report that these workers in Kenya were getting paid $2 an hour to do the work on the backend to make answers less toxic.

  • And my understanding is this work is, it can be difficult, right?

  • Because you're reading texts that might be disturbing and trying to clean them up.

  • So we need to use contractors sometimes to scale.

  • We chose that particular contractor because of their known safety standards.

  • And since then we've stopped working with them.

  • But as you said, this is difficult work and we recognize that and we have mental health standards and wellness standards that we share with contractors.

  • I think a lot about my kids and them having relationships with AI someday.

  • How do you think about what the limits should be and what the possibilities should be when you're thinking about a child?

  • I think we should be very careful in general with putting very powerful systems in front of more vulnerable populations.

  • There are certainly checks and balances in place because it's still early and we still don't understand all the ways in which this could affect people.

  • There's all this talk about, you know, relationships and AI.

  • Like, could you see yourself developing a relationship with an AI?

  • I'd say yes as a reliable tool that enhances my life, makes my life better.

  • As we ponder the existential idea that we might all have relationships with AI someday, there's an AI gold rush happening in Silicon Valley.

  • Venture capitalists are pouring money into AI startups of every kind, hoping to find the next big thing.

  • Reid Hoffman, the co-founder of LinkedIn and an early investor in Facebook, knows a thing or two about striking gold.

  • He was an early OpenAI backer and is, in a way, trying to take society's hand and guide us all through the age of AI.

  • I mean, gosh, 12 years we've been talking.

  • Maybe longer.

  • That's awesome.

  • A long time.

  • You have been on the ground floor of some of the biggest tech platform shifts in history.

  • The beginnings of the internet, mobile.

  • Do you think AI is gonna be even bigger?

  • I think so.

  • It builds on the internet, mobile, cloud, data.

  • All of these things come together to make AI work.

  • And so that causes it to be the crescendo, the addition to all of this.

  • I mean, one of the problems with the current discourse is that it's too much of the fear-based versus hope-based.

  • Imagine a tutor on every smartphone for every child in the world.

  • That's possible.

  • That's line of sight from what we see with current AI models today.

  • You coined this term, blitzscaling.

  • Blitzscaling, in its precise definition, is prioritizing speed over efficiency in an environment of uncertainty.

  • How do you go as fast as possible in order to be the first to scale?

  • Does AI blitzscale?

  • Well, it certainly seems like it today, doesn't it?

  • I think the speed at which we will integrate it into our lives will be faster than we integrated the iPhone into our lives.

  • There's gonna be a co-pilot for every profession.

  • And if you think about that, that's huge.

  • And not just professional activities, because it's gonna write my kids' papers, right?

  • My kids' high school papers?

  • Yes, although the hope is that in the interaction with it, they'll learn to create much more interesting papers.

  • You and Elon Musk go way back.

  • He co-founded OpenAI with Sam Altman, the CEO of OpenAI.

  • You and I have talked a lot over the years about how you have been sort of this node in the PayPal mafia, and you can talk to everyone and maybe you disagree, but you are all still friends.

  • What did Elon say that got you interested so early?

  • Part of the reason I got back into AI, and I was part of sitting around the table in the crafting of OpenAI, was that Elon came to me and said, look, this AI thing is coming.

  • Once I started digging into it, I realized that this pattern, that we're gonna see the next generation of amazing capabilities coming from these computational devices.

  • And then, one of the things I had been arguing with Elon about at the time was his constant use of the word robocalypse, because we, as human beings, tend to be more easily and quickly motivated by fear than by hope.

  • So you're using the term robocalypse, and everyone imagines the Terminator and all the rest.

  • Sounds pretty scary.

  • It sounds very scary.

  • Robocalypse doesn't sound like something we want.

  • Yeah, stop saying that.

  • Because actually, in fact, the chance that I could see anything like a robocalypse happening is so de minimis relative to everything else.

  • So you did come together on OpenAI.

  • How did that happen?

  • I think it started with Elon and Sam having a bunch of conversations.

  • And then, since I know both of them quite well, I got called in.

  • And I was like, look, I think this could really make sense.

  • Something should be the counterweight to all of the natural work that's gonna happen within commercial realms.

  • How do we make sure that one company doesn't dominate the industry, but the tools are provided across the industry so innovation can benefit from startups and all the rest?

  • I was like, great.

  • And let's do this thing, OpenAI.

  • I did ask ChatGPT what questions I should ask you.

  • I thought its questions were pretty boring.

  • Yes.

  • Your answers were pretty boring, too.

  • So we're not getting replaced anytime soon.

  • But clearly, this has really struck a nerve.

  • There are people out there who are gonna fall for it.

  • Yes.

  • Shouldn't we be worried about that?

  • Okay, so everyone's encountered a crazy person who's drunk off their ass at a cocktail party who says really odd things, or at least every adult has.

  • And, you know, that's not like the world didn't end.

  • Right?

  • We do have to pay attention to areas that are harmful.

  • Like, for example, someone's depressed, they're thinking about self-harm.

  • You want all channels by which they could engage in self-harm to be limited.

  • That isn't just chatbots.

  • That could be communities of human beings.

  • That could be search engines.

  • You have to pay attention to all the dimensions of it.

  • How are we overestimating AI?

  • It still doesn't really do something that I would say is original to an expert.

  • So, for example, one of the questions I asked was how would Reid Hoffman make money by investing in artificial intelligence?

  • And the answer it gave me was a very smart, very well-written answer that would have been written by a professor at a business school who didn't understand venture capital.

  • Right?

  • So it seems smart.

  • Would study large markets.

  • Would realize what products would be substituted in the large markets.

  • Would find teams to go do that and invest in them.

  • And this was all written, very credible, and completely wrong.

  • The newest edge of the information is still beyond these systems.

  • Billions of dollars are going into AI.

  • My inbox is filled with AI pitches.

  • Last year it was crypto and Web3.

  • How do we know this isn't just the next bubble?

  • I do think that the generative AI is the thing that has the broadest touch of everything.

  • Now, which places are the right places to invest?

  • I think those are still things we're working on now, obviously, as venture capitalists.

  • Part of what we do is we try to figure that out in advance, you know, years before other people see it coming.

  • But I think that there will be massive new companies built.

  • It does seem, in some ways, like a lot of AI is being developed by an elite group of companies and people.

  • Is that something that you see happening?

  • In some ideal universe, you'd say, for a technology that would impact billions of people, somehow billions of people should directly be involved in creating it.

  • But that's not how any technology anywhere in history gets built.

  • And there's reasons you have to build it at speed.

  • But the question is, how do you get the right conversations and the right issues on the table?

  • So do you see an AI mafia forming?

  • I definitely think that there is, because you're referring to the PayPal mafia.

  • Of course.

  • I definitely think that there's a network of folks who have been deeply involved over the last few years and will have a lot of influence on how the technology happens.

  • Do you think AI will shake up the big tech hierarchy significantly?

  • What it certainly does is it creates a wave of disruption.

  • For example, with these large language models in search, what do you want?

  • Do you want 10 blue links?

  • Or do you want an answer?

  • In a lot of search cases, you want an answer.

  • And a generated answer that's like a mini Wikipedia page is awesome.

  • That's a shift.

  • So I think we'll see a profusion of startups doing interesting things in this.

  • But can the next Google or Facebook really emerge if Google, Meta, Apple, Amazon, and Microsoft are running the playbook?

  • Do I think there will be another one to three companies the size of the five big tech giants emerging, possibly from AI?

  • Absolutely, yes.

  • Now, does that mean that one of them is gonna collapse?

  • No, not necessarily.

  • And it doesn't need to.

  • The more that we have, the better.

  • So what are the next big five?

  • Well, that's what we're trying to invest in.

  • You're on the board of Microsoft.

  • Obviously, Microsoft is making a big AI push.

  • Did you bring Satya and Sam or have any role in bringing Satya and Sam closer together?

  • Because Microsoft obviously has $10 billion now in OpenAI.

  • Well, I think I could, I probably have a, you know, both of them are close to me and know me and trust me well.

  • So I think I've helped facilitate understanding and communications.

  • Elon left OpenAI years ago and pointed out that it's not as open as it used to be.

  • He said he wanted it to be a nonprofit counterweight to Google.

  • Now, it's a closed source maximum profit company effectively controlled by Microsoft.

  • Does he have a point?

  • Well, he's wrong on a number of levels there.

  • So one is it's run by a 501(c)(3).

  • It is a nonprofit.

  • But it does have a for-profit part.

  • The commercial system, which is all carefully structured, is there to bring in capital to support the nonprofit mission.

  • Now, to get to the question of, for example, openness.

  • So DALL-E was ready for four months before it was released.

  • Why was it delayed for four months?

  • Because it was doing safety training.

  • We said, well, we don't wanna have this being used to create child sexual abuse material.

  • We don't wanna have this being used for assaulting individuals or doing deep fakes.

  • So we're not gonna open source it.

  • We're gonna release it through an API so we can be seeing what the results are and making sure it doesn't do any of these harms.

  • So it's open because it has open access to the APIs, but it's not open because it's open source.

  • There are folks out there who are angry, actually, about OpenAI's branching out from nonprofit to for-profit.

  • Is there a bit of a bait and switch there?

  • The cleverness that Sam and everyone else figured out is they could say, look, we can do a market commercial deal where we say we'll give you commercial licenses to parts of our technology in various ways.

  • And then we can continue our mission of beneficial AI.

  • The AI graveyard is filled with algorithms that got into trouble.

  • How can we trust OpenAI or Microsoft or Google or anyone to do the right thing?

  • Well, we need to be more transparent.

  • But on the other hand, of course, a problem exactly as you're alluding to is people say, well, the AI should say that or shouldn't say that.

  • We can't even really agree on that ourselves.

  • So we don't want that to be litigated by other people.

  • We want that to be a social decision.

  • So how does this shake out globally?

  • We should be trying to build the industries of the future.

  • That's what's the most important thing.

  • And it's one of the reasons why I tend to very much speak against people like, oh, we should be slowing down.

  • Do you have any intention of slowing down?

  • We've been very vocal about these risks for many, many years.

  • One of them is acceleration.

  • And I think that's a significant risk that we as a society need to grapple with.

  • Building safe AI systems that are general is very complex.

  • It's incredibly hard.

  • So what does responsible innovation look like to you?

  • Would you support, for example, a federal agency like the FDA that vets technology like it vets drugs?

  • I think some sort of trusted authority that can audit the systems based on some agreed upon principles would be very helpful.

  • I've heard AI experts talk about the potential for the good future versus the bad future.

  • In the bad future, there's talk about this leading to human extinction.

  • Are those people wrong?

  • There is certainly a risk that when we have these AI systems that are able to set their own goals, they decide that their goals are not aligned with ours.

  • And they do not benefit from having us around.

  • And could lead to human extinction.

  • That is a risk.

  • I don't think this risk has gone up or down from the things that have been happening in the past few months.

  • I think it's certainly been quite hyped.

  • And there is a lot of anxiety around it.

  • If we're talking about the risk for human extinction, have you had a moment where you're just like, wow, this is big?

  • I think a lot of us at OpenAI joined because we thought that this would be the most important technology that humanity would ever create.

  • But of course, the risks on the other hand are also pretty significant.

  • And this is why we're here.

  • Do OpenAI employees still vote on AGI and when it will happen?

  • I actually don't know.

  • What is your prediction about AGI now?

  • And how far away it really is?

  • We're still quite far away from being at a point where these systems can make decisions autonomously and discover new knowledge.

  • But I think I have more certainty around the advent of having powerful systems in our future.

  • Should we even be driving towards AGI?

  • And do humans really want it?

  • Advancements in society come from pushing human knowledge.

  • Now that doesn't mean that we should do so in careless and reckless ways.

  • I think there are ways to guide this development versus bring it to a screeching halt because of our potential fears.

  • So the train has left the station and we should stay on it.

  • That's one way to put it.
