  • We said from the very beginning we were going to go after AGI at a time when, in the field, you weren't allowed to say that.

  • Because that just seemed impossibly crazy.

  • I remember a rash of criticism for you guys at that moment.

  • We really wanted to push on that, and we were far less resourced than DeepMind and others.

  • And so we said, okay, they're going to try a lot of things, and we've just got to pick one and really concentrate, and that's how we can win here.

  • Most of the world still does not understand the value of a fairly extreme level of conviction on one bet.

  • That's why I'm so excited for startups right now.

  • It is because the world is still sleeping on all of this to such an astonishing degree.

  • We have a real treat for you today.

  • Sam Altman, thanks for joining us.

  • Thanks, Gary.

  • This is actually a reboot of your series, How to Build the Future.

  • And so welcome back to the series that you started.

  • That was like eight years ago. I was trying to think about that.

  • Yeah, that's wild.

  • I'm glad it's being rebooted.

  • That's right.

  • Let's talk about your newest essay on the age of intelligence.

  • You know, is this the best time ever to be starting a technology company?

  • Let's at least say it's the best time yet.

  • Hopefully there'll be even better times in the future.

  • I sort of think with each successive major technological revolution, you've been able to do more than you could before.

  • And I would expect the companies to be more amazing and impactful and everything else.

  • So yeah, I think it's the best time yet.

  • Big companies have the edge when things are like moving slowly and not that dynamic.

  • And then when something like this or mobile or the internet or semiconductor revolution happens, or probably like back in the days of the Industrial Revolution, that was when upstarts had their edge.

  • So yeah, this is one of those, and it's been a while since we've had one of these.

  • So this is like pretty exciting.

  • In the essay, you actually say a really big thing, which is ASI, superintelligence, is actually thousands of days away.

  • Maybe. I mean, that's our hope, our guess, whatever.

  • But that's a very wild statement.

  • Yeah. Tell us about it.

  • I mean, that's big.

  • That is really big.

  • I can see a path where the work we are doing just keeps compounding.

  • And the rate of progress we've made over the last three years continues for the next three or six or nine or whatever.

  • You know, nine years would be like 3,500 days or whatever.

  • If we can keep this rate of improvement or even increase it, like that system will be quite capable of doing a lot of things.

  • I think already even a system like O1 is capable of doing like quite a lot of things.

  • From just like a raw cognitive IQ on a closed-ended, well-defined task in a certain area.

  • I'm like, O1 is like a very smart thing.

  • And I think we're nowhere near the limit of progress.

  • I mean, that was an architecture shift that sort of unlocked a lot.

  • And what I'm sort of hearing is that these things are going to compound.

  • We could hit some like unexpected wall or we could be missing something.

  • But it looks to us like there's a lot of compounding in front of us still to happen.

  • I mean, this essay is probably the most techno-optimist of almost anything I've seen out there.

  • Some of the things we get to look forward to, fixing the climate, establishing a space colony, the discovery of all of physics, near limitless intelligence and abundant energy.

  • I do think all of those things and probably a lot more we can't even imagine are maybe not that far away.

  • And I think it's like tremendously exciting that we can talk about this even semi-seriously now.

  • One of the things that I always have loved most about YC is it encourages slightly implausible degrees of techno-optimism.

  • And just a belief that like, ah, you can figure this out.

  • And, you know, in a world that I think is like sort of consistently telling people, this is not going to work, you can't do this thing, you can't do that.

  • I think the kind of early PG spirit of just encouraging founders to like think a little bit bigger is like, it is a special thing in the world.

  • The abundant energy thing seems like a pretty big deal.

  • You know, there's sort of path A and path B, you know, if we do achieve abundant energy, it seems like this is a real unlock.

  • Almost any work, not just, you know, knowledge work, but actually like real physical work could be unlocked with robotics and with language and intelligence on tap.

  • Like, there's a real age of abundance.

  • I think these are like the two key inputs to everything else that we want.

  • There's a lot of other stuff, of course, that matters, but the unlock that would happen if we could just get truly abundant intelligence and truly abundant energy, what we'd be able to make happen in the world, both coming up with better ideas more quickly and then making them happen in the physical world, to say nothing of the fact that it'd be nice to be able to run lots of AI, and that takes energy too.

  • I think that would be a huge unlock. I'm not sure whether to be surprised that it's all happening at the same time, or if this is just the natural effect of an increasing rate of technological progress, but it's certainly a very exciting time to be alive and a great time to do a startup.

  • Well, so we sort of walked through this age of abundance, you know, maybe robots can actually manufacture, do anything.

  • Almost all physical labor can then result in material progress, not just for the most wealthy, but for everyone.

  • You know, what happens if we don't unleash unlimited energy?

  • If, you know, there's some physical law that prevents us from exactly that?

  • Solar plus storage is on a good enough trajectory that even if we don't get a big nuclear breakthrough, we would be like okay-ish.

  • But for sure it seems that driving the cost of energy down and the abundance of it up has like a very direct impact on quality of life.

  • And eventually we'll solve every problem in physics, so we're going to figure this out, it's just a question of when.

  • And we deserve it.

  • You know, someday we'll be talking not about fusion or whatever, but about the Dyson sphere, and that'll be awesome too.

  • Yeah.

  • This is a point in time, whatever feels like abundant energy to us will feel like not nearly enough to our great-grandchildren.

  • And there's a big universe out there with a lot of matter.

  • Yeah.

  • Wanted to switch gears a little bit to sort of, earlier you were mentioning Paul Graham, who brought us all together, really created Y Combinator.

  • He likes to tell the story of how, you know, how you got into YC was actually, you were a Stanford freshman.

  • And he said, you know what, this is the very first YC batch in 2005.

  • And he said, you know what, you're a freshman and YC will still be here next time, you should just wait.

  • And you said, I'm a sophomore and I'm coming.

  • And you're widely known in our community as, you know, one of the most formidable people.

  • Where do you think that came from?

  • That one story, I'd be happy if that like drifted off.

  • Well, now it's purely immortalized here.

  • Here it is.

  • My memory of that is that, like, I needed to reschedule an interview one day or something.

  • And PG tried to, like, say, like, just do it next year or whatever.

  • And then I think I said some nicer version of, I'm a sophomore and I'm coming.

  • But yeah, you know, these things get slightly apocryphal.

  • It's funny, I don't, and I say this with no false modesty, I don't like identify as a formidable person at all.

  • In fact, I think there's a lot of ways in which I'm really not.

  • I do have a little bit of a, just like, I don't see why things have to be the way they are.

  • And so I'm just going to, like, do this thing that from first principles seems like fine.

  • And I always felt a little bit weird about that.

  • And then I remember one of the things I thought was so great about YC, and still that I care so much about YC, is it was like a collection of the weird people who are just like, I'm just going to do my thing.

  • The part of this that does resonate as a, like, accurate self-identity thing is,

  • I do think you can just do stuff or try stuff a surprising amount of the time.

  • And I think more of that is a good thing.

  • And then I think one of the things that both of us found at YC was a bunch of people who all believed that you could just do stuff.

  • For a long time, when I was trying to, like, figure out what made YC so special,

  • I thought that it was like, okay, you have this, like, very amazing person telling you, you can do stuff, I believe in you.

  • And as a young founder, that felt so special and inspiring.

  • Of course it is.

  • But the thing that I didn't understand until much later was it was the peer group of other people doing that.

  • And one of the biggest pieces of advice I would give to young people now is finding that peer group as early as you can was so important to me.

  • And I didn't realize it was something that mattered.

  • I kind of thought, ah, like, I have, you know, I'll figure it out on my own.

  • But, man, being around, like, inspiring peers, so, so valuable.

  • What's funny is both of us did spend time at Stanford.

  • I actually did graduate, which is, I probably shouldn't have done that, but I did.

  • You pursued the path of, you know, far greater return by dropping out.

  • But, you know, that was a community that purportedly had a lot of these characteristics.

  • But I was still beyond surprised at how much more potent it was with a room full of founders.

  • It was, I was just going to say the same thing.

  • Actually, I liked Stanford a lot, but I did not feel surrounded by people that made me, like, want to be better and more ambitious and whatever else.

  • And to the degree I did, the thing you were competing with your peers on was, like, who was going to get the internship at which investment bank?

  • Which I'm embarrassed to say, I fell in that trap.

  • This is, like, how powerful peer groups are.

  • It's a very easy decision to not go back to school after, like, seeing what the, like, YC vibe was like.

  • Yeah.

  • There's a powerful quote by Carl Jung that I really love.

  • It's, you know, the world will come and ask you who you are, and if you don't know, it will tell you.

  • It sounds like being very intentional about who you want to be and who you want to be around as early as possible is very important.

  • Yeah, this was definitely one of my takeaways, at least for myself: no one is immune to peer pressure.

  • And so all you can do is, like, pick good peers.

  • Yeah.

  • Obviously, you know, you went on to create Loopt, you know, sell that, go to Green Dot.

  • And then we ended up getting to work together at YC.

  • Talk to me about, like, the early days of YC Research.

  • Like, one of the really cool things that you brought to YC was this experimentation.

  • And you sort of, I mean, I remember you coming back to partner rooms and talking about some of the rooms that you were getting to sit in with, like, the Larry and Sergeys of the world.

  • And that, you know, AI was sort of at the tip of everyone's tongue because it felt so close.

  • And yet it was, you know, that was 10 years ago.

  • The thing I always thought would be the coolest retirement job was to get to, like, run a research lab.

  • And it was not specific to AI at that time.

  • When we started talking about YC Research, it did end up funding, like, a bunch of different efforts.

  • And I wish I could tell the story of, like, if it was obvious that AI was going to work and be the thing.

  • But, like, we tried a lot of bad things, too, around that time.

  • I read a few books on, like, the history of Xerox PARC and Bell Labs and stuff.

  • And I think there were a lot of people, like, it was in the air of Silicon Valley at the time that we need to, like, have good research labs again.

  • And I just thought it would be so cool to do.

  • And it was sort of similar to what YC does in that you're going to, like, allocate capital to smart people.

  • And sometimes it's going to work and sometimes it's not going to.

  • And I just wanted to try it.

  • AI for sure was having a mini moment.

  • This was, like, kind of late 2014, 2015, early 2016. There was, like, this superintelligence discussion; the book Superintelligence was happening.

  • Bostrom.

  • Yeah.

  • DeepMind had had a few, like, impressive results, but in a little bit of a different direction.

  • You know, I had been an AI nerd forever.

  • So I was like, oh, it'd be so cool to try to do something.

  • But it was very hard to say.

  • Was ImageNet out yet?

  • ImageNet was out.

  • Yeah.

  • Yeah.

  • For a while at that point.

  • So you could tell if it was a hot dog or not.

  • You could sometimes.

  • Yeah.

  • That was getting there.

  • Yeah.

  • You know, how did you identify the initial people you wanted involved in, you know, YC Research and OpenAI?

  • I mean, Greg Brockman was early.

  • In retrospect, it feels like this movie montage.

  • And there were, like, all of these scenes, you know, like at the beginning of a bank heist movie, when you're, like, driving around to find the people and whatever.

  • And they're like, you son of a bitch.

  • I'm in.

  • Right.

  • Like, Ilya, I, like, heard he was really smart.

  • And then I watched some video of his.

  • And he's extremely smart.

  • Like, true, true, genuine genius and visionary.

  • But also, he has this incredible presence.

  • And so I watched this video of his on YouTube or something.

  • I was like, I got to meet that guy.

  • And I emailed him.

  • He didn't respond.

  • So I just, like, went to some conference he was speaking at.

  • And we met up.

  • And then after that, we started talking a bunch.

  • And then, like, Greg, I had known a little bit from the early Stripe days.

  • What was that conversation like, though?

  • It's like, I really like what your idea is about AI.

  • And I want to start a lab.

  • Yes.

  • And one of the things that worked really well in retrospect was we said from the very beginning we were going to go after AGI at a time when, in the field, you weren't allowed to say that because that just seemed impossibly crazy and, you know, borderline irresponsible to talk about.

  • So that got his attention immediately.

  • It got all of the good young people's attention and the derision, whatever that word is, of the mediocre old people.

  • And I felt like somehow that was, like, a really good sign and really powerful.

  • And we were, like, this ragtag group of people.

  • I mean, I was the oldest by a decent amount.

  • I was, like, I guess I was 30 then.

  • And so you had, like, these people who were, like, those are these irresponsible young kids who don't know anything about anything.

  • And they're, like, saying these ridiculous things.

  • And the people who that was really appealing to, I guess, are the same kind of people who would have said, like, it's a, you know, I'm a sophomore and I'm coming or whatever.

  • And they were, like, let's just do this thing.

  • Let's take a run at it.

  • And so we kind of went around and met people one by one and then in different configurations of groups.

  • And it kind of came together, in fits and starts, over the course of, like, nine months.

  • And then it started, I mean.

  • And then it started.

  • It started happening.

  • And one of my favorite, like, memories of all of OpenAI was Ilya had some reason, with Google or something, that we couldn't start yet.

  • We announced in December of 2015, but we couldn't start until January of 2016.

  • So, like, January 3rd, something like that of 2016, like very early in the month, people come back from the holidays.

  • And we go to Greg's apartment.

  • Maybe there's 10 of us, something like that.

  • And we sit around.

  • And it felt like we had done this monumental thing to get it started.

  • And everyone's like, so what do we do now?

  • What a great moment.

  • It reminded me of when startup founders work really hard to, like, raise a round.

  • And they think, like, oh, I accomplished this.

  • We did it.

  • We did it.

  • And then you sit down and say, like, fuck, now we got to, like, figure out what we're going to do.

  • It's not time for popping champagne.

  • That was actually the starting gun.

  • And now we got to run.

  • Yeah.

  • And you have no idea how hard the race is going to be.

  • It took us a long time to figure out what we're going to do.

  • But one of the things that I'm really amazingly impressed by, Ilya in particular, but really all of the early people about, is although it took a lot of twists and turns to get here, the big picture of the original ideas was just so incredibly right.

  • And so they were, like, up on, like, one of those flip charts or whiteboards, I don't remember which, in Greg's apartment.

  • And then we went off and, you know, did some other things that worked or didn't work or whatever.

  • And some of them did.

  • And eventually now we have this, like, system.

  • And it feels very crazy and very improbable looking backwards that we went from there to here with so many detours on the way, but got where we were pointed.

  • Was deep learning even on that flip chart initially?

  • Yeah.

  • I mean, more specifically than that, like, do a big unsupervised model and then solve RL was on that flip chart.

  • One of the flip charts from a very, this is before Greg's apartment, but from a very early offsite, I think this is right.

  • I believe there were three goals for the effort at the time.

  • It was, like, figure out how to do unsupervised learning, solve RL, and never get more than 120 people.

  • Missed on the third one, but the, like, the predictive direction of the first two is pretty good.

  • So deep learning.

  • Then the second big one sounded like scaling, like the idea that you could scale.

  • That was another heretical idea that people actually found even offensive.

  • I remember a rash of criticism for you guys at that moment.

  • When we started, yeah, the core beliefs were deep learning works and it gets better with scale.

  • And I think those were both somewhat heretical beliefs.

  • At the time, we didn't know how predictably better it got with scale.

  • That didn't come until a few years later.

  • It was a hunch first and then you got the data to show how predictable it was.

  • But people already knew that if you made these neural networks bigger, they got better.

  • Like, that was, we were sure of that before we started.

  • And what took, the word that keeps coming to mind is, like, a religious level of belief, was that that wasn't going to stop.

  • Everybody had some reason of, oh, it's not really learning.

  • It's not really reasoning.

  • It can't really do this.

  • It's, you know, it's like a parlor trick.

  • And these were, like, the eminent leaders of the field.

  • And more than just saying, you're wrong, they were like, you're wrong.

  • And this is, like, a bad thing to believe or a bad thing to say.

  • It was, you know, this is like, you're going to perpetuate an AI winter.

  • You're going to do this.

  • You're going to do that.

  • And we were just, like, looking at these results and saying, they keep getting better.

  • Then we got the scaling results.

  • It just kind of breaks my intuition, even now.

  • And at some point, you have to just look at the scaling laws and say, we're going to keep doing this.

  • And this is what we think it'll do.
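
To make that "look at the scaling laws" idea concrete, here is a minimal sketch of the kind of extrapolation being described, assuming the commonly published power-law form L(C) = a * C^(-alpha); the data points and numbers below are made up for illustration and are not OpenAI's actual measurements or method.

```python
# Minimal sketch: fit a power-law "scaling law" to small-scale runs and extrapolate.
# The functional form follows the widely published literature; the data are invented.
import numpy as np

# Hypothetical (training compute in FLOPs, evaluation loss) pairs from small runs.
compute = np.array([1e18, 1e19, 1e20, 1e21])
loss = np.array([4.0, 3.3, 2.8, 2.4])

# log(loss) = log(a) - alpha * log(compute): a straight line in log-log space.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
alpha, a = -slope, np.exp(intercept)

# "This is what we think it'll do": predict the loss of a much larger run.
target_compute = 1e24
predicted_loss = a * target_compute ** (-alpha)
print(f"fitted exponent alpha ~ {alpha:.3f}")
print(f"predicted loss at {target_compute:.0e} FLOPs ~ {predicted_loss:.2f}")
```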

  • And it also, it was starting to feel at that time like something about deep learning was just this emergent phenomenon that was really important.

  • And even if we didn't understand all of the details in practice yet, which obviously we didn't and still don't, that there was something really fundamental going on.

  • The PG-ism for this was, we had, like, discovered a new square in the periodic table.

  • And so we just, we really wanted to push on that.

  • And we were far less resourced than DeepMind and others.

  • And so we said, okay, they're going to try a lot of things.

  • And we've just got to pick one and really concentrate.

  • And that's how we can win here, which is totally the right startup takeaway.

  • And so we said, well, we don't know what we don't know.

  • We do know this one thing works.

  • So we're going to really concentrate on that.

  • And I think some of the other efforts were trying to outsmart themselves in too many ways.

  • And we just said, we'll just, we'll do the thing in front of us and keep pushing on it.

  • Scale is this thing that I've always been interested in, kind of just the emergent properties of scale for everything.

  • For startups, turns out for deep learning models, for a lot of other things.

  • I think it's a very underappreciated property and thing to go after.

  • And I think it's, you know, when in doubt, if you have something that seems like it's getting better with scale, I think you should scale it up.

  • I think people want things to be, you know, less is more.

  • But actually, more is more.

  • More is more.

  • We believed in that.

  • We wanted to push on it.

  • I think one thing that is not maybe that well understood about OpenAI is we had just this, even when we were like pretty unknown, we had a crazy talented team of researchers.

  • You know, if you have like the smartest people in the world, you can push on something really hard.

  • Yeah, and they're motivated.

  • And you created sort of one of the sole places in the world where they could do that.

  • Like one of the stories I heard is just even getting access to compute resources, even today, is this crazy thing.

  • And embedded in some of the criticism from maybe the elders of the industry at the moment was sort of that, you know, you're going to waste a lot of resources and somehow that's going to result in an AI winter.

  • Like people won't give resources anymore.

  • It's funny, people were never sure if we were going to waste resources or if we were doing something kind of vaguely immoral by putting in too much resources.

  • And you were supposed to spread it across lots of bets rather than like conviction on one.

  • Most of the world still does not understand the value of like a fairly extreme level of conviction on one bet.

  • And so we said, okay, we have this evidence.

  • We believe in this thing.

  • We're going to concentrate on it, at a time when, like, the normal thing was, we're going to spread across this bet and that bet and that bet.

  • You're a definite optimist.

  • And I think across like many of the successful YC startups, you see a version of that again and again.

  • Yeah, that sounds right.

  • When the world gives you sort of pushback and the pushback doesn't make sense to you, you should do it anyway.

  • Totally.

  • One of the many things that I'm very grateful about getting exposure to from the world of startups is how many times you see that again and again and again.

  • And before, I think before YC, I really had this deep belief that somewhere in the world, there were adults in charge, adults in the room, and they knew what was going on.

  • And someone had all the answers.

  • And, you know, if someone was pushing back on you, they probably knew what was going on.

  • And the degree to which I now understand that, you know, to pick up the earlier phrase, you can just do stuff.

  • You can just try stuff.

  • No one has all the answers.

  • There are no like adults in the room that are going to magically tell you exactly what to do.

  • And you just kind of have to like iterate quickly and find your way.

  • That was like a big unlock in life for me to understand.

  • There is a difference between being high conviction just for the sake of it.

  • And if you're wrong and you don't adapt and you don't try to be like truth-seeking, it still is really not that effective.

  • The thing that we tried to do was really just believe whatever the results told us and really kind of try to go do the thing in front of us.

  • And there were a lot of things that we were high conviction and wrong on.

  • But as soon as we realized we were wrong, we tried to like fully embrace it.

  • Conviction is great until the moment you have data one way or the other.

  • And there are a lot of people who hold on to it past the moment of data.

  • So it's iterative.

  • It's not just you're wrong and I'm right.

  • You have to go show your worth.

  • But there is a long moment where you have to be willing to operate without data.

  • And at that point, you do have to just sort of run on conviction.

  • Yeah. It sounds like there's a focusing aspect there too.

  • Like you had to make a choice and that choice had better, you know, you didn't have infinite choices.

  • And so, you know, the prioritization itself was an exercise that made it much more likely for you to succeed.

  • I wish I could go tell you like, oh, we knew exactly what was going to happen.

  • And it was, you know, we had this idea for language models from the beginning.

  • And, you know, we kind of went right to this.

  • But obviously, the story of OpenAI is that we did a lot of things that helped us develop some scientific understanding, but were not on the short path.

  • If we knew then what we know now, we could have speedrun this whole thing to like an incredible degree.

  • It doesn't work that way. Like you don't get to be right at every guess.

  • And so we started off with a lot of assumptions, both about the direction of technology, but also what kind of company we were going to be and how we were going to be structured and how AGI was going to go and all of these things.

  • And we have been like humbled and badly wrong many, many, many times.

  • And one of our strengths is the ability to get punched in the face and get back up and keep going.

  • This happens for scientific bets, for, you know, being willing to be wrong about a bunch of other things.

  • We thought about how the world was going to work and what the sort of shape of the product was going to be.

  • Again, we had no idea, or I at least had no idea, maybe Alec Radford did.

  • I had no idea that language models were going to be the thing.

  • You know, we started working on robots and agents playing video games and all these other things.

  • Then a few years later, GPT-3 happened.

  • That was not so obvious at the time.

  • It sounded like there was a key insight around positive or negative sentiment around GPT-1.

  • Even before GPT-1.

  • Oh, even before.

  • I think the paper was called The Unsupervised Sentiment Neuron.

  • I think Alec did it alone.

  • By the way, Alec is this unbelievable outlier of a human.

  • And so he did this incredible work, which was just looking at...

  • He noticed there was one neuron that was flipping positive or negative sentiment as it was doing these generative Amazon reviews, I think.

  • Other researchers might have hyped it up more, made a bigger deal out of it or whatever.

  • But, you know, it was Alec.

  • So it took people a while to, I think, fully internalize what a big deal it was.

  • And he then did GPT-1.

  • Somebody else scaled it up and did GPT-2.

  • But it was off of this insight that there was something amazing happening where...

  • And at the time, unsupervised learning was just not really working.

  • So he noticed this one really interesting property, which is there was a neuron that was flipping positive or negative with sentiment.

  • And, yeah, that led to the GPT series.

  • I guess one of the things that Jake Heller from Casetext...

  • I think of him as maybe, I mean, not surprisingly, a YC alum who got access to 3, 3.5, and 4.

  • And he described getting 4 as sort of the big moment revelation.

  • Because 3.5 would still do...

  • I mean, it would hallucinate more than he could use in a legal setting.

  • And then with 4, it reached the point where if he chopped the prompts down small enough into workflow, he could get it to do exactly what he wanted.

  • And he built huge test cases around it and then sold that company for $650 million.

  • So I think of him as one of the first to commercialize GPT-4 in a relatively grand fashion.

  • I remember that conversation with him.

  • Yeah.

  • With when GPT-4...

  • Like, that was one of the few moments in that thing where I was like, okay, we have something really great on our hands.

  • When we first started trying to sell GPT-3 to founders, they would be like, it's cool.

  • It's doing something amazing.

  • It's an incredible demo.

  • But with the possible exception of copywriting, no great businesses were built on GPT-3.

  • And then 3.5 came along and people, startups, like YC startups in particular, started to do...

  • It no longer felt like we were pushing a boulder uphill.

  • It's like people actually wanted to buy the thing we were selling.

  • Totally.

  • And then 4, we kind of like got the, like, just how many GPUs can you give me?

  • Oh, yeah.

  • Moment, like, very quickly after giving people access.

  • So we felt like, okay, we got something like really good on our hands.

  • So you knew actually from your users then?

  • Totally.

  • When the model itself dropped and you got your hands on it, it was like, well, this is better.

  • We were totally impressed then, too.

  • We had all of these, like, tests that we did on it that looked great and it could just do these things that we were all super impressed by.

  • Also, like, when we were all just playing around with it and, like, getting samples back, it was like, wow, it can do this now.

  • And it can rhyme and it can, like, tell a funny joke, slightly funny joke.

  • And it can, like, you know, do this and that.

  • And so it felt really great.

  • But, you know, you never really know if you have a hit product on your hands until you, like, put it in customers' hands.

  • Yeah.

  • You're always too impressed with your own work.

  • Yeah.

  • And so we were all excited about it.

  • We were like, oh, this is really quite good.

  • But until, like, the test happens, it's like

  • The real test is

  • Yeah, yeah, the real test is users.

  • Yeah.

  • So there's some anxiety until that moment happens.

  • Yeah.

  • I wanted to switch gears a little bit.

  • So before you created, obviously, one of the craziest AI labs ever to be created, you started at 19 at YC with a company called Loopt, which was basically Find My Friends geolocation, you know, probably, what, 15 years before Apple ended up making it?

  • Too early in any case, yeah.

  • Yeah.

  • What drew you to that particular idea?

  • I was, like, interested in mobile phones, and I wanted to do something that got to, like, use mobile phones.

  • This was when, like, mobile was just starting.

  • It was, like, you know, still three years or two years before the iPhone.

  • But it was clear that carrying around computers in our pockets was somehow a very big deal.

  • I mean, it's hard to believe now that there was a moment when phones were actually literally you just

  • They were just a phone.

  • They were an actual phone, yeah.

  • Yeah.

  • I mean, I try not to use it as an actual phone ever.

  • Ever, really.

  • I still remember the first phone I got that had internet on it, and it was this horrible, like, text-based, mostly text-based browser.

  • It was really slow.

  • You could, like, you know, do, like, you could so painfully and so slowly check your email.

  • But I was, like, a—I don't know, in high school, sometime in high school, and I got a phone that could do that versus, like, just text and call.

  • And I was, like, hooked right then.

  • Yeah.

  • I was like, oh, this is not a phone.

  • This is, like, a computer we can carry, and we're stuck with a dial pad for this accident of history.

  • But this is going to be awesome.

  • And, I mean, now you have billions of people who don't have a computer.

  • Like, to us growing up, you know, that actually was your first computer.

  • Yeah.

  • Not physically, but

  • This is a replica or, like, another copy of my first computer, which is a Mac LC II.

  • Yeah.

  • So this is what a computer was to us growing up.

  • And the idea that you would carry this little black mirror, like, kind of

  • We've come a long way.

  • Unconscionable back then.

  • Yeah.

  • So, you know, even then, like, technology and what was going to come was sort of in your brain.

  • Yeah, I was like a real—I mean, still am a real tech nerd.

  • Yeah.

  • But I always, that was what I spent my Friday nights thinking about.

  • And then one of the harder parts of it was we didn't have the App Store.

  • The iPhone didn't exist.

  • You ended up being a big part of that launch, I think.

  • A small part, but, yes, we did get to be a little part of it.

  • It was a great experience for me to have been through because I kind of, like, understood what it is like to go through a platform shift, and how messy the beginning is, and how much, like, little things you do can shape the direction it all goes.

  • I was definitely on the other side of it then.

  • Like, I was watching somebody else create the platform shift.

  • But it was a super valuable experience to get to go through and sort of just see how it happens, and how quickly things change, and how you adapt through it.

  • What was that experience like?

  • You ended up selling that company.

  • It was probably the first time you were managing people and, you know, doing enterprise sales.

  • All of these things were useful lessons from that first experience.

  • I mean, it obviously was not a successful company.

  • And so it was a very painful thing to go through.

  • But the rate of experience and education was incredible.

  • Another thing that PG said, or quoted somebody else saying, but always stuck with me, is your 20s are always an apprenticeship, but you don't know for what, and then you do your real work later.

  • And I did learn quite a lot, and I'm very grateful for it.

  • It was, like, a difficult experience, and we never found product market fit, really.

  • And we also never, like, really found a way to get to escape velocity, which is just always hard to do.

  • There is nothing that I have ever heard of that has a higher rate of generalized learning than doing a startup.

  • So it was great in that sense.

  • You know, when you're 19 and 20, like, riding the wave of some other platform shift, this shift from, you know, dumb cell phones to smartphones and mobile.

  • And, you know, here we are many years later, and your next act was actually, you know

  • I mean, I guess two acts later, literally spawning one of the major platform shifts.

  • We all get old.

  • Yeah.

  • But that's really what's happening, you know.

  • 18-, 20-year-olds are deciding that they could get their degree, but they're going to miss the wave.

  • Like, because all of this stuff

  • That's great.

  • everything's happening right now.

  • I am proud to hear that.

  • Do you have an intuitive sense?

  • Like, speaking to even a lot of the, you know, really great billion-dollar company founders, some of them are just not that aware of what's happening.

  • Like, they're CTOs.

  • It is astonishing to me.

  • It's wild, right?

  • Yeah.

  • I think that's why I'm so excited for startups right now, is because the world is still sleeping on all of this to such an astonishing degree.

  • Yeah.

  • And then you have, like, the YC founders being like, no, no, I'm going to, like, do this amazing thing and do it very quickly.

  • Yeah.

  • It reminds me of when Facebook almost missed mobile, because they were making web software, and they were really good at it.

  • Yeah.

  • And, like, they almost—I mean, they had to buy Instagram.

  • Like, Snapchat

  • And WhatsApp.

  • Yeah, and WhatsApp.

  • So it's interesting.

  • The platform shift is always built by the people who are young with no prior knowledge.

  • I think it's great.

  • So there's this other aspect that's interesting in that I think you're, you know, you and Elon and Bezos and a bunch of people out there, like, they sort of start their journey as founders, you know, really, you know, whether it's Looped or Zip2 or, you know, really in maybe pure software.

  • Like, it's just a different thing that they start, and then later they, you know, sort of get to level up.

  • You know, is there a path that you recommend at this point?

  • If people are thinking, you know, I want to work on the craziest hard tech thing first, should they just run towards that to the extent they can?

  • Or is there value in, you know, sort of solving the money problem first, being able to invest your own money, like, very deeply into the next thing?

  • It's a really interesting question.

  • It was definitely helpful that I could just, like, write the early checks for OpenAI, and I think it would've been hard to get somebody else to do that at the very beginning.

  • And then Elon did it a lot at a much higher scale, which I'm very grateful for, and then other people did after that.

  • And there's other things that I've invested in that I'm really happy to have been able to support, and I don't—I think it would've been hard to get other people to do it.

  • So that's great for sure.

  • And I did, like we were talking about earlier, learn these extremely valuable lessons.

  • But I also feel like I kind of, like, was wasting my time, for lack of a better phrase, working on Loopt.

  • I don't—I definitely don't regret it.

  • It's, like, all part of the tapestry of life, and I learned a ton, and whatever else, but

  • What would you have done differently?

  • Or what would you tell yourself from, like, now, in a time travel capsule that would show up on your desk at Stanford when you were 19?

  • Well, it's hard, because AI was always the thing I most wanted to do.

  • And AI just, like, I went to school to study AI.

  • But at the time I was working in the AI lab, the one thing that they told you is definitely don't work on neural networks.

  • We tried that, and it doesn't work.

  • That's a long time ago.

  • I think I could have picked a much better thing to work on than Loopt.

  • I don't know exactly what it would have been, but it all works out. It's fine.

  • Yeah.

  • There's this long history of people building more technology to help improve other people's lives.

  • And I actually think about this a lot.

  • Like, I think about the people that made that computer, and I don't know them.

  • You know, many of them probably long retired, but I am so grateful to them.

  • Yeah.

  • And some people worked super hard to make this thing at the limits of technology.

  • I got a copy of that on my eighth birthday, and it totally changed my life.

  • Yeah.

  • And the lives of a lot of other people, too.

  • They worked super hard.

  • They never, like, got a thank you from me, but I feel it to them very deeply.

  • And it's really nice to get to, like, add our brick to that long road of progress.

  • Yeah.

  • It's been a great year for OpenAI, not without some drama.

  • Always.

  • Yeah.

  • We're good at that.

  • What did you learn from, you know, sort of the ouster last fall?

  • And how do you feel about some of the, you know, departures?

  • I mean, teams do evolve, but how are you doing, man?

  • Tired, but good.

  • Yeah.

  • It's, we've kind of, like, speed run, like, a medium-size or even kind of, like, pretty big-size tech company arc that would normally take, like, a decade, in two years.

  • Like, ChatGPT is less than two years old.

  • Yeah.

  • And there's, like, a lot of painful stuff that comes with that.

  • And, you know, any company as it scales goes through management teams at some rate, and the people who are really good at the zero to one phase are not necessarily the people that are good at the one to ten or the ten to a hundred phase.

  • We've also kind of, like, changed what we're going to be, made plenty of mistakes along the way, done a few things really right.

  • And that comes with a lot of change.

  • And I think the goal of the company, the emergent AGI or whatever, however you want to think about it, is, like, just to keep making the best decisions we can at every stage.

  • But it does lead to a lot of change.

  • I hope that we are heading towards a period now of more calm, but I'm sure there will be other periods in the future where things are very dynamic again.

  • So, I guess, how does OpenAI actually work right now?

  • You know, I mean, the quality and, like, the pace that you're pushing right now, I think, is, like, beyond world class compared to a lot of the other, you know, really established software players, like, who came before.

  • This is the first time ever where I felt like we actually know what to do.

  • Like, I think from here to building an AGI will still take a huge amount of work.

  • There are some known unknowns, but I think we basically know what to go do.

  • And it'll take a while, it'll be hard, but that's tremendously exciting.

  • I also think on the product side, there's more to figure out, but roughly we know what to shoot at and what we want to optimize for.

  • That's a really exciting time.

  • And when you have that clarity, I think you can go pretty fast.

  • If you're willing to say, we're going to do these few things, we're going to try to do them very well, and our research path is fairly clear, our infrastructure path is fairly clear, our product path is getting clearer, you can orient around that super well.

  • We, for a long time, did not have that.

  • We were a true research lab.

  • And even when you know that, it's hard to act with the conviction on it because there's so many other good things you'd like to do.

  • But the degree to which you can get everybody aligned and pointed at the same thing is a significant determinant in how fast you can move.

  • I mean, sounds like we went from level one to level two very recently, and that was really powerful.

  • And then we actually just had our O1 hackathon at YC.

  • Yeah, that was so impressive.

  • That was super fun.

  • And then weirdly, one of the people who won, I think they came in third, was Camphor.

  • And so this CAD-CAM startup did YC recently, in the last year or two, and they were able to, during the hackathon, build something that would iteratively improve an airfoil from something that wouldn't fly to literally something that had a competitive amount of lift.

  • And I mean, that sort of sounds like level four, which is the innovator stage.

  • It's very funny you say that.

  • I had been telling people for a while that I thought the level two to level three jump was going to happen quickly.

  • And then the level three to level four jump was somehow going to be much harder and require some medium-sized or larger new ideas.

  • And that demo and a few others have convinced me that you can get a huge amount of innovation just by using these current models in really creative ways.

  • Well, yeah, I mean, what's interesting is basically Camphor already built sort of the underlying software for CAD-CAM, and then language is sort of the interface to the large language model, which then can use the software, like tool use.

  • And then if you combine that with the idea of CodeGen, that's kind of a scary, crazy idea, right?

  • Like not only can the large language model code, but it can create tools for itself and then compose those tools similar to Chain of Thoughts with O1.

  • Yeah, I think things are going to go a lot faster than people are appreciating right now.

  • Yeah. Well, it's an exciting time to be alive, honestly.

  • You know, you mentioned earlier that thing about discover all of physics.

  • I always wanted to be a physicist, wasn't smart enough to be a good one, had to like contribute in this other way.

  • But the fact that somebody else I really believe is now going to go solve all the physics with this stuff, like, I'm so excited to be alive for that.

  • Let's get to level four.

  • So happy for whoever that person is.

  • Yeah.

  • Do you want to talk about level three, four and five briefly?

  • Yeah, so we realized that AGI had become this, like, badly overloaded word, and people meant all kinds of different things by it.

  • And we tried to just say, okay, here's our best guess roughly of the order of things.

  • You have these level one systems, which are these chatbots.

  • There'd be level two that would come, which would be these reasoners.

  • We think we got there earlier this year with the O1 release.

  • Three is agents, the ability to go off and do these longer-term tasks.

  • Maybe like multiple interactions with an environment, asking people for help when they need it, working together, all of that.

  • And I think we're going to get there faster than people expect.

  • Four is innovators, like, that's like a scientist, and that's the ability to go explore, like, a not-well-understood phenomenon over, like, a long period of time and just kind of go figure it out.

  • And then level five, this is the sort of slightly amorphous, like do that, but at the scale of a whole company or a whole organization or whatever.

  • That's going to be a pretty powerful thing.

  • Yeah.

  • And it feels kind of fractal, right?

  • Like even the things you had to do to get to two sort of rhyme with level five.

  • And then you have multiple agents that then self-correct, that work together.

  • I mean, that kind of sounds like an organization to me, just at like a very micro level.

  • Do you think that we'll have, I mean, you famously talked about it.

  • I think Jake talks about it.

  • It's like, you will have companies that make, you know, billions of dollars per year and have like less than a hundred employees, maybe 50, maybe 20 employees, maybe one.

  • It does seem like that.

  • I don't know what to make of that other than it's a great time to be a startup founder.

  • Yeah.

  • But it does feel like that's happening to me.

  • Yeah.

  • You know, it's like one person plus 10,000 GPUs.

  • Could happen.

  • Yeah.

  • Sam, what advice do you have for people watching who, you know, either are about to start or just started their startup?

  • Bet on this tech trend, like bet on this trend.

  • We are not near the saturation point.

  • The models are going to get so much better so quickly.

  • What you can do as a startup founder with this versus what you could do without it is so wildly different.

  • And the big companies, even the medium-sized companies, even the startups that are a few years old, they're already on like quarterly planning cycles.

  • And Google is on a year or decade planning cycle.

  • I don't know how they even do it anymore.

  • But your advantage with speed and focus and conviction and the ability to react to how fast the technology is moving, that is the number one edge of a startup, kind of ever, but especially right now.

  • So I would definitely like build something with AI and I would definitely like take advantage of the ability to see a new thing and build something that day rather than like put it into a quarterly planning cycle.

  • I guess the other thing I would say is it is easy when there's a new technology platform to say, well, because I'm doing something with AI, the laws of business don't apply to me.

  • I have this magic technology and so I don't have to build a moat or a competitive edge or a better product.

  • It's because I'm doing AI and you're not.

  • So that's all I need.

  • And that's obviously not true.

  • But what you can get are these short-term explosions of growth by embracing a new technology more quickly than somebody else. And remembering not to fall for that, and that you still have to build something of enduring value,

  • I think that's a good thing to keep in mind too.

  • Everyone can build an absolutely incredible demo right now.

  • Everyone can build an incredible demo.

  • But building a business, man, that's the brass ring.

  • The rules still apply.

  • You can do it faster than ever before and better than ever before, but you still have to build a business.

  • What are you excited about in 2025?

  • What's to come?

  • AGI?

  • Yeah.

  • Excited for that?

  • What am I excited for?

  • Growing a kid, I'm more excited for that than anything I've ever been.

  • Incredible.

  • Yeah, probably that.

  • That's by far the thing I'm most excited for ever in life.

  • Yeah, it changes your life completely, so.

  • I cannot wait.

  • Well, here's to building that better world for, you know, our kids and really hopefully the whole world.

  • This was a lot of fun.

  • Thanks for hanging out, Sam.

  • Thank you.
