Lex Fridman Podcast - #419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Episode Date: March 18, 2024

Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies.

Please support this podcast by checking out our sponsors:
- Cloaked: https://...cloaked.com/lex and use code LexPod to get 25% off
- Shopify: https://shopify.com/lex to get $1 per month trial
- BetterHelp: https://betterhelp.com/lex to get 10% off
- ExpressVPN: https://expressvpn.com/lexpod to get 3 months free

Transcript: https://lexfridman.com/sam-altman-2-transcript

EPISODE LINKS:
Sam's X: https://x.com/sama
Sam's Blog: https://blog.samaltman.com/
OpenAI's X: https://x.com/OpenAI
OpenAI's Website: https://openai.com
ChatGPT Website: https://chat.openai.com/
Sora Website: https://openai.com/sora
GPT-4 Website: https://openai.com/research/gpt-4

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(07:51) - OpenAI board saga
(25:17) - Ilya Sutskever
(31:26) - Elon Musk lawsuit
(41:18) - Sora
(51:09) - GPT-4
(1:02:18) - Memory & privacy
(1:09:22) - Q*
(1:12:58) - GPT-5
(1:16:13) - $7 trillion of compute
(1:24:22) - Google and Gemini
(1:35:26) - Leap to GPT-5
(1:39:10) - AGI
(1:57:44) - Aliens
Transcript
The following is a conversation with Sam Altman, his second time on the podcast.
He is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day,
the very company that will build AGI.
And now, a quick few second mention of each sponsor. Check them out in the description.
It's the best way to support this podcast. We got a new sponsor, Cloaked, for protecting your personal
information. Shopify for selling stuff online. BetterHelp for helping out your mind and ExpressVPN
for protecting your privacy and security on the interwebs. Choose wisely, my friends. Also, if you want to work with our amazing team,
we're always hiring.
Or if you just want to get in touch with me,
go to lexfridman.com slash contact.
And now onto the full ad reads.
As always, no ads in the middle.
I try to make this interesting,
but if you must skip them,
friends, please do check out our sponsors.
I enjoy their stuff.
Maybe you will too.
This episode is brought to you by Cloaked, a sponsor I didn't know existed until quite recently, and I always thought a thing like this should exist and couldn't quite find a thing like it. Once I found it, it was pretty awesome. It's a platform that lets you generate new email addresses and phone numbers every time you sign up for a website. So it's
called a masked email, which basically creates, I guess you could say, a fake email that hides your actual email. But it's not fake in that it actually exists and persists throughout time, and the website thinks it's real; it just forwards to your actual email. You can set up the forwarding. The point
is the website or service that you sign up for doesn't know your actual phone
number and doesn't know your actual email. So this is a really interesting idea, because when you sign up to different websites there's a kind of unspoken contract that the email you provide and the phone number you provide will not be abused. The kind of abuse I'm talking about is, in the best case, just being spammed, or in the worst case, that email or phone number being sold, and then you get not just spam of one sort but spam from all the sources all over the place.
Anyway, this is just a smart thing to protect yourself.
And it also does basic password manager stuff.
So you can think of cloaked as a great password manager with extra privacy superpowers.
You can go to cloaked.com slash Lex to get 14 days free, or for a limited time, use code LexPod when signing up to get 25% off an annual Cloaked plan.
This episode is also brought to you by Shopify, a platform designed for anyone, yes, anyone
including me, to sell anywhere with a great looking online store.
I use it to sell some t-shirts at lexfridman.com slash store.
You can check it out.
I use the most basic store.
It took just a few minutes and the store was up.
From the shirt design being finished to the store being alive and being able to sell t-shirts and ship those t-shirts thanks to the integration with a third party, of which there are thousands of third-party integrations.
So for t-shirts that's like on demand printing
so you don't have to take care of the shipping
and the printing and all that kind of stuff.
All of that is integrated, super easy to do
and this works for any kind of business
that sells stuff online.
You can integrate into your own website
or you can sell it on Shopify itself, which is what I do.
You can sign up for a $1 per month trial period
at Shopify.com slash Lex.
All lowercase, go to Shopify.com slash Lex
to take your business to the next level today.
This episode is also brought to you
by BetterHelp, spelled H-E-L-P.
Help, they figure out what you need
and match you with a licensed therapist in under 48 hours.
Works for individuals, works for couples.
I'm a huge fan of talking as a way
of exploring the human mind.
Two people talking with a motivation and a goal in mind
of surfacing certain kinds of problems and alleviating those kinds of problems.
Sometimes the surfacing in itself does a lot of the alleviation.
Returning to a time in the past when trauma happened and reframing it in a way that helps you understand, that helps you forgive, that helps you let go, all of that is really powerful. And BetterHelp is just an accessible way of doing that, or at least trying talk therapy.
So they've helped a lot of people. 4.4 million people got help. So you can be one of those. If you want to try, check them out at betterhelp.com slash Lex and save in your first month. That's betterhelp.com slash Lex.
This episode is also brought to you by ExpressVPN. I love that there's a kind of privacy theme
to the sponsors in this episode. I think everybody should be using a VPN for many reasons. One, it can allow you to
geographically transport yourself.
But the main reason is it just adds this extra layer of security and privacy between you and the ISP. They say they're technically not supposed to be collecting the data when you use things like Chrome in Incognito mode, but they can be collecting the data. I don't know how the laws of that work, but I wouldn't trust it.
So a VPN is essential for that.
My favorite VPN for many, many, many, many, many, many,
many years has been ExpressVPN.
Big sexy button, still works.
It looks different, but still works
on any operating system.
My favorite being Linux.
I can talk forever about why I love Linux.
I wonder if Linux will be around with all this AI,
with all this rapid AI development. Maybe programming as a way of life, as a recreation for millions, as a profession for millions, will die out, and there will only be a handful, a few, like the COBOL programmers of today, who carry the flag of knowing what Linux is, how to spell Linux, let alone use it. I wonder.
Hopefully not, because there's always room for optimizing
at every level the compilation from the human language
to the AI language, to the machine language,
to the zeros and ones
the compilation of the entire stack. I think there's a lot of jobs to be had, a lot of really profitable, well-paying jobs to be had there, but maybe not millions of people are needed. Maybe there'll be millions of people that program with just natural language, with just words, English or whatever new language we'd create that the whole world can use.
And the whole world in using
can help break down the barriers of language.
We arrived here, friends,
when we started at the meager explanation
of the use of a VPN.
You can also take this journey
by going to expressvpn.com slash LexPod
for an extra three months free.
This is the Lex Fridman Podcast.
To support it, please check out our sponsors in the description.
And now, dear friends, here's Sam Altman.

Take me through the OpenAI board saga that started on Thursday, November 16th, maybe
Friday, November 17th for you.
That was definitely the most painful professional experience of my life. And chaotic and shameful and upsetting and
a bunch of other negative things.
There were great things about it too and I wish it had not been in such
an adrenaline rush that I wasn't able to stop and appreciate them at the time.
But I came across this old tweet of mine, or this tweet of mine from that time period,
which was like, it was like, you know, kind of going to your own eulogy, watching people
say all these great things about you and just like unbelievable support from people I love and care about.
That was really nice.
That whole weekend, I kind of like felt, with one big exception, I felt like a great deal of love.
And very little hate, even though it felt like I just, I have no idea what's happening and what's going
to happen here and this feels really bad.
And there were definitely times I thought it was going to be like one of the worst things
to ever happen for AI safety.
Well, I also think I'm happy that it happened relatively early.
I thought at some point between when OpenAI started and when we created
AGI, there was going to be something crazy and explosive that happened. But there may
be more crazy and explosive things still to happen. It still, I think, helped us build
up some resilience and be ready for more challenges in the future.
But the thing you had a sense that you would experience is some kind of power struggle.
The road to AGI should be a giant power struggle.
The world should, well, not should, I expect that to be the case. And so you have to go through that, like you said, iterate as often as possible, figuring
out how to have a board structure, how to have organization, how to have the kind of
people that you're working with, how to communicate, all that, in order to deescalate the power
struggle as much as possible.
Yeah.
Passify it. But at this point, it feels like something that was in the past that was really unpleasant
and really difficult and painful.
But we're back to work and things are so busy and so intense that I don't spend a lot of time thinking about it.
There was a time after, there was like this fugue state for kind of like the month after,
maybe 45 days after, that was, I was just sort of like drifting through the days.
I was so out of it.
I was feeling so down.
Just at a personal psychological level.
Yeah, really painful.
And hard to have to keep running OpenAI in the middle of that.
I just wanted to crawl into a cave
and recover for a while.
But now it's like we're just back to working on the mission. Well, it's still useful to go back there
and reflect on board structures, on power dynamics,
on how companies are run, the tension between research
and product development and money and all this kind of stuff
so that you, who have a very high potential
of building AGI, would do so in a slightly more organized, less dramatic way in the future.
So there's value there to go, both the personal psychological aspects of you as a leader and
also just the board structure and all this kind of messy stuff. Definitely learned a lot about structure and incentives and what we need out of a board.
And I think that is, it is valuable that this happened now in some sense.
I think this is probably not like the last high stress moment of OpenAI, but it was
quite a high stress moment.
My company very nearly got destroyed.
And we think a lot about many of the other things we've got to get right for AGI, but
thinking about how to build a resilient org and how to build a structure that will stand
up to like a lot of pressure in the world, which I expect more and more as we get closer.
I think that's super important.
Do you have a sense of how deep and rigorous the deliberation process by the board was?
Can you shine some light on just human dynamics involved in situations like this?
Was it just a few conversations and all of a sudden it escalates and why don't we fire
Sam kind of thing?
I think the board members were well-meaning people on the whole. And I believe that in stressful situations where people feel time pressure or whatever, people understandably make suboptimal decisions.
And I think one of the challenges for OpenAI will be we're going to have to have a board
and a team that are good at operating under pressure.
Do you think the board had too much power?
I think boards are supposed to have a lot of power.
But one of the things that we did see is in most corporate structures, boards are usually
answerable to shareholders.
Sometimes people have like supervoting shares or whatever.
In this case, and I think one of the things with our structure that we maybe should have
thought about more than we did is that the board of a nonprofit has, unless you put other
rules in place, like quite a lot of power.
They don't really answer to anyone but themselves.
There's ways in which that's good, but what we'd really like is for the
Board of OpenAI to like answer to the world as a whole as much as that's a practical thing.
So there's a new board announced?
Yeah.
There's I guess a new smaller board at first and now there's a new final board.
Not a final board yet.
We've added some, we'll add more.
Added some, okay. What is fixed in the new one that was perhaps broken in the previous
one?
The old board sort of got smaller over the course of about a year. It was nine and then
it went down to six. And then we couldn't agree on who to add. And the board also, I
think, didn't have a lot of experienced board members.
And a lot of the new board members at OpenAI have just have more experience as board members.
I think that will help.
It's been criticized, some of the people that are added to the board.
I heard a lot of people criticizing the addition of Larry Summers, for example. What's the process of selecting the board like? What's involved in that?
So Bret and Larry were kind of decided in the heat of the moment over this like very
tense weekend. And that weekend was like a real roller coaster. It was like a lot of
ups and downs. And we were trying to agree on new board members that both sort of the
executive team here and the old board members felt would be reasonable. Larry was actually
one of their suggestions, the old board members'. Bret, I think I had even, previous to that weekend, suggested, but he was busy and didn't want to do it, and then we really needed help, and he would.
We talked about a lot of other people too but that was – I felt like if I was going
to come back, I needed new board members.
I didn't think I could work with the old board again in the same configuration, although
we then decided, and I'm grateful that Adam would stay, but we considered various configurations, decided we wanted to get to a board of three, and had to find two new board members over
the course of sort of a short period of time.
So those were decided honestly without, you know, that's like you kind of do that on the
battlefield.
You don't have time to design a rigorous process then.
For new board members since, and new board members we'll add going forward, we have some criteria that we think are important for the board to have, different expertise that we want the board to have. Unlike hiring an executive where you need them to do one role well, the board needs to do a whole role of governance and thoughtfulness well.
So one thing that Bret says, which I really like, is that we want to hire board members
in slates, not as individuals one at a time. And thinking about a group of people
that will bring nonprofit expertise, expertise
running companies, sort of good legal and governance
expertise, that's kind of what we've tried to optimize for.
So is technical savvy important for the individual board
members?
Not for every board member, but for certainly some,
you need that.
That's part of what the board needs to do.
So the interesting thing that people probably don't understand about OpenAI,
I certainly don't, is like,
all the details of running the business.
When they think about the board, given the drama,
they think about you, they think about like,
if you reach AGI or you reach
some of these incredibly impactful products
and you build them and deploy them,
what's the conversation with the board like?
And they kind of think, all right, what's the right squad to have in that kind of situation
to deliberate?
Look, I think you definitely need some technical experts there.
And then you need some people who are like, how can we deploy this in a way that will
help people in the world the most and people who have a very different perspective.
I think a mistake that you or I might make is to think that only the technical understanding
matters.
That's definitely part of the conversation you want that board to have.
There's a lot more about how that's going to just impact society and people's lives
that you really want represented in there too.
Are you looking at the track record of people or you're just
having conversations?
Track record is a big deal.
You of course have a lot of conversations, but I, you know, there's some roles where
I kind of totally ignore track record and just look at slope, kind of ignore the y-intercept.
Thank you. Thank you for making it mathematical for the audience.
For a board member, like, I do care much more about the y-intercept.
Like I think there is something deep to say about track record there.
And experience is sometimes very hard to replace.
Do you try to fit a polynomial function or exponential one to the track record?
That's not it, the analogy doesn't carry that far.
All right.
You mentioned some of the low points that weekend.
What were some of the low points psychologically for you?
Did you consider going to the Amazon jungle
and just taking ayahuasca and disappearing forever?
I mean, there's so many low, like it was a very bad period
of time.
There were great high points too.
My phone was just sort of nonstop blowing up with nice messages from people I work with
every day, people I hadn't talked to in a decade.
I didn't get to appreciate that as much as I should have because I was just in the middle
of this firefight, but that was really nice.
But on the whole, it was like a very painful weekend and also just like a very – it was
like a battle fought in public to a surprising degree and that was extremely exhausting to
me much more than I expected.
I think fights are generally exhausting, but this one really was.
You know, the board did this Friday afternoon.
I really couldn't get much in the way of answers, but I also was just like, well, the board
gets to do this.
So I'm going to think for a little bit about what I want to do, but I'll try to find the
blessing in disguise here.
I was like, well, my current job at OpenAI is or it was to run a decently sized company at this point.
The thing I had always liked the most was just getting to work on, work with the researchers.
I was like, yeah, I can just go do a very focused AGI research effort.
I got excited about that.
It didn't even occur to me at the time that this was possibly all going to get undone.
This was like Friday afternoon.
So you've accepted the death of this baby, OpenAI.
Very quickly, very quickly.
Like within, you know, I mean I went through like a little period of confusion and rage
but very quickly.
And by Friday night I was like talking to people about what was going to be next and
I was excited about that.
I think it was Friday night evening for the first time that I heard from the exec team
here which was like, hey, we're going to fight this and we think whatever.
And then I went to bed just still being like, okay, excited, like onward.
Were you able to sleep?
Not a lot.
One of the weird things was there was this period of four and a half days where I sort of didn't sleep much, didn't eat much, and still kind of had a surprising amount of energy.
You learn a weird thing about adrenaline in wartime.
So you kind of accepted the death of this baby, OpenAI.
And I was excited for the new thing. I was just like, okay, this was crazy, but whatever.
It was a very good coping mechanism.
And then Saturday morning, two of the board members called and said, hey, we didn't mean to destabilize things. We don't want to destroy a lot of value here. Can we talk about you coming back? And I immediately didn't want to do that, but I thought a little more and I was like, well, I really do care about the people here, the partners, shareholders, all of it. I love this company. So I thought about it and I was like, well, okay, but here's the stuff I would need.
Then the most painful time of all was over the course of that weekend, I kept thinking
and being told and we all kept – not just me, the whole team here kept thinking, well,
we were trying to keep OpenAI stabilized while the whole world was trying to break it apart,
people trying to recruit, whatever.
We kept being told, like, all right, we're almost done, we're almost done, we just need
a little bit more time.
It was this very confusing state.
And then Sunday evening, when again, every few hours, I expected that we were going to
be done and we're going to figure out a way for me to return and things to go back to how they were.
The board then appointed a new interim CEO.
And then I was like, I mean, that is, that is, that feels really bad.
That was the low point of the whole thing.
You know, I'll tell you something. It felt very painful, but I felt a lot of love that whole weekend.
It was not other than that one moment, Sunday night, I would not characterize my emotions
as anger or hate.
But I really just like, I felt a lot of love from people towards people.
It was painful, but the dominant emotion of the weekend was love, not hate.
You've spoken highly of Mira Murati, that she helped especially, as you put it in the tweet,
in the quiet moments when it counts.
Perhaps we could take a bit of a tangent.
What do you admire about Mira?
Well, she did a great job during that weekend in a lot of chaos, but people often see leaders
in the crisis moments, good or bad.
But a thing I really value in leaders is how people act on a boring Tuesday at 9:46 in
the morning and in just sort of the normal drudgery of the day-to-day, how someone shows
up in a meeting, the quality of the decisions they make.
That was what I meant about the quiet moments.
Meaning like most of the work is done on
a day-by-day in a meeting-by-meeting.
Just be present and make great decisions.
Yeah. I mean, look, what you have wanted to spend
the last 20 minutes about, and I understand,
is like this one very dramatic weekend.
Yeah.
But that's not really what Open AI is about.
Open AI is really about the other seven years.
Well, yeah, human civilization is not about
the invasion of the Soviet Union by Nazi Germany,
but still that's something people focus on.
Very, very understandable.
It gives us an insight into human nature, the extremes of human nature, and perhaps
some of the damage and some of the triumphs of human civilization can happen in those
moments. It's illustrative. Let me ask about Ilya. Is he being held hostage in a secret
nuclear facility?
No.
What about a regular secret facility?
No.
What about a nuclear non-secret facility?
Neither of those.
Not that either.
I mean, this is becoming a meme at some point.
You've known Ilya for a long time.
He's obviously in part of this drama with the board and all that kind of stuff.
What's your relationship with him now?
I love Ilya.
I have tremendous respect for Ilya.
I don't have anything I can say about his plans right now.
That's a question for him.
But I really hope we work together for certainly
the rest of my career.
He's a little bit younger than me.
Maybe he works a little bit longer.
You know, there's a meme that he saw something. Like he maybe saw AGI and that gave him a lot
of worry internally. What did Ilya see? Ilya has not seen AGI. None of us have
seen AGI. We've not built AGI. I do think one of the many things that I really love about Ilya is he takes AGI and the safety
concerns broadly speaking, including things like the impact this is going to have on society
very seriously.
As we continue to make significant progress, Ilya is one of the people that I've spent
the most time over the last couple of years talking about what this is going to mean,
what we need to do to ensure we get it right, to ensure that we succeed at the mission. Ilya did not see AGI, but Ilya is a credit to humanity in terms of how much he thinks
and worries about making sure we get this right.
I've had a bunch of conversations with him in the past.
I think when he talks about technology, he's always doing this long-term thinking type
of thing.
So he's not thinking about what this is going to be in a year, he's thinking about it in 10 years.
Just thinking from first principles, like, okay,
if this scales, what are the fundamentals here,
where is this going?
And so that's a foundation for him thinking about
like all the other safety concerns
and all that kind of stuff.
Which makes him a really fascinating human to talk with. Do you have any idea why he's been kind of quiet? Is it that he's just doing some soul-searching?
Again, I don't want to, like, speak for Ilya. I think that you should ask him that.
He's definitely a thoughtful guy.
I think I kind of think Ilya is like always on a soul search in a really good way.
Yes.
Yeah.
Also, he appreciates the power of silence.
Also, I'm told he can be a silly guy, which I've never seen that side of him.
It's very sweet when that happens.
I've never witnessed a silly Ilya, but I look forward to that as well.
I was at a dinner party with him recently and he was playing with a puppy and he was
in a very silly mood, very endearing, and I was thinking, oh man, this is not the side
of the Ilya that the world sees the most.
So just to wrap up this whole saga, are you feeling good about the board structure about
all of this and like where it's moving?
I feel great about the new board.
In terms of the structure of OpenAI, I, you know, one of the board's tasks is to look
at that and see where we can make it more robust.
We wanted to get new board members in place first, but, you know, we clearly learned a lesson about structure throughout this process.
I don't have, I think, super deep things to say.
It was a crazy, very painful experience.
I think it was like a perfect storm of weirdness.
It was like a preview for me of what's going to happen as the stakes get higher and higher
and the need that we have robust governance structures and processes and people.
I am kind of happy it happened when it did, but it was a shockingly painful thing to go
through.
Did it make you be more hesitant in trusting people?
Yes.
Just on a personal level?
Yes.
I think I'm an extremely trusting person.
I always had a life philosophy of don't worry about all of the paranoia. Don't worry about the edge cases. You know, you get a little
bit screwed in exchange for getting to live with your guard down. And this was so shocking
to me. I was so caught off guard that it has definitely changed. And I really don't like
this. It's definitely changed how I think about just default trust of people and planning for the bad scenarios.
You gotta be careful with that.
Are you worried about becoming a little too cynical?
I'm not worried about becoming too cynical.
I think I'm the extreme opposite of a cynical person,
but I'm worried about just becoming
less of a default trusting person.
I'm actually not sure which mode is best to operate in
for a person who's developing AGI.
Trusting or untrusting.
So an interesting journey you're on.
But in terms of structure,
see I'm more interested on the human level.
Like how do you surround yourself with humans
that are building cool shit,
but also are making wise decisions.
Because the more money you start making, the more power the thing has, the weirder people
get.
You know, I think you could like, you can
make all kinds of comments about the board members and the level of trust I should have
had there or how I should have done things differently.
But in terms of the team here, I think you'd have to give me a very good grade on that
one.
And I have just enormous gratitude and trust and respect for the people that I work with
every day.
And I think being surrounded with people like that is really important.
Our mutual friend, Elon, sued OpenAI.
What is the essence of what he's criticizing?
To what degree does he have a point?
To what degree is he wrong?
I don't know what it's really about.
We started off just thinking we were going to be a research lab and having no idea about
how this technology was going to go.
It's hard to – because it was only seven or eight years ago, it's hard to go back
and really remember what it was like then.
But before language models were a big deal, this was before we had any idea about an API
or selling access to a chat bot.
This was before we had any idea we were going to productize at all.
So we're like – we're just like going to try to do research and we don't really know what
we're going to do with that.
I think with many fundamentally new things, you start fumbling through the dark and you
make some assumptions, most of which turn out to be wrong.
And then it became clear that we were going to need to do different things and also have huge amounts more capital.
So we said, okay, well, the structure doesn't quite work for that.
How do we patch the structure?
And then patch it again and patch it again and you end up with something that does look
kind of eyebrow raising to say the least.
But we got here gradually with, I think, reasonable
decisions at each point along the way.
And it doesn't mean I wouldn't do it totally differently if we could go back now with an
oracle, but you don't get the oracle at the time.
But anyway, in terms of what Elon's real motivations here are, I don't know.
To the degree you remember, what was the response that OpenAI gave in the blog post?
Can you summarize it?
Oh, we just said like, you know, Elon said this set of things.
Here's our characterization, or here's the sort of, not our characterization, here's
like the characterization of how this went down.
We tried to like not make it emotional and just sort of say,
here's the history.
I do think there's a degree of mischaracterization
from Elon here about one of the points he just made,
which is the degree of uncertainty you had at the time.
You guys were, like, a small group of researchers crazily talking about AGI when everybody was laughing at that thought.
It wasn't that long ago Elon was crazily talking about launching rockets when people were laughing
at that thought.
So I think he'd have more empathy for this.
I mean, I do think that there's personal stuff here, that there was a split, that OpenAI
and a lot of amazing people here chose to part ways with Elon.
So there's a personal thing.
Elon chose to part ways.
Can you describe that exactly? The choosing to part ways?
He thought OpenAI was going to fail.
He wanted total control to sort of turn it around.
We wanted to keep going in the direction that now has become OpenAI.
He also wanted Tesla to be able to build an AGI effort.
Various times he wanted to make OpenAI into a for-profit company that he could have control of or have it merge
with Tesla.
We didn't want to do that and he decided to leave, which that's fine.
So you're saying, and that's one of the things that the blog post says is that he wanted
OpenAI to be basically acquired by Tesla in the same way that, or maybe something similar or maybe something more
dramatic than the partnership with Microsoft.
My memory is the proposal was just like, yeah,
get acquired by Tesla and have Tesla have full control
over it, I'm pretty sure that's what it was.
So what does the word open in OpenAI mean to Elon at the time? Ilya has talked about this in the email exchanges and all this kind of stuff. What did it mean to you at the time? What does it mean to you now?
I would definitely pick a different – speaking of going back with an Oracle, I'd pick a
different name.
One of the things that I think OpenAI is doing that is the most important of everything that
we're doing is putting powerful technology in the hands of people for free as a public good.
We don't run ads on our free version.
We don't monetize it in other ways.
We just say it's part of our mission.
We want to put increasingly powerful tools in the hands of people for free and get them
to use them.
I think that kind of open is really important to our mission.
I think if you give people great tools and teach them to use them or don't even teach
them, they'll figure it out and let them go build an incredible future for each other
with that, that's a big deal.
If we can keep putting free or low cost or free and low cost powerful AI tools out in
the world, I think that's a huge deal for how we fulfill the mission.
Open source or not, yeah, I think we should open source some stuff and not other stuff.
It does become this like religious battle line where nuance is hard to have, but I think
nuance is the right answer.
So, he said, change your name to ClosedAI and I'll drop the lawsuit.
I mean, is it going to become this battleground in the land of memes?
I think that speaks to the seriousness with which Elon means the lawsuit.
And yeah, I mean, that's like an astonishing thing to say, I think.
Well, I don't think the lawsuit, maybe correct me if I'm wrong, but I don't think the lawsuit
is legally serious.
It's more to make a point about the future of AGI and the company that's currently leading
the way.
Look, I mean, Grok had not open sourced anything until people pointed out it was a little bit
hypocritical and then he announced that Grok will open source things this week.
I don't think open source versus not is what this is really about for him.
Well, we'll talk about open source and not.
I do think maybe criticizing the competition is great.
Just talking a little shit, that's great.
But friendly competition versus like, I personally hate
lawsuits.
Yeah, look, I think this whole thing is like unbecoming of a builder and I respect Elon
as one of the great builders of our time and I know he knows what it's like to have like
haters attack him and it makes me extra sad he's doing it to us.
Yeah, he's one of the greatest builders of all time, potentially the greatest builder
of all time.
It makes me sad.
I think it makes a lot of people sad.
Like there's a lot of people who've really looked up to him for a long time and said this. I said, you know, in some interview or something, that I missed the old Elon, and the number of messages I got being like, that exactly encapsulates how I feel.
I think he should just win.
You should just make X Grok beat GPT and then GPT beats Grok and it's just a competition.
And that's beautiful for everybody.
But on the question of open source,
do you think there's a lot of companies
playing with this idea, it's quite interesting.
I would say Meta, surprisingly, has led the way on this.
Or like, at least took the first step in the game of chess
of like really open sourcing the model.
Of course, it's not the state of the art model,
but open sourcing Llama.
Google is flirting with the idea
of open sourcing a smaller version.
What are the pros and cons of open sourcing?
Have you played around with this idea?
Yeah, I think there is definitely a place for open source models, particularly smaller
models that people can run locally.
I think there's huge demand for.
I think there will be some open source models.
There will be some closed source models.
It won't be unlike other ecosystems in that way.
I listened to the All-In Podcast talking about this lawsuit
and all that kind of stuff.
And they were more concerned about the precedent
of going from nonprofit to this cap for profit.
What precedent this sets for other startups?
I would heavily discourage any startup
that was thinking about starting as a nonprofit
and adding a for-profit arm later.
I'd heavily discourage them from doing that.
I don't think we'll set a precedent here.
So most startups should go just...
For sure.
And again, if we knew what was going to happen, we would have done that too.
Well, in theory, if you dance beautifully here, there's like some tax incentives or whatever.
I don't think that's like how most people think about these things.
Just not possible to save a lot of money for a startup if you do it this way.
No, I think there's like laws that would make that pretty difficult.
Where do you hope this goes with Elon?
This tension, this dance, what do you hope this, like if we go one, two, three years
from now, your relationship with him on a personal level too, like friendship, friendly
competition, just all an amicable relationship.
Yeah, I hope you guys have an amicable relationship like this month.
And just compete and win and explore these ideas together. I do suppose there's competition for talent or whatever,
but it should be friendly competition.
Just build, build cool shit.
And Elon is pretty good at building cool shit,
but so are you.
So speaking of cool shit,
Sora, there's like a million questions I could ask.
First of all, it's amazing.
It truly is amazing.
On a product level, but also just on a philosophical level.
So let me just, technical slash philosophical ask,
what do you think it understands about the world
more or less than GPT-4, for example?
The world model, when you train on these patches versus language tokens.
I think all of these models understand something more about the world model than most of us
give them credit for.
And because there are also very clear things they just don't understand or don't get right, it's easy to look at the weaknesses, see through the veil and say, this is all
fake.
But it's not all fake.
It's just some of it works and some of it doesn't work.
Like, I remember when I first started watching Sora videos and I would see like a person
walk in front of something for a few seconds and occlude it and then walk away and the
same thing was still there.
I was like, oh, that's pretty good. Or there's examples where it like the underlying physics looks so well represented over a lot
of steps in a sequence.
It's like, oh, this is quite impressive.
But fundamentally, these models are just getting better and that will keep happening.
If you look at the trajectory from DALL-E 1 to 2 to 3 to Sora, you know, there are a lot of people that dunked on each version, saying it can't do this, it can't do that,
and like, look at it now.
Well, the thing you just mentioned is kind of with occlusions is basically modeling the
physics of the three-dimensional physics of the world sufficiently well to capture those
kinds of things.
Well.
Or like, yeah, maybe you can tell me, in order to deal with occlusions, what does the world
model need to do?
Yeah, so what I would say is it's doing something to deal with occlusions really well.
To represent that it has, like, a great underlying 3D model of the world, that's a little bit more of a stretch.
But can it get there through just these kinds of two-dimensional training data approaches?
It looks like this approach is going to go surprisingly far.
I don't want to speculate too much about what limits it will surmount and which it won't,
but...
What are some interesting limitations of the system that you've seen?
I mean, there's been some fun ones you've posted.
There's all kinds of fun.
I mean, like, you know, cats sprouting an extra limb at random points in a video.
Pick what you want, but there's still a lot of problems, a lot of weaknesses.
Do you think that's a fundamental flaw of the approach?
Or is it just, you know, bigger model or better, like, technical details or better data, more
data is going to solve the cat sprouting?
I would say yes to both.
I think there is something about the approach which just seems to feel different from how
we think and learn and whatever.
And then also I think it'll get better with scale.
Like I mentioned, LLMs have tokens, text tokens, and Sora has visual patches.
So it converts all visual data, diverse kinds of visual data, videos
and images into patches.
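To make the patch idea concrete, here is a minimal sketch of chopping a video into spacetime patches, the visual analogue of text tokens. The patch sizes, shapes, and function names are illustrative assumptions only, not Sora's actual implementation.

```python
# Minimal sketch: turn a video tensor into flattened "spacetime patches",
# the visual analogue of text tokens. Patch sizes here are illustrative only.
import numpy as np

def patchify(video: np.ndarray, pt: int = 4, ph: int = 16, pw: int = 16) -> np.ndarray:
    """video of shape (T, H, W, C) -> patches of shape (N, pt*ph*pw*C)."""
    T, H, W, C = video.shape
    # Trim so the clip divides evenly into patches.
    video = video[: T - T % pt, : H - H % ph, : W - W % pw]
    T, H, W, _ = video.shape
    patches = (
        video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
        .transpose(0, 2, 4, 1, 3, 5, 6)   # group axes by patch block
        .reshape(-1, pt * ph * pw * C)    # one flat vector ("token") per patch
    )
    return patches

# A 32-frame 64x64 RGB clip becomes a sequence of 128 patch tokens of size 3072.
clip = np.random.rand(32, 64, 64, 3)
print(patchify(clip).shape)  # (128, 3072)
```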
Is the training to the degree you can say fully self-supervised?
There's some manual labeling going on.
What's the involvement of humans in all this?
I mean, without saying anything specific about the Sora approach, we use lots of human data
in our work.
But not internet scale data.
So lots of humans.
Lots is a complicated word, Sam.
I think lots is a fair word in this case.
But it doesn't, because to me, lots,
like listen, I'm an introvert,
and when I hang out with like three people,
that's a lot of people.
Four people, that's a lot.
But I suppose you mean more than.
More than three people work on labeling the data
for these models, yeah.
Okay, right.
But fundamentally, there's a lot of self-supervised learning.
Because what you mentioned in the technical report
is internet-scale data.
That's another beautiful, it's like poetry. So it's a lot of data that's not human labeled.
It's like, it's self-supervised in that way.
And then the question is how much data
is there on the internet that could be used
that is conducive to this kind of self-supervised way
if only we knew the details of the self-supervised.
Have you considered opening it up a little more, details?
We have, you mean for Sora specifically?
Sora specifically, that's right,
because it's so interesting that,
like, can this, can the same magic of LLMs
now start moving towards visual data?
And what does that take to do that?
I mean, it looks to me like yes, but we have more work to do.
Sure. What are the dangers?
Why are you concerned about releasing the system?
What are some possible dangers of this?
I mean, frankly speaking, one thing we have to do
before releasing the system is just like get it to work
at a level of efficiency that will deliver the scale people are going to
want from this.
So I don't want to downplay that.
And there's still a ton, ton of work to do there.
But you can imagine issues with deepfakes, misinformation.
We try to be a thoughtful company about what we put out into the world, and it doesn't
take much thought to think about the ways this can go badly.
There's a lot of tough questions here.
You're dealing in a very tough space.
Do you think training AI should be or is fair use under copyright law?
I think the question behind that question is do people who create valuable data deserve
to have some way that they get compensated for use of it?
And that I think the answer is yes.
I don't know yet what the answer is.
People have proposed a lot of different things.
We've tried some different models.
But if I'm like an artist, for example, A, I would like to be able to opt out of people generating art in my style,
and B, if they do generate art in my style, I'd like to have some economic model associated
with that.
Yeah, it's that transition from CDs to Napster to Spotify.
I have to figure out some kind of model.
The model changes, but people have got to get paid.
Well, there should be some kind of incentive if you zoom out even more for humans to keep
doing cool shit.
Of all the things I worry about, that's not one of them. Humans are going to do cool shit and society is going to find
some way to reward it.
That seems pretty hardwired.
We want to create, we want to be useful.
We want to like achieve status in whatever way.
That's not going anywhere, I don't think.
But the reward might not be monetary, financial.
It might be like fame and celebration of other cool—
Maybe financial in some other way.
Again, I don't think we've seen like the last evolution of how the economic system's
going to work.
Yeah, but artists and creators are worried.
When they see Sora, they're like, holy shit.
Sure.
Artists were also super worried when photography came out.
And then photography became a new art form and people made a lot of money taking pictures.
I think things like that will keep happening.
People will use the new tools in new ways.
If you just look on YouTube or something like this, how much of that will be using Sora-like
AI-generated content, do you think, in the next five years?
People talk about how many jobs AI is going to do in five years.
And the framework that people have is what percentage of current jobs are just going
to be totally replaced by some AI doing the job.
The way I think about it is not what percent of jobs AI will do, but what percent of tasks
will AI do and over what time horizon.
So if you think of all of the five-second tasks in the economy, the five-minute tasks,
the five-hour tasks, maybe even the five-day tasks, how many of those can AI do?
And I think that's a way more interesting, impactful, important question than how many
jobs AI can do, because it is a tool that will work at increasing levels
of sophistication and over longer and longer time horizons for more and more tasks and
let people operate at a higher level of abstraction.
So maybe people are way more efficient at the job they do.
And at some point, that's not just a quantitative change, but it's a qualitative one too about
the kinds of problems you can keep in your head.
I think that for videos on YouTube, it'll be the same.
Many videos, maybe most of them, will use AI tools in the production, but they'll still be fundamentally driven by a person
thinking about it, putting it together, you know, doing parts of it.
Sort of directing and running it.
Yeah, it's so interesting. I mean, it's scary, but it's interesting to think about.
I tend to believe that humans like to watch other humans or other human-like things.
Humans really care about other humans a lot.
Yeah. If there's a cooler thing that's better than a human,
humans care about that for like two days and then they go back to humans.
That seems very deeply wired.
It's the whole chess thing.
Yeah, but now everybody keeps playing chess.
Let's ignore the elephant in the room that humans are really bad at chess relative to
AI systems.
They still run races and cars are much faster.
I mean, there's like a lot of examples.
Yeah.
And maybe it'll just be tooling like in the Adobe Suite type of way where you can just
make videos much easier and all that kind of stuff.
Listen, I hate being in front of the camera.
If I can figure out a way to not be in front of the camera, I would love it.
Unfortunately, it'll take a while.
Like that, generating faces, it's getting there.
But generating faces in video format is tricky
when it's specific people versus generic people.
Let me ask you about GPT-4.
There's so many questions.
First of all, also amazing.
It's, looking back, it'll probably be this kind of
historic pivotal moment with 3.5 and 4
with ChatGPT.
Maybe 5 will be the pivotal moment, I don't know.
Hard to say that looking forwards.
We never know.
That's the annoying thing about the future.
It's hard to predict.
But for me, looking back, GPT-4, ChatGPT is pretty damn impressive, like historically
impressive.
So allow me to ask, what's been the most impressive capabilities of GPT-4
to you and GPT-4 Turbo?
I think it kind of sucks.
Typical human also. Gotten used to an awesome thing.
No, I think it is an amazing thing. But relative to where we need to get to and where I believe we will get to, you know,
at the time of like GPT-3, people were like, oh, this is amazing.
This is this like marvel of technology and it is, it was.
But you know, now we have GPT-4 and look at GPT-3 and you're like, that's unimaginably
horrible.
I expect that the delta between five and four will be the same as between four and three.
I think it is our job to live a few years in the future and remember that the tools
we have now are going to kind of suck looking backwards at them.
We make sure the future is better.
What are the most glorious ways in that GPT-4 sucks?
Meaning-
What are the best things it can do?
What are the best things it can do
and the limits of those best things
that allow you to say it sucks,
therefore gives you an inspiration and hope for the future.
You know, one that I've been using it for more recently is sort of like a brainstorming
partner.
Yep.
I'm nervous for that.
And there's a glimmer of something amazing in there.
I don't think it gets, you know, when people talk about it, what it does, they're like,
oh, it helps me code more productively.
It helps me write more faster and better.
It helps me, you know, translate from this language to another.
All these amazing things.
There's something about the creative brainstorming partner.
I need to come up with a name for this thing.
I need to think about this problem in a different way.
I'm not sure what to do here.
That I think gives a glimpse of something I hope to see more of.
One of the other things that you can see a very small glimpse of is when it can help on
longer horizon tasks, break down something in multiple steps, maybe execute some of those
steps, search the internet, write code, whatever, put that together.
When that works, which is not very often, it's like very magical.
The iterative back and forth with a human.
It works a lot for me.
What do you mean?
Iterative back and forth with a human, it can get right more often.
When it can go do like a 10-step problem on its own.
Oh.
Doesn't work for that too often.
Sometimes.
Add multiple layers of abstraction or do you mean just sequential?
Both, like to break it down and then do things at different layers of abstraction and put
them together.
Look, I don't want to downplay the accomplishment of GPT-4, but I don't want to overstate it
either.
And I think this point that we are on an exponential curve, we will look back relatively soon at
GPT-4, like we look back at GPT-3 now.
That said, I mean, ChatGPT was a transition to where people, like, started to believe it.
There was a kind of, there is an uptick of believing, not internally at OpenAI, perhaps
there's believers here, but when you
think about global...
And in that sense, I do think it'll be a moment where a lot of the world went from not believing
to believing.
That was more about the ChatGPT interface than the...
And by the interface and product, I also mean the post-training of the model and how we
tune it to be helpful to you and how to use it than the underlying model itself.
How much of those two, each of those things are important?
The underlying model and the RLHF are something of that nature that tunes it to be more compelling
to the human, more effective and productive for the human.
I mean, they're both super important, but the RLHF, the post-training step, the little wrapper of things, from a compute perspective, that we do on top of the base model, even though it's a huge amount of work.
That's really important to say nothing of the product that we build around it.
In some sense, we did have to do two things.
We had to invent the underlying technology.
And then we had to figure out how to make it into a product people would love, which
is not just about the actual product work itself, but this whole other step of how you
align and make it useful.
And how you make the scale work where a lot of people can use it at the same time, all
that kind of stuff.
And that.
But, you know, that was like a known difficult thing.
Like we knew we were going to have to scale it up.
We had to go do two things that had like never been done before that were both like, I would
say, quite significant achievements.
And then a lot of things like scaling it up that other companies have had to do before.
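As a rough illustration of what that post-training "little wrapper" means computationally, here is a toy sketch of the reinforcement-learning-from-feedback idea: a tiny policy is nudged toward the responses a reward signal prefers. Everything in it, the two-response setup, the reward values, the learning rate, is an invented toy assumption, not OpenAI's method or code.

```python
# Toy sketch of the RLHF-style post-training idea: a tiny policy over two
# candidate responses is nudged toward the one a reward model scores higher.
# All numbers here are invented for illustration; this is not OpenAI's code.
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)                       # "policy" over 2 candidate responses
reward = np.array([0.1, 1.0])              # toy reward model: response 1 preferred

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(200):                       # simple REINFORCE-style updates
    probs = softmax(logits)
    a = rng.choice(2, p=probs)             # sample a response from the policy
    grad = -probs.copy()
    grad[a] += 1.0                         # gradient of log pi(a) w.r.t. logits
    logits += 0.1 * reward[a] * grad       # move toward higher-reward responses

print(softmax(logits))                     # the policy now favors response 1
```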
How does the context window of going from 8K to 128K tokens compare from GPT-4 to GPT-4
Turbo?
Most people don't need all the way to 128K most of the time, although if we dream into the way distant future, we'll have like context lengths of several billion. You will feed in all of your information, all of your history over time, and it'll just
get to know you better and better and that'll be great.
For now, the way people use these models, they're not doing that. And people sometimes paste in a paper or a significant fraction of a code repository,
whatever.
But most usage of the models is not using the long context most of the time.
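To put the 8K-versus-128K jump in concrete terms, here is a small sketch using tiktoken, the tokenizer library OpenAI publishes, to check whether a document fits in a given context window; the sample text and the window sizes are illustrative assumptions.

```python
# Sketch: count tokens with tiktoken (OpenAI's published tokenizer library)
# to see whether a document fits in an 8K vs 128K context window.
# The sample text and the exact limits here are illustrative.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era encoding

def fits_in_context(text: str, context_tokens: int) -> bool:
    """True if the text's token count is within the context window."""
    return len(enc.encode(text)) <= context_tokens

paper = "word " * 50_000                    # a long document, roughly 50k tokens
print(len(enc.encode(paper)))               # token count
print(fits_in_context(paper, 8_000))        # False: too long for an 8K window
print(fits_in_context(paper, 128_000))      # True: fits in a 128K window
```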
I like that this is your I have a dream speech.
One day you'll be judged by the full context of your character or of your whole lifetime.
That's interesting.
So that's part of the expansion that you're hoping for is a greater and greater context.
There's this... I saw this internet clip once.
I'm going to get the numbers wrong, but it was like Bill Gates talking about the amount
of memory on some early computer, maybe 64K, maybe 640K, something like that.
Most of it was used for the screen buffer.
And he just, it seemed genuine, he just couldn't imagine that the world would eventually
just couldn't imagine that the world would eventually
need gigabytes of memory in a computer
or terabytes of memory in a computer.
And you always do.
Or you always do just need to follow
the exponential of technology.
And we're going to like, we will find out how to use better technology.
So I can't really imagine what it's like right now for context links to go out to the billions
someday.
And they might not literally go there, but effectively it'll feel like that.
But I know we'll use it and really not want to go back once we have it.
Yeah.
Even saying billions 10 years from now might seem dumb
because it'll be like trillions upon trillions.
Sure.
There'll be some kind of breakthrough
that will effectively feel like infinite context.
But even 128K, I have to be honest,
I haven't pushed it to that degree.
Maybe putting in entire books
or like parts of books and so on, papers.
What are some interesting use cases of GPT-4 that you've seen?
The thing that I find most interesting is not any particular use case that we can talk
about those, but it's people who kind of like, this is mostly younger people, but people
who use it as like their default start for any kind of knowledge work task. And it's the fact that it can do a lot of things reasonably well.
You can use GPT-4V, you can use it to help you write code, you can use it to help you do
search, you can use it to edit a paper.
The most interesting to me is the people who just use it as the start of their workflow.
I do as well for many things.
Like I use it as a reading partner for reading books.
It helps me think, helps me think through ideas, especially when the books are classics, so
it's really well written about and it actually is as...
I find it often to be significantly better than even like Wikipedia on well-covered topics.
It's somehow more balanced and more nuanced.
Maybe it's me, but it inspires me to think
deeper than a Wikipedia article does.
I'm not exactly sure what that is.
You mentioned like this collaboration,
I'm not sure where the magic is.
If it's in here or if it's in there,
or if it's somewhere in between, I'm not sure.
But one of the things that concerns me for knowledge tasks
when I start with GPT is I'll usually have to do
fact checking after.
Like check that it didn't come up with fake stuff.
How do you figure that out, that GPT can come up with fake stuff that sounds really convincing?
So how do you ground it in truth?
That's obviously an area of intense interest for us. I think it's going to get a lot better with upcoming versions, but we'll have to work
on it and we're not going to have it all solved this year.
Well, the scary thing is as it gets better, you'll start not doing the fact checking more
and more, right?
I'm of two minds about that.
I think people are much more sophisticated users of technology than we often give them
credit for.
And people seem to really understand that GPT, any of these models hallucinate some
of the time and if it's mission critical, you got to check it.
Except journalists don't seem to understand that.
I've seen journalists half-assedly just using GPT for it.
Of the long list of things I'd like to dunk on journalists for, this is not my top criticism
of them.
Well, I think the bigger criticism is perhaps the pressures and the incentives of being
a journalist is that you have to work really quickly and this is a shortcut.
I would love our society to incentivize like...
I would too.
...long, like, journalistic efforts that take days and weeks, and rewards great in-depth journalism. Also journalism that presents stuff in a balanced way, where it, like, celebrates people while criticizing them, even though the criticism is the thing that gets clicks.
Making shit up also gets clicks and headlines that mischaracterize completely.
I'm sure you have a lot of people dunking on,
well, all that drama probably got a lot of clicks.
Probably did.
And that's a bigger problem about human civilization. I'd love to see us get to a place where we celebrate a bit more.
You've given ChatGPT the ability to have memories of previous conversations, which you've been playing with. And also the ability to turn off memory, which I wish I could do sometimes, just turn it on and off depending. I guess sometimes alcohol can do that, but not optimally, I suppose.
What have you seen through that, like playing around with that idea of remembering conversations
and not?
We're very early in our explorations here, but I think what people want, or at least
what I want for myself, is a model that gets to know me and gets more useful to me over
time.
This is an early exploration. I think there's, like, a lot of other things to do, but that's where we'd like to head.
You'd like to use a model and over the course of your life, or use a system, there'll be
many models, and over the course of your life, it gets better and better.
Yeah.
How hard is that problem?
Because right now it's more like remembering little factoids and preferences and so on.
What about remembering, like, don't you want GPT to remember all the shit you went through
in November and all
the drama?
And then you can...
Because right now you're clearly blocking it out a little bit.
It's not just that I want it to remember that.
I want it to integrate the lessons of that and remind me in the future what to do differently
or what to watch out for. And you know, we all gain from experience over the course of our lives, to varying degrees.
And I'd like my AI agent to gain with that experience too.
So if we go back and let ourselves imagine that, you know, trillions and trillions of
context length, if I can put every conversation I've ever had with anybody in my life in there, if I can have all of my emails, all of my input and output, in the context window every time I ask a question, that'd be pretty cool, I think.
Yeah, I think that would be very cool.
People sometimes will hear that and be concerned about privacy.
What do you think about that aspect of it? The more effective the AI becomes at
really integrating all the experiences and all the data that happened to you and giving
you advice?
I think the right answer there is just user choice. You know, anything I want stricken
from the record from my AI agent, I want to be able to like take out. If I don't want
to remember anything, I want that too. You and I may have different opinions about where on that privacy utility trade-off for
our own AI we want to be, which is totally fine.
But I think the answer is just like really easy user choice.
But there should be some high level of transparency from a company about the user choice.
Because sometimes companies in the past have been kind of shady about it. It's kind of presumed that we're collecting all your data and using it for a good reason, for advertisement and so on, but there's no transparency about the details of that.
That's totally true.
You mentioned earlier that I'm blocking out the November stuff.
I'm just teasing you. Well, I mean, I think it was a very traumatic thing, and it did immobilize me for a long
period of time.
Like, definitely the hardest, like the hardest work that I've had to do was just like keep
working that period.
Because I had to, like, you know, try to come back in here and put the pieces together while I was just, like, in sort of shock and pain. And, you know, nobody really cares about that.
I mean, the team gave me a pass and I was not working at my normal level.
But there was a period where I was just like, it was really hard to have to do both.
But I kind of woke up one morning and I was like, this was a horrible thing that happened
to me.
I think I could just feel like a victim forever.
Or I can say this is like the most important work I'll ever touch in my life and I need
to get back to it.
And it doesn't mean that I've repressed it, because sometimes I, like, wake up in the middle of the night thinking about it.
But I do feel like an obligation to keep moving forward.
Well, that's beautifully said, but there could be some lingering stuff in there.
Like what I would be concerned about is
that trust thing that you mentioned, that being paranoid about people
as opposed to just trusting everybody or most people
like using your gut.
It's a tricky dance.
For sure.
I mean, because I've seen, in my part-time explorations, I've been diving deeply into the Zelensky administration, the Putin administration, and the dynamics there in wartime, in a very highly stressful environment. And what happens is distrust, and you isolate yourself, and you start to not see the world clearly. And that's a concern, that's a human concern.
You seem to have taken it in stride and kind of learned the good lessons, and felt the love and let the love energize you. Which is great, but it still can linger in there.
There's just some questions I would love to ask
of your intuition about what's GPT able to do and not.
So it's allocating approximately the same amount
of compute for each token it generates.
Is there room there in this kind of approach
to slower thinking, sequential thinking?
I think there will be a new paradigm
for that kind of thinking.
Will it be similar like architecturally
as what we're seeing now with LLMs?
Is it a layer on top of the LLMs?
I can imagine many ways to implement that.
I think that's less important than the question you were getting at, which is do we need a
way to do a slower kind of thinking where the answer doesn't have to get like, you know, it's like, I guess
like spiritually you could say that you want an AI to be able to think harder about a harder
problem and answer more quickly about an easier problem.
And I think that will be important.
Is that like a human thought that we're just having?
You should be able to think hard.
Is that wrong intuition?
I suspect that's a reasonable intuition.
Interesting. So it's not possible, once the GPT gets to, like, GPT-7, that we'll just be instantaneously able to see, you know, here's the proof of Fermat's Theorem?
It seems to me like you want to be able to allocate more compute to harder problems. Like, it seems to me that if you ask a system like that, prove Fermat's Last Theorem, versus, what's today's date, unless it already knew and had memorized the answer to the proof, assuming it's got to go figure that out, it seems like that will take more compute.
But can it look like, basically, an LLM talking to itself, that kind of thing?
Maybe.
I mean, there's a lot of things that you could imagine working.
What the right or the best way to do that will be, we don't know.
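One way to picture the idea being discussed, an LLM "talking to itself" and spending more compute on harder problems, is a simple critique-and-refine loop. This is purely an illustrative sketch, not a description of how any OpenAI system works; `ask_model` and the difficulty heuristic are hypothetical placeholders.

```python
# Illustrative sketch of allocating more compute to harder problems:
# the model drafts an answer, then critiques and rewrites it for a number
# of rounds scaled by an estimated difficulty. `ask_model` is a hypothetical
# stand-in for any text-generation call.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("placeholder for a real model call")

def estimate_difficulty(question: str) -> int:
    # Toy heuristic: ask the model to rate difficulty from 1 (trivial) to 5 (very hard).
    rating = ask_model(f"Rate the difficulty of this question from 1 to 5: {question}")
    digits = [c for c in rating if c.isdigit()]
    return min(max(int(digits[0]), 1), 5) if digits else 3

def answer_with_adaptive_compute(question: str) -> str:
    rounds = estimate_difficulty(question)   # "what's today's date" -> 1 pass; a hard proof -> 5
    draft = ask_model(f"Answer this question: {question}")
    for _ in range(rounds - 1):
        critique = ask_model(f"Find flaws in this answer to '{question}':\n{draft}")
        draft = ask_model(f"Rewrite the answer, fixing these flaws:\n{critique}\n\nOriginal answer:\n{draft}")
    return draft
```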
This does make me think of the mysterious,
the lore behind Q-Star.
What's this mysterious Q-Star project?
Is it also in the same nuclear facility?
There is no nuclear facility.
That's what a person with a nuclear facility always says.
I would love to have a secret nuclear facility.
There isn't one.
All right. Maybe someday. Someday?
All right.
One can dream.
OpenAI is not a good company at keeping secrets.
It would be nice, you know, we're like, been plagued by a lot of leaks and it would be
nice if we were able to have something like that.
Can you speak to what Q-Star is?
We are not ready to talk about that.
See, but an answer like that means there's something to talk about.
It's very mysterious, Sam.
I mean, we work on all kinds of research.
Yeah.
We have said for a while that we think better reasoning in these systems
is an important direction that we'd like to pursue.
We haven't cracked the code yet.
We're very interested in it.
Are there going to be moments, Q* or otherwise, where there are going to be leaps similar to ChatGPT, where you're like...
That's a good question.
What do I think about that?
It's interesting, to me it all feels pretty continuous. Right, this is kind of a theme that you're saying,
is there's a gradual, you're basically gradually
going up an exponential slope,
but from an outsider's perspective,
for me just watching it, it does feel like
there's leaps, but to you there isn't.
I do wonder if we should have...
So part of the reason that we deploy the way we do is that we think, we call it iterative
deployment.
Rather than go build in secret until we got all the way to GPT-5, we decided to talk about
GPT-1, 2, 3, and 4.
Part of the reason there is I think AI and surprise don't go together.
Also, the world, people, institutions, whatever you want to call it, need time to adapt and
think about these things.
I think one of the best things that OpenAI has done is this strategy.
We get the world to pay attention to the progress, to take AGI seriously,
to think about what systems and structures and governance we want in place before we're
like, under the gun and have to make a rush decision.
I think that's really good.
But the fact that people like you and others say, you still feel like there are these leaps,
makes me think that maybe we should be doing
our releasing even more iteratively.
I don't know what that would mean.
I don't have any answer ready to go.
But like our goal is not to have shock updates to the world.
The opposite.
Yeah, for sure.
More iterative would be amazing.
I think that's just beautiful for everybody.
But that's what we're trying to do.
That's, like, our stated strategy.
And I think we're somehow missing the mark.
So maybe we should think about releasing GPT-5 in a different way or something like that.
Yeah.
4.71, 4.72.
But people tend to like to celebrate.
People celebrate birthdays.
I don't know if you know humans, but they kind of have these milestones.
I do know some humans.
People do like milestones.
I totally get that.
I think we like milestones too.
It's like fun to declare victory on this one and go start the next thing.
But yeah, I feel like we're somehow getting this a little bit wrong.
So when is GPT-5 coming out again?
I don't know.
That's an honest answer.
Oh, that's the honest answer.
Is it blink twice if it's this year?
We will release an amazing new model this year.
I don't know what we'll call it.
So that goes to the question of what's the way we release this thing?
We'll release over in the coming months many different things.
I think they'll be very cool.
I think before we talk about like a GPT-5 like model called that or not called that
or a little bit worse or a little bit better than what you'd expect from a GPT-5, I know
we have a lot of other important things
to release first.
I don't know what to expect from GPT-5.
You're making me nervous and excited.
What are some of the biggest challenges
and bottlenecks to overcome for whatever
it ends up being called, but let's call it GPT-5?
Just interesting to ask.
Is it on the compute side? is it on the technical side?
It's always all of these, you know, what's the one big unlock?
Is it a bigger computer?
Is it like a new secret?
Is it something else?
It's all of these things together.
Like, the thing that OpenAI I think does really well, and this is actually an original Ilya quote that I'm going to butcher, is something like, we multiply 200 medium-sized things together into one giant thing.
So there's this distributed constant innovation happening.
Yeah.
So even on the technical side, like a...
Especially on the technical side.
So like even like detailed approaches, like detailed aspects of everything.
How does that work with different disparate teams and so on?
Like how do they, how do the medium sized things
become one whole giant transformer?
How does this?
There's a few people who have to like think about
putting the whole thing together,
but a lot of people try to keep most of the picture
in their head.
Oh, like the individual teams, individual contributors
try to keep the picture. At a high level, yeah.
You don't know exactly how every piece works, of course, but one thing I generally
believe is that it's sometimes useful to zoom out and look at the entire map.
And I think this is true for like a technical problem.
I think this is true for like innovating in business. But things come together in surprising ways and having an understanding of that whole
picture, even if most of the time you're operating in the weeds in one area, pays off with surprising
insights.
In fact, one of the things that I used to have, and I think it was super valuable, was I used to have like a good map of all
of the frontiers, or most of the frontiers in the tech industry.
I could sometimes see these connections or new things that were possible that if I were
only deep in one area, I wouldn't be able to have the idea for it because I wouldn't
have all the data.
I don't really have that much anymore.
I'm like super deep now.
But I know that it's a valuable thing.
You're not the man you used to be, Sam.
Very different job now than what I used to have.
Speaking of zooming out, let's zoom out to another cheeky thing, but profound thing perhaps
that you said. You tweeted about needing $7 trillion.
I did not tweet about that.
I never said like, we're raising $7 trillion, blah, blah, blah.
Oh, that's somebody else?
Yeah.
Oh, but you said, fuck it, maybe eight, I think.
OK, I meme once there's misinformation out in the world.
Oh, you meme.
But sort of misinformation may have a foundation of like insight there.
Look, I think compute is going to be the currency of the future.
I think it will be maybe the most precious commodity in the world.
And I think we should be investing heavily to make a lot more compute.
Compute is an unusual, I think it's going to be an unusual market.
People think about the market for chips for mobile phones or something like that.
You can say that, okay, there's 8 billion people in the world, maybe 7 billion of them have phones, maybe it's 6 billion, let's say.
They upgrade every two years, so the market per year is three billion system on chip for
smartphones.
If you make 30 billion, you will not sell 10 times as many phones because most people
have one phone.
Compute is different.
Intelligence is going to be more like energy or something like that, where the only thing that I think makes sense to talk about
is at price X, the world will use this much compute and at price Y, the world will use
this much compute.
Because if it's really cheap, I'll have it reading my email all day, giving me suggestions
about what I maybe should think about or work on, and trying to cure cancer.
And if it's really expensive, maybe I'll only use it, or we'll only use it, to try to cure cancer.
So I think the world is going to want a tremendous amount of compute.
And there's a lot of parts of that that are hard.
Energy is the hardest part.
Building data centers is also hard.
The supply chain is hard. And of course, fabricating enough chips is hard.
But this seems to me where things are going.
We're going to want an amount of compute that's just hard to reason about right now.
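A toy sketch of the contrast being drawn here, with invented numbers purely for illustration: demand for phone chips saturates near one device per person, while demand for compute behaves more like a price-elastic curve, where how much the world uses depends heavily on what it costs.

```python
# Toy contrast (all numbers invented for illustration, not real market data).

PEOPLE_WITH_PHONES = 6e9   # "maybe it's 6 billion, let's say"
UPGRADE_YEARS = 2

def phone_chips_sold_per_year(chips_manufactured: float) -> float:
    # Making 10x more chips does not sell 10x more phones: demand saturates.
    return min(chips_manufactured, PEOPLE_WITH_PHONES / UPGRADE_YEARS)

def compute_demanded(price_per_unit: float, scale: float = 1e12, elasticity: float = 1.5) -> float:
    # Constant-elasticity demand curve: as compute gets cheaper, far more uses
    # become worthwhile (email triage all day, research assistants, drug discovery, ...).
    return scale / (price_per_unit ** elasticity)

print(phone_chips_sold_per_year(3e9), phone_chips_sold_per_year(30e9))  # both capped at 3e9
print(compute_demanded(10.0), compute_demanded(0.1))  # demand grows ~1000x as price falls 100x
```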
How do you solve the energy puzzle?
Nuclear fusion?
That's what I believe.
That's what I believe.
Nuclear fusion.
Yeah.
Who's going to solve that?
I think Helion's doing the best work, but I'm happy there's a race for fusion right now.
Nuclear fission, I think, is also quite amazing, and I hope as a world we can re-embrace that.
It's really sad to me how the history of that went, and I hope we get back to it in a meaningful
way.
So to you, part of the puzzle is nuclear fission, like nuclear reactors as we currently have
them, and a lot of people are terrified because of Chernobyl and so on.
Well, I think we should make new reactors.
I think it's just like it's a shame
that industry kind of ground to a halt.
And is it just mass hysteria, is that how you explain the halt?
Yeah.
I don't know if you know humans,
but that's one of the dangers.
That's one of the security threats for nuclear fission
is humans seem to be really afraid of it.
And that's something we have to incorporate
into the calculus of it.
So we have to kind of win people over
and to show how safe it is.
I worry about that for AI.
I think some things are going to go
theatrically wrong with AI.
I don't know what the percent chance is that I eventually
get shot, but it's not zero.
Oh, like we want to stop this from...
Maybe.
How do you decrease the theatrical nature of it?
You know, I've already started to hear rumblings because I do talk to people on both sides
of the political spectrum, hear rumblings where it's going to be politicized. AI is going to be politicized. It really worries me, because then it's like maybe the right is against AI and the left is for AI, because it's going to help the people, or whatever the narrative and formulation is. That really worries me. And then the theatrical nature of it can be
leveraged fully. How do you fight that?
I think it will get caught up in, like, left versus right wars.
I don't know exactly what that's going to look like, but I think that's just what happens
with anything of consequence, unfortunately.
What I meant more about theatrical risks is like AI is going to have, I believe, tremendously
more good consequences than bad ones, but it is going to have bad ones. There will be some bad ones that are bad, but not theatrical.
A lot more people have died of air pollution than nuclear reactors, for example.
Most people worry more about living next to a nuclear reactor than a coal plant.
But something about the way we're wired is that although there's many different kinds
of risks we have to confront, the ones that make a good climax scene of a movie carry
much more weight with us than the ones that are very bad over a long period of time but
on a slow burn.
Well, that's why truth matters, and hopefully AI can help us see the truth of things, to
have balance, to understand what are the actual risks or the actual dangers of things in the
world.
What are the pros and cons of the competition in this space and competing with Google Meta,
XAI and others?
I think I have a pretty like straightforward answer to this that maybe
I can think of more nuance later but the pros seem obvious which is that we get
better products and more innovation faster and cheaper and all the reasons
competition is good and the con is that I think if we're not careful it could
lead to an increase in sort of an arms race that I'm nervous about.
Do you feel the pressure of the arms race?
Like in some negative...
Definitely in some ways, for sure.
We spend a lot of time talking about the need to prioritize safety.
And I've said for, like, a long time that if you think of a quadrant of short timelines to the start of AGI versus long timelines, and then a slow takeoff or a fast takeoff, I think short timelines, slow takeoff is the safest quadrant and the one I'd most like us to be in.
But I do want to make sure we get that slow takeoff.
Part of the problem I have with this kind of slight beef with Elon is that silos are created, as opposed to collaboration on the safety aspect of all of this. It tends to go into silos, and closed versus open source, perhaps, in the models. Elon says, at least, that he cares a great deal about AI safety and is really worried about it, and I assume that he's not going to race unsafely.
Yeah, but collaboration here, I think, is really beneficial for everybody on that front.
Not really the thing he's most known for.
Well, he is known for caring about humanity and humanity benefits from collaboration.
And so there's always attention and incentives and motivations and in the end, I do hope humanity prevails.
I was thinking, someone just reminded me the other day about how the day that he got, like
surpassed Jeff Bezos for like the richest person in the world, he tweeted a silver medal
at Jeff Bezos.
I hope we have less stuff like that as people start to work towards AGI.
I agree.
I think Elon is a friend and he's a beautiful human being and one of the most important
humans ever.
That stuff is not good.
The amazing stuff about Elon is amazing and I super respect him.
I think we need him, all of us should be rooting for him and need him to step up as a leader
through this next phase.
Yeah.
I hope you can have one without the other, but sometimes humans are flawed and complicated
and all that kind of stuff.
There's a lot of really great leaders throughout history.
Yeah.
And we can each be the best version of ourselves and strive to do so. Let me ask you, Google, with the help of search,
has been dominating in the past 20 years.
I think it's fair to say, in terms of the access,
the world's access to information,
how we interact and so on.
And one of the nerve-racking things for Google,
but for the entirety of people in this space,
is thinking about how are people going
to access information?
Like you said, people show up to GPT as a starting point.
So is OpenAI going to really take on this thing
that Google started 20 years ago, which is how do we get?
I find that boring.
I mean, if the question is like,
if we can build a better search engine
than Google or
whatever, then sure, we should like go, you know, like people should use a better product.
But I think that would so understate what this can be.
You know, Google shows you like 10 blue links, well, like 13 ads and then 10 blue links,
and that's like one way to find information.
But the thing that's exciting to me is not that we can go build a better copy of Google Search,
but that maybe there's just some much better way to help people find and act on and synthesize information.
Actually, I think ChatGPT is that for some use cases, and hopefully we'll make it be like
that for a lot more use cases.
But I don't think it's that interesting to say like, how do we go do a better job of
giving you like 10 ranked web pages to look at than what Google does?
Maybe it's really interesting to go say, how do we help you get the answer or the information
you need?
How do we help create that in some cases, synthesize that in others, or point you to it in yet others?
But a lot of people have tried to just make a better search engine than Google.
And it is a hard technical problem, it is a hard branding problem, it's a hard ecosystem problem.
I don't think the world needs another copy of Google.
And integrating a chat client,
like a chat GPT with a search engine.
That's cooler.
It's cool, but it's tricky.
It's like if you just do it simply, it's awkward
because like if you just shove it in there,
it can be awkward.
As you might guess,
we are interested in how to do that well.
That would be an example of a cool thing.
That's not just like...
Like a heterogeneous, like integrating...
The intersection of LLMs plus search, I don't think anyone has cracked the code on yet.
I would love to go do that.
I think that would be cool.
Yeah.
What about the ad side?
Have you ever considered monetization?
You know, I kind of hate ads, just as like an aesthetic choice.
I think ads needed to happen on the internet
for a bunch of reasons to get it going.
But it's a more mature industry.
The world is richer now.
I like that people pay for chat GPT
and know that the answers they're getting
are not influenced
by advertisers.
There is, I'm sure, there's an ad unit that makes sense for LLMs, and I'm sure there's
a way to participate in the transaction stream in an unbiased way that is okay to do.
But it's also easy to think about the dystopic visions of the future where you ask ChatGPT something and it says, oh, here's, you know, you should think about buying this product, or you should think about, you know, going here for a vacation or whatever.
And I don't know, like, we have a very simple business model and I like it.
And I know that I'm not the product.
I know I'm paying, and that's how the business model works.
When I go use Twitter or Facebook or Google or any other great product but ad-supported
great product, I don't love that.
I think it gets worse not better in a world with AI.
Yeah, I mean, I can imagine AI would be better at showing the best kind of version of ads,
not in a dystopic future, but where the ads are for things you actually need.
But then does that system always result in the ads driving the kind of stuff that's shown, and all that?
It's, yeah, I think it was a really bold move of Wikipedia
not to do advertisements.
But then it makes it very challenging as a business model.
So you're saying the current thing with OpenAI
is sustainable from a business perspective?
Well, we have to figure out how to grow.
But it looks like we're going to figure that out.
If the question is, do I think we can have a great business that pays for our compute
needs without ads, that I think the answer is yes.
Well, that's promising.
I also just don't want to completely throw out ads as a...
I'm not saying that. I guess I'm saying I have a bias against them.
Yeah, I have also bias and just a skepticism in general.
And in terms of interface,
because I personally just have like a spiritual dislike
of crappy interfaces, which is why AdSense,
when it first came out, was a big leap forward
versus animated banners or whatever.
But it feels like there should be many more leaps forward
in advertisement that doesn't interfere
with the consumption of the content
and doesn't interfere in a big fundamental way,
which is what you were saying.
It will manipulate the truth to suit the advertisers.
Let me ask you about safety, but also bias,
and like safety in the short term,
safety in the long term.
The Gemini 1.5 came out recently,
there's a lot of drama around it,
speaking of theatrical things,
and it generated black Nazis and black founding fathers. I think fair to say
it was a bit on the ultra woke side. So that's a concern for people that if there is a human layer
within companies that modifies the safety or the harm caused by a model that they would introduce a lot of bias
that fits sort of an ideological lean within a company.
How do you deal with that?
I mean, we work super hard not to do things like that.
We've made our own mistakes, we'll make others.
I assume Google will learn from this one,
still make others.
It is, it is all, like these are not easy problems. One thing that we've been thinking
about more and more is, I think this is a great idea somebody here had, like, it'd be
nice to write out what the desired behavior of a model is, make that public, take input
on it, say, you know, here's how this model is supposed to behave and explain the edge
cases to. And then when a model is not behaving in a way that you want, it's at least clear about
whether it's a bug the company should fix or behaving as intended and you should debate
the policy.
And right now it can sometimes be caught in between.
Black Nazis obviously ridiculous, but there are a lot of other kind of subtle things that
you can make a judgment call on either way.
Yeah, but sometimes if you write it out and make it public, you can use kind of language
that's, you know, the Google's AI principles are very high level.
That doesn't, that's not what I'm talking about.
That doesn't work.
It'd have to say, you know, when you ask it to do thing X, it's supposed to respond in way Y.
So like literally, who's better Trump or Biden?
What's the expected response from a model?
Like something like very concrete.
Yeah, I'm open to a lot of ways a model could behave then, but I think you should have to
say, you know, here's the principle and here's what it should say in that case.
That would be really nice.
That would be really nice.
And then everyone kind of agrees because there's this anecdotal data that people pull out all
the time.
And if there's some clarity about other representative anecdotal examples, you can define-
And then when it's a bug, it's a bug, and the company can fix that.
Right.
Then it'd be much easier to deal with the black Nazi type of image generation if there's
great examples.
So San Francisco is a bit of an ideological bubble, tech in general as well.
Do you feel the pressure of that within a company
that there's like a lean towards the left politically
that affects the product, that affects the teams?
I feel very lucky that we don't have the challenges
at OpenAI that I have heard of at a lot of other companies.
I think part of it is like every company's got
some ideological thing.
We have one about AGI and belief in that and it pushes out some others.
We are much less caught up in the culture war than I've heard about it a lot of other
companies.
San Francisco is a mess in all sorts of ways, of course.
So that doesn't infiltrate open AI as a...
I'm sure it does in all sorts of subtle ways, but not in the obvious.
I think we've had our flare-ups for sure like any company, but I don't think we have anything
like what I hear about happen at other companies here on this topic.
What in general is the process for the bigger question of safety?
How do you provide that layer that protects the model from doing crazy dangerous things?
I think there will come a point where that's mostly what we think about, the whole company. And it won't be like, it's not like you have one safety team. It's like when we shipped GPT-4, that took the whole company thinking about all these different aspects and how they fit together.
And I think it's going to take that.
More and more of the company thinks about those issues all the time.
That's it.
That's literally what humans will be thinking about
the more powerful AI becomes.
So most of the employees at OpenAI
will be thinking safety, or at least to some degree.
Broadly defined, yes.
Yeah.
I wonder what are the full broad definition of that?
Like what are the different harms that could be caused?
Is this like on a technical level or is this almost like security threats?
It'll be all those things.
Yeah, I was going to say it'll be people, state actors trying to steal the model.
It'll be all of the technical alignment work.
It'll be societal impacts, economic impacts. It'll... It's not just like we have one team thinking about how to align the model.
It's really going to be like getting to the good outcome is going to take the whole effort.
How hard do you think people, state actors perhaps, are trying to hack? First of all, infiltrate OpenAI, but second of all, infiltrate unseen.
They're trying.
What kind of accent do they have?
I don't actually want any further details on this point.
Okay.
But I presume it will be more and more and more
as time goes on.
That feels reasonable.
Boy, what a dangerous space.
What aspect of the leap, and sorry to linger on this,
even though you can't quite say details yet,
but what aspects of the leap from GPT-4 to GPT-5
are you excited about?
I'm excited about it being smarter,
and I know that sounds like a glib answer,
but I think the really special thing happening is that it's not like it gets better in this one area and worse at others. It's
getting like better across the board. That's, I think, super cool.
Yeah, there's this magical moment. I mean, you meet certain people, you hang out with
people and they, you talk to them. You can't quite put a finger on it, but they kind of
get you.
It's not intelligence really, it's something else.
And that's probably how I would characterize the progress of GPT.
It's not like, yeah, you can point out,
look, you didn't get this or that.
But it's just to which degree
is there's this intellectual connection.
You feel like there's an understanding
in your crappy formulated prompts that you're doing
that it grasps the deeper question behind the question.
Yeah, I'm also excited by that.
I mean, all of us love being understood,
heard and understood.
That's for sure.
That's a weird feeling.
Even like with programming, like when you're programming
and you say something or just the completion that GPT might do,
it's just such a good feeling when it got you,
like what you were thinking about.
And I look forward to it getting you even better.
On the programming front, looking out into the future,
how much programming do you think humans will be doing
five, 10 years from now?
I mean, a lot, but I think it'll be in a very different shape.
Like maybe some people will program entirely in natural language.
Entirely natural language.
I mean no one programs like writing bytecode.
No one programs the punch cards anymore.
I'm sure you're going to find someone who does.
But you know what I mean.
Yeah, you're gonna get a lot of angry comments.
No, no, yeah, there's very few.
I've been looking for people who program Fortran.
It's hard to find, even Fortran.
I hear you, but that changes the nature
with the skill set or the predisposition
for the kind of people we call programmers then.
Changes the skill set,
how much it changes the predisposition, I'm not sure.
Oh, same kind of puzzle solving, that kind of stuff.
You know, programming is hard.
It's like, how do you get, like, that last 1% to close the gap? How hard is that?
Yeah, I think with most other cases,
the best practitioners of the craft
will use multiple tools
and they'll do some work in natural language
and when they need to go, you know,
write C for something, they'll do that.
Will we see humanoid robots or humanoid robot brains from open AI at some point?
At some point.
How important is embodied AI to you?
I think it's like sort of depressing if we have AGI and the only way to like get things
done in the physical world is like to make
a human go do it. So I really hope that as part of this transition, as this phase change,
we also get humanoid robots or some sort of physical world robots.
I mean, OpenAI has some history, quite a bit of history working in robotics.
Yeah.
But it hasn't quite like done in terms of emphasis.
We're like a small company.
We have to really focus.
And also robots were hard for the wrong reason at the time.
But we will return to robots in some way at some point.
That sounds both inspiring and menacing.
Why?
Because, ominously, "we will return to robots."
It's kind of like with, like, ominous Terminator music.
We will return to work on developing robots.
We will not like turn ourselves into robots, of course.
Yeah.
When do you think we, you and we as humanity will build AGI?
I used to love to speculate on that question.
I have realized since that I think it's like very poorly formed and that people use extremely
different definitions for what AGI is.
And so I think it makes more sense to talk about when we'll build systems that can do
capability X or Y or Z rather than when we kind of like fuzzily cross this one mile marker.
AGI is also not an ending. It's closer to a beginning, but it's much more of a mile marker than either of those things.
But what I would say in the interest
of not trying to dodge a question
is I expect that by the end of this decade
and possibly somewhat sooner than that, we will have quite capable systems that we
look at and say, wow, that's really remarkable.
If we could look at it now, maybe we've adjusted by the time we get there.
Yeah, but if you look at ChatGPT, even with 3.5, and you show that to Alan Turing, or not even Alan Turing, people in the 90s,
they would be like, this is definitely AGI.
Or not definitely, but there's a lot of experts that would say this is AGI.
Yeah, but I don't think 3.5 changed the world.
It maybe changed the world's expectations for the future, and that's actually really
important.
And it did kind of like get more people
to take this seriously and put us on this new trajectory,
and that's really important too.
So again, I don't wanna undersell it.
I think it like, I could retire after that accomplishment
and be pretty happy with my career.
But as an artifact,
I don't think we're gonna look back at that and say,
that was a threshold that really
changed the world itself.
So to you, you're looking for some really major transition in how the world...
For me, that's part of what AGI implies.
Like singularity level transition?
No, definitely not.
But just a major one, like the internet being created, like Google Search did, I guess.
What was the transition point?
Does the global economy feel any different to you now or materially different to you
now than it did before we launched GPT-4?
I think you would say no.
No, no.
It might be just a really nice tool for a lot of people to use.
It will help you with a lot of stuff, but it doesn't feel different.
You're saying that.
I mean, again, people define AGI all sorts of different ways. So maybe you have a different definition than I do.
But for me, I think that should be part of it.
There could be major theatrical moments also.
What to you would be an impressive thing AGI would do?
Like you are alone in a room with a system.
This is personally important to me.
I don't know if this is the right definition.
I think when a system can significantly increase the rate of scientific discovery in the world,
that's like a huge deal.
I believe that most real economic growth comes from scientific and technological progress.
I agree with you.
That's why I don't like the skepticism about science in the recent years.
Totally.
But actual rate, like measurable rate of scientific discovery. But even just seeing a system
have really novel intuitions, like scientific intuitions, even that would be just incredible.
Yeah.
You're quite possibly would be the person to build the AGI to be able to interact with
it before anyone else does.
What kind of stuff would you talk about?
I mean, definitely the researchers here will do that before I do.
Sure.
But what will I... I've actually thought a lot about this question.
If I were someone who's like... I think this is, as we talked about it, I think this is
a bad framework.
But if someone were like, okay, Sam, we're finished.
Here's a laptop.
This is the AGI.
You know, you can go talk to it.
I find it surprisingly difficult to say what I would ask.
That I would expect that first AGI to be able to answer.
That first one is not going to be the one which is like, go like, I don't think, like
go explain to me the grand unified theory of physics, the theory of everything for physics.
I'd love to ask that question.
I'd love to know the answer to that question.
You can ask yes or no questions about does such a theory exist?
Can it exist?
Well, then those are the first questions I would ask.
Yes or no?
Just very...
And then based on that, are there other alien civilizations out there?
Yes or no?
What's your intuition?
And then you just ask that.
Yeah.
I mean, well, so I don't expect that this first AGI could answer any of those questions,
even as yes or nos.
But if it could, those would be very high on my list.
Maybe you can start assigning probabilities. Maybe, maybe we need to go invent more technology
and measure more things first.
But if it's an AGI, oh I see,
it just doesn't have enough data.
I mean maybe it says like, you know,
you want to know the answer to this question about physics,
I need you to like build this machine
and help me make these five measurements and tell me that.
Yeah, like, what the hell do you want from me?
I need the machine first,
and I'll help you deal with the data from that machine.
Maybe you'll help me build the machine.
Maybe, maybe.
And on the mathematical side, maybe prove some things.
Are you interested in that side of things too?
The formalized exploration of ideas?
Whoever builds AGI first gets a lot of power.
Do you trust yourself with that much power?
Look, I was gonna, I'll just be very honest with this answer, I was going to say, and
I still believe this, that it is important that I, nor any other one person, have total
control over OpenAI or over AGI.
And I think you want a robust governance system.
I can point out a whole bunch of things about all of our board drama from last year, about
how I didn't fight it initially and was just like, yeah, that's the will of the board,
even though I think it's a really bad decision.
And then later I clearly did fight it, and I can explain the nuance and why I think it
was okay for me to fight it later.
But as many people have observed, although the board had the legal ability to fire me,
in practice it didn't quite work. And that is its own kind of governance failure.
Now, again, I feel like I can completely defend
the specifics here.
And I think most people would agree with that, but it...
It does make it harder for me to like look you in the eye and say, hey, the board can just fire me.
I continue to not want supervoting control over OpenAI.
I never have, never had it, never wanted it.
Even after all this craziness, I still don't want it.
I continue to think that no company should be making these decisions and that we really
need governments to put rules of the road in place.
I realize that that means people like Marc Andreessen or whatever will claim I'm going
for regulatory capture and I'm just willing to be misunderstood there.
It's not true and I think in the fullness of time it'll get proven out why this is important.
But I think I have made plenty of bad decisions for open AI along the way and a lot of good
ones.
And I am proud of the track record overall, but I don't think any one person should, and
I don't think any one person will.
I think it's just like too big of a thing now and it's happening throughout society
in a good and healthy way.
I don't think any one person should be in control of an AGI, or this whole movement
towards AGI.
And I don't think that's what's happening
Thank you for saying that. That was really powerful, and it's really insightful, this idea that the board can fire you is legally true, but human beings can manipulate the masses into overriding the board and so on.
But I think there's also a much more positive version
of that where the people still have power.
So the board can't be too powerful either.
There's a balance of power in all of this.
Balance of power is a good thing for sure.
Are you afraid of losing control of the AGI itself?
That's a lot of people who worry about existential risk, not because of state actors, not because
of security concerns, but because of the AI itself.
That is not my top worry as I currently see things. There have been times I worried about that more, and there may be times again in the future where that's my top worry. It's not my top worry right now.
What's your intuition about it not being your worry?
Because there's a lot of other stuff to worry about, essentially.
Do you think you could be surprised?
We could be surprised?
For sure.
Of course.
Saying it's not my top worry doesn't mean we don't need to work on it. I think we need to work on it super hard.
We have great people here who do work on that.
I think there's a lot of other things we also have to get right.
To you, it's not super easy to escape the box at this time.
Connect to the internet.
You know, we talked about theatrical risks earlier.
That's a theatrical risk.
That is a thing that can really take over how people think about this problem.
There's a big group of very smart, I think very well-meaning, AI safety researchers that
got super hung up on this
one problem.
I'd argue without much progress, but super hung up on this one problem.
I'm actually happy that they do that because I think we do need to think about this more.
But I think it pushed aside, it pushed out of the space of discourse a lot of the other
very significant AI-related risks.
Let me ask you about you tweeting with no capitalization.
Is the shift key broken on your keyboard?
Why does anyone care about that?
I deeply care.
But why?
I mean, other people ask me about that too.
Any intuition?
I think it's the same reason there's, like, this poet, E. E. Cummings, who mostly doesn't use capitalization, to say, like, fuck you to the system kind of thing.
And I think people are very paranoid because they want you to follow the rules.
You think that's what it's about?
I think it's...
It's like this guy doesn't follow the rules.
He doesn't capitalize his tweets.
Yeah.
This seems really dangerous.
He seems like an anarchist.
It doesn't... Are you. This seems really dangerous. He seems like an anarchist.
It doesn't.
Are you just being poetic, hipster?
What's the...
I grew up as like...
Follow the rules, Sam.
I grew up as a very online kid.
I'd spent a huge amount of time like chatting with people
back in the days where you did it on a computer
and you could like log off into Messenger at some point.
And I never capitalized there.
As I think most like internet kids didn't, or maybe they still don't, I don't know.
And actually, this is like, now I'm like really trying to reach for something, but I think
capitalization has gone down over time.
Like if you read like old English writing, they capitalized a lot of random words in
the middle of sentences, nouns, and stuff that we just don't do anymore.
I personally think it's sort of like a dumb construct that we capitalize the letter at
the beginning of a sentence and of certain names and whatever, but, you know, I don't... that's fine. And I used to, I think, even capitalize my tweets because I was trying to sound professional or something. I even capitalized my private DMs, or whatever, for a long time.
And then slowly, stuff like shorter form, less formal stuff has slowly drifted closer
and closer to how I would text my friends.
If I, like, write, if I, like, pull up a Word document and I'm writing a strategy memo for the company or something,
I always capitalize that. If I'm writing like a long kind of more like formal message,
I always use capitalization there too. So, I still remember how to do it.
But even that may fade out. I don't know. But I never spend time thinking about this, so I don't have, like, a ready-made answer.
Well, it's interesting. It's good to first of all know the shift key is not broken.
It works.
I was mostly concerned about your well-being on that front.
I wonder if people, like, still capitalize their Google searches, or their ChatGPT queries. If you're writing something just to yourself, do some people still bother to capitalize?
Probably not.
But very, yeah, there's a percentage,
but it's a small one.
The thing that would make me do it is if people were like,
it's a sign of like,
cause I'm sure I could like force myself
to use capital letters obviously.
If it felt like a sign of respect to people or something,
then I could go do it.
Yeah. But I don't know, I just like, I don't think about this.
I don't think there's a disrespect.
But I think it's just the conventions of civility that have a momentum.
And then you realize that it's not actually important for civility if it's not a sign
of respect or disrespect.
But I think there's a movement of people that just want you to have a philosophy around
it so they can let go of this whole capitalization thing.
I don't think anybody else thinks about this.
I mean, maybe some people.
I know some people do.
I think about this every day for many hours a day. So I'm really glad we clarified it.
You can't be the only person that doesn't capitalize tweets.
You're the only CEO of a company that doesn't capitalize tweets.
I don't even think that's true.
But maybe, maybe.
All right.
We'll investigate further and return to this topic later.
Given Sora's ability to generate simulated worlds, let me ask you a pothead question.
Does this increase your belief, if you ever had one, that we live in a simulation?
Maybe a simulated world generated by an AI system.
Yes, somewhat.
I don't think that's like the strongest piece of evidence.
I think the fact that we can generate worlds should increase everyone's probability somewhat or at least openness
to it somewhat.
But, you know, I was like certain we would be able to do something like Sora at some
point.
It happened faster than I thought.
But I guess that was not a big update.
Yeah, but the fact that, and presumably we'll get better and better and better, the fact that you can generate worlds, they're novel.
They're based in some aspect of training data, but like when you look at them, they're novel.
That makes you think, like, how easy it is to do this thing. How easy is it to create universes?
Entire like video game worlds that seem ultra realistic and photorealistic.
And then how easy is it to get lost in that world?
First with the VR headset and then on the physics-based level.
Someone said to me recently, and I thought it was a super profound insight, that there are these, like, very simple sounding but very psychedelic insights that exist sometimes.
So the square root function.
Square root of four, no problem.
Square root of two, okay, now I have to think about this new kind of number.
But once I come up with this easy idea of a square root function that you can kind of
explain to a child and exists by even looking at some simple geometry, then you can ask
the question of what is the square root of negative one.
And that, and this is why it's like a psychedelic thing, that like tips you into some whole
other kind of reality.
And you can come up with lots of other examples, but I think this idea that the lowly square root operator can offer such a profound insight and a new realm of knowledge
applies in a lot of ways.
And I think there are a lot of those operators for why people may think that any version
that they like of the simulation hypothesis is maybe more likely than they thought before.
But for me, the fact that Sora worked is not in the top five.
I do think, broadly speaking, AI will serve as those kinds of gateways at its best. Simple, psychedelic-like gateways to another way to see reality.
That seems for certain
That's pretty exciting
I haven't done ayahuasca before, but I will soon. I'm going to the aforementioned Amazon jungle in a few weeks.
Excited?
Yeah, I'm excited for it. Not the ayahuasca part, that's great, whatever. But I'm gonna spend several weeks in the jungle, deep in the jungle, and it's exciting,
but it's terrifying because there's a lot of things that can eat you there
and kill you and poison you.
But it's also nature and it's the machine of nature.
And you can't help but appreciate the machinery of nature
in the Amazon jungle,
because it's just like this system that just exists
and renews itself, like every second, every minute,
every hour, it's the machine.
It makes you appreciate like,
this thing we have here, this human thing,
came from somewhere.
This evolutionary machine has created that
and it's most clearly on display in the jungle.
So hopefully I'll make it out alive.
If not, this will be the last conversation we had,
so I really deeply appreciate it.
Do you think, as I mentioned before, there's other alien civilizations out there,
intelligent ones, when you look up at the skies? I deeply want to believe that the answer is yes.
I do find the Fermi paradox very puzzling.
I find it scary.
That intelligence is not good at handling.
Yeah, very scary.
Powerful technologies.
But at the same time, I think I'm pretty confident that there's just a very large number of intelligent
alien civilizations out there.
It might just be really difficult to travel through space.
Very possible.
And it also makes me think about the nature of intelligence.
Maybe we're really blind to what intelligence looks like.
Maybe AI will help us see that. It's not as simple as IQ tests and simple puzzle solving.
There's something bigger.
What gives you hope about the future of humanity?
This thing we've got going on, this human civilization?
I think the past is like a lot.
I mean, if we just look at what humanity has done in a not very long period of time, you
know, huge problems, deep flaws, lots to be super ashamed of, but on the whole, very inspiring.
It gives me a lot of hope.
Just the trajectory of it all.
Yeah.
That we're together pushing towards a better future. It is, you know, one thing that I wonder about is, is AGI going to be more like some single
brain or is it more like the sort of scaffolding in society between all of us?
You have not had a great deal of genetic drift from your great-great-great grandparents,
and yet what you're capable of is dramatically different.
What you know is dramatically different.
And that is not because of biological change.
I mean, you got a little bit healthier probably.
You have modern medicine.
You eat better, whatever.
But what you have is this scaffolding that we all contributed to, built on top of.
No one person is going to go build the iPhone.
No one person is going to go discover all of science.
And yet, you get to use it.
And that gives you incredible ability.
And so in some sense, they're like, we all created that.
And that fills me with hope for the future.
That was a very collective thing.
Yeah, we really are standing on the shoulders of giants.
You mentioned when we were talking about theatrical, dramatic AI risks,
that sometimes you might be afraid for your own life.
Do you think about your death? Are you afraid of it?
I mean, if I got shot tomorrow and I knew it today,
I'd be like, oh, that's sad.
I don't, you know, I wanna see what's gonna happen.
Yeah.
What a curious time, what an interesting time.
But I would mostly just feel very grateful for my life.
The moments that you did get.
Yeah, me too.
It's a pretty awesome life.
I get to enjoy the awesome creations of humans, of which I believe ChatGPT is one, and everything that OpenAI is doing.
Sam, it's really an honor and pleasure to talk to you again.
Great to talk to you.
Thank you for having me.
Thanks for listening to this conversation with Sam Altman.
To support this podcast, please check out our sponsors in the description.
And now, let me leave you with some words from Arthur C. Clarke.
It may be that our role on this planet is not to worship God, but to create Him.
Thank you for listening and hope to see you next time.