The Daily - 'Hard Fork': An Interview With Sam Altman
Episode Date: November 24, 2023

It was a head-spinning week in the tech world with the abrupt firing and rehiring of OpenAI's chief executive, Sam Altman. The hosts of "Hard Fork," Kevin Roose and Casey Newton, interviewed Altman only two days before he was fired. Over the course of their conversation, Altman laid out his worldview and his vision for the future of A.I. Today, we're bringing you that interview to shed light on how Altman has quickly come to be seen as a figure of controversy inside the company he co-founded.

"Hard Fork" is a podcast about the future of technology that's already here. You can search for it wherever you get your podcasts. Visit nytimes.com/hardfork for more.

Hear more of Hard Fork's coverage of OpenAI's meltdown:
Emergency Pod: Sam Altman Is Out at OpenAI
Yet Another Emergency Pod: Sam Altman Is Back
Transcript
Hey, it's Michael. I hope you're having a wonderful Thanksgiving holiday. If you didn't
catch us yesterday, let me just say, in addition to you and everyone who listens to The Daily,
one of the things we are so grateful for here at the show is our amazing colleagues, reporters
and editors throughout the newsroom, and also throughout the Times audio department. So
yesterday and today, we're doing something a little bit different.
We're turning the stage over to those colleagues in the audio department
to showcase their terrific work.
Today, it's our friends at Hard Fork.
If you're not familiar, Hard Fork is a weekly tech conversation
hosted by Kevin Roose and Casey Newton.
It's excellent.
Case in point, for today's show,
we're going to play you an interview that Kevin and Casey did with Sam Altman, the CEO at OpenAI,
just two days before Altman was abruptly ousted and later reinstated by his board.
If you're a Daily listener, earlier this week, we covered the entire saga in our episode, Inside the Coup at OpenAI.
Anyway, here's Kevin and here's Casey, who are going to say a little bit more about their interview with Altman and about their show, Hard Fork. Take a listen.
Hello, daily listeners. Hope you had a good Thanksgiving. I'm Kevin Roos,
a tech columnist for The New York Times.
I'm Casey Newton for Platformer.
And as you just heard, we are the hosts of Hard Fork.
It's a weekly podcast from The New York Times
about technology, Silicon Valley, AI, the future,
all that stuff.
Casey, how would you describe our show, Hard Fork,
to the daily audience?
I would say if you're somebody
who is curious about the future,
but you're also the sort of person
who likes to get a drink at the end of the week
with your friends, we are the show for you. Okay, we're going to tell you what is
going on in this crazy world. But we're also going to try to make sure you have a good time while we
do it.
Yeah. So this week, for example, we have been talking about the never-ending saga at OpenAI that Michael just mentioned. If you haven't been following this news, let's just summarize
what's going on in the quickest way possible. So last Friday, Sam Altman,
the CEO of OpenAI, and arguably one of the most important people in the tech industry,
was fired by his board. This firing shocked everyone, investors, employees, seemingly Sam
Altman himself, who seemed not to know what was coming. Then over the next few days, there was a
wild campaign by investors, employees,
and eventually some of the board members to bring back Sam Altman as CEO. And late on Tuesday night,
that campaign was successful. The company announced that Sam was coming back and that
the board was going to be reconstituted and basically back to normal.
Yeah. On one hand, Kevin, a shocking turn of events. And on the other,
by the time we got here, basically the only turn of events possible, I think.
Yeah. So it's been a totally insane few days. We've done several emergency podcasts about this.
And today we are going to bring you something that I think is really important, which is an interview with Sam Altman.
Now, this interview predates Sam Altman's firing.
We recorded it on Wednesday of last week, just two days before he was fired.
We obviously weren't able to ask him about any of this drama, but you will hear how he's thinking about the way that AI is developing and how it's going to influence the future. And the interview still holds up after everything that happened, because what we were curious about was how are you going to be leading OpenAI into the future? And as of Tuesday evening, he now will once again be leading OpenAI into the future.
Totally. So when we come back, our conversation with Sam Altman,
the CEO of OpenAI, recorded just two days before all of this drama started.
Sam Altman, welcome back to Hard Fork.
Thank you.
Sam, it has been just about a year since ChatGPT was released,
and I wonder if you have been doing some reflecting over the past year
and kind of where it has brought us in the development of AI.
Frankly, it has been such a busy year,
there has not been a ton of time for reflection.
Well, that's why we brought you in.
We want you to reflect here.
Great, I can do it now.
I mean, I definitely think this was the year so far.
There will be maybe more in the future, but the year so far where the general average
tech person went from taking AI not that seriously to taking it pretty seriously.
Yeah.
And the sort of recompiling of expectations given that.
So I think in some sense,
that's like the most significant update of the year.
I would imagine that for you,
a lot of the past year has been watching the world
catch up to things that you have been thinking about
for some time.
Does it feel that way?
Yeah, it does.
You know, we kind of always thought, on the inside of OpenAI, that it was strange that the rest of the world didn't take this more seriously, like it wasn't more excited about it.
I mean, I think if five years ago you had explained what ChatGPT was going to be, I would have thought, wow, that sounds pretty cool. But, you know, presumably I could have just looked into it more and I would have smartened myself up. But I think until I actually used it, as is often the case, it was just hard to know what it was.
Yeah, I actually think we could have explained it and it wouldn't have made
that much of a difference. We tried. People are busy with their lives. They don't have a lot of
time to sit there and listen to some tech people prognosticate about something that may or may not
happen. But you ship a product that people use, get real value out of, and then it's different. Yeah. I remember reading
about the early days of the run
up to the launch of ChatGPT
and I think you all have said that
you did not expect it to be a hit
when it launched.
No, we thought it would be a hit. We did it because we thought it was going to be a hit. We didn't think it was going to be this big of a hit.
Right. As we're sitting here today, I believe
it's the case that you can't actually sign up for ChatGPT
Plus right now. Is that right? Correct.
Yeah. So what's that all about?
We have
not enough capacity always, but
at some point it gets really bad. So over
the last
10 days or so, we have done
everything
we can. We've rolled out new optimizations, we've
disabled some features,
and then people just keep
signing up. It keeps getting slower and slower, and there's a limit at some point to what you can do there. We just don't want to offer a bad quality of service, and so it gets slow enough that we just say, you know what, until we can make more progress, either with more GPUs or more optimizations, we're going to put this on hold. Not a great place to be in, to be honest, but it was like the least of several bad options. Sure. And I feel like in the history of tech
development, there often is a moment with really popular products where you just have to close
signups for a little while, right? The thing that's different about this than others is it's
just, it's so much more compute intensive than the world is used to for internet services. So
you don't usually have to do this. Usually, by the time you're at this scale,
you've like solved your scaling bottlenecks.
Yeah.
One of the interesting things for me
about covering all the AI changes over the past year
is that it often feels like journalists and researchers
and companies are discovering properties of these systems
sort of at the same time altogether.
I mean, I remember when we had you and Kevin Scott
from Microsoft on the show earlier this year
around the Bing relaunch.
And you both said something to the effect of,
well, to discover what these models are
or what they're capable of,
you kind of have to put them out into the world
and have millions of people using them.
Then we saw, you know, all kinds of crazy
but also inspiring things.
You had Bing Sydney,
but you also had people starting to use these things in their lives.
So I guess I'm curious what you feel like you have learned about language models and your language models specifically from putting them out into the world.
What we don't want to be surprised by is the capabilities of the model.
That would be bad.
And we're not.
You know, with GPT-4, for example,
we took a long time between finishing the model
and releasing it.
Red teamed it heavily, really studied it,
did all of the work internally, externally.
And there's, I'd say there's at least so far,
and maybe now it's been long enough that we would have,
we have not been surprised by any capabilities
the model had that we just didn't know about at all
in a way that we were for GPT-3, frankly,
sometimes people found stuff.
But what I think you can't do in the lab is understand how technology and society are going to co-evolve. So you can say, here's what the model can do and not do, but you can't say, and here's exactly how society is going to progress given that. And that's where you just have to see what people are doing, how they're using it. Well, one thing is they use it a lot. That's one takeaway that we did not, clearly we did not, appropriately plan for. But more interesting than that is the way in which this is transforming people's productivity, personal lives, how they're learning. And one example that I think is instructive, because it was the first and the loudest, is what happened with ChatGPT in education.
Days, at least weeks, but I think days after the release of ChatGPT,
school districts were like falling all over themselves to ban ChatGPT.
And that didn't really surprise us.
Like that we could have predicted and did predict.
The thing that happened after that quickly, you know, like weeks to months, was school districts and teachers saying, hey, actually we made a mistake, and this is a really important part of the future of education, and the benefits far outweigh the downside. And not only are we unbanning it, we're encouraging our teachers to make use of it in the classroom, we're encouraging our students to get really good at this tool, because it's going to be part of the way people live. And then there was a big discussion about what the kind of
path forward should be. And that is just not something that could have happened without
releasing. And, can I say one more thing? Part of the decision that we made with the ChatGPT release, the original plan had been to do the chat interface and GPT-4 together in March. And we really believe in this idea of iterative deployment, and we had realized that the chat interface plus GPT-4 was a lot. I don't think we realized quite how much it was, like too much for society to take in. So we split it. And we put it out with GPT-3.5 first,
which we thought was a much weaker model.
It turned out to still be powerful enough
for a lot of use cases.
But I think that in retrospect
was a really good decision
and helped with that process
of gradual adaptation for society.
Looking back,
do you wish that you had done more
to sort of, I don't know, give people
some sort of a manual to say, here's how you can use this at school or at work?
Two things.
One, I wish we had done something intermediate between the release of 3.5 in the API and
ChatGPT.
Now, I don't know how well that would have worked because I think there was just going
to be some moment where it went like viral in the mind of society. And I don't know how incremental
that could have been. That's sort of a like, either it goes like this or it doesn't kind of
thing. And I think, I have reflected on this question a lot. I think the world was going to
have to have that moment. It was better sooner than later. It was good we did it when we did.
Maybe we should have tried to push it even a little earlier, but it's a little chancy about when it hits. And I think only a consumer product could have done what happened there.
Now, the second thing is, should we have released more of a how-to manual? And I honestly don't know. I think we could have done some things there that would have been helpful, but I really believe that it's not optimal for tech companies to tell people, like, here is how to use this technology, and here's how to do whatever. And the organic thing that happened there actually was pretty good.
I'm curious about the thing that you just said about, we thought it was important to get
this stuff into folks' hands sooner rather than later. Say more about what that is.
More time to adapt for our institutions and leaders to understand, for people to think about
what the next version of the model should do, what they'd like, what would be useful,
what would not be useful, what would be really bad, how society and the economy need to co-evolve.
Like the thing that many people in the field or adjacent to the field have advocated or used to
advocate for, which I always thought was super bad was, you know, this is so disruptive, such a big
deal. It's got to be done in secret by the small group of us that can understand it.
And then we will fully build the AGI
and push a button all at once
when it's ready.
And I think that'd be quite bad.
Yeah, because it would just be
way too much change too fast.
Yeah, again, society and technology
have to co-evolve
and people have to decide
what's going to work for them and not
and how they want to use it.
And we're, you know,
you can criticize OpenAI
about many, many things,
but we do try to like really listen to people and adapt it in ways
that make it better or more useful. And I think we're able to do that, but we wouldn't get it
right without that feedback. Yeah. I want to talk about AGI and the path to AGI later on,
but first I want to just define AGI and have you talk about sort of where we are on the continuum.
So I think it's a ridiculous and meaningless term.
Yeah.
I'm sorry.
I apologize, but I keep using it.
It's like deep in the muscle memory.
I mean, I just never know what people are talking about when they're talking about it.
They mean like really smart AI.
Yeah.
So it stands for artificial general intelligence.
And you could probably ask a hundred different AI researchers and they would give you a hundred
different definitions of what AGI is.
Researchers at Google DeepMind just released a paper this month that sort of offers a framework.
They have five levels, or I guess they have levels ranging from level zero, which is no AI,
all the way up to level five, which is superhuman. And they suggest that currently ChatGPT, Bard, and Llama 2 are all at level one, which is sort of equal to or slightly better than an unskilled human.
Would you agree with that?
Like, where are we?
If you would, if you'd say this is a term that means something and you sort of define it that way, how close are we?
I think the thing that matters is the curve and the rate of progress.
And there's not going to be some milestone that we all agree like, okay, we've passed it and now it's called AGI.
Like what I would say is we currently have systems that are like, there will be researchers who will write papers like that.
And, you know, academics will debate it and people in the industry will debate it.
And I think most of the world just cares like, is this thing useful to me or not?
And we currently have systems that are somewhat useful, clearly.
Like, and, you know, whether we want to say, like, it's a level one or two, I don't know.
But people use it a lot and they really love it.
There's huge weaknesses in the current systems.
But it doesn't mean that...
Like, I'm, you know, a little embarrassed by GPTs, but people still like them, and that's good. Like, it's nice to do useful stuff for people. So yeah, call it a level one, doesn't bother me at all. I am embarrassed by it. We will make them much better, but at their current state, they're still delighting people and being useful to people.
Yeah. I also think it underrates them slightly to say that they're just better than unskilled humans. When I use ChatGPT, it is better than skilled humans for some things.
And worse than any human and many other things.
But I guess this is one of the questions that people ask me the most, and I imagine ask you,
is, what are today's AI systems useful and not useful for doing?
I would say the main thing that they're bad at, well, many things, but one that is on my mind a lot is they're bad at reasoning. And a lot of the valuable human things require some degree of complex reasoning. But they're good at a lot of other things. Like, GPT-4 is vastly superhuman
in terms of its world knowledge. It knows there's a lot of things in there. And it's very different
than how we think about evaluating human intelligence. So it can't do these basic
reasoning tasks. On the other hand, it knows more
than any human has ever known. On the other hand, again, sometimes it like totally makes stuff up in
a way that a human would not. But, you know, if you're using it to be a coder, for example,
it can hugely increase your productivity. And there's value there, even though it has all of
these other weak points. If you were a student
you can learn
a lot more
than you could
without using this tool
in some ways.
Value there too.
Let's talk about GPTs, which you announced at your recent developer conference. For those who haven't had a chance to use one yet, Sam, what's a GPT?
It's like a custom version of ChatGPT that you can get to behave in a certain way. You can give it limited ability to do actions. You can give it knowledge to refer to. You can say, like, act this way.
But it's super easy to make,
and it's a first step
towards more powerful AI systems and agents.
We've had some fun with them on the show.
There's a hard fork bot that you can sort of ask
about anything that's happened on any episode of the show.
It works pretty well, we found, when we did some testing.
But I want to talk about where this is
going. What are the GPTs that you've released a first step toward?
AIs that can accomplish
useful tasks.
I think we need to move
towards this with great care.
I think it would be a bad idea to turn powerful agents free on the internet.
But AIs that can act on your behalf to do something with a company that can access your data,
that can help you be good at a task, I think that's going to be an exciting way we use computers.
Like, we have this belief that we're heading towards a vision where there are new interfaces, new user experiences possible
because finally the computer can understand you and think.
And so the sci-fi vision of a computer that you just, like,
tell what you want and it figures out how to do it,
this is a step towards that.
Right now, I think what's holding a lot of people back in, a lot of companies and organizations back
in sort of using this kind of AI in their work is that it can be unreliable. It can make up things,
it can give wrong answers, which is fine if you're doing creative writing assignments,
but not if you're a hospital or a law firm or something else with big stakes.
How do we solve this problem of reliability?
And do you think we'll ever get to the sort of low fault tolerance that is needed for these really high stakes applications?
So first of all, I think this is like a great example of people understanding the technology,
making smart decisions with it,
society and the technology co-evolving together.
Like what you see is that people are using it
where appropriate and where it's helpful
and not using it where you shouldn't.
And for all of the sort of like fear that people have had,
like both users and companies
seem to really understand the limitations and are making
appropriate decisions about where to roll it out. The kind of controllability, reliability, whatever you want to call it, that is going to get much better. I think we'll see a big step forward there over the coming years. And I think that there will be a time. I don't know if it's like 2026, 2028,
2030, whatever, but there will be a time where we just don't talk about this anymore.
Yeah. It seems to me though, that that is something that becomes very important to get right
as you build these more powerful GPTs, right? Like, I would love to have a GPT be my assistant,
go through my emails,
hey, don't forget to respond to this before the end of the day.
The reliability has got to be way up before that happens.
Yeah, yeah.
That makes sense.
You mentioned as we started to talk about GPTs
that you have to do this carefully.
For folks who haven't spent as much time reading about this, explain what are some things that could go wrong. You know, you guys are obviously going to be very careful with this. Other people are going to build GPT-like things and might not put the same kind of controls in place. So what can you imagine other people doing that, like, you as the CEO would say to your folks, hey, it's not going to be able to do that?
Well, that example that you just gave, like if you let it act as your assistant and go, like, you know, send emails, do financial transfers for you, like it's very easy to imagine how that could go wrong.
But I think most people who would use this don't want that to happen on their behalf either.
And so there's more resilience to this sort of stuff than people think.
I mean, for what it's worth, on the hallucination thing, which does feel like it has maybe been the longest conversation that we've had about ChatGPT in general since it launched, I just always think about Wikipedia
as a resource I use all the time. And I don't want Wikipedia to be wrong, but 100% of the time,
it doesn't matter if it is. I am not relying on it for life-saving information, right? ChatGPT
for me is the same, right? It's like, I mean, it's great at just kind of bar trivia, like, hey, you know,
what's like the history of this conflict in the world? Yeah. I mean, we want to get that a lot
better and we, we will, like, I think the next model will just hallucinate much less. Is there,
is there an optimal level of hallucination in an AI model? Because I've heard researchers say,
well, you actually don't want it to never hallucinate
because that would mean
making it not creative.
That new ideas
come from making stuff up
that's not necessarily tethered to...
This is why I tend to use
the word controllability
and not reliability.
You want it to be reliable
when you want.
You want it to...
Either you instruct it
or it just knows
based off of the context
that you are asking
a factual query and you want the hundred percent black-and-white answer.
But you also want it to know when you want it to hallucinate or you want it to make stuff
up.
As you just said, like new discovery happens because you come up with new ideas, most of
which are wrong.
And you discard those and keep the good ones and sort of add those to your understanding
of reality.
Or if you're telling a creative story, you want that.
So if these models didn't hallucinate at all, ever,
they wouldn't be so exciting.
They wouldn't do a lot of the things that they can do.
But you only want them to do that when you want them to do that.
And so the way I think about it is like model capability,
personalization, and controllability.
And those are the three axes we have to push on.
And controllability means no hallucinations when you don't want, lots of it when you're trying to invent something new.
Let's maybe start moving into some of the debates that we've been having about AI over the past year.
And I actually want to start with something that I haven't heard as much, but that I do bump into when I use your products, which is like, they can be quite restrictive in how you use them. I think mostly
for great reasons, right? Like, I think you guys have learned a lot of lessons from the past era
of tech development. At the same time, I feel like I've tried to ask ChatGPT a question about
sexual health. I feel like it's going to call the police on me, right? So I'm just curious how you've
approached that subject. Yeah, look, one thing, no one wants to be scolded by a computer ever. Like
that is not a good feeling. And so you should never feel like you're going to have the police called on you. That's, like, horrible, horrible, horrible. We have started very conservative,
which I think is a defensible choice. Other people may have made a different one. But again,
that principle of controllability, what we'd like to get to is a world where, if you want some of the guardrails relaxed a lot, and, you know, you're not a child or something, then fine, we'll relax the guardrails. It should be up to you. But I think starting super conservative here, although annoying, is a defensible decision, and I wouldn't have gone back and made it differently.
We have relaxed it already.
We will relax it much more, but we want to do it in a way where it's user controlled.
Yeah.
Are there certain red lines you won't cross, things that you will never let your models be used for, other than things that are obviously illegal or dangerous?
Yeah, certainly things that are illegal and dangerous we won't. There's a lot of other things that I could say, but where those red lines will be so depends on how the technology evolves that it's hard to say right now, like, here's the exhaustive set. We really try to just study the models and predict capabilities as we go, but, you know, if we learn something new, we change our plans.
Yeah. One other area where things have been shifting a lot over the past year is in AI regulation and governance. I think a year ago, if you'd asked, you know, the average
congressperson, what do you think of AI? They would have said, what's that?
Get out of my office. Right. You know, we just recently saw the Biden White House
put out an executive order about AI. You have obviously been meeting a lot with lawmakers and
regulators, not just in the U.S., but around the world. What's your view of how AI regulation is
shaping up? It's a really tricky point to get across. What we believe is that on the frontier systems,
there does need to be proactive regulation there.
But heading into overreach and regulatory capture
would be really bad.
And there's a lot of amazing work
that's going to happen with smaller models,
smaller companies, open source efforts.
And it's really important that regulation
not strangle that.
So it's like,
I've sort of become a villain for this,
but I think there was-
You have.
Yeah.
How do you feel about this?
Like, annoyed, but I have bigger problems in my life right now.
Right.
But this message of like, regulate us,
regulate the really capable models
that can have significant consequences,
but leave the rest of the industry alone.
It's just a hard message to get across.
Here is an argument that was made to me by a high-ranking executive at a major tech company as some of this debate was playing out.
This person said to me that there is essentially no harms that these models can have that the internet itself doesn't enable, right? And that
to do any sort of work like is proposed in this executive order to have to inform the Biden
administration is just essentially pulling up the ladder behind you and ensuring that the folks who
have already raised the money can, you know, sort of reap all of the profits of this new world, and will leave the little people behind.
So I'm curious what you make of that argument.
I disagree with it on a bunch of levels.
First of all, I wish the threshold
for when you do have to report
was set differently and based off of like,
you know, evals and capability thresholds.
Not flops?
Not flops.
Okay.
But there's no small company
trained with that many flops anyway,
so that's a little bit...
Yeah.
For the listener who maybe
didn't listen to our last episode
about this...
Listen to our flops episode!
The flops are the measure
of the amount of computing
that is used to train these models.
The executive order says
if you're above a certain
computing threshold,
you have to tell the government
that you're training a model that big.
Yeah.
But no small effort is training
at 10 to the 26th flops.
Currently, no big effort is either.
So that's like a dishonest comment.
Second of all,
the burden of just saying like,
here's what we're doing is not that great.
But third of all,
the underlying thing there, that there's nothing you can do here that you couldn't already do on the internet, that's the real either dishonesty or lack of understanding. You could maybe say with GPT-4 you can't do anything you can't do on the internet, but I don't think that's really true even at GPT-4. Like, there are some new things.
And GPT-5 and 6, there will be, like, very new things.
And saying that we're going to be, like, cautious and responsible
and have some testing around that,
I think that's going to look more prudent in retrospect
than it maybe sounds right now.
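For readers who want a rough sense of the 10-to-the-26th-flops reporting threshold discussed above, here is a minimal back-of-the-envelope sketch. It is not from the interview: the 6 × parameters × tokens rule of thumb for estimating training compute and the example model size are illustrative assumptions.

```python
# Rough, illustrative estimate of training compute versus the executive
# order's reporting threshold mentioned in the conversation above.
# Assumptions: the common ~6 * parameters * tokens approximation for
# training FLOPs, and a hypothetical 70B-parameter model trained on
# 2 trillion tokens.

REPORTING_THRESHOLD_FLOPS = 1e26  # 10 to the 26th floating-point operations


def training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training compute using the 6 * N * D rule of thumb."""
    return 6 * num_parameters * num_training_tokens


if __name__ == "__main__":
    flops = training_flops(num_parameters=70e9, num_training_tokens=2e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print(f"Above reporting threshold: {flops > REPORTING_THRESHOLD_FLOPS}")
```

Under these assumptions the estimate comes out around 8e23 FLOPs, well below the threshold, which is consistent with the point made above that no current training effort is at that scale.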
I have to say, for me,
these seem like the absolute gentlest regulations you could imagine.
It's like, tell the government and report on any safety testing you did.
Seems reasonable.
Yeah. I mean, people are not just saying that these fears of AI and sort of existential risk are unjustified. Some people, some of the more vocal critics of OpenAI, have said that OpenAI, that you specifically, are lying about the risks of human extinction from AI, creating fear so that regulators will come in and make laws or give executive orders that prevent smaller competitors from being able to compete with you. Andrew Ng, who I think was one of your professors at Stanford, recently said something to this effect. What's your response to that? I'm curious if you have thoughts about that.
Yeah, like, I actually don't think we're all going to go extinct.
I think it's going to be great.
I think we're like heading towards the best world ever.
But when we deal with a dangerous technology as a society,
we often say that we have to confront and successfully navigate the risks to get to
enjoy the benefits. And that's like a pretty consensus thing. I don't think that's like a
radical position. I can imagine that if this technology stays on the same curve, there are systems that are capable of significant harm in the future. And, you know, Andrew also said not that long ago that he thought it was totally irresponsible to talk about AGI because it was just never happening. I think he compared it to worrying about overpopulation on Mars, and I think now he might say something different. So, like, humans are very bad at having intuition for exponentials.
Again, I think it's going to be great. Like, I wouldn't work on this if I didn't think it was going to be great. People love it already.
I think they're going to love it a lot more,
but that doesn't mean we don't need to be responsible and accountable and thoughtful about
what the downsides could be. And in fact, I think the tech industry often has only talked about the
good and not the bad. And that doesn't go well either. The exponential thing is real. I have
dealt with this. I've talked about the fact that I was only using GPT 3.5 until a few months ago,
and finally, at the urging of a friend, upgraded.
And I thought, oh.
I would have given you a free account.
I'm sorry you waited.
I should have asked.
But it's a real improvement.
It is a real improvement.
And not just in the sense of, oh, the copy that it generates is better.
It actually transformed my sense of how quickly the industry was moving.
It made me think, oh, the next generation of things is going to be sort of radically better. And so I think that part of
what we're dealing with is just that it has not been widely distributed enough to get people to
reckon with the implications.
I disagree with that. I mean, I think that, like, you know, maybe the tech experts say, oh, this is, like, you know, not a big deal, whatever. But most of the world, who has used even the free version, is like, oh man, they got real AI.
Yeah, yeah. And you went around the world this year talking to people in a lot of different countries. I'd be curious, you know, to what extent that informed what you just said.
Significantly. I mean, I had a little bit of a sample bias, right, because the people that wanted to meet me were probably pretty excited. But you do get a sense, and there's quite a lot of excitement, maybe more excitement in the rest of the world than the US.
Sam, I want to ask you about something else that people are not happy about when it comes to
these language and image models, which is this issue of copyright. I think a lot of people view what OpenAI and other
companies did, which is sort of, you know, hoovering up work from across the internet,
using it to train these models that can, in some cases, output things that are similar to the work
of living authors or writers or artists. And they just think, like, this is the original sin of the
AI industry, and we are never going to forgive them for doing this. What do you think about that? And what would you say to
artists or writers who just think that this was a moral lapse? Forget about the legal, whether
you're allowed to do it or not, that it was just unethical for you and other companies to do that
in the first place.
Well, we block that stuff. Like, you can't go to DALL·E and generate something. I mean, you could, speaking of being annoyed, like, we may be too aggressive on that, but I think it's the right thing to do until we figure out some sort of economic model that works for people. And, you know, we're doing some things there now, but we've got more to do. Other people in the industry do allow quite a lot of that.
And I get why artists are annoyed.
I guess I'm talking less about the output question than just the act of taking all of this work, much of it copyrighted, without the explicit permission of the people who created it and using it to train these models.
What would you say to the people who just say, Sam, that was the wrong move, you should have asked, and we will never forgive you for it?
Well, first of all, I always have empathy for people who are
like, hey, you did this thing, and it's affecting me, and we didn't talk about it first, or it was
just like a new thing.
I do think that in the same way humans can read the internet and learn, AI should be allowed to read the internet and learn. It shouldn't be regurgitating, shouldn't be violating any copyright laws. But if we're really going to say that AI doesn't get to read the internet and learn, and if you read a physics textbook and learn how to do a physics calculation, not every time you do that in the rest of your life do you have to, like, figure out how to... that seems like not a good solution to me. But on individuals' private work, yeah, we try not to train on that stuff. We really don't want to be upsetting people. Again, I think other people in the industry have taken different approaches, and we've also done some things that I think, now that we understand more, we will do differently in the future.
Like what? Like what would you do differently?
We want to figure out new economic models so that, say, if you're an artist, we don't just totally block you.
We don't just not train on your data, which a lot of artists also say, no, I want this in here.
I want like whatever.
But we have a way to like help share revenue with you.
GPTs are maybe going to be an interesting first example of this because people will be able to put private data in there and say, hey, use this version, and there could be a revenue share
around it. Well, I had one question about the future that kind of came out of what we were
talking about, which is what is the future of the internet as ChatGPT rises? And the reason I ask
is I now have a hotkey on my computer that I type when I want to know something, and it just
accesses ChatGPT directly through software called Raycast. And because of this, I am not using Google Search nearly as much. I am not visiting websites nearly as much.
That has implications for all the publishers and for, frankly, just the model itself,
because presumably if the economics change, there'll be fewer web pages created.
There's less data for ChatGPT to access.
So I'm just curious what you have thought about the internet in a world
where your product succeeds in the way you want it to.
I do think if this all works, it should really change how we use the internet.
There's a lot of things that the interface is perfect for.
If you want to mindlessly watch TikTok videos, perfect. But if you're trying to get information or get a task accomplished, it's actually quite bad relative to what we should all aspire for. And you can totally imagine a world where you have a task that right now takes hours of clicking around the internet and bringing stuff together, and you just ask ChatGPT to do one thing, and it goes off and computes and you get the answer back. And I'll be disappointed if we don't use the internet differently.
Yeah.
Um,
do you think that the economics of the internet as it is today are robust enough to withstand
the challenge that AI poses? Probably. Okay. What do you think? Well, I worry in particular
about the publishers. The publishers have been having a hard time already for a million other
reasons. But to the extent that they're driven by advertising and visits to webpages, and to the extent that the visits to the webpages are driven by Google search in particular, a world where web search is just no longer the front page to most of the internet, I think, does require a different kind of web economics.
I think it does require a shift. But I think the value is... So what I thought you were asking about was, like, is there going to be enough value there for some economic model to work? And I think that's definitely going to be the case.
Yeah. The model may have to shift.
I would love it if ads become
less a part of the internet.
Like, I was thinking the other day, for whatever reason, I just had this thought in my head as I was browsing around the internet: there's more ads than content everywhere.
I was reading a story today, scrolling on my phone, and I managed to get it to a point where, between all of the ads on my relatively large phone screen, there was one line of text from the article visible. You know, one of the
reasons I think people like
ChatGPT, even if they can't articulate
it, is we don't do ads.
As an
intentional choice, because there's plenty of ways
you could imagine us putting ads. Totally.
But we made the choice that
ads plus AI can get a little dystopic.
We're not saying never.
We do want to offer a free service.
But a big part of our mission fulfillment, I think, is if we can continue to offer ChatGPT for free at a high quality of service to anybody who wants it.
And just say, hey, here's free AI.
And good free AI.
And no ads.
Because I think that really does,
especially as the AI like gets really smart,
that really does get a little strange. Yeah, yeah.
I know we talked about AGI and it not being your favorite term,
but it is a term that people in the industry use
as sort of a benchmark or a milestone
or something that they're aiming for.
And I'm curious what you think the barriers between here and AGI are.
Maybe let's define AGI as sort of a computer
that can do any cognitive task that a human can.
Let's say we make an AI that is really good,
but it can't go discover novel physics.
Would you call that AGI?
I probably would, yeah.
You would? Okay.
Well, again, I don't like the term, but I wouldn't call that we're done with the mission. I'd say we still got a lot more work to do.
The vision is to create something that is better than humans at doing original science, that can invent, can discover?
Well, I am a believer that all real
sustainable human progress comes from scientific and technological progress.
And if we can have a lot more of that, I think it's great.
And if the system can do things that we unaided on our own can't, just even as a tool that helps us go do that, then I will consider that a massive triumph, and, you know, I can happily retire at that point. But before that, I can imagine that we do something that creates incredible economic value but is not the kind of AGI, superintelligence, whatever you want to call it, thing that we should aspire to.
Right. What are some of the barriers to getting to that place where we're doing novel physics research? And keep in mind, Kevin, I
don't know anything about technology.
That seems unlikely to be true.
Well, if you start talking about retrieval-augmented generation or anything, you might lose me.
But he'll follow.
We talked earlier about just the model's limited ability to reason, and I think that's one thing that needs to be better. The model needs to be better at reasoning. Like GPT-4... an example of this that my co-founder Ilya uses sometimes, that's really stuck in my mind, is there was a time in Newton's life...
You're talking, of course, about Isaac Newton, not my life.
Isaac Newton. Yeah, okay. Well, maybe you. But maybe my life. We'll find out. Stay tuned.
Where the right thing for him to do
is to read every math textbook he could get his hands on.
He should talk to every smart professor,
talk to his peers, do problem sets, whatever.
And that's kind of what our models do today.
And at some point,
he was never going to invent calculus doing that, which didn't exist in any textbook. At some point, he had to go think of new ideas and then test them out and build on them, whatever else. And that phase, that second phase, we don't do yet. And I think you need that before it's something I want to call an AGI.
Yeah. One thing that I hear from AI researchers is that a lot of the progress that has been made over the past, call it five years, in this type of AI has been just the result of things getting bigger, right? Bigger models, more compute, more data to build these things, and that makes them more useful. But there hasn't really been a shift on the architectural level of the systems that these models are built on. Do you think that that is
going to remain true? Or do you think that we need to invent some new process or new
mode or new technique to get through some of these barriers?
We will need new research ideas, and we have needed them.
I don't think it's fair to say there haven't been any here.
I think a lot of the people who say that are not the people building GPT-4,
but they're the people sort of opining from the sidelines.
But there is some kernel of truth to it.
And the answer is, OpenAI has a philosophy of, we will just do whatever works. Like, if it's time to scale the models and work on the engineering challenges, we'll go do that. If now we need a new algorithmic breakthrough, we'll go work on that. If now we need a different kind of data mix, we'll go work on that. So we just do the thing in front of us, and then the next one, and then the next one, and the next one. And there are a lot of other people who want to write papers about, you know, level one, two, three, and whatever.
And there are a lot of other people who want to say, well, it's not real progress.
They just made this like incredible thing that people are using and loving, and it's not real science.
But our belief is like, we will just do whatever we can to usefully drive the progress forward. And
we're kind of open-minded about how we do that. What is super alignment? You all just recently
announced that you are devoting a lot of resources and time and computing power to
super alignment. And I don't know what it is. So can you help me understand?
That's alignment that comes with sour cream and guacamole
at a San Francisco taco shop.
That's a very San Francisco-specific joke,
but it's pretty good.
I'm sorry. Go ahead, Sam.
Can I leave it at that?
I don't really want to follow.
I mean, that was such a good answer.
No.
So alignment is how you sort of get these models to behave
in accordance with the human who's using them, what they want.
And super alignment is how you do that for super capable systems.
So we know how to align GPT-4 pretty well, better than people thought we were going to be able to do. When we put out GPT-2 and 3, people were like, oh, it's irresponsible research because this is always going to just spew toxic shit, you're never going to get it. And it actually turns out we're able to align GPT-4 reasonably well.
Maybe too well. Yeah, I mean, good luck getting it to talk about sex is my official comment about GPT-4.
But, you know, in some sense, that's an alignment failure, because it's not doing what you wanted there. So, but now we have that,
now we have like the social part of the problem.
We can technically do it.
But we don't yet know
what the new challenges will be
for much more capable systems, and so that's what that team's research is. So what kinds of
questions are they investigating
or what research are they doing?
Because I confess I
lose my grounding in
reality when you start talking about super capable systems and the problems that can emerge with them.
Is this sort of a theoretical future forecasting team?
Well, they try to do work that is useful today, but for the theoretical systems of the future.
So they'll have their first result coming out,
I think, pretty soon.
But yeah, they're interested in these questions
of as the systems get more capable than humans,
what is it going to take to reliably solve
the alignment challenge?
Yeah, and I mean, this is the stuff
where my brain does feel like it starts to melt
as I ponder the implications, right?
Because you've made something
that is smarter than every human,
but you, the human,
have to be smart enough
to ensure that it always acts in your interest,
even though by definition
it is way smarter than you.
Yeah, we need some help there.
Yeah.
I do want to stick on this issue
of alignment or super alignment
because I think there's an unspoken assumption
in there that,
well, you just put it as
alignment is sort of what the user wants
it to behave like. And obviously, there are a lot of users with good intentions.
No, no. Yeah, it has to be like what society and the user can intersect on. There are going to
have to be some rules here.
And I guess, where do you derive those rules? Because, you know, if you're Anthropic, you use, you know, the UN Declaration of Human Rights and the Apple Terms of Service.
The two most important documents in rights governance.
If you're not just going to borrow someone else's rules, how do you decide which values these things should align themselves to?
So we're doing this thing. We've been doing this thing.
We've been doing these like
democratic input governance grants
where we're giving different research teams
money to go off and set different proposals.
There's some very interesting ideas in there
about how to kind of fairly decide that.
The naive approach to this
that I have always been interested in,
maybe we'll try at some point,
is what if you had hundreds of millions
of ChatGPT users spend an hour,
a few hours a year,
answering questions about what they thought
the default setting should be,
what the wide bounds should be.
Eventually, you need more than just ChatGPT users.
You need the whole world represented in some way
because even if you're not using it,
you're still impacted by it.
But to start, what if you literally just had ChatGPT chat with its users?
I think it's very important.
It would be very important in this case to let the users make final decisions, of course.
But you could imagine it saying like, hey, you answered this question this way.
Here's how this would impact other users in a way you might not have thought of.
If you want to stick with your answer, that's totally up to you, but are you sure given this new data?
And then you could imagine like GPT-5 or whatever, just learning that collective preference set.
And I think that's interesting to consider. Better than the Apple terms of service, let's say.
I want to ask you about this feeling.
Kevin and I call it AI vertigo.
I don't know.
Is this a widespread term that people use?
No, I think you invented this.
It's just sort of us.
So there is this moment when you contemplate
even just kind of the medium AI future.
You start to think about what it might mean
for the job market, your own job,
your daily life for society.
And there is this kind of dizziness that I find sets in.
This year, I actually had a nightmare about AGI. And then I sort of asked around, and I feel like
people who work on this stuff, like that's not uncommon. I wonder for you, if you have had these
moments of AI Vertigo, if you continue to have them, or is there at some point where you think
about it long enough that you feel like you get your legs underneath you?
I used to have... I mean, there were some, you can point to these moments, but there were some very strange, like, extreme vertigo moments, particularly around the launch of GPT-3. But you do get your legs under you.
Yeah.
And I think the future will somehow be less different than we think. It's this amazing
thing to say, right? We invent AGI and it matters less than we think. It doesn't sound like a
sentence that parses. And yet it's what I expect to happen. Why is that? There's a lot of inertia
in society and humans are remarkably adaptable to any amount of change. One question I get a lot that I imagine you do too is from people who want to know what they can do.
You mentioned adaptation as being necessary on the societal level. I think for many years,
the conventional wisdom was that if you wanted to adapt to a changing world, you should learn
how to code, right? That was like the classic advice. May not be such good advice anymore. Exactly.
So now, you know, AI systems can code pretty well.
For a long time, the conventional wisdom
was that creative work was sort of untouchable by machines.
If you were a factory worker,
you might get automated out of your job.
But if you were an artist or a writer,
that was impossible for computers to do.
Now we see that's no longer safe.
So where is the sort of high ground here? Like,
where can people focus their energy if they want skills and abilities that AI is not going to be
able to replace? My answer is, my meta answer is you always, it's always the right bet to just
get good at the most powerful new tools, the most capable new tools. And so when computer programming was that, you did want to become a programmer. And now that AI tools totally change what one person can do, you want to get really good at using AI tools. And so having a sense for how to work with ChatGPT and other things, that is the high ground. And we're not going back. Like, that's going to be part of the world, and you can use it in all sorts of ways,
but getting fluent at it, I think is really important.
I want to challenge that
because I think you're partially right
in that I think there is an opportunity
for people to embrace AI
and sort of become more resilient to disruption that way.
But I also think if you look back through history, it's not like we learn how to do something new and then the old way just goes away,
right? We still make things by hand. There's still an artisanal market. So do you think there's going
to be people who just decide, you know what, I don't want to use this stuff. Totally. And they're
going to, like, there's going to be something valuable in their sort of, I don't know, non-AI-assisted work.
I expect that if we look forward to the future, things that we want to be cheap can get much cheaper.
And things that we want to be expensive are going to be astronomically expensive.
Like what?
Real estate, handmade goods, art.
And so, totally, there'll be a huge premium on things like that, and there'll be many people who really... you know, even when machine-made products have been much better, there has always been a premium on handmade products, and I'd expect that to intensify.
This is also a bit of a curveball,
very curious to get your thoughts.
Where do you come down on the idea of AI romances? Are these net good for society?
I don't want one personally. You don't want one. Okay.
But it's clear that there's a huge demand for this, right? Yeah. I think that, I mean,
Replika is building these. They seem like they're doing very well. I would be shocked if this is
not a multi-billion dollar company, right? Someone will make a multi-billion dollar company.
Yeah, somebody will. For sure.
Do you, like, I just
personally think we're going to have a big culture war. Like, I think
Fox News is going to be doing segments about the generation
lost to AI girlfriends or boyfriends, like, at some point
within the next few years. But at the same time, you look at
all the data on loneliness,
and it seems like, well, if we can give people
companions that make them happy during the day,
it could be a net good thing.
It's complicated.
Yeah.
You know, I have misgivings, but I don't, this is not a place where I think I get to, like, impose what I think is good on other people.
Totally.
Okay, but it sounds like this is not at the top of your product roadmap is building the boyfriend API.
No.
All right.
All right.
You recently posted on X that you expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes.
Can you expand on that?
Like, what are some things that AI might become very good at persuading us to do?
And what are some of those strange outcomes you're worried about? The thing I was thinking about at that moment was the upcoming election.
There's a huge focus on the U.S. 2024 election.
There's a huge focus on deep fakes and the impact of AI there.
And I think that's reasonable to worry about, good to worry about.
But we already have some societal antibodies towards people seeing like doctored photos
or whatever.
And yeah, they're going to get more compelling. It's going to be more, but we kind of know those are there. There's a lot of discussion about that. There's almost no discussion about what are, like, the new things AI tools can do to influence an election.
And one of those is to like carefully, you know, one-on-one persuade individual people.
Tailored messages.
Tailored messages.
That's like a new thing that the content farms couldn't quite do.
Right. And that's not AGI, but that could still be pretty harmful.
I think so, yeah.
I know we are running out of time, but I do want to push us a little bit further into the future than the sort of, I don't know, maybe five-year horizon we've been talking about. If you can imagine a good post-AGI world, a world in which we have reached this threshold, whatever it is, what does that world look like?
Does it have a government?
Does it have companies?
What do people do all day?
Like a lot of material abundance.
People continue to be very busy, but the way we define work always moves.
Our jobs would not have seemed like real jobs to people several hundred years ago, right?
This would have seemed like incredibly silly entertainment.
It's important to me.
It's important to you.
And hopefully it has some value to other people as well.
There will be... and the jobs of the future may seem, I hope they seem, even sillier to us, but I hope the people get even more fulfillment, and I hope society gets even more fulfillment out of them. But everybody can have a really great quality of life, like to
a degree that I think we probably just can't imagine now. Of course, we'll still have governments.
Of course, people will still squabble over whatever they squabble over. You know, less
different in all of these ways than someone would think. And then like unbelievably different in terms of what you
can get a computer to do for you. One fun thing about becoming a very prominent person in the
tech industry as you are, is that people have all kinds of theories about you. One fun one that I
heard the other day is that you have a secret Twitter account where you are way less measured
and careful. I don't anymore. I did for a while.
I decided I just couldn't keep up with the OPSEC. It's so hard to lead a double life.
What was your secret Twitter account? Obviously, I can't. I mean, I had a good alt. A lot of people have good alts, but you know. Your name is literally Sam Altman. I mean, it would have
been weird if you didn't have one. But I think I just got, yeah, like too well-known or something to be doing that.
Yeah.
Well, and the sort of theory that I heard attached to this
was that you are secretly an accelerationist,
a person who wants AI to go as fast as possible.
And then all this careful diplomacy that you're doing
and asking for regulation,
this is really just the sort of polite face
that you put on for society.
But deep down, you just think we should go
all gas, no brakes toward the future.
No, I certainly don't think all gas, no brakes to the future,
but I do think we should go to the future.
And that probably is what differentiates me
from, like, most of the AI companies.
I think AI is good.
Like I don't secretly hate what I do all day.
I think it's going to be awesome.
Like I want to see this get built.
I want people to benefit from this.
So all gas, no brake, certainly not.
And I don't even think like
most people who say it mean it,
but I am a believer
that this is a tremendously
beneficial technology
and that we have got to find a way
safely and responsibly
to get it into the hands of the people
to confront the risk
so that we get to enjoy the huge rewards.
And like, you know, maybe relative to the prior of most people to confront the risk so that we get to enjoy the huge rewards. And like, you know,
maybe relative to the prior
of most people who work on AI,
that does make me an accelerationist.
But compared to those
like accelerationist people,
I'm clearly not them.
So, you know, I'm like somewhere,
I think you like want the CEO
of this company to be somewhere.
You're accelerationist-adjacent.
Somewhere in the middle,
which I think I am.
You're gas and brakes.
I believe that this will be the most important and beneficial technology humanity has yet invented. And I also believe that if we're not careful about it, it can be quite disastrous. And so we have to navigate it carefully.
Yeah.
Sam, thanks so much
for coming on Hard Fork.
Thank you, guys.
Thank you. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson.
Special thanks to Paula Schumann,
Pui-Wing Tam,
Kate LoPresti,
and Jeffrey Miranda.
You can email us at hardfork@nytimes.com. Thank you.