The Daily - Inside the Coup at OpenAI
Episode Date: November 22, 2023

The board of OpenAI, the maker of the ChatGPT chatbot and one of the world's highest-profile artificial intelligence companies, reversed course late last night and brought back Sam Altman as chief executive. Cade Metz, a technology reporter for The Times, discusses a whirlwind five days at the company and analyzes what the fallout could mean for the future of the transformational technology.

Guest: Cade Metz, a technology reporter for The New York Times.

Background reading: With Mr. Altman's return, OpenAI's board of directors will be overhauled, jettisoning several members who had opposed him. Before the ouster, OpenAI's board was already divided and feuding.

For more information on today's episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday.
Transcript
From The New York Times, I'm Michael Barbaro.
This is The Daily.
Today, the surprise ouster of Sam Altman as CEO of OpenAI, the fallout that it triggered,
and what it all means for the future of the transformational technology at the center of the boardroom drama.
I speak with my colleague, technology reporter Cade Metz.
It's Wednesday, November 22nd.
Well, Cade, good morning.
Morning.
Thank you for making time for us.
We are now several days into a corporate drama
that has consumed the world that you cover, Silicon Valley,
and that remains pretty head-spinning and a very fluid situation.
Absolutely.
I mean, corporate dramas can be interesting, but never this interesting.
They're not supposed to be this interesting.
They're supposed to be a bit more boring.
That's right.
So in the simplest possible terms, what happened?
Sam Altman, the chief executive of OpenAI, the company that wowed millions of people late last year with the debut of ChatGPT, an online chatbot that can answer questions,
generate term papers, write poetry, even write its own computer code, has been ousted from
that company by the company's own board.
A boardroom coup is how we used to refer to that when I was a business reporter.
Yes.
And the stakes are artificial intelligence of a level that most people, even in the industry,
did not expect to arrive this soon. This is powerful technology that is already changing
the way disinformation is spread across the internet. It's starting to take away jobs.
And it is being improved at an incredibly fast rate that has caused great optimism in parts of Silicon Valley and great concern in other parts.
And those two things are clashing.
And the clash is encapsulated in this very human story that is ultimately about ego and power and money.
Okay, Sam Altman, that name, is not a name we've ever really examined in great detail.
We've talked a lot about this technology.
We've talked a lot about this company, OpenAI, but not a ton about Altman himself.
So help us understand really who Sam Altman is, why his firing is, as you just said,
such a titanic moment for this technology and how he fits into this large clash that you just outlined. In many ways, Sam Altman is the classic Silicon Valley archetype. He dropped out of
Stanford as a sophomore. He founds his own company.
And then he winds up as the president of a well-known startup incubator.
One of the most important things that we try to teach at Y Combinator.
Called Y Combinator.
Is that it's more important to build something that a few users love
than something that a lot of users like.
An organization that helps other startups get off the ground.
If you can build a product that is so good,
people spontaneously tell their friends about it,
you have done 80% of the work that you need
to be a really successful startup.
And through that job,
he becomes one of the most well-connected people
in Silicon Valley.
Welcome to How to Build the Future.
Today, our guest is Mark Zuckerberg.
Today, we have Elon Musk. Elon, thank you for joining us. Yeah, thanks for having me.
And in 2015, he and others, including Elon Musk, founded this company, OpenAI,
to do artificial intelligence. But he is not the main name in the headline.
Musk is the main name in the headline. And Sam, he's not an AI researcher.
He will tell you that he studied it briefly when he was at Stanford, but he is ambitious.
And as he sees this technology rising, he is among those who go after it. Google was in the lead,
so to speak, at that point in this push towards artificial intelligence.
They were building increasingly powerful technologies that could recognize objects the way a driverless car does, that could recognize your voice the way Siri does, that could translate between languages instantly as Google Translate does. And Sam and Elon Musk and a group
of researchers got together and decided they needed to challenge Google. There was a concern
that Google was going to build this technology on its own. It would become increasingly powerful
and Google would control the universe. And so they felt they would be a
counterweight to Google, and they would develop this technology in the open. They wouldn't bottle
this thing up and keep it to themselves. They would develop it in a way that would benefit humanity.
So from the start, you're saying Altman approached artificial intelligence with a sense of altruism and a little bit of worry that in the wrong hands, this could be dangerous.
That may or may not have been his main motivation, okay?
But it was part of how those around him thought.
And when they debuted in 2015, that was part of the ethos.
To that end, they created OpenAI not as a company, but as a non-profit. The idea was that they would
be free of the corporate pressures that were dangerous at places like Google. They would not be beholden to the
stock market. They would be beholden to the people. But a couple years later, Musk, in a huff,
leaves OpenAI because he feels like they're not pushing forward fast enough. And Elon is not only gone, Elon's money is gone. Elon was the main donor to this operation.
And Sam realizes if this operation is going to survive, he needs money.
And I need to underscore just how much money goes into the creation of this technology.
You need billions of dollars to build this stuff. Billions of dollars of computing power
needed to train these AI systems. Sam recognizes this. He creates a new company, a for-profit
company, because he needs to give investors a reason to invest. He needs to give them profits.
So he creates this for-profit and bolts it on to the nonprofit. He does that and immediately
raises $1 billion from one of the tech giants here in the U.S., Microsoft.
Okay. Well, we're going to return to that structure you just explained of a corporation basically being bolted to a nonprofit because that becomes important to this drama later on. But
what does this company that Altman bolts onto this nonprofit end up doing with this big infusion of cash? It ends up building an increasingly powerful
technology called GPT, a technology that can take in vast amounts of text from across the internet,
Wikipedia articles, digital books, chat logs, even computer programs, and it can learn to generate text on its own. And Sam Altman is the person who
is helping to push this forward because he continues to raise vast sums of money from
Microsoft. The initial $1 billion that Microsoft invested grew to $3 billion. All that goes into the creation of this
technology. And for people following the AI field, this technology was increasingly impressive.
But the general public, for the most part, did not wake up to what was happening until the end of last year when OpenAI and Sam Altman released ChatGPT.
A new artificial intelligence tool is going viral for cranking out entire essays in a matter of seconds.
It literally shows you how to do it, says how to do the math, and at the end, the answer is B. What the heck? This is insane.
ChatGPT is poised to change the way we interact with computers and AI. In fact,
ChatGPT wrote everything I just said when we asked it to write an introduction to this piece.
Right, this is when everyone, The Daily included, starts covering the heck out of this new technology, because it is so insanely powerful and compelling. And a lot of attention also goes to Sam Altman. He suddenly rises from a well-known
figure in Silicon Valley to a well-known figure across the world. He is the face of this technology
which so many millions of people have been wowed by.
So, what have you done?
Like ever?
No, I mean, what have you done with AI?
Sam's on every podcast.
I think it's going to be a great thing,
but I think it's not going to be all a great thing. Then he's in front of Congress.
OpenAI was founded on the belief that artificial intelligence
has the potential to improve nearly every aspect of our lives. And then... Sam Altman, co-founder and CEO of OpenAI,
visited us all last week. He's on a global tour. I would be surprised if Australia does not build
great AI companies. Talking with world leaders in Europe and in Asia. The thoughtfulness, the focus, the urgency on figuring out how we mitigate these very huge risks that are coming so that we get to enjoy the benefits of this technology.
Promoting this technology, but also acknowledging that it could be dangerous.
There's always this nod to the dangers and the concerns,
and that's part of who Sam is. We've got to be cautious here. And also, I think it doesn't work
to do all this in a lab. You've got to get these products out into the world. He will tell you one
thing and then immediately nod to the opposite thing.
And this happens throughout a conversation.
He is optimistic.
This is going to be a good thing.
Always saying, yeah, but there are concerns.
People should be happy that we're a little bit scared of this.
I think people should be a little bit scared.
A little bit.
You personally.
I think if I said I were not, you should either not trust me or be very unhappy I'm in this job.
So in some ways it sounds like he is embodying the tensions all around this new technology.
He absolutely is.
And those tensions are real. And in some ways, this is a
new thing for Silicon Valley. The thing you have to realize about Silicon Valley is that it's very
much about or has been about optimism. People believing that things were possible that most
people didn't. They, as Silicon Valley entrepreneurs, were going to make the world
better. That's been the trope. What happened with artificial intelligence in particular over the
last decade is that you also have a group of people who are looking into the future. And they don't necessarily see an optimistic picture. They see a concerning
picture. And as this technology that Mr. Altman and his lab are building gets more powerful,
that becomes part of the Silicon Valley ethos. You have your optimists and you have your pessimists. And then you've got Sam Altman,
who's so good at balancing things, embodying both of those two sides of the equation.
Can you just define these two camps? I think the optimist case is pretty straightforward, right?
That something like ChatGPT is going to enhance our lives and there can be safeguards that protect us. What exactly are the pessimists making the case for? Well, the pessimists will acknowledge that this could be a powerful
technology. They'll even say this could be used to cure cancer or solve climate change.
But they are also fundamentally worried that if the right safeguards are not put in place, this could
destroy humanity. It is a deeply held, sometimes strange to understand belief, but this is a real
force in the valley. People are worried about this, and some of those people were sitting on the board of that not-for-profit that Sam Altman created back in
2015 when he initially put together OpenAI. So there are pessimists about this technology who
sit on the board of the nonprofit that basically employs Altman. Right. And by November of 2023, that board is just six people. And because
of the unusual structure of OpenAI, they have an incredible amount of power. Those six people
alone control the for-profit company.
They are not beholden to shareholders, investors who have put money into the company.
Even Microsoft with its billions in the company?
Even Microsoft, which has by this time put $13 billion into the company,
does not have control over the situation.
Six people control whether Mr. Altman is at the head of that company or not.
And a few days ago, Friday morning,
while I was on the phone with another OpenAI employee,
I was told I better look out for an email around noon.
And then just after noon, that tiny board announces to the world that Sam Altman is no longer the CEO of OpenAI and no one can believe it. No one at OpenAI, no one at Microsoft,
none of the investors in OpenAI, nobody saw this coming, and no one understands
how all this is going to play out over the next 72 hours.
We'll be right back.

So, Cade, how does Altman's ouster fit into this divide we've been discussing of the AI optimists and the AI pessimists, and what exactly has been the fallout? One of the many remarkable things about this whole soap opera is that we don't
really know why they ousted Sam Altman. What the board said was that they could no longer trust him to run this company and build AI for the benefit of
humanity. But we still don't know why ultimately he was removed.
I mean, is it safe to assume that it does fit into the schism that you mentioned between those who think that ChatGPT is ultimately very good or
very scary, and that in trying to straddle those two universes, Altman somehow ran afoul
of a board that seems pretty full of pessimists? That tension is fundamentally part of this situation.
You have essentially a board that is split in half. You have three founders of OpenAI, including Sam Altman on the board, and you have three
other people, some of whom are very concerned about the future of AI.
And Sam and his leadership team thought that that balance would
work out. But one of the co-founders, Ilya Sutskever, a very important AI researcher over
the past decade, has grown increasingly concerned about the dangers of the technology, and he is among those who ousted Mr. Altman.
He kind of broke the tie, as it were.
Ilya broke the tie.
Okay, so let's turn to the fallout
of this board doing what it just did
in getting rid of Altman.
We have major breaking news related to OpenAI.
Sam Altman is out as CEO of OpenAI. The fallout is immediate.
You know, the minute this thing started to filter out, Microsoft shares started to fall.
Investors across the world, the people and companies who have put billions of dollars
into this AI company, they don't understand the decision.
They are blindsided by it.
Even key investors were supposedly notified
only a few minutes before that missive went out.
And on top of it all, they're powerless to do anything.
Obviously, this just puts a spotlight on this nonprofit board.
They have no say in what this tiny little board does.
And that includes Satya Nadella, the CEO of Microsoft.
We really want to partner with OpenAI
and we want to partner with Sam.
And so irrespective of where Sam is...
He and other investors start to put pressure
on this board to take Altman back.
We were very confident in Sam and his leadership team.
I've not been told about anything.
You know, they published internally.
And the walls start to cave in, so to speak.
The company's president and co-founder, Greg Brockman,
has quit after the firing of the CEO and fellow co-founder.
Especially when other talent from inside OpenAI starts to leave.
There's this support spilling out across the internet.
Everyone is trying to put pressure on these four people on the board.
And pretty soon, Altman is inside the offices of OpenAI in San Francisco trying to convince them to take him back.
And the board, late on Sunday, they put out a note that says, we are standing firm by our decision.
Sam Altman is out. And then, out of the blue...
Microsoft announcing that it has hired Sam Altman to lead its artificial intelligence group.
Microsoft and Satya Nadella say, OK, we're essentially going to rebuild what you were doing at OpenAI and we're creating a competitor.
And when that happens, other employees at OpenAI start to sign a letter threatening to leave OpenAI and join this new venture. Latest number we have right now, at least 700 of OpenAI
employees threatening to leave the startup for Microsoft. By the way, that's out of about 770 employees total.
And still, the board stands firm.
And if that wasn't enough for you, Ilya Sutskever, the guy who switched sides and joined the board in ousting Mr. Altman, he switches sides again.
And he goes back to Team Sam and he says,
I deeply regret what I have done. And he puts his name on the letter that the 700
employees have sent out threatening to join the Microsoft venture.
So one of the board members who is responsible for Altman's ouster,
once the ouster's repercussions become clear, says, I really regret that, and signs a letter saying, I will leave the company unless Altman returns as CEO. That is pretty head-spinning.
We're still trying to figure out what was inside his head and what is inside his head now.
How do you understand the depth of loyalty to Altman?
Why does everyone decide that if Altman leaves, they're going to leave too?
Well, there are many reasons here.
For one, they were on top of the world.
They had a good thing going.
They have a stake in the company.
They can make money. They're also, for the most part, on the optimistic side. Many people who were concerned
about dangers have left the company over the years because of this type of disagreement,
just on a smaller scale. So you've got people who are more aligned with Altman at the company, for the most part.
So in that sense, this board misunderstood where most people inside OpenAI stood. But doesn't the board's move backfire, because that technology has just up and gone over to Microsoft, which would seem to have even fewer safeguards in place?
What you have to realize ultimately is that this technology is going to happen one way or another. This is a story that shows that the genie is out of the bottle in many ways, and the technology is going
to push forward. Bottling the technology up in the way that the board seems to want to do may not be
the best way forward. There are a lot of arguments that say
we don't want to bottle it up inside these companies. We want more people to be aware of
what is being built so we can understand it, so we can find the flaws, so we can find where the
dangers might be. And there are a lot of arguments about the future of this technology, but you can be sure this technology has a future.
Okay, does that mean that in this case, and when we think about the future of this powerful technology, that the optimists have prevailed and that the safeguards have kind of lost?
Not necessarily.
This is an argument
that will continue
across Silicon Valley.
There are optimists
and there are pessimists.
This battle
has only just begun.
Well, Cade,
thank you very much.
We appreciate it.
Thank you.
On Wednesday morning, OpenAI announced that Sam Altman would be reinstated as CEO.
The reversal reflected the enormous pressure placed on the company by allies of Altman, including investors and employees,
nearly all 770 of whom had threatened to leave OpenAI by Tuesday night.
As part of Altman's return, OpenAI's board of directors will be overhauled
and several members who voted to fire Altman will be forced out.
Just before all of this unfolded,
my colleague, Times technology columnist Kevin Roose,
interviewed Altman for his podcast, Hard Fork. If you're curious to hear that conversation with the man at the center of all of this, search for Hard Fork wherever you listen.
We'll be right back.
Here's what else you need to know today.
On Wednesday morning, the Israeli government approved a hostage deal with Hamas
that could produce the longest pause in fighting since the war began 46 days ago.
Under the terms of the deal, Hamas will release at least 50 hostages held captive in Gaza,
while Israel will release 150 Palestinian prisoners
from Israeli jails.
Those exchanges, which could start as early as tomorrow,
are expected to occur over four days,
during which fighting will pause.
And in the latest blow to the beleaguered cryptocurrency market, the founder of Binance,
the world's largest crypto exchange, has pleaded guilty to violating U.S. money laundering
laws, and Binance itself said it would pay a $4.3 billion fine.
U.S. prosecutors had accused Binance and its founder, Changpeng Zhao, of engaging in
outlawed financial transactions, including with customers in countries under U.S. sanctions.
The guilty plea comes shortly after the conviction of Sam Bankman-Fried,
the founder of another crypto exchange, FTX.
Today's episode was produced by Olivia Natt and Will Reid.
It was edited by Lisa Chow and Brendan Klinkenberg,
contains original music by Marion Lozano,
Rowan Niemisto, and Dan Powell,
and was engineered by Chris Wood.
Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly.
That's it for The Daily.
I'm Michael Barbaro.
See you tomorrow.