Moonshots with Peter Diamandis - EP #7 Eric Schmidt (Ex-Google CEO): How to Run a Tech Giant
Episode Date: October 20, 2022

In this episode, Peter and Eric discuss how to successfully run a tech giant like Google, the arms race of AI, and how quantum tech will change the world.

You will learn about:
09:37 | AI will expand our knowledge of biology
15:19 | Creating algorithms that digitize everything
38:57 | USA v. China: who will win the race to AI?
41:20 | The mindset needed to run a company like Google

Eric Schmidt is best known as the CEO of Google from 2001 to 2011; he went on to serve as Executive Chairman of Google and later Alphabet, and then as their Technical Advisor until 2020. He was also on the board of directors at Apple from 2006 to 2009 and is currently chairman of the board of directors of the Broad Institute. From 2019 to 2021, Eric chaired the National Security Commission on Artificial Intelligence.

Resources:
Levels: Real-time feedback on how diet impacts your health. levels.link/peter
Consider a journey to optimize your body with LifeForce.
Learn more about Abundance360.
Eric's foundation, Schmidt Futures
Listen to other episodes of Moonshots & Mindsets
Transcript
Fifteen years ago, we embarked on an experiment in social media, which I didn't pay that much attention to. We were busy doing our own; Facebook beat us, so forth and so on. That model, which was a linear feed, made sense to me. Today's model, which is an amped-up feed designed to increase engagement and make you outraged, is not what was on the program 10 years ago.
A massive transformative purpose is what you're telling the world. It's like: this is who I am, this is what I'm going to do, this is the dent I'm going to make in the universe.
Here's a conversation I had back in April of 2022 at my executive summit, called Abundance 360, with
the one, the only, the extraordinary Eric Schmidt. You may know Eric Schmidt as the head of the Schmidt
Futures Foundation. Most people know him as the past CEO and chairman of first Google and then
Alphabet. He's an extraordinary technologist and philanthropist and entrepreneur and thinker.
And here we cover a wide range of topics, from the world's biggest problems to cleaning up the oceans to the XPRIZEs he's been involved with.
Please join me for an extraordinary conversation with one of the most brilliant investors and
entrepreneurs, someone who's built one of the most impactful companies on the planet,
Eric Schmidt.
I was recounting our trip to Russia for the Soyuz launch.
It's actually true.
So it's one of the best memories ever, hiding in the bus.
Yes.
Escaping Russian guards.
To get through the gates in the bus on the way to the
underground bunker, we had to
duck down so we wouldn't be seen.
Peter said that if it blew up, we would be dead.
And we said, it's not going to blow up.
This is the care that we take.
I always thought I could
make money by putting
puts on Google
and telling people that you're half a kilometer from a Soyuz launch.
Or the time that we took...
One of the things that's interesting is
we had some fun on some zero-g flights, too.
I don't know if you know, if you remember.
So the first time we did a zero-g flight,
I followed all the rules.
And the second time, I observed the photographer,
who basically, the moment it started, started flipping himself.
And so I thought, well, I'm a veteran now.
I've done it once.
And so this is an airplane full of Google customers.
And so you can see where this is going.
And at the time, we didn't have video cameras on our phones, but I had my digital video camera strapped to my hand. And so the moment we start floating,
I immediately flip myself. Flip one, flip two, flip three,
boom! Everything comes out. And when you throw up in space, okay.
And by the way, so we're clear, I have this on video.
And our customers are flying through this.
So Peter, Peter's smart, he knows that there are idiots like me.
So there's a guy whose job is to retrieve you and strap you down,
and there's another fella whose job is to clean it up while it's floating.
We paid the other guy double.
I strongly recommend you do this flight.
I strongly recommend you follow the instructions.
Matt Gohd's in the back of the room.
He's our new CEO for Zero-G.
Hi, Matt.
Matt.
Before we get started, I was prepping for our time together,
and I'm grateful for our friendship, Eric,
and for all of your support over the years.
But you have the hardest bio on the planet.
I mean, is this stuff true?
So check this out.
Dr. Eric Schmidt served as Google's CEO and Chairman from 2001 to 2011, remained Chairman
of Google through 2015, next served as Executive Chairman of Alphabet from 2015 to 2018, and
Technical Advisor through 2020.
I know that stuff. Prior to Google, you were Chairman and CEO of Novell, 14 years at Sun prior to that, and then you were at Xerox
Park, which is legendary. Eric was elected to the National Academy of Engineering and the American Academy of Arts and Sciences, serves on the boards of trustees of Carnegie Mellon University and Princeton University, the board of visitors of UC Berkeley,
and the board of the Institute for Advanced Study in Princeton,
which is the coolest one for me.
But those are extraordinary boards.
Board of directors at Apple from 2006 to 2009.
Now, I need to hear about the conflict of interest stories
in Apple and Google.
I mean, guys were like dueling phones
around the end of that time period.
Today, he's chairman of the board of directors of the Broad Institute of MIT and Harvard.
The Broad is the most extraordinary biotech engine
in the US.
The board of the Mayo Clinic, the Cornell Tech Board of Overseers. He became the chairman of the Department of Defense's Innovation Board in 2016 and held the position for four years, is currently chairman of the National Security Commission on AI, and has authored many books, including The New Digital Age, How Google Works, and Trillion Dollar Coach.
By the way, I was trying to catch up to you.
No, no, you wrote the book on abundance, which I followed.
It worked. I recommend it. He has three more books that say
the same thing. Buy them all.
He was right, he is right, and he will be right.
This is why we work with him.
Thank you, Phil.
Eric co-founded Schmidt Futures in 2017
and bets on early exceptional people
who are making the world better.
And it's an interesting thing, Eric.
We should talk about this.
The thesis of it is: smartest talent on the hardest problems.
And we've got lots of hard problems, and there's good news. We have lots and lots of smart talent globally. Right. This is a straightforward formula. You just have to
solve that formula. Yeah. In 2019, he and his wife, Wendy, announced a billion-dollar
philanthropic commitment to identify and support talent across disciplines and around the globe to
serve others and help the world address its most pressing issues.
Eric, you know, I get pissed that there are so many people on the planet who are hoarding wealth or talent or treasure and not doing stuff, and you're not one of them.
You're someone, like Elon and many others, who's changing the world, and I'm grateful for you.
So thank you.
Well, thank you.
We're gonna have a conversation for just a few minutes.
I'm gonna open it up.
So one philosophical answer.
If you look at the history of science,
scientists, when they're kind of spent,
end up working on public policy
and trying to make the world a better place.
And I decided I was spent, you know,
that I had done
the obvious technical stuff, but that because of my experience
in the tech industry, I could work on these other problems.
And it's been very satisfying.
And for those of you that are sort of toward the end
of whatever you think is the most consequential thing,
one of the most important things to understand
is that your life is a series of chapters.
And each chapter is interesting.
And people tend to fear the next chapter
when, in fact, you should run to it.
Because that's how you learn, that's how you make new friends,
that's how you do new things.
Growth.
You understand genetic biology better than I do.
I recently started working on it,
and I've decided it's like the super coolest thing.
I would not
have had time had I been running Google or the equivalent to learn this stuff. And we
can talk about that. I was CEO for a long, long time, and when
I became chairman I thought, well, what will I do? And so my friend Jared and I went to
North Korea to try to get them to open up their internet. We failed, but that trip wouldn't have been possible otherwise. I wrote a series of books about those sorts of things. One of the life advice things when
you get old enough is you're better off having a series of events, not one thing, because
your life needs to be... You need to fulfill whatever your destiny is as an individual.
This appears to be mine. I look forward to the next few chapters.
And since you've worked hard on life extension,
sign me up.
I intend to, though.
There's a lot of stuff ahead.
This is the most exciting time ever.
Exactly.
Hey, thanks for listening to Moonshots and Mindsets.
I want to take a second to tell you about a company that I love.
It's called Levels,
and it helps me be responsible for the food that I eat,
what I bring into my body. See, we were never designed as humans to eat as much sugar as we do and sugar
is not good for your brain or your heart or your body in general. Levels helps me monitor the
impact of the foods that I eat by monitoring my blood sugar. For example, I learned that if I dip
my bread in olive oil, it blunts my
glycemic response, which is good for my health. If you're interested, learn more by going to
levels.link/peter. Levels will give you an extra two months of membership. It's something
that is critical for the future of your longevity. All right, let's get back to the conversation in
the episode. I'm curious about your jumping.
I mean, being chairman of the board of the Broad is fascinating as a move for you, jumping into biology.
Where's Ben Lamm?
Ben.
So Ben is partnered with George Church and is the CEO of Colossal, bringing back the woolly mammoth.
Yes.
I actually read about this whole project.
It's incredible.
Yeah, I think you know Mr. Tull, his partner in that.
But it's, you know, extraordinary. I mean, programming in A's, T's, C's, and G's, is that it?
So I don't think I'll ever understand biology
the way real biologists do,
but I can give you a formula,
which is you go in biology from the squishy stuff to the digital stuff.
And the quicker you can go from squishy to digital,
the more we can accelerate.
And so I got involved with the Broad years ago because I thought,
I now understand that it becomes a computational problem once
they can measure these things. And we're now in a situation where, I'll give you a model,
AI is to biology the way math is to physics. You know, I'm dumb in this area, so I said, well, okay, let's just take a digital model of the cell and figure out what it can do. And they said, we don't have a digital model of the cell. I said, how can you? You've been working on biology for, like, a thousand years, or at least a long time. How can you not have a digital model of the cell? Well, we don't know this, we don't know that, all these sort of lame excuses.
I mean, how can you do biology without understanding how it works?
Well, these poor biologists have been doing biology for literally 100 years without understanding
the underlying mechanisms.
And the only way to understand the underlying mechanisms is to estimate them.
And sort of one of the basic rules about AI is that AI is very useful at estimating a naturally noncomputable function.
So you can essentially look at the pattern and give a good enough approximation of what's going on
that then the human scientist can look at that and say, I see the correlate.
I can see A and B cause C and D to occur, which says there's language or there's communication
or there's a mechanism which then stimulates their research.
And I started funding stuff in this area because it seems to me that until you understand the structure of a cell, how far can you go?
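A minimal sketch of the estimation idea Eric describes, in Python with scikit-learn: pretend a cellular mechanism is an unknown function, sample it noisily, and fit a model whose learned pattern a scientist can then probe for correlates. This is a toy illustration under those assumptions, not a real biology pipeline.

```python
# Toy sketch of "estimating a function you can't derive": an unknown
# mechanism maps conditions to a response; we fit an approximation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def hidden_mechanism(x):
    # Stand-in for an unknown, hard-to-derive biological response.
    return np.sin(3 * x) + 0.5 * x**2

X = rng.uniform(-2, 2, size=(500, 1))                     # experimental conditions
y = hidden_mechanism(X[:, 0]) + rng.normal(0, 0.1, 500)   # noisy measurements

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
model.fit(X, y)

# With no underlying theory, the scientist can still probe the learned
# approximation for correlates (e.g., where the response peaks).
grid = np.linspace(-2, 2, 9).reshape(-1, 1)
for xi, yi in zip(grid[:, 0], model.predict(grid)):
    print(f"x={xi:+.2f}  predicted response={yi:+.3f}")
```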
Now, one of the things that's going to make that squishy stuff more understandable is what you just
did with Jack Hidary, SandboxAQ. Can you speak to that for a moment?
So I think everybody here knows that quantum is coming,
and most estimates are that real quantum computers of the kind that look like a computer or like half a computer
because you can only do half the algorithms
is probably 8 to 10 years away.
The technical reason for that is that the various approaches,
there are six or seven that are sort of favored,
and I've been briefed
on two or three, but just imagine that all of them are roughly similar at this point.
They all have measurement errors.
And in order to get something which has accurate computation, you have to have replication.
So you typically, for one qubit to be accurate, you need 100 or 1,000 qubits.
And we can build computers that are supercooled, right down to basically 0.01 Kelvin, which is quite something; think about the number of refrigerators necessary to get down near absolute zero. But we can make, like, a hundred of them, or seventy of them. So we're waiting, in quantum computers, for the ability to make these things en masse. That's roughly the time frame.
To lower the error rate of that?
To lower the error rate.
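To put rough numbers on that overhead, here is back-of-envelope arithmetic: if each accurate logical qubit needs 100 to 1,000 physical qubits, the machine sizes get large fast. The 4,000-logical-qubit row is a commonly cited outside estimate for attacking RSA-2048, included as an assumption rather than a figure from the conversation.

```python
# Rough error-correction overhead: physical qubits per logical qubit.
for logical in (100, 1_000, 4_000):       # 4,000 logical ~ RSA-2048-scale (rough outside estimate)
    for overhead in (100, 1_000):         # physical qubits per logical qubit
        print(f"{logical:>5} logical x {overhead:>5} overhead = "
              f"{logical * overhead:>9,} physical qubits")
```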
So Jack Hidary, who I'd worked with at Google for years,
formed a company and spun it out of Google.
Google, for whatever reason, wanted to do it separately and
then have them be a Google Cloud customer, which is fine.
I'm the chairman.
And its objective is to get everybody ready for this.
So its first two products are quantum security communications.
And the simple rule is that in eight to ten years, all of your digital communications will be breakable, probably initially by foreign powers.
I don't know if the U.S. is...
Now, my Bitcoin is safe, though, right?
Bitcoin has its own and different set of problems, including the fact that if 51% of the miners collude,
your Bitcoin is not safe.
But that's a separate discussion.
That's not a quantum problem.
That's a human problem.
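For a rough sense of why quantum machines would make today's communications breakable: Shor's algorithm factors an n-bit RSA modulus in roughly polynomial time, while the best known classical attack (the general number field sieve) is sub-exponential. The constants below are coarse textbook-style approximations, not engineering numbers.

```python
# Coarse comparison: classical vs. quantum cost of factoring RSA moduli.
import math

def classical_gnfs_ops(n_bits):
    # General number field sieve, L[1/3, ~1.923] heuristic complexity.
    ln_n = n_bits * math.log(2)
    return math.exp(1.923 * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def quantum_shor_ops(n_bits):
    # Very rough polynomial estimate for Shor's algorithm.
    return n_bits ** 3

for bits in (1024, 2048, 4096):
    print(f"RSA-{bits}: classical ~1e{math.log10(classical_gnfs_ops(bits)):.0f} ops, "
          f"quantum ~1e{math.log10(quantum_shor_ops(bits)):.0f} ops")
```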
So the important thing about quantum
is that the other thing you can do
is you can anticipate these computers
by doing quantum simulation.
So using digital computers,
you can simulate what the quantum computer will do.
And it looks like
quantum simulation will allow you to take compounds that chemists have already made
and make them more robust. The way they express it is, I have this beautiful thing. I want
to make it last longer without refrigeration. I want to make it more effective. I want to
dampen this reaction. And using quantum simulation today, you can
do that to some degree. And when quantum computers come out, because a quantum computer is fundamentally a natural thing, it actually operates the way the analog world does, you have a real simulator, an accurate simulator as opposed to a digital one, 100% faithful, according to the belief anyway, to see how these systems work.
That will change everything
because all of a sudden you can understand
the energy states of how things merge together,
all sorts of quantum chromodynamics
and things like this
that I don't understand at all
but are crucial to getting materials,
resistance, other things done very quickly.
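A quick sketch of why a "real simulator" matters: a digital computer has to track 2^n complex amplitudes to represent n qubits exactly, so the memory cost explodes. Rough arithmetic, assuming 16 bytes per amplitude.

```python
# Exact classical simulation of n qubits needs 2**n complex amplitudes.
for n in (20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30   # complex128 = 16 bytes
    print(f"{n} qubits -> {amplitudes:,} amplitudes ~ {gib:,.0f} GiB")
```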
Eric, when we spoke last, a couple weeks ago,
we were talking about this session,
and a few of the things that you said
that are philosophically on your mind
is what we've been talking about the last two days,
which is the digitization of absolutely everything
and how fast the acceleration is right now.
So, again, you worked on this starting 10 years ago, and were one of the first people to see it.
Others have since seen it, and I'll say it as bluntly as I can.
Whether you say it's Marc Andreessen's software is eating the world,
or you talk about technology, the digitization of these businesses
transforms each business, and it's only a matter of when
and what happens in the competitive environment. So we've already seen this in music and
entertainment. We've seen this obviously in my world. We've seen it in biology. We will eventually
see it in regulated industries, including health and education and so forth. So the regulation
slows it down, but it's definitely coming. And one of the things to understand is there's so much capital
and there's so many entrepreneurs that every idea,
many of whom are in this room,
every idea, no matter how implausible, will be tried.
And you'll be able to raise the money and try it.
And so then it's up to you as to whether you can assemble the team
and the timing and so forth.
And the industry that I have been a member of for a long time now understands this.
You have hundreds of people on every question looking for what's that scalable path?
How do you design for scalability?
And a separate but related point, look at the scale of the large language models and
the announcement last week from Google of something called PaLM.
Yes, we talked about that.
Which is this industrial strength knowledge system
and the most interesting thing about it is that,
for example, it can translate from one software language
to another even though it wasn't given pairs to learn.
It clearly embeds by virtue of its immense training
and immense cost, they used four of the TPU clusters
and it cost millions and millions of dollars
to do this over six weeks,
that model represents real knowledge, right?
There's something in there.
And by the way, sort of like a teenager,
it can't explain itself,
but you know there's something going on inside that head.
Right, there's something going on in there.
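The zero-shot translation described above amounts to nothing more than a plain instruction prompt; no (source, target) training pairs appear anywhere. In this sketch, `complete()` is a hypothetical placeholder for whatever large-language-model completion API you have access to, not a real PaLM interface.

```python
# Hedged sketch: zero-shot code translation via a plain prompt.
SOURCE = '''
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
'''

prompt = ("Translate the following Python function to idiomatic Go, "
          "preserving its behavior:\n" + SOURCE)

def complete(prompt: str) -> str:
    # Hypothetical placeholder -- plug in your LLM completion API here.
    raise NotImplementedError

# print(complete(prompt))  # expected: a Go implementation of fib
```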
Now, that is incredibly disruptive
because it leads to improvements in total intelligence in the system.
Now, today, this is not intelligence.
This is sort of special tricks with pattern matching.
But you can see the path.
You can see the ramp, right, as these things get bigger and stronger
and the algorithms get better.
There's all sorts of issues and opportunities.
But the nature of this acceleration
in terms of everyone having an opportunity
to transform the businesses and so forth and so on
that they're doing, thank you, is profound.
And my general answer to things is
if you want to choose what to work on, work on biology.
If you want to work in my field,
work on these language models.
The number I've heard is we're seeing a 10x every 10 months
in the size of these language models, and it's extraordinary.
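Taking "10x every 10 months" at face value, the compounding is easy to check. The 540-billion-parameter baseline is PaLM's published size; the extrapolation itself is purely illustrative, since nobody expects the trend to hold indefinitely.

```python
# Illustrative extrapolation of "10x every 10 months" in model size.
base_params = 540e9   # PaLM, April 2022: ~540B parameters
for months in (10, 20, 30):
    factor = 10 ** (months / 10)
    print(f"after {months} months: x{factor:,.0f} -> ~{base_params * factor:.1e} parameters")
```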
I want to hit on the compression of time because that's something I feel.
And please, what are your thoughts on that?
I've been an executive for a very long time.
And I used to tell people as a leader that if you had infinite time,
you could do everything right.
You could get your strategy right, you could manage your people right,
you could be charismatic, you could be prepared for your speeches and so forth.
Every product you did would be perfect, and it'd have no bugs.
The only problem is we have the compression of time.
And I'm now extremely worried that the sum of everything we're describing
is causing further compression of time than we can deal with.
When you say that, you mean the human mind can comprehend or...
That we can comprehend, that we have time to process,
that our systems, that our legal systems,
that our political systems can process.
Andy Grove, years ago, explained to me
in a sort of true but unpleasant statement
that we run three times faster than normal
and the government runs three times slower than normal.
So the government is nine times slower, ten times slower.
And it's just true, unfortunately.
And I got in trouble for saying that, so I'll say it again.
It's just true.
And so your problem with the abundance religion, which we fundamentally agree with, you're exactly right, is that it runs into the reality of human systems.
Yes.
So I want to give you some examples.
Please.
The easiest examples to use are national security ones, and there are many other
ones.
So here we are, and we're on a ship.
And the ship has a supercomputer which is running an AI system that can detect hypersonics.
And hypersonics are hard to detect because they're
moving so quickly.
So the computer says to the captain of the ship, him or her: you have 23 seconds to press this button to launch a counterattack, or you're dead.
OK?
Now, how many of you would fail to press that button?
This is like an IQ test.
That's the compression of time. Now, when
we were all young, the dialogue about nuclear attack, and I've now watched the simulations,
so I know what these timings are. It takes about 30 minutes for a nuclear weapon from Russia, for example (not to use a bad example these days, but it's the one in the doctrine), to go over the pole and get to a U.S.-based target.
It's some number like that.
And so in their doctrine, they have three minutes to wake up the president, who then
says, like, what's going on?
And say, Mr. President, you know, there's a bomb coming.
And the president says, did you wake me up for that?
And they go, yes.
And he goes, OK, OK, good.
And then, you know, another two minutes for cognition,
and then another five minutes for conversation.
And then he or she in this doctrine
orders the response just in time.
Okay, well, in 30 seconds,
you don't have time to call the president,
wake up the president and so forth.
So, okay.
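The arithmetic behind those two timelines is worth making explicit. In the sketch below, the speeds, altitudes, and distances are illustrative assumptions, not doctrine; the point is that a sea-skimming hypersonic stays below the radar horizon until the final moments.

```python
# Rough decision-window arithmetic; all numbers are illustrative.
import math

def radar_horizon_km(radar_height_m, target_height_m):
    # Standard 4/3-earth approximation for radar line of sight.
    return 4.12 * (math.sqrt(radar_height_m) + math.sqrt(target_height_m))

MACH_KM_S = 1225 / 3600   # ~Mach 1 in km/s at sea level

# ICBM over the pole: ~8,500 km at ~Mach 20 -> tens of minutes of warning.
print(f"ICBM: ~{8_500 / (20 * MACH_KM_S) / 60:.0f} minutes of warning")

# Sea-skimming hypersonic at Mach 6 and 20 m altitude, ship radar at 30 m:
horizon = radar_horizon_km(30, 20)
seconds = horizon / (6 * MACH_KM_S)
print(f"hypersonic: detected at ~{horizon:.0f} km -> ~{seconds:.0f} s to react")
```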
Now, what happens when it's fully digital?
What happens when the system's attacks are so difficult
that a human can't spot it and can't react in time?
So now you have to have automatic defensive weapon systems.
What happens when those systems make a mistake?
Simple rule about AI systems is we don't know how they operate,
we don't know what they do, they make lots of mistakes,
and they're very powerful.
That's a dangerous combination when it comes to personal security or health.
So here's an example.
What should be the doctrine between two countries
that are potential opponents,
the canonical example being China and the US?
China has hypersonics, for example. So let's say this all happens and we build a system
that can respond in a defensive way automatically
for this reason.
Well, what's the appropriate level of response?
I suggest, for example, that the two countries
enter into a treaty.
And the treaty goes like this.
If you have something you think is coming to you,
then you have to respond proportionately,
but not over proportionally.
That's an example.
Another example has to do with biology,
where we know that the spread of these biological databases
will allow people to begin to build things which are evil, bad viruses and things like that. How are we going to detect them?
You're going to have to have automatic observatories which will watch for them.
You're going to have to have mechanisms where, for example, if a lab is about to build something biological, it scans a database to say, is this thing related to
something evil?
And then it stops it and sends this person to the police or what have you.
So all of these things require both an identification of the problem,
the compression of time, and an agreement globally of how to handle it.
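The screening mechanism described above can be sketched very simply: before synthesis, compare an order against a watchlist of sequences of concern. Real screening systems are far more sophisticated; this toy uses exact k-mer overlap as the "is this related to something evil?" test, and the watchlist entry is a made-up fragment, not a real pathogen sequence.

```python
# Toy synthesis-screening sketch: flag orders sharing k-mers with a watchlist.
WATCHLIST = {
    "toxin_x": "ATGGCTAGCTAGGATCCGATTACA",   # hypothetical flagged fragment
}
K = 12   # window size for the shared-subsequence check

def kmers(seq, k=K):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen(order_seq):
    for name, flagged in WATCHLIST.items():
        if kmers(order_seq) & kmers(flagged):
            return f"BLOCK: overlaps {name}; escalate for human review"
    return "OK: no watchlist overlap"

print(screen("TTTTATGGCTAGCTAGGATCCGATTACAGGG"))   # contains a flagged k-mer
print(screen("ACGT" * 10))                          # benign
```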
Now, you sit there and you go, well, that's not possible.
Well, we did actually do this successfully in the nuclear age.
Kissinger and I wrote a book about this called The Age of AI.
Which everybody has. We've sent it. Thank you. If you read chapter 5,
you'll see the history, which he largely led. And it has this very
strange outcome, which is in the 1950s,
a RAND group that was organized at MIT and Harvard came up with the
idea of mutually assured destruction. And this is an idea where
we will deliberately not develop defensive systems to make sure we're exposed
and they will as well.
Now this makes no sense to any of us.
And yet it solved the problem of the time.
So we collectively have got to,
and I'm not suggesting MAD is the correct solution for this
because I don't think it is,
but nobody in our system is talking about the compression of time,
the human decision cycle, and the structures that we have.
And I'm using national security to make the point, but the point is in general true.
Hey, everybody.
I hope you're enjoying this episode.
I'll tell you about something I've been doing for years.
Every quarter or so, having a phlebotomist come to my home to draw blood,
to understand what's going on
inside my body.
And it was a challenge to get all the right blood draws and all the right tests done.
So I ended up co-founding a company that sends a phlebotomist to my home to measure 40 different
biomarkers every quarter, put them up on a dashboard so I can see what's in range, what's
out of range, and then get the right supplements,
medicines, peptides, hormones to optimize my health. It's something that I want for all my friends and family, and I'd love it for you. If you're interested, go to mylifeforce.com/peter to learn more. Let's get back to the episode.
Eric, let's move more directly into AI for a second. I have a lot of discussions about this with folks like Peter Norvig and Ray Kurzweil and Elon and others. And there are two camps: AI is really super dangerous, what the hell are we doing? And the camp everybody knows I'm in, which is that it's the most important collaborative tool, the most important tool we're going to have to solve the world's biggest problems. I'm curious about your conversation there. And then the second half of that is the ability to even regulate any of this, because we live in a country of porous borders, and if you make something illegal here, it just goes someplace else. And the last part I'll move into is this: in the mid-1970s, when we were doing recombinant DNA, it was a series of sort of collaborative meetings of the scientists that set out the rules.
And you're referring to the Asilomar Conference?
The Asilomar Conference, yes. Which was the culmination of this, where the scientists agreed in general that they would police each other
and that there were particular things that they would do.
For example, they would not modify the germline.
But this was not government mandate.
This was voluntary.
And it worked because the stuff is so specialized.
And it worked until the Chinese guy violated that rule
and God knows what happened to him.
So.
I think he's been recently released.
Okay, so he's probably back at his lab.
So let's go through the scenarios for AI.
I think it's extremely clear that in the next 10 years we're going to get industrial
strength AI. Computer scientists and funding and so forth around AI will help us understand
how these algorithms work, make them much more explainable so you'll be able to converse
with it. So for the next five years, you should expect multimodal, which means you're going
to have text and speech and video. Today they don't do video very well, but they're working on that, and that will come. It's
just computationally harder. You're going to see huge improvements in speed and computation.
The cost of algorithms will go way down. The models that I'm describing will get much larger.
The other thing that's happened that I should note is that five years ago, when I started looking at
this, almost no science people were using AI
in any interesting way.
It was all being done in places like Google and Facebook.
In the last five years, this is the genius of America
that the graduate students,
who are sort of always looking for something new,
have all embraced essentially various forms
of AI, machine learning, deep learning
to actually solve and estimate hard problems
in their sciences.
So you're going to see all of that.
So think of it as the next five years, industrial strength, usable, global, conversational systems.
You can say to the system, why did you do this?
It will give you its best understanding of why it made that decision and so forth.
It will become part of our lives.
This I'm quite sure of.
The next steps get much more interesting and much more speculative.
So the first question is how do you define artificial general intelligence? And I'll
give you mine, which is...
So can we just take a second to back up? We have AGI versus narrow AI.
AGI. So most of what you deal with today is considered narrow AI. It has a specific objective function that was set by a set of humans.
And these are very powerful systems.
Google Translate.
But also the development of new drugs,
things which are really remarkably powerful,
so not taking away from that.
So my definition of artificial general intelligence
is computer intelligence that looks human but is not human intelligence.
And I want to make this distinction because of some really fundamental issues that are
going to come up, and I'll give you some examples.
So in the industry, there's sort of, as you said, two camps.
There are the people who think that AGI will happen relatively quickly, which is sort of
in the 20-year period, sort of in our lifetimes.
And there's a set of people who think it's much harder.
If you do the median of the predictions across the experts,
the exact answer is 20 years.
So I predict right now April 2042 is the arrival of AGI,
and I'm statistically going to be correct. When that occurs, we're going to have these systems
that can set their own objective functions and they can discover things that we can't necessarily understand or why they went after it.
There is speculation that these systems will also be able to write code themselves and therefore rewrite themselves.
Today, if you look at the Microsoft Codex, which is a significant technological achievement,
one of the numbers that I read was about a third of the code is being written by the computer,
two-thirds by the human. It's roughly correct. So imagine that that number will get a much larger percentage written by a computer over the next few years.
At the point at which these systems are capable of independent thought that is non-human,
what do we do with them?
Now, let's say that you and you and you and I, we don't know each other,
and we probably don't trust each other for whatever reason.
But we can expect, based on our shared human experience,
what the limits of your good and evil are, right?
That there are real biological limits on you,
on your life, on your thinking, on your experience.
And furthermore, we judge people, you know,
male, female, Asian, American, Russian, whatever, European.
We have all these stereotypes that we carry.
None of that applies to this new kind of intelligence. So how will we treat it?
On the one hand, it will be extraordinary. It might actually discover things that we have no
path to discover. They may be able to answer the gravity question
in physics. They may be able to answer all sorts of really, really fundamental questions.
But they could also do their own thing.
Now, we're going to be watching them.
Because somebody's going to write an article that says,
we have no idea what this thing's going to do.
There'll be lots of people watching it.
And in fact, I believe that there'll
be people watching it with guns on the mistaken theory
that they should shoot it.
Pull the plug.
Yeah, well, you may have to shoot it in case it takes over the plug.
But the important point is the paranoia about these things is going to be really profound,
because we don't know how to bottle it.
We don't know how to constrain it.
And that's why you get a bimodal distribution.
That's why you get this view that you have,
which is this thing's extraordinary.
Because believe me, these things, at that scale,
will accelerate your vision at a scale that
is impossible for us to understand,
because it's a non-human acceleration.
On the other hand, it also brings in these issues.
And I can give you lots of negative examples.
But at the end of the day, and I'll be more precise,
because it'll be 20
years, people will have forgotten whatever I say, I hope. I think there will be five to 10 of these
things because these things are so computationally expensive, even 20 years from now, because of the
amount of data and what we understand about thinking and sparse thinking and so forth,
that there'll be a few of them and they'll be controlled by nation states. And unless we have one universal world government,
which I think is unlikely between now and then, we're going to have tensions.
I think every science fiction movie or science fiction book I read has it as quantum clusters,
kilometers underground below Beijing. As part of my military work, I went to visit
where we keep our plutonium.
So there's a military base with large numbers of people with guns, and then inside this military base, I won't go into the details, there's another little base with even more guns. And inside the fence, after many steps, they put the radiation suit on you, you go in, and then there's more people with guns. And then, behind the window, you see the person, with lots of guns around, basically handling the plutonium and moving it.
Is that our future?
Or is there a different future?
This is a question.
James and I announced a $125 million
program to fund research on these hard problems, of which this is a good example.
James from McKinsey.
From McKinsey, who's just joined Google.
Oh, I didn't know that.
Good for him.
It's funny to see my friends go to Google as I left.
Sort of the natural progression of life.
Your children do better than you, and you go, that's good.
What happened to me?
But it's all good.
And James is fantastic. So James can give you a much more sophisticated argument
as to why these problems are so hard,
but I think that the most important thing
is we need to start now with the philosophers
and the technologists, the sociologists,
and the economists to start thinking about
what does society look like
when we have these things coexisting with us?
Now, and just to be completely brutal,
15 years ago we embarked on an experiment in social media,
which I didn't pay that much attention to.
We were busy doing our own.
Facebook beat us, so forth and so on.
That model, which was a linear feed, made sense to me.
Today's model, which is an amped up AI feed
to increase engagement and make you outraged,
is not what was on the program 10 years ago.
And those decisions were made by technical people,
not by society as a whole.
I don't want to do that again.
I want us to collectively have this discussion
to shape this.
I can think of lots of ways
where we could put various forms of tripwires
and slow things down to address the most extremely dangerous aspects of this technology
while preserving its benefit. All technology at this level is dual use. And the question is,
you know, I often ask the question, if Einstein knew the consequences of his research,
would he have stopped?
Could he have stopped?
If he understood E = mc² was going to lead to a nuclear bomb.
And the question is, is AI research regulatable at all?
Well, one of the questions, I mean, Einstein, of course, wrote the famous letter to FDR.
Sure.
So he was more than just an inventor.
He was also complicit in a very important aspect of human history and an extraordinary person in our history.
If you look at software, you have a core problem of proliferation.
So let's use nuclear versus software.
In nuclear, Kissinger tells a story, a very funny story,
where he was in the Kremlin in the 1970s.
And his negotiating style was that he would basically
start by telling the opponent what
he knew about the opponent.
So he would always start with a little presentation
of the other side's nuclear weapons.
So he starts.
And then there's this big commotion. They stop the meeting, and they throw one of the Russians out, because he wasn't cleared to hear their own side's information.
Okay, right?
Okay, so now let's imagine Eric, right?
Trying to do a bad impersonation of this,
and I show up at the equivalent group,
let's say in China, and I start up and I say,
we know you have done x.
Well, the first thing the Chinese are going to say is no.
And then if I reveal anything that we're doing that they
don't know we're doing, they will immediately begin it.
Because if we're doing it, then they know it's possible.
So you have this core problem of when you have hidden research,
I'm using defense analogies, but you'll see the point,
you don't know how to regulate it
because you don't know what it is.
Furthermore, how do you deter people from stealing software?
Okay, so what we do is we create an observatory,
and the observatory watches for the software, right?
But what if the software is not turned on yet? And furthermore, we hear
that our opponent, let's pick China, has built a
software weapon that is so dangerous that they
refuse to even test it. But if they use it, it will result in
a horrendous outcome for America.
Then the American strategists go,
we have to preemptively destroy the software, right?
That is a dynamically unstable situation in detente
because one of the things you want to do
is you want to have stability.
And if you become afraid that your opponent has something
that you don't know and you want to destroy it,
how do you do it?
So I'm just giving you example after example
where software is different.
It's easily spread. And open source is a problem because, okay, so we're really smart here,
we work with UCLA and Caltech
and the other great universities here in LA,
and we build open-source software
that includes all the checks and balances.
And we release it, and we're proud, and we write lots of papers, and we get lots of awards.
The first thing the opponent does is take all the checks and balances out.
Okay?
So you go, what do we do now?
You need some kind of hardware limit.
Well, any hardware limit that you write is reproducible by the opponent.
You see where I'm taking you?
For each of the problems, you hit a roadblock, and what happens is people say, I want to regulate killer robots. We're not building killer robots, guys. We're building incredibly powerful intelligent systems whose uses we don't fully understand. I don't know how to regulate them.
Remember probably about four or five years ago at the World Economic Forum,
you said China was five years ahead of us in AI or 10 years?
Or would be.
So let's talk about China for a moment.
So two and a half years ago, China announced its AI 2050, sorry, China 2030 plan, which included a statement in Chinese that they would be dominant in AI by 2030, that they would catch up by 2025, etc. And China issues lots of these written statements. One of the good things about studying China is that they say what they intend, which doesn't necessarily mean they're going to pull it off.
And it is an indication, however, of where the money is going.
And they have enormous money going into this.
I was the chairman of this AI commission that you mentioned for the Congress.
We looked at this really, really carefully.
And we're still a little bit ahead. My own estimate, and it's hard to estimate, is a couple of years, maybe one year, simply because they're catching up.
But I think a fair reading of this is that in the next five years,
the two countries will be roughly equal on everything interesting,
but the systems will be built with different values.
So a prediction I will make is that the competition between the US and China is the most important competition that we're gonna face during the rest of my life.
And I mean it as a competition, not a war.
And in that competition, it'll be a rivalry partnership.
There'll be a set of things that we collaborate on,
things which are, shall we say, non-strategic, and a lot of things which we do, which people claim are strategic or non-strategic. So, for example,
steel imports, farming, you know, plastics, things like that, they're not going to change the level.
But anything which involves information is going to be very carefully fought over and blocked.
And the reason is the Chinese cannot allow the Western models into China.
We have chosen in America to allow those into the U.S.
TikTok is the most successful website in America today.
So you may not know where your teenagers are, but it does.
And we've made that decision.
But China's not going to allow it because they can't afford the instability in their system.
We're going to go to your questions in a moment.
I have two questions, Eric, where I was like, should I ask them or shouldn't I?
I'm going to.
You've known a lot of extraordinary, successful tech billionaires, tech founders over your
last 30 years. I've known many as well, and I've seen a predominance of,
how do I put this bluntly, assholes in some cases. It typically improves with age.
You have never been that. You've always been a gentleman and a great,
just a joy to work with. But some of the most successful people, I won't name names, but do you think to be successful on the world stage at that level,
you need to become desensitized and, you know, I'll use the word again, an asshole in cases.
So I've now worked with founders.
I'm not a founder, I'm a person who works with founders.
And what I've learned is that there are
different management styles, and mine is unique to me
and theirs is unique to them,
and not one is the only one that works.
So using Steve Jobs as an example,
Steve was my very close friend.
He put me on the board.
He was also incredibly rough on me
on things that Google was doing,
and yet we remained friends, and I admired him,
and he certainly cared about me.
His brilliance, which was undeniable,
meant that when he was in a room,
he would get people so excited about the future that he foretold.
It was a genuine gift that people would follow him anyway.
It was one of the first times I saw that special leadership gift.
It's very, very, very rare.
And I think the people that you're describing may be very unpleasant, but they have a countervailing brilliance, which is that charisma. It's almost
cult-like. And I'm proud to have worked with such people. I don't think you can generalize to any particular style. The great founders break out early.
They have extreme self-confidence.
And I would say they disagree.
They disagree with everything.
So one day I was sitting there,
and Larry and Sergey had been rollerblading
around the campus of Mountain View,
and they said, we're going to take over these buildings.
And I said, no, you're not.
They said, yes, we will.
And they said, okay.
They just see the world differently.
And I think without knowing all of you, a number of you have this skill.
But most of you will ultimately work with such people.
So my approach to life, in hindsight, was to find the smartest person in the room.
And inevitably, we're in systems where such a person is around, man or woman, and figure out a way to work with them.
And do you know why?
You say, well, they don't need me.
They're so brilliant.
No, they absolutely need you because they can't get through the day, right?
They literally just can't, right? They've got
so much conflict and
confusion and drive and so forth. They need
help. So I set myself up as
the helper.
And that means you have
to be willing to listen to them and so
forth. It's worked well my whole career.
Amazing. Amazing.