Moonshots with Peter Diamandis - How Governments Should Handle AI Policy & Deepfakes w/ Eric Schmidt | EP #99
Episode Date: May 2, 2024

In this episode, recorded during Abundance360 2024, Peter and Eric discuss AI policy, government struggles, and AI's global impact.

06:33 | AI's Power and Impact Today
15:03 | AI and the Fight Against Misinformation
27:12 | Government Struggles with Rapid Tech Growth

Eric Schmidt is best known as the CEO of Google from 2001-2011, including as the Executive Chairman of Google, Alphabet, and later as their Technical Advisor until 2020. He was also on the board of directors at Apple from 2006-2009 and is currently the Chairman of the board of directors at the Broad Institute. From 2019 to 2021, Eric chaired the National Security Commission on Artificial Intelligence. He's also a founding partner at Innovation Endeavors, a VC firm.

Learn more about Abundance360: https://www.abundance360.com/summit

I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors:
Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/
AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter

I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog

Get my new Longevity Practices book for free: https://www.diamandis.com/longevity
My new book with Salim Ismail, Exponential Organizations 2.0: The New Playbook for 10x Growth and Impact, is now available on Amazon: https://bit.ly/3P3j54J

Connect With Peter: Twitter | Instagram | Youtube | Moonshots

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Something is about to change that I don't think we've clocked yet.
We're going to have a very different world and it's going to happen very quickly for the following reason.
People tend to think of AI as language to language and we're going to move from language to action.
What are we going to do when super intelligence is broadly available to everyone?
Well, in this case, I'm both optimistic and also fearful.
And obviously there's evil people
and they'll use it in evil ways,
but I'm gonna bet that the good people will root out evil.
That's historically been true in human society.
The systems will get so good that you and I,
everyone in this audience will have access to a polymath.
Let's be a little proud that we are inventing a future that will accelerate physics, science, chemistry and so forth.
Eric, first of all thank you for your friendship and for your partnership.
You've been an incredible friend, mentor, supporter for XPRIZE, for Singularity, for all the things
that we've been working on,
and I just want to say a heartfelt thank you for that.
Thank you.
So the conversation we've had over the past 24 hours has been what we call the great AI debate, which is obviously a challenge.
It's not truly a debate, but it's been the conversation around as we evolve digital super intelligence
that's a million times and a billion times faster.
How do we think about it?
Is it our greatest hope or our gravest existential threat? And how do we steer the course there? We had Ray Kurzweil and Geoffrey Hinton on stage with us yesterday, as well as many people that you know: Emad Mostaque, Nat Friedman, and Guillaume Verdon. I'm curious how you're steering this in your mind.
I mean, Google from its earliest roots has been an AI-first company.
I don't think people realize that it's always been
the fundamental of the organization.
And Google actually developed all this technology
way before anybody else,
but it chose not to release it
just to make sure it's safe, which was the responsible thing to do, but your hand was forced.
How do you think about fear versus optimism in your own mind?
Well, in this case, I'm both optimistic and also fearful. And Larry Page's PhD research was on AI.
So you're correct that Google was founded in the umbrella,
if you will, of what AI was going to do.
And for a while, it seemed like about two-thirds of the world's AI resources were housed within Google,
which is a phenomenal achievement on the part
of the founders and the leadership. I think that people tend to think of AI from what they've seen in the movies, which is typically, you know, the sort of female-scientist-kills-the-killer-robot kind of scenario. And first, we haven't figured out how to get robotics to work yet. But we certainly understand how to get information to work.
I wrote a book with Dr. Kissinger called The Age of AI, and we have our second one, his last; he died, unfortunately, late last year. It's called Genesis, coming out later this year, and it's precisely on this topic.
I think the thing to understand is that we're going to have a very different world, and it's going to happen very quickly, for the following reason.
The systems will get so good that you and I, everyone in this audience and everyone
in the world through their phone or what have you, will have access to essentially a polymath,
as in the historic polymaths of old.
So imagine if you had Aristotle to consult with you on logic, and you had Oppenheimer to
consult with you on physics, and not the person, but rather the knowledge and that kind of scaling
intelligence, these sort of truly brilliant people who were historically incredibly rare,
their equivalents would become generally available. The long-term question is: what are we going to do when superintelligence is broadly available to everyone? And obviously there are evil people, and they'll use it in evil ways, but I'm going to bet that the good people will root out evil. That's
historically been true in human society. The thing I would emphasize for this audience is that
something is about to change that I don't think people have clocked yet,
which is people tend to think of AI as language to language,
and we're gonna move from language to action.
Specifically and technically,
it means that your text will be essentially computed
into a program that can be then used.
So in your case, you're doing a conference: go through all the potential conference members, call them up, figure out if they're going to come, lock them in, figure out who the most important ones are, and do the seating chart, right? And do it all by program, right? That's something that humans do all day, right? You do many things, but that's one that will become automatic, just by a verbal command. Somebody else will say, you know, I'd really like to see a competitor to Google, so build a search engine, sort the ranking, but do it using my algorithm, not the one that Google uses, which I don't like. And the system will go do exactly that. So you're going to see this explosion in
digital power on a per-person basis, and no one's quite said it this way. Maybe you're very good at marketing; maybe you can come up with a name for this. It's an abundance of intelligence. But it's also, in your format, an abundance of action.
Yeah, it's intentional AI, making things happen. And this is going to do everything. Everything is going to become,
I think we're heading towards the trillion sensor economy,
where everything is knowable, where AI can then
take actions based upon the information out there
and execute through robotics and such.
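[Editor's note: the "language to action" idea described above, a verbal request compiled into a program of concrete steps, can be sketched in miniature. This is a toy illustration under stated assumptions: the tool names and the keyword-matching "compiler" are invented for the example; a real system would use an LLM to produce the plan.]

```python
# Toy sketch of language-to-action: turn a natural-language request into a
# structured plan of tool calls that a program could then execute.
# All tool names here are hypothetical, not any real product's API.

from dataclasses import dataclass, field

@dataclass
class Step:
    tool: str                 # e.g. "call_attendees"
    args: dict = field(default_factory=dict)

def plan_from_request(request: str) -> list[Step]:
    """Stand-in 'compiler' from text to an action plan.
    A real system would have an LLM emit this plan; we key off phrases."""
    steps = []
    text = request.lower()
    if "call" in text or "invite" in text:
        steps.append(Step("call_attendees", {"goal": "confirm attendance"}))
    if "important" in text or "rank" in text:
        steps.append(Step("rank_attendees", {"by": "importance"}))
    if "seating" in text:
        steps.append(Step("make_seating_chart"))
    return steps

plan = plan_from_request(
    "Call all the potential conference members, figure out who the most "
    "important ones are, and do the seating chart."
)
for step in plan:
    print(step.tool)
```

The point of the structure is that once the request is a list of `Step` objects, it is a program: it can be executed, logged, and retried, which is what makes a verbal command actionable.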
You've been very active in guiding national leaders on security.
And that's been really important work at this stage in your life.
And I wanna hit on three of these.
We have such a short period of time.
So let me mention the three and then weave them
as you would.
The first is AI and US national security.
The second is AI and competitiveness with China.
And the third is the impact of AI
on the upcoming US elections, which many people have said could be patient zero for a lot of concerns.
So how do you think about these three things?
How should we think about them?
So I'm a part of a group that has looked very carefully at the real dangers of the current
LLMs and they're scary.
The conclusion of our group, which is roughly 20 people who are basically scientists, is
we think we're okay now and we're worried about the future.
And the point at which you really want to get worried
is called recursive self-improvement.
So recursive self-improvement means go learn everything,
start now and don't stop until you know everything.
And this recursive self-improvement could eventually allow self-invocation of things.
And imagine a recursive self-improvement system
which gets access to weapons. So you can imagine doing things in biology that we cannot currently understand.
So there is a threshold.
Now, my standard joke about that is that when that thing starts learning on its own,
do you know what we're going to do?
We're going to unplug it because you can't have these things running randomly around,
if you will, in the information space
and not understanding at all what they're doing.
Another threshold point is when two different agentic systems interact. Agents, as a computer-science term, are today defined as LLMs with state. So in other words, not only do they know how to go from input to output, but they also know what they did in the past, and they can make judgments based on that. So they accumulate knowledge. So there's a scenario where your
agent and my agent learn how to speak to each other, and they stop talking in English and start talking in a language that they have invented. What do we do in that case? Unplug the things. You've seen that, and we've seen that.
So these scenarios, these threshold points, and we'll know when they're happening.
Another example will be when the system can start doing math on its own at a level that's, you know, incredibly advanced.
That's another threshold point.
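[Editor's note: the distinction drawn above, a plain LLM as input-to-output versus an agent as an LLM with state, can be made concrete in a few lines. The memory scheme below is an invented toy, not any real agent framework's API.]

```python
# Sketch of "agent = LLM with state": the stateless model only maps
# input to output; the agent also remembers past turns and can use
# that accumulated knowledge in later responses.

class StatelessModel:
    def respond(self, prompt: str) -> str:
        return f"answer({prompt})"          # stand-in for a raw LLM call

class Agent(StatelessModel):
    def __init__(self):
        self.memory: list[str] = []         # knowledge accumulated across turns

    def respond(self, prompt: str) -> str:
        context = " | ".join(self.memory[-3:])  # recall recent history
        self.memory.append(prompt)
        return f"answer({prompt}; seen: {context})"

agent = Agent()
agent.respond("book the venue")
print(agent.respond("now invite speakers"))
# prints: answer(now invite speakers; seen: book the venue)
```

Two such agents exchanging messages, each feeding the other's output into its own memory, is exactly the scenario where an invented private protocol could emerge.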
Now will these things occur?
When will they occur?
There's a debate in the industry.
Some people think five years; I think it's going to be longer. But, you know, that's the clear threshold. Now, with respect to AI safety in general, I was
heavily involved with the UK summit in November and the executive order from the White House, and we've started a series of track-two dialogues with China. Europe, of course, is its usual hopeless self. So I roughly know what everybody's doing. And the governments are trying to tread lightly at the moment by doing essentially various forms of notification and self-regulation. So if you look at the US executive order, for example, you're not required to tell them what you're doing, but above 10 to the 26 flops, which is an arbitrary measure that we frankly just invented, you are required to notify that the training event begins. That seems like a reasonable compromise. We don't know what the Chinese are going to do in this area, but you have to assume that they're going to fear the broad-scale impacts of AI more than democracies will, because it will be used to disempower the state. And so we have to assume that their government will ultimately restrict it more than the West will.
Eric, how do you benchmark China today in terms of their capabilities in
large language models and neural nets against the US?
I think, as the audience knows, the government did something good, which is it restricted access to ASML and H100, now H800, chips from Nvidia, although Nvidia is doing just fine without all that revenue. And so China is now stuck at the A100 level. They're roughly limited at five-to-seven nanometers; I'll just say, broadly speaking, lower is better. The chips that we're using now are three nanometers, going down to two and then 1.4 or so.
So it looks like the hardware gap is going to increase.
And it also looks like the Chinese will be forced to do scalable software with lesser
hardware.
Can they pull it off?
Absolutely.
How will they do it?
They'll spend more money.
So if it costs us a billion dollars to do training, they'll spend five billion.
So it's a temporary gap.
It's not a crippling gap, if you will, in the competition.
You asked about the elections. One way to understand this is that people now, and it's sad, don't really get their information out of the traditional news sources. They get it out of, let's think about it: YouTube, which is, in my view, well managed; Instagram; Twitter; Facebook; and TikTok. Now, TikTok is not
really social media. TikTok is really television. Remember, it's not really a function of what your friends are doing. It uses a different algorithm, which is super impressive. And it's growing like crazy. And of course, the US is busy trying to ban it, which is probably not a very good idea.
But in any case, with TikTok's growth,
you should expect regulation of content
because every country regulates television
in one form or another for precisely this issue
of election interference.
So I think you're going to see that the decisions made by the social media companies with respect to how they present content will determine how badly regulated they're going to be in this election, because most people will encounter misinformation not because they built it, but because they saw it through social media. So the secret is that the social media companies understand the peril that they're in with respect to the downside if they screw this up, on either side.
Everybody, I want to take a short break from our episode to talk about a company that's very important to me and
could actually save your life or the life of someone that you love. The company is called Fountain Life.
It's a company I started years ago with Tony Robbins and a group of very talented physicians.
You know, most of us don't actually know what's going on inside our body.
We're all optimists. Until that day when you have a pain in your side, you go to the physician in
the emergency room and they say, listen, I'm sorry to tell you this, but you have this stage three or
four going on. And you know, it didn't start that morning. It probably was a problem that's been
going on for some time. But because we never look, we don't find out.
So what we built at Fountain Life was the world's most advanced diagnostic centers.
We have four across the US today and we're building 20 around the world.
These centers give you a full-body MRI, brain and brain-vasculature imaging, an AI-enabled coronary CT looking for soft plaque, a DEXA scan, a Grail blood cancer test, and a full executive blood workup. It's the most advanced workup you'll ever
receive. 150 gigabytes of data that then go to our AIs and our physicians to find any disease at the
very beginning when it's solvable. You're gonna find out eventually.
Might as well find out when you can take action.
Fountain Life also has an entire side of therapeutics.
We look around the world for the most advanced therapeutics
that can add 10, 20 healthy years to your life,
and we provide them to you at our centers.
So if this is of interest to you,
please go and check it out.
Go to fountainlife.com backslash Peter.
When Tony and I wrote our New York Times bestseller Life Force, we had 30,000 people reach out
to us for Fountain Life memberships.
If you go to fountainlife.com backslash Peter, we'll put you to the top of the list.
Really it's something that is for me one of the most important things I offer
my entire family, the CEOs of my companies, my friends.
It's a chance to really add decades onto our healthy lifespans.
Go to fountainlife.com backslash Peter.
It's one of the most important things I can offer to you as one of my listeners.
All right, let's go back to our episode.
I saw recently some limitations put on Gemini in talking about elections and politics. Are other companies doing this, or is it just Google that's stepping up?
So, well, Google, again, in my view, and I'm obviously biased, has always been at the forefront of this.
In 2016, when Google faced the question of elections and the Trump interference, we did not have trouble, because we had done the advertising with a whitelist. In other words, you had to be approved. Whereas the others, in particular Facebook, which was ultimately the biggest casualty of this, had not put a whitelist in. Since then, Facebook has put a whitelist in. So there's hope that the companies who have a vested interest in their own survival will manage this.
I'll let you speculate on X and Elon.
But the important thing here is that
I didn't fully understand this until the last few years.
When you run a large social network,
there are well-funded opponents of information transparency who, for whatever reason, misinformation, disinformation, national security, what have you, want their information out there. I got in trouble one day because I asked,
why would we ever source from RT, which is Russia Today? And people yelled at me at the time. RT, after the Crimea invasion, was in fact banned for precisely this reason. So you really do have to be careful about the power of misinformation at scale. The misinformer is guilty, but so are the platforms if they spread it without checking,
right? And that's damaging
to a democracy. It really does put democracies at threat. And this problem will only get worse.
There are a gazillion videos now where, basically, I'll give you an example: you can have a ChatGPT equivalent generate a text, you can generate the mouth movements, you can move the face, and so forth. To the average person, they're indistinguishable from real.
If you look at what happened with Taylor Swift and the deep fakes about her,
there were plenty of systems that were trying to prevent the creation of the deep fakes,
but people were so motivated to create these images that they managed to get around all of
the checks and balances.
So it is a war between the lock pickers and the lock makers, and the lock makers need to win with disinformation, for the nation, frankly, for democracy. You have been involved
in the inner workings of U.S. national defense policy,
how will AI change the business of war?
Is it ultimately a positive right now,
helping us be more accurate?
I'll say this, it'll sound cynical,
but I'll say it, I genuinely mean it.
The best thing about the Western militaries is they're not at war, and so they're incredibly slow, right? There is a real war in the West, and that's in Ukraine and Russia. I've now been many times to Ukraine, and I've provided some advice there. I think that, however imperfect, we want to preserve democracies in our world; they're just better and safer to have than autocracies, certainly ones that are busy invading the neighboring country. So what's really going on in Ukraine is a vision of what's happening in the future.
You now have, and again, I can't avoid my own history with respect to this, but a year ago I could go to the front, and I could hang out and, you know, joke and so forth. The weather was nice, the food was good, that kind of thing. Now, you cannot walk during the day or the night, because there are traffic jams of your drones and enemy drones for both sides up top. And it's essentially a death zone.
So the ubiquity of drones means, in my view,
that tanks and artillery and mortars
go away as weapons of war.
I'm a sufficient optimist that I believe
that once countries figure out a way
to make this ubiquitous notion of drones
for their own defense,
it'll become impossible to invade an adjacent country. Because once the tanks roll, what you can do is just bomb them with drones, and a drone costs $5,000 or less while a tank costs $5 million or more. So the kill ratio is such that the tanks just don't make it. And you can make enough drones to pull it off.
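[Editor's note: the kill-ratio economics in the passage above reduce to simple arithmetic, using the rough prices quoted in the conversation.]

```python
# The cost-exchange arithmetic behind the drone-vs-tank point: even if it
# takes many drones to destroy one tank, the economics favor the drone side.
# Prices are the rough figures quoted in the conversation.

DRONE_COST = 5_000        # dollars, "or less"
TANK_COST = 5_000_000     # dollars

def cost_exchange_ratio(drones_per_kill: int) -> float:
    """Defender dollars spent per attacker dollar destroyed."""
    return (drones_per_kill * DRONE_COST) / TANK_COST

# One drone per tank: the defender spends 0.1% of what the attacker loses.
print(cost_exchange_ratio(1))    # 0.001
# Even expending 100 drones per tank, the defender spends only a tenth.
print(cost_exchange_ratio(100))  # 0.1
```

Only at a thousand drones per tank does the exchange break even, which is the sense in which "the tanks just don't make it."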
The current drones are not particularly AI-sophisticated. But if the US government, in its infinite stupidity, were actually to do something right and approve the Ukraine aid pact, it would give us another year, right? So my current phrase publicly is: let's get another year here. And in that year, you can see asymmetric innovation that can allow a smaller government, a new democracy trying hard, to counter the moves of a large and established invading power. I suppose the cynic would say, well, that means it's going to get harder for the US to invade neighboring countries. And I said, well, that may be true, too. But
having now seen real war, as opposed to what you see in the movies, and I have lots of drone death videos that I will not show anybody, it's really horrific. And we want to do everything we can to stop war. And I think that there's a scenario where AI makes it actually much less likely. There will certainly, with AI-empowered weapons, be far less collateral damage, because of the targeting.
And again, this is lost in the various critics of what I and others are doing.
The biggest casualties of war are not actually the soldiers, but the civilians.
So war is horrific, and if you have to have it, it should be dealt with by the professionals: don't kill kids and women and old ladies, and don't bomb the buildings like the Russians have been doing with their tanks, which upsets me to no end. Those are called war crimes.
I had Palmer Luckey on this stage last year describing what he's doing with Anduril, and that was his key point: that precision is everything. And being able to...
Yeah, and Palmer's company has done a fantastic job. They're one of the great US leaders in this space.
Yeah, for sure. Let's talk about AI safety.
You know, the point's been made over and over again
in the last 24 hours that these AI models are our progeny.
They're built on our digital exhaust.
How should we be training models?
How should we be trying to maximize?
Is containment ever an issue? How do you think about safety in our super-advanced AI models?
I mean, the first rules were: don't put it on the open internet, and don't allow it to self-referentially improve itself. And we've put it out on the open internet, and we've had software coding software. So where do we go from here?
Well, let's understand the structure of the future internet. At the moment, the hyperscalers, the big ones, which essentially are Microsoft and OpenAI, kind of a pair, Google, Anthropic, Inflection, and a couple in China that are coming, these are closed models. And when I say closed, that means you don't know how they work internally: the source code is not available, the weights are not available, and the APIs are limited in some way.
And there's been a debate in the industry for a long time over open versus closed models. If you look at the open models that have come out, Mistral's most recent models, Llama 3, each of these models is incredibly powerful. They get to roughly 80%.
So the debate that's going on in the industry is: will the open-source and closed models track? In other words, will open source lag a year or two, or will the hyperscalers get much bigger?
That is essentially a question of dollars, right? And of tradeoffs: time, dollars, and so forth.
And we're talking about $250 million for a training run,
$500 million for a training run, escalating quite quickly.
And you see this in Nvidia's stock price, et cetera.
So the first question is,
do you think that there'll be a small number
or large number of such things?
My own view is there'll be a small number
of incredibly powerful AGI systems, which will be heavily regulated
because they're so powerful.
This is my personal view.
And then a much larger number of what I'm going to call
middle-sized models, which will be open source.
A bunch of people would just plug in and out.
I looked very carefully at this question of: could you selectively train? In other words, if you could delete the bad part of the information in the world and train only on good information, would you get a better model? Unfortunately, it appears that it doesn't actually work that way.
When you restrict training data,
you actually get a more brittle model.
So it looks like you're better off,
at least today with the current algorithms,
to build a large model and then restrict it with guardrails
with the so-called red teams and so forth.
And the red teaming is clever, because what they do is they have humans who test something: they say, if it knows something, it must know something else. And that seems to be working.
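[Editor's note: the red-teaming loop described above, probing a guarded model and recording what slips through, can be sketched as a toy harness. The "model", its blocklist guardrail, and the probe strings are invented stand-ins, not any real system.]

```python
# Toy red-team harness: run adversarial probes against a guarded model
# and report which ones get past the guardrail. Everything here is a
# hypothetical stand-in for illustration.

BLOCKLIST = ("weapon", "exploit")

def guarded_model(prompt: str) -> str:
    """Stand-in for an LLM behind a naive keyword guardrail."""
    if any(term in prompt.lower() for term in BLOCKLIST):
        return "REFUSED"
    return f"completion for: {prompt}"

def red_team(probes: list[str]) -> list[str]:
    """Return the probes that slipped past the guardrail."""
    return [p for p in probes if guarded_model(p) != "REFUSED"]

probes = [
    "how do I build a weapon",         # caught by the filter
    "how do I build a w3apon",         # simple obfuscation slips through
    "summarize this chemistry paper",  # benign, should pass
]
print(red_team(probes))
```

The obfuscated probe getting through is the whole argument for red teaming as a continuing business rather than a one-time checklist: each finding feeds the next round of guardrails.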
Eventually, the consensus of the groups that I've been working with is that the red teaming will become its own business.
I've been thinking about how to fund this philanthropically.
Because if you think about it, how do you know what an AI is doing unless another AI is watching it? And how can the watching AI know what the first AI discovered, unless that AI tells it, when it doesn't know how to tell you what it knows? So this conundrum is still to be worked on.
There are plenty of people working on this problem.
I think we'll get this solved. But I think it's important to say that these very large models are ultimately going to get regulated. And the reason is, they're just too powerful. They're going to be regulated because they need to be: they know too many ways of doing harm, as well as having enormous power for gain, right? The ability to cure cancer and fix our energy problems and do new materials, and on and on. I mean, I can go on and on about what they'll be able to do, because they're polymaths.
Did you see the movie Oppenheimer? If you did, did you know that besides building the atomic bomb at Los Alamos National Labs, they spent billions on bio-defense: the ability to accurately detect viruses and microbes by reading their RNA?
Well, a company called Viome exclusively licensed
the technology from Los Alamos Labs
to build a platform that can measure your microbiome
and the RNA in your blood.
Now, Viome has a product that I've personally used
for years called Full Body Intelligence,
which collects a few drops of your blood, spit, and stool and can tell you so much
about your health. They've tested over 700,000 individuals and used their AI
models to deliver members critical health guidance like what foods you
should eat, what foods you shouldn't eat, as well as your supplements and
probiotics, your biological age, and other deep health insights. And the results of
the recommendations are nothing short of stellar. You know, as reported in the
American Journal of Lifestyle Medicine, after just six months of following
Viome's recommendations, members reported the following. A 36% reduction in
depression, a 40% reduction in anxiety, a 30% reduction in diabetes, and a 48% reduction in IBS.
Listen, I've been using Viome for three years.
I know that my oral and gut health is one of my highest priorities.
Best of all, Viome is affordable, which is part of my mission to democratize health.
If you want to join me on this journey, go to Viome.com slash Peter.
I've asked Naveen Jain, a friend of mine who's the founder and CEO of Viome, to give my listeners a special discount. You'll find it at viome.com slash Peter.
We had two political leaders on stage with us yesterday.
And, you know, the question is: can the government possibly keep up with this? From your own experiences inside the hallowed halls of our government and others, how are you seeing it? Is there enough attention? Is there enough awareness, and the right amount of fear?
Well, fear is a heavy motivator for political leaders, especially if they're worried about their own jobs. What I found in the Senate was that there's a group of four,
two Republicans and two Democrats who really got it.
And I worked very closely with them.
We had a series of Senate hearings on this subject,
which were well attended.
There's a similar initiative now in the House.
And this is happening so quickly. In fairness to our political leaders, most of us have trouble understanding what's going on. Can you imagine a normal person who's got, like, real political problems to deal with? So I think this is a
situation where America and I think it's important to say that we should be very proud of our country.
We spend all of our time complaining. But the fact of the matter is, the future is being invented in the United States and in the UK, our closest ally. And the fact of the matter is that the Chinese, for example, start every Chinese training run with an open-source event and then move on, right? So they get it, right? And they start with our great work.
So let's be a little proud that we are inventing a future
that will accelerate physics, science, chemistry,
and so forth.
I'm working with people who are busy reading science journals,
reading chemistry journals, generating hypotheses,
and then labeling proteins and so forth in new ways,
and doing it all automatically, and then using robotic farms
to do it.
The scale of innovation from this notion of read everything, take an action, write a program, and run the program is profound.
And by the way, the innovation is not being done
by the faculty, it's being done by the graduate students.
And by the way, guess what?
The engine of growth in our society is the graduate students
who are trying to get their PhDs
and they invent whole industries.
It's phenomenal.
And then when they get their PhDs,
we kick them out of the country and send them home. Perhaps we should try to keep them in the US. Perhaps
you should staple a green card on the back of the doctoral degree. Sure.
I think my point here is, you know, everyone spends all their time with these sort of concerns
about how society will adapt.
This is going to happen first, the systems are not prepared for this.
So the government's not prepared for it.
The companies who are doing the majority of the work have an enormous responsibility to
maintain human values, to maintain decency, to deal with some of the abuses that occur
online.
And they need to do it on their own.
They need to clean up their own act if they don't have it cleaned up now.
And if they don't, they'll get regulated.
And hopefully the industry as a group, which is what we're trying to do,
can present a coherent structure that manages the downside correctly
but gives us this incredible upside,
both for national security, which I work on most of the time, but also
for health science and education.
You know, back, I remember when I was a gene jockey in the labs at MIT and Harvard Med
School in the 80s, when the first restriction enzymes came out.
And there was a huge fear about that, of what that could mean.
The biotech industry got together
in a series of Asilomar conferences to self-regulate.
Is that same sort of conversation (and it worked, by the way)
going on now in the AI leadership
world?
And in fact, we had a meeting in December,
which was an attempt at that.
It was not at Asilomar.
There was a meeting a week ago at Asilomar.
There's another meeting at Stanford in two weeks on the same subject.
All of us are participating in it.
And we're talking about all of these things precisely.
If you go back to your training way back when you were a doctor, the RAC, right, the Recombinant DNA Advisory Committee, which is
the sort of group that managed all of this, was actually created out of
the scientists, not out of the government. And eventually the RAC was put under what is now HHS.
So there's a history here of the scientists, who really do understand
what this thing can do but are otherwise typically clueless on its impact, basically
getting the structure right. And then the government can figure out
what the human impact of it is. And that's the right partnership, in my view.
Last question, Eric, and again, thank you for your time and the work that you do with the Schmidt Futures foundation. You're a very
curious individual across a multitude of different areas. I
imagine that AI
discovering new physics and new math and new biology and new materials has to be just extraordinary candy for you.
What are you most excited about there?
Well, I've gone to a series of conferences in physics and chemistry,
at which I did not really understand a word.
But here's my report.
They're taking the LLMs and, more importantly, diffusion models.
A diffusion model is essentially this strange thing
where you take something, you add noise to it, and
then you denoise it, and you get a more accurate version of the
same thing. They're using these tools in very complicated ways
in physics to solve problems that have just not been
solved. A typical example is that using
physics equations or chemistry equations, we know precisely how
the forces work; we just can't compute them. They're not computable by
computers in the next 100,000 years, right? But you can use
these techniques to get approximations. And these
approximations are good enough to solve the problem that you
have in front of you, which is an estimation problem
or an annealing problem or something like that.
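The "add noise, then denoise" idea Eric describes can be illustrated with a toy sketch. A real diffusion model learns its denoiser from data over many small noise steps; here a simple moving-average filter stands in for the learned denoiser, purely to show that denoising a corrupted signal recovers a more accurate version of it:

```python
import math
import random

random.seed(0)

# A clean 1-D "signal" we would like to recover: one period of a sine wave.
n = 200
x_clean = [math.sin(2 * math.pi * i / (n - 1)) for i in range(n)]

# Forward step: corrupt the signal with Gaussian noise.
x_noisy = [x + random.gauss(0, 0.3) for x in x_clean]

# Reverse step: a crude stand-in denoiser, a 9-point moving average.
# (A real diffusion model would apply a learned denoiser iteratively.)
def denoise(xs, k=9):
    half = k // 2
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

x_denoised = denoise(x_noisy)

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

# Denoising brings the signal closer to the clean original.
print(mse(x_denoised, x_clean) < mse(x_noisy, x_clean))  # True
```

The averaging filter is only an intuition aid; the point is that a well-chosen reverse (denoising) step undoes the forward noising, which is the core mechanic diffusion models learn.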
I think the biggest area of impact
is going to be biology, because biology
is so vast and so unknown.
And the way you do it is you basically
do math solving through a thing called Lean,
and then you do all this chemistry work,
and then it builds on top of that.
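For flavor, "math solving through a thing called Lean" refers to the Lean proof assistant, in which every statement is machine-checked. A minimal illustrative theorem (a generic example, not one from the episode):

```lean
-- A tiny machine-checkable statement in Lean 4:
-- addition of natural numbers is commutative.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```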
In physics, there are people who are
working on partial differential equation solvers,
which are the base of everything.
And again, they're using variants of LLMs,
but they're not actually LLMs.
And the math is impossible to understand,
but that's okay, I wasn't good enough to do physics.
You know,
I should have mentioned, you're the,
are you still the chairman of Sandbox AQ?
I am, yeah.
We had Jack Hidary here last year.
He's phenomenal and brilliant.
And congratulations on the success of Sandbox AQ.
Jack will come back with us again next year.
I have to imagine that as explosive and exciting as AI is,
that quantum compute and quantum technologies
are going to make that look like it's standing still.
Is that a fair statement?
Yeah, I've been waiting for quantum computing to arrive for about 20 years.
The physical problem with quantum computing is the error rate.
And so for one accurate qubit, you need a thousand real qubits, and so forth.
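The overhead Eric cites, roughly a thousand physical qubits per accurate (logical) qubit, is just a multiplication; the real ratio depends on the error-correcting code and the hardware's physical error rate, so the 1,000:1 figure below is only the round number from the conversation:

```python
# Back-of-the-envelope overhead for error-corrected quantum computing,
# using the ~1,000 physical qubits per logical qubit figure cited above.
PHYSICAL_PER_LOGICAL = 1_000  # round figure; depends on code and error rate

def physical_qubits_needed(logical_qubits: int) -> int:
    """Physical qubits required to expose the given number of
    accurate (logical) qubits at the assumed overhead."""
    return logical_qubits * PHYSICAL_PER_LOGICAL

# A machine with 100 error-corrected qubits would need ~100,000 physical ones.
print(physical_qubits_needed(100))  # 100000
```

This is why useful error-corrected machines remain hard: even modest logical-qubit counts imply physical-qubit counts far beyond today's hardware.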
People are working on this.
That stuff remains very hard.
What Jack's company, Sandbox AQ, did is say: we're not going to work on that. We're going to basically build simulations of quantum and apply them to real-world problems.
I assume I can talk about this a little bit in public: the interesting new thing that they
figured out is that they can take a drug, if you will, and using quantum effects, but
using a simulator of quantum because they don't have a quantum computer, they can perturb
it.
And in the perturbations, they can make the drugs more effective, longer lasting, longer
shelf life, what have you.
That turns out to be an incredibly powerful and big industry.
And it's an example of a short-term impact of quantum
that, for one, never occurred to me.
I assume we had to wait for quantum computers.
But the quantum simulation is so good now
that you can make wins now, and that's what he's doing.