Lex Fridman Podcast - #155 – Max Tegmark: AI and Physics
Episode Date: January 18, 2021

Max Tegmark is a physicist and AI researcher at MIT. Please support this podcast by checking out our sponsors:
- The Jordan Harbinger Show: https://www.jordanharbinger.com/lex/
- Four Sigmatic: https://foursigmatic.com/lex and use code LexPod to get up to 60% off
- BetterHelp: https://betterhelp.com/lex to get 10% off
- ExpressVPN: https://expressvpn.com/lexpod and use code LexPod to get 3 months free

EPISODE LINKS:
News Project Explainer Video: https://www.youtube.com/watch?v=PRLF17Pb6vo
News Project Website: https://www.improvethenews.org/
Max's Twitter: https://twitter.com/tegmark
Max's Website: https://space.mit.edu/home/tegmark/
Future of Life Institute: https://futureoflife.org/
Lex Fridman Podcast #1: https://www.youtube.com/watch?v=Gi8LUnhP5yU

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/LexFridmanPage
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(08:15) - AI and physics
(21:32) - Can AI discover new laws of physics?
(30:22) - AI safety
(47:59) - Extinction of human species
(58:57) - How to fix fake news and misinformation
(1:20:30) - Autonomous weapons
(1:35:54) - The man who prevented nuclear war
(1:46:02) - Elon Musk and AI
(1:59:39) - AI alignment
(2:05:42) - Consciousness
(2:14:45) - Richard Feynman
(2:18:56) - Machine learning and computational physics
(2:29:53) - AI and creativity
(2:41:08) - Aliens
(2:56:51) - Mortality
Transcript
The following is a conversation with Max Tegmark, his second time on the podcast.
In fact, the previous conversation was episode number one of this very podcast.
He is a physicist and artificial intelligence researcher at MIT, co-founder of the Future
of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence.
He's also the head of a bunch of other huge fascinating projects and has written a lot
of different things that you should definitely check out.
He has been one of the key humans who has been outspoken about long-term existential
risks of AI and also its exciting possibilities and solutions to real world problems, most
recently at the intersection of AI and physics,
and also in re-engineering the algorithms that divide us
by controlling the information we see,
and thereby creating bubbles and all other kinds of complex
social phenomena that we see today.
In general, he's one of the most passionate and brilliant people
I have the fortune of knowing.
I hope to talk to him many more times on this podcast in the future.
Quick mention of our sponsors: the Jordan Harbinger Show, Four Sigmatic mushroom coffee,
Better Help Online Therapy, and ExpressVPN.
So the choice is wisdom, caffeine, sanity, or privacy.
Choose wisely my friends, and if you wish, click the sponsor links below
to get a discount to support this podcast.
As a side note, let me say that many of the researchers
in the machine learning and artificial intelligence
communities do not spend much time
thinking deeply about the existential risks of AI.
Because our current algorithms are seen as useful but dumb, it's difficult to imagine
how they may become destructive to the fabric of human civilization in the foreseeable future.
I understand this mindset, but it's very troublesome.
To me this is both a dangerous and uninspiring perspective, reminiscent of a lobster sitting
in a pot of lukewarm water that a minute ago was cold.
I feel a kinship with this lobster.
I believe that already the algorithms that drive our interactions on social media have
an intelligence and power that far outstrip the intelligence and power of any one human
being.
Now really is the time to think about this, to define the trajectory of the interplay
of technology and human
beings in our society. I think that the future of human civilization very well may be at stake
over this very question of the role of artificial intelligence in our society. If you enjoy this
thing, subscribe on YouTube, review it with five stars on Apple Podcasts, follow on Spotify, support on
Patreon, or connect with me on Twitter @lexfridman.
As usual, I'll do a few minutes of ads now and no ads in the middle.
I tried to make these interesting, but I give you time stamps, so if you skip, please
still check out the sponsors by clicking the links in the description.
It is, in fact, the best way to support this podcast.
This episode is sponsored by the Jordan Harbinger Show.
Go to jordanharbinger.com slash lex.
Subscribe to it, listen, you won't regret it.
I've been bingeing on this podcast for the entirety of 2020.
Jordan is a great interviewer.
He gets the best out of his guests, dives deep,
calls them out when it's needed,
and makes the whole thing fun to listen to.
He has interviewed Kobe Bryant, Mark Cuban, Neil deGrasse Tyson, Garry Kasparov, and
many more.
Perhaps more importantly, he is unafraid of addressing challenging, even controversial
topics with thought and grace.
I especially like his Feedback Friday episodes, where his combination of fearlessness
and thoughtfulness
is especially on display, touching topics of sex, corruption, mental disorders, hate,
love, and everything in between.
Again, go to jordanharbinger.com slash lex.
It's how he knows I sent you.
On that page, there's links to subscribe to the show everywhere you listen to podcasts,
including Apple Podcasts, Spotify. If you're listening
to this, I'm sure you know how this works. Oh, and if you like it, go give five stars to the
show and leave a nice review. Tell him I sent you. This show is also sponsored by the thing that I
currently really need, which is coffee: Four Sigmatic, the maker of delicious mushroom coffee and plant-based protein. I enjoy both. The
coffee has lion's mane mushroom for productivity and chaga mushroom for
immune support. The plant-based protein has immune support as well and tastes
delicious, which I think honestly at least for me is the most important quality of
both protein and coffee. Supporting your immune system is one of the best things you can do now to stay healthy
in this difficult time for the human species.
Not only does Four Sigmatic always have a 100% money back guarantee, but right now you
can try their amazing products for up to 50% off.
And on top of that, you get extra discounts for just the listeners of this podcast.
If you go to foursigmatic.com slash lex,
that's foursigmatic.com slash lex.
This episode is sponsored by BetterHelp.
Spelled H-E-L-P, help.
I always think about the movie Cast Away
when I spell out the word help.
Okay, they figure out what you need
and match you with a licensed professional therapist in under 48 hours.
I chat with a person on
there and enjoy it. Of course, I also have been talking, over the stretch of 2020 and
it looks like 2021 as well, to David Goggins, who is definitely not a licensed professional, or some would say sane, but he does help me
face his and my demons and become comfortable existing in their presence.
This is just my view, but I think mental struggle is essential, though I think you can
struggle in a way that's controlled and done beautifully, as opposed to in a way that
destroys you.
I think therapy can help with this, and BetterHelp is a good option for that.
It's easy, private, affordable, and available worldwide.
You can communicate by text anytime and schedule weekly audio and video sessions.
I want to give a shout out to the two OGs, my two favorite psychiatrists, Sigmund Freud and Carl Jung.
When I was younger, their work was important in my intellectual development.
Anyway, check out betterhelp.com slash Lex.
That's betterhelp.com slash Lex.
This show is also sponsored by ExpressVPN.
It provides privacy in your digital life.
Without a VPN, your internet service provider can see every site you've ever visited,
keeping a log of every single thing you did online.
Your internet provider, like AT&T or Comcast, is allowed to store those logs and sell
this data to anyone.
That's why you should use ExpressVPN as much as possible.
I do it.
You should consider doing it as well.
They don't keep any logs,
they audit their tech by external companies, these guys are legit. I think this topic
and VPNs certainly are especially relevant now when the power of social media firms and
ISPs was made apparent with a wave of de-platforming actions. We need to use tools that lessen
the power of these centralized entities. I use
ExpressVPN as just one example of such a tool. Go to expressvpn.com slash lexpod to get
an extra three months free on a one year package. It's a big red button if you enjoy those
kinds of things, I certainly do. Okay, that's expressvpn.com slash lexpod.
Sign up for the privacy and the big red button.
That's expressvpn.com slash Lex pod.
And now, here's my conversation with Max Tegmark. So people might not know this, but you were actually episode number one of this podcast
just a couple of years ago and now we're back. And it so happens that a lot of exciting things
happened in both physics and artificial intelligence, both fields that you're super passionate about.
We try to catch up to some of the exciting things happening in artificial intelligence,
especially in the context of the way it's cracking open different problems in the sciences.
Yeah, I'd love to, especially now, as we start 2021 here,
it's a really fun time to think about what were the biggest
breakthroughs in AI, not the ones necessarily the media wrote about,
but that really matter.
And what does that mean for our ability to do better science?
What does it mean for our ability to help people around the world?
And what does it mean for new problems
that they could cause if we're not smart enough to avoid them?
So what do we learn basically from this?
Yes, absolutely.
So one of the amazing things you're part of
is the AI Institute for Artificial Intelligence
and Fundamental Interactions.
What's up with this institute?
What are you working on?
What are you thinking about?
The idea is something I'm very on fire with,
which is basically AI meets physics.
And it's been almost five years now,
since I shifted my own MIT research from physics to machine learning and in the beginning
I noticed a lot of my colleagues, even though they were polite about it, were kind of,
what is it Max is doing? What is this weird stuff? Has he lost his mind?
But then gradually, I,
together with some colleagues, were able to persuade more and more of the other professors in the physics department to get interested in this.
And now we got this amazing NSF center, so 20 million bucks for the next five years,
MIT and a bunch of neighboring universities here also.
And I've noticed now that those colleagues who were looking at me funny have stopped asking what the point of this is,
because it's becoming more clear.
And I really believe that, of course,
AI can help physics a lot to do better physics.
But physics can also help AI a lot,
both by building better hardware.
My colleague, Marin Soljačić, for example, is working on an optical chip
for much faster machine learning, where the computation is done
not by moving electrons around, but by moving photons around:
dramatically less energy use, faster, better.
We can also help AI a lot, I think, by having a different set of tools and a different, maybe
more audacious, attitude.
AI has to a significant extent been an engineering discipline, where you're just trying to make
things that work, and being more interested in maybe selling them than in figuring out exactly how they work and proving theorems that they will always work, right?
Contrast that with physics. You know, when Elon Musk sends a rocket to the International Space Station, they didn't just train it with machine learning: oh, let's fire it a little bit more to the left, a bit more to the right, oh, that also missed, let's try here. No, we figured out Newton's laws of gravitation
and other things,
and got a really deep fundamental understanding.
And that's what gives us such confidence in rockets.
And my vision is that in the future,
all machine learning systems that actually
have impact on people's lives
will be understood at a really, really deep level.
So we trust them, not because some sales rep told us to, but because they've earned our trust.
We can, in really safety-critical things, even prove that they will always do, you know,
what we expect them to do.
That's very much the physics mindset.
So it's interesting, if you look at the big breakthroughs
that have happened in machine learning this year,
starting from the dancing robots — it's pretty fantastic,
not just because it's cool,
but if you just think about not that many years ago,
this YouTube video at this DARPA challenge
where the MIT robot comes out of the car and face plants.
How far we've come in just a few years. Similarly, AlphaFold 2, you know,
crushing the protein folding problem. We can talk more about implications for
medical research and stuff, but hey, you know, that's huge progress. You can look at GPT-3, which can spout off English
text that sometimes really, really blows you away. You can look at DeepMind's MuZero, which
doesn't just kick our butt in Go and chess and shogi, but also in all these Atari games, and you don't even have to
teach it the rules now. You know, what all of those have in common, besides being powerful, is
we don't fully understand how they work. And that's fine if it's just some dancing robots and the
worst thing that can happen is they faceplant, right? Or if they're playing Go and the worst thing that can happen is that they make a bad move and lose the game, right? It's less fine if that's
what's controlling your self-driving car or your nuclear power plant. And we've seen already
that even though Hollywood had all these movies where they try to make us worry about the
wrong things, like machines turning evil, the actual bad things that have happened with automation have not been machines turning
evil.
They've been caused by over trust in things we didn't understand as well as we thought
we did, right?
Even very simple automated systems, like what Boeing put into the 737 MAX, killed a lot of people.
Was it that that little simple system was evil?
Of course not. But we didn't understand it
as well as we should have, right?
And we trusted without understanding.
Exactly, we didn't even understand
that we didn't understand, right?
The humility is really at the core of being a scientist. I think step one,
if you want to be a scientist, is don't ever fool yourself into thinking you understand
things when you actually don't. Right? That's probably a good advice for humans in general.
I think humility in general can do us good, but in science, it's so spectacular. Like, why did
we have the wrong theory of gravity ever, from Aristotle onward until Galileo's time? Why would we believe something so dumb as that if I throw this
water bottle, it's going to go up with constant speed until it realizes that its
natural motion is down, so it changes its mind? Because people just kind of assumed
Aristotle was right — he was an authority, so we didn't question it. Why did we believe
things like that the sun is going around the earth?
Why did we believe that time flows at the same rate for everyone until Einstein?
Same exact mistake over and over again, we just weren't humble enough to acknowledge
that we actually didn't know for sure.
We assumed we knew, so we didn't discover the truth because we assumed there was nothing
there to be discovered,
right? There was something to be discovered about the 737 MAX. And if we had been a bit
more suspicious and tested it better, we would have found it. And it's the same thing with most
harm that's been done by automation so far, I would say. So, I don't know, did you hear
of a company called Knight Capital? So, good — that means you didn't invest in them earlier.
They deployed this automated trading system.
All nice and shiny.
They didn't understand it as well as they thought.
And it went about losing 10 million bucks per minute
for 44 minutes straight.
No.
Until someone presumably was like,
oh no, shut this off.
You know, was it evil? No, it was again misplaced trust in something they didn't fully understand,
right? And there have been so many, even when people have been killed by robots,
it's still quite rare, but in factory accidents, in every single case, it's been not malice,
just that the robot didn't understand that a human
is different from an auto part or whatever.
So this is where I think there's so much opportunity for a physics approach, where you just aim
for a higher level of understanding.
And if you look at all these systems that we talked about, from reinforcement learning systems and
dancing robots, to all these neural networks that power GPT-3 and the Go-playing software,
they're all basically black boxes — not so different from when you teach a human
something: you have no idea how their brain works, right?
Except the human brain at least has been error corrected during many, many centuries of evolution in
a way that these, some of these systems have not, right? And my MIT research is entirely
focused on demystifying this black box. Intelligible intelligence is my slogan. That's a good
line, intelligible intelligence. Yeah, we shouldn't settle for something that seems intelligent;
it should be intelligible, so that we actually trust it because we understand it,
right? Like, again, Elon trusts his rockets because he understands Newton's laws
and thrust and how everything works. And let me tell you, can I tell you why I'm
optimistic about this? Yes. I think we've made a bit of a mistake,
in that
some people still think that somehow we're never going to understand neural networks,
and we're just going to have to learn to live with this
very powerful black box.
Basically, for those of you who haven't spent time building your own,
it's super simple
what happens inside.
You send in a long list of numbers,
and then you do a bunch of operations on them,
multiply by matrices, et cetera, et cetera,
and some other numbers come out that's output of it.
And then there are a bunch of knobs you can tune.
And when you change them, it affects the computation,
the input output relation.
And then you just give the computer some definition of good,
and it keeps optimizing these knobs until it performs as good as possible.
And often you go like, wow, that's really good.
This robot can dance, or this machine is beating me at chess now.
And in the end, you have something which, even though you can look inside it,
you have very little idea of how it works.
You know, you can print out tables of all the millions of parameters in there.
Is it crystal clear now how it's working?
Of course not, right?
Many of my colleagues seem willing to settle for that.
And I'm like, no, that's like the halfway point.
Some have even gone as far as sort of guessing that the mystery, the inscrutability of this, is where the power comes from — some sort of mysticism.
I think that's total nonsense. I think the real power of neural networks comes not from inscrutability, but from differentiability. And what I mean by that is simply that the output
changes only smoothly if you tweak your knobs.
And then you can use all these powerful methods we have for optimization in science.
We can just tweak them a little bit and see, did that get better or worse?
That's the fundamental idea of machine learning that the machine itself can keep optimizing until it gets better.
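As a concrete aside for readers, here is a minimal sketch of the picture described above: a tiny network whose "knobs" are two weight matrices and two bias vectors, tuned by nudging each knob and checking whether the definition of good improved. The architecture, data, and numbers are purely illustrative, not anything from the episode.

```python
# Minimal sketch of "knobs you can tune" plus differentiability (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) on [-3, 3].
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x)

# The "knobs": two weight matrices and two bias vectors.
W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def forward(inputs):
    hidden = np.tanh(inputs @ W1 + b1)   # multiply by a matrix, squash
    return hidden @ W2 + b2              # multiply by another matrix

def loss():
    return np.mean((forward(x) - y) ** 2)  # the "definition of good" (lower is better)

# Tune the knobs: nudge each one, see if the loss got better or worse,
# and move it in the direction that helped (finite-difference gradient descent).
lr, eps = 0.1, 1e-5
for step in range(300):
    for knobs in (W1, b1, W2, b2):
        grad = np.zeros_like(knobs)
        for i in np.ndindex(knobs.shape):
            original = knobs[i]
            knobs[i] = original + eps
            up = loss()
            knobs[i] = original - eps
            down = loss()
            knobs[i] = original
            grad[i] = (up - down) / (2 * eps)  # smooth response gives a meaningful slope
        knobs -= lr * grad

print("final mean squared error:", loss())
```

A real framework would get the same slopes from backpropagation instead of finite differences, but the point is the one made above: because the output changes smoothly with each knob, every small tweak tells you whether you got better or worse — which is exactly what breaks down if the knobs instead edited letters of source code.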
Suppose you wrote this algorithm instead in Python or some
other programming language, and what the knobs did was they just changed random letters in
your code. That would just epically fail, right? You change one thing, and instead of saying print,
it says sint — syntax error. You don't even know whether that was for the better or for the worse, right?
This, to me, is what I believe is the fundamental power of neural networks. And just
to clarify, the changing of the different letters in a program would not be a differentiable
process. It would typically make it an invalid program, and then you wouldn't even know
if you changed more letters if it would make it work again, right?
So that's the magic of neural networks: the
differentiability, that every setting of the parameters is a program,
and you can tell whether it is better or worse, right? And so, you don't like the poetry of
the mystery of neural networks as the source of its power.
I generally like poetry, but not in this case. It's so misleading, and
above all, it shortchanges us. It makes us underestimate the good
things we can accomplish. Because what we've been doing in my group is basically step
one, train the mysterious neural network to do something well. And then step two, do
some additional AI techniques
to see if we can now transform this black box
into something equally intelligent
that you can actually understand.
So for example, I'll give you one example,
this AI Feynman project that we just published, right?
So we took the 100 most famous or complicated equations
from one of my favorite physics textbooks.
In fact, the one that got me into physics
in the first place, the Feynman lectures on physics.
And so you have a formula, you know —
maybe what goes into the formula is six different variables,
and then what comes out is one.
So then you can make it like a giant Excel spreadsheet
with seven columns.
You put in just random numbers for the six columns,
for those six input variables,
and then you calculate, with the formula, the seventh column:
the output.
So maybe it's like the force equals, in the last column,
some function of the others.
And now the task is, okay,
if I don't tell you what the formula was,
can you figure that out from looking at the spreadsheet
I gave you?
Yes.
This problem is called symbolic regression.
If I tell you that the formula is what we call a linear formula,
so it's just that the output is a sum of all the input things
times some constants,
that's a famous easy problem we can solve.
We do it all the time in science
and engineering. But the general one, where it's more complicated functions with logarithms
or cosines or other math, is a very, very hard one, and probably impossible to do fast
in general, just because the number of formulas with n symbols, you know, just grows exponentially,
just like the number of passwords you can make grows
dramatically with length. So we had this idea that if you first have a neural network that can actually
approximate the formula — you just train it, even if you don't understand how it works — that can be the
first step towards actually understanding how it works. So that's what we do first.
Then we study that neural network now,
and put it also to other data that
wasn't in the original training data,
and use that to discover
simplifying properties of the formula,
and that lets us break it apart, often into many simpler pieces,
in a divide-and-conquer approach.
So we were able to solve all of
those hundred formulas,
discover them automatically,
plus a whole bunch of other ones.
It's actually kind of humbling to see that this code,
which anyone who wants — anyone listening to this — can type,
pip install aifeynman, on their computer and run it,
can actually do what Johannes Kepler spent four years doing
when he stared at Mars data,
until he was like, finally, eureka, this is an ellipse!
This will do it automatically for you in one hour, right?
Or Max Planck.
He was looking at how much radiation comes out at different wavelengths from a hot
object, and discovered the famous blackbody formula.
This discovers it automatically.
I'm actually excited about
seeing if we can discover not just old formulas, again, but new formulas that no one has seen before.
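For readers who want to try what is described here, a hedged sketch of the workflow follows: build the seven-column "spreadsheet" — six random input columns and an output column computed from a hidden formula — fit the easy linear case directly, and hand the general case to a symbolic regression tool. The hidden formula, file name, and the commented-out aifeynman call are illustrative assumptions; check the published AI Feynman repository for the exact interface.

```python
# Build the "giant Excel spreadsheet": six random input columns, one output column.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical hidden formula: Newtonian gravity, F = G * m1 * m2 / (rx^2 + ry^2 + rz^2).
G = rng.uniform(1, 2, n)
m1 = rng.uniform(1, 5, n)
m2 = rng.uniform(1, 5, n)
rx, ry, rz = rng.uniform(1, 3, n), rng.uniform(1, 3, n), rng.uniform(1, 3, n)
F = G * m1 * m2 / (rx**2 + ry**2 + rz**2)

# Seven columns, just like the spreadsheet picture above.
np.savetxt("gravity_data.txt", np.column_stack([G, m1, m2, rx, ry, rz, F]))

# The easy special case mentioned above: if the formula were linear in the inputs,
# ordinary least squares would recover the constants directly.
X = np.column_stack([G, m1, m2, rx, ry, rz, np.ones(n)])
coeffs, *_ = np.linalg.lstsq(X, F, rcond=None)
print("best linear fit coefficients:", np.round(coeffs, 3))  # fits poorly: the truth is not linear

# The general nonlinear case is where symbolic regression comes in.
# The package mentioned above installs with `pip install aifeynman`; the call below
# is a guess at its interface and may need adjusting to the real API:
#
#   import aifeynman
#   aifeynman.run_aifeynman("./", "gravity_data.txt", 30, "14ops.txt",
#                           polyfit_deg=3, NN_epochs=400)
```

Swapping any other formula into the output column reproduces the same experiment: the spreadsheet goes in, and the question is whether the symbolic expression comes back out.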
And do you like this process of using kind of a neural network
to find some basic insights
and then dissecting the neural network
to then gain the final insight?
So in that way, you're forcing the explainability issue —
really trying to analyze the neural network for the things it knows,
to come up with the final,
beautiful, simple theory underlying
the initial system that you were looking at.
I love that.
The reason I'm so optimistic that it can be generalized to so much more is because that's
exactly what we do as human scientists.
Think of Galileo whom we mentioned.
I bet when he was a little kid, if his dad threw him an apple, he would catch it. Why? Because he had a neural network in his brain that he had trained to predict
the parabolic orbit of apples that are thrown under gravity. If you throw a tennis ball to a dog,
it also has this same ability of deep learning to figure out how the ball is going to move and catch it.
But Galileo went one step further when he got older. He went back and was like, wait a minute, I can
write that down in a formula. Yes: y equals x squared, a parabola. And he helped
revolutionize physics as we know it, right? So there was a basic neural network in there from childhood that captured, like,
the experiences of observing different kinds of trajectories.
And then he was able to go back in with another extra
little neural network and analyze all those experiences
and be like, wait a minute, there's a deeper rule here.
Exactly.
He was able to distill out, in symbolic form, what that
complicated black box neural network was doing. Not only that — the formula he got
ultimately became more accurate. And similarly, this is how Newton got Newton's laws,
which is why Elon can send rockets to the space station now. So it's not only more accurate,
but it's also simpler, much simpler, and it's so simple
that we can actually describe it to our friends and each other, right?
We've talked about it just in the context of physics now, but hey, you know, isn't this
what we're doing when we're talking to each other also?
We go around with our neural networks, just like dogs and cats and chipmunks and blue
jays, and we experience things in the
world. But then we humans do this additional step on top of that, where we then distill out
certain high-level knowledge that we've extracted from this in a way that we communicate it to
each other in a symbolic form in English in this case, right? So if we can do it and we believe that we are
information processing entities, then we should be able to make machine learning
that does it also.
Well, do you think the entire thing could be learning? Because there
is this dissection process — like for AI Feynman, the secondary stage feels like
something like reasoning, and the initial step feels more like the more basic
kind of differentiable learning.
Do you think the whole thing could be differentiable learning?
Do you think the whole thing could be basically
neural networks on top of each other?
It's like turtles all the way down.
Can we neural networks all the way down?
I mean, that's a really interesting question.
We know that in your case,
it is neural networks all the way down, because that's all you have in your skull — a bunch
of neurons doing their thing, right? But if you ask the question more generally, what
algorithms are being used in your brain? I think it's super interesting
to compare. I think we've gone a little bit backwards historically because we humans first discovered good old-fashioned AI,
the logic-based AI that we often call GOFAI, for good old-fashioned AI.
And then more recently, we did machine learning
because it required bigger computers,
so we have to discover it later.
So we think of machine learning with neural networks
as the modern thing and the logic based AI
as the old fashioned thing.
But if you look at evolution on Earth, right,
it's actually been the other way around.
I would say that, for example, an eagle
has a better vision system than I have,
and dogs are just as good at catching tennis balls
as I am. All this stuff which is done by training a neural network and not interpreting
it in words, you know, is something so many of our animal friends can do, at least as
well as us, right? What is it that we humans can do that the chipmunks and the eagles
cannot? It's more to do with this logic-based
stuff where we can extract out information in symbols, in language, and now even with
equations if you're a scientist. So basically what happened was first we built these computers
that could multiply numbers real fast and manipulate symbols, and we felt they were pretty dumb.
And then we made neural networks that can see as well as a cat can, and do a lot of this
inscrutable black box stuff.
What we humans can do also is put the two together in a useful way.
Yes — artificially, as in our own brain?
Yes, in our own brain.
So if we ever want to get artificial general intelligence,
that can do all jobs as well as humans can, right?
Then that's what's going to be required.
To be able to combine the neural networks with the symbolic —
combine the good old-fashioned AI with the new AI in a good way.
We do it in our brains.
And there seem to be basically
two strategies I see in industry now. One scares the
heebie-jeebies out of me, and the other one I find much more
encouraging. Okay, can we break them apart? Which
are the two? The one that scares the heebie-jeebies out of me is this
attitude that we're just going to make ever bigger systems
that we still don't understand, until they can be as smart
as humans. Like, what could possibly go wrong, right?
I think it's just such a reckless thing to do.
And unfortunately, if we actually succeed as a species in building artificial general intelligence
while we still have no clue how it works,
I think there's at least a 50% chance we're going to be extinct before too long. It's just going
to be another epic own goal.
Like that 44-minutes-of-losing-money problem, or the paperclip problem, where we don't understand
how it works and it just, in a matter of seconds, runs away in some kind of direction that's
going to be very problematic.
Even long before you have to worry about the machines themselves somehow deciding to do things to us, we have to worry about people
using machines that are short of AGI in power to do bad things.
I mean, just take a moment and if anyone who's not worried, particularly about advanced AI,
just take 10 seconds and just
think about your least favorite leader on the planet right now.
Don't tell me who it is.
I'm going to keep this apolitical.
But just see the face in front of you, that person for 10 seconds.
Now imagine that that person has this incredibly powerful AI under their control and can use
it to impose their will on the whole planet.
How does that make you feel? Yeah, so can we break that apart just briefly? For the 50
percent chance that we'll run into trouble with this approach, do you see the bigger
worry in that leader, or humans, using the system to do damage? Or are you more worried —
and I think I'm in this camp — more worried about, like, accidental, unintentional
destruction of everything? So, like, humans trying to do good, and in a way where everyone agrees
it's kind of good — it's just that they're trying to do good without understanding.
Because I think every leader in history, you know, to some degree thought they were trying to do good.
Oh yeah, I'm sure Hitler thought he was doing good too.
Yeah, I've been reading a lot about Stalin.
I'm sure Stalin legitimately thought
that communism was good for the world, and that he was doing good.
I think Mao Zedong thought what he was doing with the Great Leap Forward was good too.
Yeah.
I'm actually concerned about both of those.
I promised to answer this in detail,
but before we do that,
let me finish answering the first question,
because I told you that there were two different routes
we could get to artificial general intelligence,
and one scares the heebie-jeebies out of me,
which is this one where we build something — we just say bigger neural networks, more hardware, and just keep training
it with more data — and poof, now it's very powerful. That, I think, is the most unsafe and
reckless approach. The alternative to that is the intelligible intelligence approach instead, where we say, the neural network is just a tool
for the first step, to get the intuition. But then we're also going to spend serious
resources on other AI techniques for demystifying this black box and figuring out what it's actually doing,
so we can convert it into something that's
equally intelligent, but where we actually understand what it is doing. Maybe we can even prove theorems
about it: that this car here will never be hacked when it's driving, because here is a proof.
There is a whole science of this. It doesn't work for neural networks
that are big black boxes, but it works well for certain other kinds of code, right?
That approach, I think, is much more promising.
That's exactly why I'm working on it, frankly, not just because I think it's cool for science,
but because I think the more we understand these systems, the better the chances that we
can make them do the things that are good for us that are actually intended,
not unintended. So you think it's possible to prove things about something as complicated as a
neural network? That's the hope. Well, ideally, there's no reason it has to be a neural network in the
end either, right? Like, we discovered Newton's laws of gravity with the neural network in Newton's head.
Yes.
But that's not the way it's programmed into the navigation system of Elon Musk's rocket anymore.
Right. It's written in C++, or I don't know what language he uses exactly.
And then there are software tools for symbolic verification. DARPA and the US
military have done a lot of really great research on this, because they really want to ensure that when they build weapon systems, they don't just go fire on their own
or malfunction, right? And there is even an operating system called seL4 that's been
developed with a DARPA grant where you can
actually mathematically prove that this thing can never be hacked.
One day, I hope that will be something you can say about the OS that's running on our laptops too. You know, we're not there yet, but I think we should be ambitious, frankly.
And if we can use machine learning to help do the proofs and so on as well,
then it's much easier to verify that a proof is correct than to come up with a proof
in the first place.
That's really the core idea here.
If someone comes on your podcast and says they proved the Riemann hypothesis or some sensational
new theorem, it's much easier for some smart math grad student
to check — oh, there's an error here on equation five, or, this really checks out —
than it was to discover the proof.
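As a toy illustration of that asymmetry — not an example from the conversation — here is integer factoring standing in for proof-finding versus proof-checking: discovering the answer takes a long search, while verifying a claimed answer is a single multiplication.

```python
# Toy illustration: finding a solution is slow, checking a claimed one is fast.

def find_factor(n: int) -> int:
    """Brute-force search for the smallest nontrivial factor (the slow 'discovery' step)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime

def verify_factorization(n: int, p: int, q: int) -> bool:
    """Checking a claimed factorization (the fast 'verification' step)."""
    return 1 < p < n and 1 < q < n and p * q == n

n = 999_983 * 1_000_003          # product of two primes near a million
p = find_factor(n)               # roughly a million trial divisions to discover
print(p, verify_factorization(n, p, n // p))   # the check itself is instant
```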
Yeah, although some of those proofs are pretty complicated, but yes, it's still nevertheless
much easier to verify the proof. I love the optimism. You know, even with
the security of systems, there's a kind of cynicism that pervades people
who think about this, which is like, oh, it's hopeless.
I mean, in the same sense, exactly like you're saying
with neural networks — oh, it's hopeless
to understand what's happening.
With security, people are just like, well,
there's always going to be attack vectors,
like, ways to attack the system.
But you're right, we're just very new
with these computational systems.
We're even newer with these intelligent systems,
and it's not out of the realm of possibility —
just like people came to understand the movement
of the stars and the planets and so on.
Yeah, it's entirely possible that, like,
hopefully soon, but it could be within a hundred years,
we start to have something like laws of gravity for intelligence, and, God forbid, for consciousness
too.
Agreed.
I think, of course, if you're selling computers that get hacked a lot, it's in your interest
as a company that people think it's impossible to make them safe,
so nobody gets the idea of suing you.
But I wanna really inject optimism here.
It's absolutely possible to do much better than we're doing now.
And, you know, your laptop does so much stuff.
You don't need the music player to be super safe
in your future self-driving car, right?
If someone hacks it and starts playing music you don't like,
the world won't end. What you can do is you can break it out and say, the drive computer that controls your safety
must be completely physically decoupled from the entertainment system,
and it must physically be such that it can't take over-the-air updates while you're driving,
and it can ultimately have some operating system on it which is symbolically
verified and proven that it's always going to do what it's supposed
to do, right? We can basically have companies take that attitude.
They should look at everything they do and say, what are the few systems in our company
that threaten the whole life of the company if they get hacked, and have the highest standards
for those.
And then they can save money by using the cheap, poorly understood stuff for the rest.
This is very feasible, I think. And coming
back to the bigger question — you worried that there will be unintentional failures —
I think there are two quite separate risks here, right? We talked a lot about one of
them, which is that the goals of the human are noble. The human says, I want this airplane
to not crash, because this is not Mohamed Atta now flying the airplane, right?
And now there's this technical challenge of making sure that the autopilot is actually
going to behave as the pilot wants.
If you set that aside, there's also a separate question.
How do you make sure that the goals of the pilot are actually aligned with the goals of
the passengers?
How do you make sure, much more broadly, if we can all agree as a species that we would like things to kind of go well for humanity as a whole, that the goals are aligned here?
The alignment problem. And yeah, there's been a lot of progress, in the sense that there's suddenly huge amounts of research going on about it.
I'm very grateful to Elon Musk for giving us that money five years ago so we could launch the first research program on
technical AI safety and alignment. There's a lot of stuff happening.
But I think we need to do more than just make sure machines always do what their owners want.
You know, that wouldn't have prevented September 11, if Mohamed Atta said, okay, autopilot, please fly into the World
Trade Center, and it's like, okay. That even happened in a different situation. There
was this depressed pilot named Andreas Lubitz, who told his Germanwings passenger jet to
fly into the Alps. He just told the computer to change the altitude to 100 meters or something like that.
You know what the computer said?
Okay.
Okay.
And it had the freaking topographical map of the Alps in there.
It had GPS, everything.
No one had bothered teaching it even the basic kindergarten ethics of like, no, we never
want airplanes to fly into mountains under any circumstances.
And so we have to think beyond just the technical issues and think about how do we align in general
incentives on this planet for the greater good. So starting with simple stuff like that, every
airplane that has a computer in it should be taught whatever kindergarten ethics it's smart enough to understand.
Like, no, don't fly into fixed objects. If the pilot tells you to do so, then go into autopilot
mode, send an email to the cops, and land at the nearest airport, you
know, any car with a forward-facing camera
should just be programmed by the manufacturer,
so that it will never accelerate into a human ever.
That would avoid things like the Nice attack
and many horrible terrorist vehicle attacks
where they deliberately did that, right?
This was not some sort of thing where,
oh, you know, the US and China have different views on it — no,
there was not a single car manufacturer
in the world who wanted the cars to do this.
They just hadn't thought to do the alignment.
And if you look more broadly at problems that happen on this planet,
the vast majority have to do with poor alignment.
Think about — let's go back really big,
because I know you're so good at that.
Yeah. So, long ago in evolution, we had these genes, and they wanted to make copies of
themselves. That's really all they cared about. So some genes said, hey, I'm going to
build a brain on this body I'm in so that I can get better at making copies of myself.
Yeah. And then they decided, for their benefit of getting copied more, to align your brain's
incentives with their incentives. So it didn't want you to starve to death.
So it gave you an incentive to eat.
And it wanted you to make copies of the genes,
so it gave you an incentive to fall in love and do all sorts of other
things that make copies of itself, right? So that was successful value alignment by
the genes. They created something more intelligent than themselves, but they made sure to try
to align the values. But then something went a little bit wrong against the idea of what
the genes wanted because a lot
of humans discovered, hey, yeah, we really like this business about sex that the genes have
made us enjoy, but we don't want to have babies right now.
Yeah.
So we're going to hack the genes and use birth control.
And I really feel like drinking a Coca-Cola right now, but I don't want to get a pot belly.
So I'm going to drink diet coke.
We have all these things we've figured out
because we're smarter than the genes,
how we can actually subvert their intentions.
So it's not surprising that we humans now,
when we're in the role of these genes,
creating other non-human entities with a lot of power,
have to face the same exact challenge. How do we make other powerful entities have incentives that are aligned with ours,
so that they won't hack them? Corporations, for example, right? We humans decided to create
corporations because it can benefit us greatly. Now all of a sudden there's a supermarket. I
can go buy food there. I don't have to hunt. Awesome. And then to make
sure that this corporation would do things that were good for us and not bad for us, we
created institutions to keep them in check. Like if the local supermarket sells poisonous
food, then the owners of the supermarket have to spend some years reflecting behind bars.
Right.
So we created incentives to align them.
But of course, just like we were able to see through this thing and do, well, birth
control — if you're a powerful corporation, you also have an incentive to try to hack the
institutions that are supposed to govern you, because you, ultimately, as a corporation,
have an incentive to maximize your profit,
just like you have an incentive to maximize the enjoyment your brain has, not your
genes'. So if they can figure out a way of
bribing regulators, then they're going to do that.
In the US, we kind of caught on to that and made laws against corruption and
bribery. Then in the late 1800s, Teddy Roosevelt realized that,
no, we were still being hacked because the Massachusetts railroad companies
had a bigger budget than the state of Massachusetts,
and they were doing a lot of very corrupt stuff.
So he did the whole trust-busting thing to try to align
these other non-human entities, the companies, again, more with
the incentives of Americans as a whole. But it's not surprising, though, that this is a battle
you have to keep fighting. Now, we have even larger companies than we ever had before. And, of course,
they're going to try to, again, subvert the institutions. And, you know, I think people make the mistake of getting
all too black-and-white, thinking about things in terms of good and evil — like arguing about whether
corporations are good or evil, or whether robots are good or evil.
A robot isn't good or evil; it's a tool. And you can use it for great things, like robotic surgery, or for bad things.
And a corporation also is a tool, of course.
And if you have good incentives to the corporation,
it'll do great things, like start a hospital
or a grocery store.
If you have really bad incentives,
then it's going to start maybe marketing addictive drugs
to people, and you'll have an opioid epidemic, right?
It's all about, we should not make the mistake of getting into some sort of fairy tale good
evil thing about corporations or robots.
We should focus on putting the right incentive in place.
My optimistic vision is that if we can do that, then we can really get good things.
We're not doing so great with that right now, either on AI, I think, or on other
intelligent, non-human entities, like big companies, right?
We just got a new secretary of defense
who's going to start now, in the Biden administration, and who
was an active member of the board of Raytheon.
Right.
Yeah.
So, you know, I have nothing against Raytheon.
I'm not a pacifist, but there's an obvious conflict of interest if someone is in the job where
they decide who they're going to contract with.
And I think somehow we have, maybe we need another Teddy Roosevelt to come along again and
say, hey, you know, we want what's good for all Americans and we need to go do some serious
realigning again of the incentives that we're giving to these big companies.
And then we're going to be better off.
It seems that naturally with human beings, just like you beautifully described the history
of this whole thing, it all started with the genes and they're probably pretty upset by all the unintended
consequences that have happened since. But it seems that it kind of works out. Like, it's in this
collective intelligence that emerges at the different levels. It seems to find, sometimes last
minute, a way to realign the values or keep the values aligned.
It finds a way, like different leaders, different humans pop up all over the place that reset
the system.
Do you want, I mean, do you have an explanation why that is?
Or is that just survivor bias?
And also, is that different, somehow fundamentally different
than with AI systems where you're no longer dealing
with something that was a direct, maybe companies
are the same, a direct byproduct of the evolutionary process?
I think there is one thing which has changed.
That's why I'm not all that optimistic. That's why I think there's about
a 50% chance, if we take the dumb route with artificial intelligence, that
humanity will be extinct in this century. First, just the big picture: companies need to have
the right incentives. Even governments, right?
We used to have governments where usually there was just some king, you know, who was the king because
his dad was the king, you know, and then there were some benefits of having this powerful
kingdom, or empire of any sort, because then it could prevent a lot of local squabbles.
So at least everybody in that region would stop warring against each other, and the incentives
of different cities in the kingdom became more aligned, right?
That was the whole selling point.
Harari.
Yeah.
No, you know what?
Harari has a beautiful piece on how empires enabled collaboration, and then we also, Harari
says, invented money for that reason.
So we could have better alignment
and we could do trade even with people we didn't know.
So this sort of stuff has been playing out
since time immemorial, right?
What's changed is that it happens on ever larger scales,
right?
Technology keeps getting better
because science gets better.
So now we can communicate over larger distances,
transport things fast over larger distances.
And so the entities get ever bigger, but our planet is not getting bigger anymore.
So in the past, you could have one experiment that just totally screwed up, like Easter Island,
which actually managed to have such poor alignment that when the people
there went extinct, there was no one else to come back and replace them. If Elon Musk doesn't get us to Mars and we then go extinct on a global scale,
we're not coming back. That's the fundamental difference. And that's a
mistake we would rather not make, for that reason. In the past, of course, history is full of fiascos, right?
But it was never the whole planet.
And then, okay, now there's this nice uninhabited land here,
as some other people could move in and organize things better.
This is different.
The second thing, which is also different, is that technology gives us
so much more empowerment, right? Both to do good things and also to screw up.
In the Stone Age, even if you had someone whose goals
were really poorly aligned — like maybe he was really pissed
off because his Stone Age girlfriend dumped him
and he just wanted to, like,
kill as many people as he could —
how many could he really take out with a rock and a stick
before he was overpowered, right?
Just a handful, right?
Now, with today's technology,
if we have an accidental nuclear war between Russia
and the US, which we almost had about a dozen times,
and then we have a nuclear winter,
it could take out seven billion people,
or six billion people — we don't know.
So the scale of the damage we can do is bigger.
And there's obviously no law of physics that says
that technology will never get powerful enough that we could wipe out our species entirely —
it would just be fantasy to think that science is somehow doomed not to get more powerful than that.
And it's not at all unfeasible, in our lifetime,
that someone could design a designer pandemic
which spreads as easily as COVID
but just basically kills everybody.
We already had smallpox —
it killed one third of everybody who got it.
And what do you think of —
here's an intuition,
maybe it's completely naive —
this optimistic intuition I have, and maybe it's a biased experience, but it seems like the most brilliant
people I've met in my life are all really, like, fundamentally good human beings.
And not, like, naively good — like they really want to do good for the world in a way that, well, maybe is aligned with my sense of what good means. And so I have a sense that among the people who
will be defining the very cutting edge of technology, there will be many more of the
ones that are doing good versus the ones that are doing evil. So in that race, I'm optimistic
about us always, like, last minute,
coming up with a solution. So if there's an engineered pandemic that has the capability
to destroy most of human civilization, it feels to me like either leading up to it,
or as it's going on, we'll be able to rally the collective genius
of the human species. I can tell by your smile that you're at least some percentage
doubtful. But could that be a fundamental law of human nature — that evolution only creates... karma is beneficial, good is beneficial, and therefore we'll be all right?
I hope you're right. I would really love it if you're right, if there's some sort of law of nature
that says that we always get lucky in the last second because of karma. But, you know, I prefer not playing it so close and gambling on that.
I think it can be dangerous to have too strong a faith in that, because it makes us complacent.
If someone tells you you never have to worry about your house burning down, then you're
not going to put in a smoke detector because why would you need to?
Even if it's sometimes very simple precautions, we don't take them.
If you're like, oh, the government is going to take care
of everything for us, I can always trust my politicians —
we abdicate our own responsibility.
I think it's a healthier attitude to say,
yeah, maybe things will work out.
Maybe I'm actually gonna have to myself step up
and take responsibility.
And the stakes are so huge. I mean, if we do this right, we
can develop all this ever more powerful technology and cure all diseases and create a future
where humanity is healthy and wealthy for not just the next election cycle, but like billions
of years throughout our universe. That's really worth working hard for and not just, you
know, sitting and hoping for some sort of fairy tale karma.
Well, I just mean, so, you're absolutely right from the perspective of the individual — like, for me,
the primary thing should be to take responsibility and to build the solutions that your skill set
allows. Yeah, which is a lot. I think we often very much underestimate how much good we can do.
If you or anyone listening to this is
completely confident that our government would do a perfect job of handling any future crisis
with engineered pandemics or future AI, just look at what actually happened
in 2020. Do you feel that governments, by and large, around the world handled
this flawlessly? That's a really sad and disappointing reality that hopefully is a wake-up call for
everybody. For the scientists, for the engineers, for the researchers in AI,
especially, it was disappointing to see how inefficient we were at collecting the right amount
of data in a privacy preserving way and spreading that data and utilizing that data to make
decisions, all that kind of stuff.
Yeah, I think — when something bad happens to me... I made myself a promise many years ago that I would not be a whiner.
So when something bad happens to me, of course, I process the disappointment, but then
I try to focus on what I learned from this that can make me a better person in the future.
And there's usually something to be learned when I fail.
And I think we should all ask ourselves, what can we learn from
the pandemic about how we can do better in the future? And you mentioned there's a really
good lesson. We were not as resilient as we thought we were. And we were not as prepared,
maybe, as we wish we were. You can even see a very stark contrast around the planet. South
Korea — they have over 50 million people.
Do you know how many deaths they have had from COVID, last time I checked?
About 500. Why is that? Well, the short answer is that they had prepared.
They were incredibly quick, incredibly quick to get on it with very rapid testing
and contact tracing and so on, which is why they never had more cases than they could contact
trace effectively, right?
They never even had to have the kind of big lockdowns we had in the West.
But the deeper answer is — it's not that Koreans are just somehow better people. The reason I think they were better prepared was because they had already had a pretty bad
hit from the SARS outbreak, which never became a pandemic, something like 17 years ago,
I think.
So it was kind of a fresh memory that, you know, we need to be prepared for pandemics, so
they were, right? And so maybe this is a lesson here for all of us to draw from COVID: rather than just wait for the next pandemic or the next problem with AI getting out of control or anything else,
Maybe we should just actually set aside a tiny fraction of our GDP to have people very systematically do some horizon scanning
and say, okay, what are the things that could go wrong? And let's do it out and see which
are the more likely ones and which are the ones that are actually actionable and then be
prepared.
So one of my observations, as the one little ant slash human that I am, one disappointment, is the political division over information that I observed this year. It seemed the discussion was less about what actually happened and understanding it deeply, and more about there being different truths out there, like an argument that my truth is better than your truth, red versus blue. It was this ridiculous discourse that doesn't seem to get at any kind of notion of the truth. It's not like some kind of scientific process. Even science got politicized in ways that are very heartbreaking to me.
You have an exciting project on the AI front
of trying to rethink — you mentioned corporations — one of the other collective intelligence systems that have emerged through all of this, which is social networks, and just the spread of the internet, the spread of information on the internet, our ability to share that information, all the different kinds of news sources and so on. And so you said, let's rethink from first principles how we think about the news, how we think about information.
Can you talk about this amazing
effort that you're undertaking?
Oh, I'd love to. This has been my big COVID project, nights and weekends, ever since the lockdown. As a segue into this, actually, let me come back to what you said earlier, that you had this hope that in your experience, people who you felt were very talented were often idealistic and wanted to do good. Frankly, I feel the same about all people by and large. There are always exceptions, but I think the vast majority of everybody, regardless of education and whatnot, really is fundamentally good, right? So how can it be that people still do so much nasty stuff? I think it has everything to do with the information that we're given.
If you go back to Sweden 500 years ago and you start telling all the farmers that those Danes in Denmark are terrible people, and we have to invade them because they've done all these terrible things that you can't fact-check yourself — a lot of Swedes did that, right?
And we've seen so much of this today in the world,
both geopolitically, where we are told
that China is bad and Russia is bad and Venezuela is bad
and people in those countries are often told that we are bad.
And we also see it at a micro level, you know, where people are told that, oh, those who voted for the other party are bad people. It's not just an intellectual disagreement; they're bad people. And we're getting ever more divided. So how do you reconcile this with this intrinsic goodness in people? I think it's pretty obvious that it has, again, to do with the information that we're fed and given, right?
We evolved to live in small groups where you might know 30 people in total, right?
So you then had a system that was quite good for assessing who you could trust and who you could not.
And if someone told you that Joe there is a jerk, but you had interacted with him yourself and seen him in action, you would quickly realize that maybe that's actually not quite accurate, right? But now most of the people on the planet are people we have never met, so it's very important that we have a way of trusting the information we're given.
So, okay, so where does the news project come in? Well, throughout history, you can go read Machiavelli from the 1400s and you'll see how already then they were busy manipulating people with propaganda and stuff. Propaganda is not new at all, and the incentive to manipulate people is not new at all. What is it that's new? What's new is machine learning meets propaganda. That's what's new. That's why this has gotten so much worse. You know, some people like to blame certain individuals — like in my liberal university bubble, many people blame Donald Trump and say it was his fault. I see it differently. I think Donald Trump just had this extreme skill at playing this game in the machine learning algorithm age, a game he couldn't have played 10 years ago.
So what's changed? What's changed is, well, Facebook and Google and other companies — and I'm not badmouthing them, I have a lot of friends who work for these companies, good people — they deployed machine learning algorithms to just increase their profit a little bit, to just maximize the time people spent watching ads. And they had totally underestimated how effective they were going to be. This was, again, the black box, non-intelligible intelligence. They just noticed, oh, we're getting more ad revenue, great. It took a long time to even realize why and how, and how damaging this was for society. Because, of course, what the machine learning figured out was that by far the most effective way of gluing you to your little rectangle was to show you things that triggered strong emotions: anger, resentment, and so on. And whether it was true or not didn't really matter. It was often easier to find stories that weren't true, if you weren't limited to the truth — that's a very limiting constraint on what to show people.
And before long, we got these amazing filter bubbles on a scale we had never seen before. Couple this with the fact that the online news media were so effective that they killed a lot of print journalism. There are less than half as many journalists now in America, I believe, as there were a generation ago. You just couldn't compete with the online advertising. So all of a sudden, most people are not even reading newspapers. They get their news from social media, and most people only get news in their little bubble.
So along come people like Donald Trump, who was among the first successful politicians to figure out how to really play this new game and become very, very influential. But I think Donald Trump simply took advantage of it. He didn't create it; the fundamental conditions were created by machine learning sort of taking over the news media. So this is what motivated my little COVID project here. So, you know, I said before, machine learning and tech in general is not evil, but it's also not good. It's just a tool that you can use for good things or bad things. And as it happens, machine learning and news is mainly used by the big players, big tech, to manipulate people into watching as many ads as possible, which had this unintended consequence of really screwing up our democracy and fragmenting it into filter bubbles.
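To make the incentive being described concrete, here is a minimal toy sketch of what "maximize the time people spend watching ads" looks like as a ranking objective. The item fields and numbers are entirely made up for illustration; this is not any company's actual recommender, just the shape of the objective — note that nothing in the score rewards truthfulness or viewpoint diversity.

```python
# Toy illustration only: rank feed items purely by predicted engagement.
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    p_click: float            # model's predicted probability the user clicks (hypothetical)
    expected_watch_s: float   # predicted seconds of attention (ad time) if clicked (hypothetical)

def engagement_score(item: Item) -> float:
    # The only quantity being maximized is expected attention.
    # Truthfulness and diversity simply do not appear in the objective.
    return item.p_click * item.expected_watch_s

def rank_feed(items: list[Item]) -> list[Item]:
    return sorted(items, key=engagement_score, reverse=True)

feed = rank_feed([
    Item("Calm, accurate policy explainer", p_click=0.02, expected_watch_s=40.0),
    Item("Outrage-inducing (and false) claim about the other side", p_click=0.15, expected_watch_s=90.0),
])
print([item.headline for item in feed])  # the outrage item ranks first under this objective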
So I thought, well, machine learning algorithms are basically free.
They can run on your smartphone for free also if someone gives them away to you, right?
There's no reason why they only have to help the big guy to manipulate the little guy.
They can just as well help the little guy to see through all the manipulation attempts
from the big guy.
So I did this project. It's called — you can go to improvethenews.org. The first thing we've built is this little news aggregator. It looks a bit like Google News, except it has these sliders on it to help you break out of your filter bubble. So if you're reading, you can click and go to your favorite topic, and then if you just slide the left-right slider all the way over to the left —
There are two sliders, right?
Yeah, there are two. The most obvious one is the one that has left and right labeled on it. You go to the left, you get one set of articles; you go to the right, you see a very different truth appearing.
Oh, so that's literally left and right on the political spectrum.
On the political spectrum.
Yeah. So if you're reading about immigration, for example, it's very, very noticeable.
And I think step one, always, if you want to not get manipulated, is just to be able to recognize the techniques people use.
So it's very helpful to see how they spin things on the two
sides.
I think many people are under the misconception that the main
problem is fake news.
It's not.
I had an amazing team of MIT students with whom we did an academic project over the summer, using machine learning to detect the main kinds of bias. Yes, of course, sometimes there's fake news, where someone just claims something that's false, right? Like, Hillary Clinton just got divorced or something.
Yes.
But what we see much more of is actually just omissions. There are some stories which just won't be mentioned by the left or the right because they don't suit their agenda, and then they'll mention other ones very, very much.
So, for example, we've had a number of stories about the Trump family's financial dealings, and then there's been a bunch of stories about the Biden family's — Hunter Biden's — financial dealings. And surprise, surprise, they don't get equal coverage on the left and the right. One side loves to cover the Hunter Biden stuff, and one side loves to cover the Trump stuff — you'll never guess which is which, right? But the great news is, if you're a normal American citizen and you dislike corruption in all its forms, then — slide, slide — you can just look at both sides and you'll see all those political corruption stories.
It's really liberating to just take in both sides, the spin on both sides. It somehow unlocks your mind to think on your own, to realize that — I don't know, it's the same thing that was useful in the Soviet Union times, when everybody was much more aware that they were surrounded by propaganda.
Right.
It's so interesting what you're saying, actually. Noam Chomsky, who used to be my MIT colleague, once said that propaganda is to democracy what violence is to totalitarianism. And what he means by that is, if you have a really totalitarian government, you don't need propaganda. People will do what you want them to do anyway, out of fear, right? But otherwise, you need propaganda. So I would say actually that the propaganda is of much higher quality in democracies — more believable. It's really striking when I talk to colleagues, science colleagues from Russia and China and so on: I notice they are actually much more aware of the propaganda in their own media than many of my American colleagues are about the propaganda in Western media.
That's brilliant. That means the propaganda in the Western media is just better.
Yes.
That's so brilliant. Even the propaganda is better in the West. Oh, man.
But once you realize that, you realize there's also something optimistic you can do about it, right? Because, first of all, with omissions, as long as there's no outright censorship, you can just look at both sides and pretty quickly piece together a much more accurate idea of what's actually going on, right?
And develop a natural skepticism too.
Yeah.
A sort of analytical, scientific mind about how you take in information.
Yeah.
And I have to say, sometimes I feel that some of us in the academic bubble are too arrogant about this and somehow think, oh, it's just people who aren't as educated who get fooled, when we are often just as gullible also, you know. We read only our own media and don't see through things. Anyone who looks at both sides like this and compares will immediately start noticing the shenanigans being pulled. And I think what I tried to do with this app — you know, big tech has, to some extent, tried to blame the individual for being manipulated, much like big tobacco tried to blame the individual entirely for smoking. And then later on, you know, our government stepped up and said, actually, you can't just blame little kids for starting to smoke; we have to have more responsible advertising and this and that. I think it's a bit the same here. It's very convenient for big tech to say it's just people who are so dumb that they get fooled. The blame usually comes in the form of saying, oh, it's just human psychology; people just want to hear what they already believe. But Professor David Rand at MIT actually partly debunked that with a really nice study showing that people tend to be interested in hearing things that go against what they believe, if it's presented in a respectful way.
Suppose, for example, that you have a company and you're just about to launch this project, and you're convinced it's going to work, and someone says, you know, Lex, I hate to tell you this, but this is going to fail, and here's why. Would you be like, shut up, I don't want to hear it, la la la la la? Would you? You would be interested, right?
Yes.
And also, if you're on an airplane, back in the good pre-COVID times, you know,
and the guy next to you is clearly from the opposite side of the political spectrum,
but is very respectful and polite to you.
Wouldn't you be kind of interested to hear a bit about how he or she thinks about things?
Of course.
But it's not so easy to find that respectful disagreement now, because, for example, if you are a Democrat and you're like, I want to see something from the other side, so you go to breitbart.com, then after the first 10 seconds you feel deeply insulted by something, and it's not going to work. Or if you take someone who votes Republican and they go to something on the left, they just get very offended very quickly by them having put a deliberately ugly picture of Donald Trump on the front page or something. It doesn't really work.
So this news aggregator also has a nuance slider, which you can pull to the right to make it easier to get exposed to more sort of academic-style, or more respectful —
That's brilliant.
— portrayals of different views. And finally, the one kind of bias I think people are mostly aware of is the
left right, because it's so obvious, because both left and right are very powerful here,
right? Both of them have well-funded TV stations and newspapers, and it's kind of hard to
miss. But there's another one,
the establishment slider, which is also really fun. I love to play with it. And that's
more about corruption. Yes. Because if you have a society where almost all the powerful
entities want you to believe a certain thing, that's what you're going to read — in both the big mainstream media on the left and on the right, of course. And powerful companies can push back very hard, like tobacco companies pushed back very hard back in the day when some newspapers started writing articles about tobacco being dangerous, so it was hard to get a lot of coverage about it initially.
And also, if you look geopolitically, of course, in any country, when you read their media, you're mainly going to be reading a lot of articles about how our country is the good guy and the other countries are the bad guys, right? So if you want to have a really more nuanced understanding — you know, the Germans used to be told that the French were the bad guys, and the French used to be told that the Germans were the bad guys. Now they both visit each other's countries a lot and have a much more nuanced understanding. I don't think there's going to be any more wars between France and Germany. But on the geopolitical scale, it's just as bad as ever, you know — a big Cold War now, US, China, and so on.
And if you want to get a more nuanced understanding of what's happening geopolitically, then it's really fun to look at this establishment slider, because it turns out there are tons of little newspapers, both on the left and on the right, who sometimes challenge the establishment and say, maybe we shouldn't actually invade Iraq right now. Maybe this weapons of mass destruction thing is BS.
If you look at the journalism research afterwards, you can actually see that quite clearly. Both CNN and Fox were very pro "let's get rid of Saddam, there are weapons of mass destruction." Then there were a lot of smaller newspapers that were like, wait a minute, this evidence seems a bit sketchy, and maybe we shouldn't. But of course, they were so hard to find that most people didn't even know they existed, right? Yet it would have been better for American national security if those voices had also come up. I think it actually harmed America's national security that we invaded Iraq.
And I would say there's a lot more interest in that kind of thinking too, from those small sources. So when you say big, it's more about the reach of the broadcast, but it's not big in terms of the interest. I think there's a lot of interest in that kind of anti-establishment, skeptical, out-of-the-box thinking.
Do you see this news project, or something like it, basically taking over the world as the main way we consume information? How do we get there? So, okay, the idea is brilliant. You're calling it your little project in 2020, but how does that become the new way we consume information?
I hope, first of all, to just plant a little seed there, because normally the big barrier to doing anything in media is that you need a ton of money, but this costs no money at all. I've just been paying for it myself; you pay a tiny amount of money each month to Amazon to run the thing in their cloud. There will never be any ads. The point is not to make any money off of it. And we just train machine learning algorithms to classify the articles and stuff, so it just kind of runs by itself. So if it actually gets good enough at some point that it sort of catches on, it could scale.
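The conversation doesn't spell out the pipeline, but the general idea — train a model to estimate each article's lean, then let a user-facing slider decide what gets shown — can be sketched roughly as below. The training data, thresholds, and slider scale here are hypothetical; the real improvethenews.org system may work quite differently.

```python
# A minimal toy sketch (not the actual improvethenews.org pipeline) of
# classifying article lean and filtering by a user-controlled slider.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled training data: -1.0 = left-leaning outlet, +1.0 = right-leaning.
train_texts = ["example article text from a left-leaning outlet ...",
               "example article text from a right-leaning outlet ..."]
train_leans = [-1.0, 1.0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_texts)
model = LogisticRegression().fit(X, [lean > 0 for lean in train_leans])

def estimated_lean(article_text: str) -> float:
    """Map the classifier's probability of 'right-leaning' to a -1..+1 lean score."""
    p_right = model.predict_proba(vectorizer.transform([article_text]))[0, 1]
    return 2.0 * p_right - 1.0

def filter_by_slider(articles: list[str], slider: float, width: float = 0.5) -> list[str]:
    """slider in [-1, +1]: keep articles whose estimated lean is near the slider position."""
    return [a for a in articles if abs(estimated_lean(a) - slider) <= width]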
And if other people carbon copy it and make other versions that are better, the more the merrier. I think there's a real opportunity for machine learning to empower the individual against the powerful players.
As I said in the beginning here,
it's been mostly the other way around so far,
that the big players have the AI and then they tell people,
this is the truth, this is how it is.
But it can just as well go the other way around.
When the internet was born, actually, a lot of people had this hope that maybe this would be a great thing for democracy, make it easier to find out about things, and maybe machine learning and things like this can actually help again.
And I have to say, I think it's more important than ever now, because this is very linked
also to the whole future of life as we discussed earlier.
We're getting this ever more powerful tech.
It's pretty clear, if you look on the one or two generation,
three generation timescale that there are only two ways
this can end geopolitically.
Yeah.
Either it ends great for all humanity or it ends terribly
for all of us.
There's really no in-between.
And we're so stuck in that, because technology knows no borders, and you can't have people fighting while the weapons just keep getting ever more powerful indefinitely — eventually the luck runs out. And right now — I love America, but the fact of the matter is, what's good for America is not opposite in the long term to what's good for other countries. It would be if this were some sort of zero-sum game, like it was thousands of years ago, when the only way one country could get more resources was to take land from other countries, because land was basically the resource. Look at the map of Europe: some countries kept getting bigger and smaller, endless wars. But since 1945, there hasn't been any war in Western Europe, and they all got way richer because of tech.
So the optimistic outcome is that the big winner in this century is going to be America and China and Russia and everybody else, because technology just makes us all healthier and wealthier, and we just find some way of keeping the peace on this planet. But I think, unfortunately, there are some pretty powerful forces right now that are pushing in exactly the opposite direction and trying to demonize other countries, which just makes it more likely that this ever more powerful tech we're building is gonna be used in disastrous ways.
Yeah, for aggression versus cooperation, that kind of thing.
Yeah, even look at just military AI now, right? In 2020, it was so awesome to see these dancing robots. I loved it, right? But one of the biggest growth areas in robotics now is, of course, autonomous weapons. And 2020 was like the best marketing year ever for autonomous weapons, because in both Libya — in its civil war — and in Nagorno-Karabakh, they made the decisive difference. And everybody else is watching this.
Oh yeah, we want to build autonomous weapons too. In Libya, you had on one hand our ally, the United Arab Emirates, that were flying autonomous weapons they bought from China, bombing Libyans. And on the other side, you had our other ally, Turkey, flying their drones. None of these other countries had any skin in the game, and of course it was the Libyans who really got screwed. In Nagorno-Karabakh, you had, again, Turkey sending drones built by this company that was actually founded by a guy who went to MIT AeroAstro. Did you know that?
Bayraktar, yeah.
So MIT has a direct responsibility for ultimately this.
And a lot of civilians were killed there.
And so because it was militarily so effective, now suddenly
there's a huge push.
Oh yeah, yeah, let's go build ever more autonomy
into these weapons.
And it's going to be great.
And I think, actually, people who are obsessed with some sort of future Terminator scenario right now should start focusing on the fact that we have two much more urgent threats happening from machine learning. One of them is the whole destruction of democracy that we've talked about, where our flow of information is being manipulated by machine learning. And the other one is that right now, you know, this is the year when the big out-of-control arms race in, at least, autonomous weapons is going to start — or it's going to stop.
So you have a sense that 2020 was an instrumental catalyst for the autonomous weapons race.
Yeah, because it was the first year when they proved decisive on the battlefield. And these ones are still not fully autonomous — mostly they're remote-controlled, right? But, you know, we could very quickly make things about the size and cost of a smartphone, where you just put in the GPS coordinates, or the face of the one you want to kill, a skin color, or whatever, and it flies away and does it.
The really good reason why the US and all the other superpowers should put the kibosh on this is the same reason we decided to put the kibosh on bioweapons. You know, we gave the Future of Life Award, which we can talk more about later, to Matthew Meselson from Harvard for convincing Nixon to ban bioweapons, and I asked him, how did you do it? And he was like, well, I just said, look, we don't want there to be a $500 weapon of mass destruction that all our enemies can afford, even non-state actors. And Nixon was like, good point. You know, it's in America's interest that the powerful weapons are all really expensive, so only we can afford them, or maybe some more stable adversaries, right? Nuclear weapons are like that. But bioweapons were not like that. That's why we banned them. And that's why you never hear about them now. That's why we love biology.
So you have a sense that it's possible for the big powerhouses in terms of the big nations in
the world to agree that autonomous weapons
is not a race we want to be on.
It doesn't end well.
Yeah, because we know it's just going to end in mass proliferation, and every terrorist everywhere is going to have these super cheap weapons that they will use against us. Our politicians would have to constantly worry about being assassinated every time they go outdoors by some anonymous little mini drone. We don't want that. And even if the US and China and everyone else could just agree that you can only build these weapons if they cost at least 10 million bucks, that would be a huge win for the superpowers, and frankly for everybody. People often push back and say, well, it's so hard to prevent cheating. But hey, you can say the same about bioweapons. Take any of your MIT colleagues in biology: of course, they could build some nasty bioweapon if they really wanted to. But first of all, they don't want to, because they think it's disgusting — because of the stigma. And second, even if there's some sort of nutcase who wants to, it's very likely that some other grad student or someone would rat them out, because everyone else thinks it's so disgusting. And in fact, we now know there was even a fair bit of cheating on the bioweapons ban, but no countries used them, because it was so stigmatized that it just wasn't worth revealing that they had cheated.
You talk about drones, but people kind of think of drones as remote operation.
Which they are, mostly, still.
But they're not taking the next intellectual step of, like, where does this go? The usual criticism is that the problem with drones is that you're removing yourself from direct violence, and therefore you're not able to maintain the common humanity required to make the proper decisions strategically. But that's the criticism of remote operation, as opposed to: if this is automated, and exactly as you said, if you automate it and there's a race, then the technology is going to get better and better and better, which means getting cheaper and cheaper and cheaper. And unlike perhaps nuclear weapons, which are tied to resources that are hard to get and hard to engineer, it feels like there's too much overlap between the tech industry and autonomous weapons for it not to have that smartphone type of cheapness. If you look at drones, for $4,000 you can have an incredible system that's able to maintain flight autonomously for you and take pictures and stuff. You can see that going into the autonomous weapons space.
But why is that not thought about or discussed enough in the public, do you think? You see those dancing Boston Dynamics robots, and everybody has this kind of fear, as if this is the far future, like, oh, this will be Terminator in some, I don't know, unspecified 20, 30, 40 years. And they don't think about the fact that a much less dramatic version of that is actually happening now. It's not gonna be legged, it's not gonna be dancing, but it already has the capability to use artificial intelligence to kill humans.
Yeah. The Boston Dynamics legged robots — I think the reason we imagine them holding guns is just because we've all seen Arnold Schwarzenegger, right? That's our reference point. That's pretty useless. That's not going to be the main military use of them. They might be useful in law enforcement in the future, and there's a whole debate about whether you want robots showing up at your house with guns, perfectly obedient to whatever dictator controls them. But let's leave that aside for a moment and look at what's actually relevant now. So there's a spectrum of things you can do with AI in the military.
And again, to put my cards on the table, I'm not a pacifist. I think we should have a good defense. So, for example, a Predator drone is basically a fancy little remote-controlled airplane. There's a human piloting it, and the decision about whether to ultimately kill somebody with it is still made by a human. And this is a line I think we should never cross. The current DoD policy is, again, that you have to have a human in the loop. I think algorithms should never make life-or-death decisions. They should be left to humans.
Well, first of all, these are expensive, right?
So for example, when Arsabah John had all these drones and Armenia didn't have any, they
start trying to jerry-rig little cheap things, fly around.
But then of course, Armenians would jam them.
Or the Azarees would jam them.
Remote control things can be jammed.
That makes them inferior.
Also, there's a bit of a time delay between, you know, if we're piloting something before
our way, speed of light, and the human has a reaction time as well, it would be nice
to eliminate that jamming possibility in the time delay by having it fully autonomous.
But now you might be crossing that exact line. You might program it to just, oh yeah, little drone, go hover over this country for a while, and whenever you find someone who is a bad guy, kill them. Now the machine is making these sorts of decisions. And some people who defend this still say, well, that's morally fine, because we are the good guys, and we will tell it the definition of bad guy that we think is moral. But it would be very naive to think that if ISIS buys that same drone, they're gonna use our definition of bad guy. Maybe for them, a bad guy is someone wearing a US Army uniform.
Right.
Or maybe there will be some weird ethnic group who decides that someone of another ethnic group is the bad guy, right? The thing is, human soldiers, with all of our faults, still have some basic wiring in us, like: no, it's not okay to kill kids and civilians. An autonomous weapon has none of that. It's just gonna do whatever it's programmed to do. It's like the perfect Adolf Eichmann on steroids. They told him, Adolf Eichmann, we want you to do this and this and this to make the Holocaust more efficient, and he was like, yeah, and off he went and did it, right?
Yeah.
Do we really want to make machines that are like that — completely amoral, that will take the user's definition of who is the bad guy? And do we then want to make them so cheap that all our adversaries can have them? Like, what could possibly go wrong?
Those are, I think, the big arguments for why we want to really put the kibosh on this, this year. And I think you can tell there's a lot of very active debate going on, even within the US military, and undoubtedly in other militaries around the world also, about whether we should have some sort of international agreement to at least require that these weapons have to be above a certain size and cost, so that things just don't totally spiral out of control. And finally, to your question: is it possible to stop it? Because some people tell me, oh, just give up, you know. But again, Matthew Meselson from Harvard, the bioweapons hero, faced exactly this criticism with the bioweapons. People were like, how can you check for sure that the Russians aren't cheating? And he told me this, I think, really ingenious insight. He said, you know, Max, some people think you have to have inspections and things, and you have to make sure you can catch the cheaters with a hundred percent chance. You don't need a hundred percent, he said; one percent is usually enough. Because if it's another big state — suppose China and the US have signed the treaty, drawing a certain line and saying, yeah, these kinds of drones are okay, but these fully autonomous ones are not. Now suppose you are China, and you have cheated and secretly developed some clandestine little thing, or you're thinking about doing it. What's the calculation you do? Well, you're like, okay,
what's the probability that we're going to get caught?
If the probability is 100%, of course, we're not going to do it. But even if the probability is only 5% that we're going to get caught, it's going to be a huge embarrassment for us. And we still have our nuclear weapons anyway, so it doesn't really make any enormous difference in terms of deterring the US.
And that feeds the stigma that you kind of establish — this fabric, this universal stigma over the thing.
Exactly. It's very reasonable for them to say, well, we'd probably get away with it, but if we don't, then the US will know we cheated, and then they're going to go full tilt with their program and say, look, the Chinese are cheaters, and now we'll have all these weapons against us, and that's bad. So the stigma alone is very, very powerful.
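Meselson's "one percent is usually enough" point is essentially expected-value arithmetic: when the gain from cheating is modest and the cost of being exposed is enormous, even a small probability of getting caught makes cheating a bad bet. A back-of-the-envelope sketch, with purely illustrative numbers:

```python
# Illustrative deterrence arithmetic; the "strategic value" units are made up.
def expected_value_of_cheating(p_caught: float, gain: float, cost_if_caught: float) -> float:
    return (1 - p_caught) * gain - p_caught * cost_if_caught

# A modest military edge (gain = 1) versus a huge cost of exposure (stigma,
# treaty collapse blamed on you, adversaries going full tilt): cost = 100.
print(expected_value_of_cheating(p_caught=0.05, gain=1.0, cost_if_caught=100.0))  # -4.05 < 0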
And again, look what happened with bioweapons, right? It's been 50 years now. When was the last time you read about a bioterrorism attack? The only deaths I really know about from bioweapons have happened when we Americans managed to kill some of our own with anthrax — the idiot who sent it to Tom Daschle and others in letters. And similarly, in the Soviet Union, they had some anthrax in some lab there — maybe they were cheating, who knows — and it leaked out and killed a bunch of Russians. I'd say that's a pretty good success, right? Fifty years, just two own goals by the superpowers, and then nothing.
And that's why, whenever I ask anyone what they think about biology, they think it's great. They associate it with new cures for diseases, maybe a good vaccine. This is how I want us to think about AI in the future — as a source of all these great solutions to our problems, not as, oh, AI, that's the reason I feel scared going outside these days.
Yeah, it's kind of brilliant that with bioweapons and nuclear weapons — I mean, of course, they're still a huge source of danger, but we figured out some way of creating rules and social stigma over these weapons that then creates stability, whatever game-theoretic stability occurs.
Exactly.
And we don't have that with AI. And you're kind of screaming from the top of the mountain about this, that we need to find that. Because, as the Future of Life Institute awards point out, with nuclear weapons we could have destroyed ourselves quite a few times.
And it's, you know, a learning experience that is very costly. We gave this Future of Life Award the first time to Vasili Arkhipov. Most people haven't even heard of him.
Yeah, can you say who he is?
Vasili Arkhipov has, in my opinion, made the greatest positive contribution to humanity of any human in modern history. And maybe it sounds like hyperbole here, like I'm just over the top, but let me tell you the story, and I think maybe you'll agree. So during the Cuban Missile Crisis,
we Americans first didn't know that the Russians had sent four submarines, but we caught two of them, and we dropped practice depth charges on the one that he was on, trying to force it to the surface. But we didn't know that this submarine was actually a nuclear submarine with a nuclear torpedo. We also didn't know that they had authorization to launch it without clearance from Moscow.
And we also didn't know that they were running out of electricity — the batteries were almost dead — and they were running out of oxygen. Sailors were fainting left and right. The temperature was about 110 to 120 Fahrenheit on board. It was really hellish conditions, really kind of doomsday. And at that point, these giant explosions start happening from the Americans dropping these depth charges. The captain thought World War III had begun. They decided they were going to launch the nuclear torpedo, and one of them shouted, you know, we're all going to die, but we're not going to disgrace our Navy. We don't know what would have happened if there had been a giant mushroom cloud all of a sudden against the Americans, but since everybody had their hands on the triggers, you don't have to be too creative to think that it could have led to an all-out nuclear war, in which case we wouldn't be having this conversation now, right?
What actually took place was that they needed three people to approve this. The captain had said yes; there was the Communist Party political officer, who also said yes, let's do it; and the third man was this guy, Vasili Arkhipov, who said no. For some reason, he was just more chill than the others, and he was the right man at the right time. I don't want us as a species to rely on the right person being there at the right time. We tracked down his family, living in relative poverty outside Moscow. He had passed away, so we flew his daughter and the family to London — they had never even been to the West — and it was incredibly moving to get to honor them for this.
The next year, we gave this Future of Life Award to Stanislav Petrov. Have you heard of him?
Yes.
So he was in charge of a Soviet early warning station, which was built with Soviet technology and honestly not that reliable. It said that there were five US missiles coming in. Again, if they had launched at that point, we probably wouldn't be having this conversation. He decided, based mainly on gut instinct, to not escalate. And I'm very glad he wasn't replaced by an AI that was just automatically following orders.
And then we gave the third one to Matthew Meselson. Last year, we gave this award to these guys who actually used technology for good — not avoiding something bad, but doing something good: the guys who eliminated a disease way worse than COVID, one that had killed half a billion people in its final century — smallpox. As you mentioned earlier, COVID on average kills less than 1% of the people who get it; smallpox, about 30%. Ultimately, it was Viktor Zhdanov and Bill Foege — most of my colleagues have never heard of either of them, one Russian, one American — who did this amazing effort.
Not only was Zhdanov able to get the US and the Soviet Union to team up against smallpox during the Cold War, but Bill Foege came up with this ingenious strategy for going all the way to defeating the disease without funding for vaccinating everyone. And as a result, we went from 15 million smallpox deaths the year I was born — and what do we have with COVID now, a little bit short of two million, right?
Yes.
— to zero deaths, of course, this year, and forever. And there's an estimate that 200 million people would have died since then from smallpox had it not been for this.
So isn't science awesome?
Yeah, that's when you use it for good.
And the reason we want to celebrate these sorts of people is to remind ourselves of this:
Science is so awesome when you use it for good.
And those awards — actually, the variety there paints a very interesting picture. So the first two: it's kind of exciting to think that these are average humans, in some sense, products of billions of other humans that came before them, of evolution — and some little thing, you said gut, but there's something in there that stopped the annihilation of the human race. That's a magical thing, and it's a deeply human thing. And then there's the other aspect, which is also very human, which is to build solutions to the existential crises we're facing: to build them, to take responsibility, to come up with different technologies and so on. And both of those are deeply human — the gut and the mind, whatever that is.
The best is when they work together. Arkhipov — I wish I could have met him, of course, but he had passed away. He was really a fantastic military officer, combining all the best traits that we in America admire in our military. Because, first of all, he was very loyal, of course. He never even told anyone about this during his whole life, even though you'd think he had some bragging rights, right? But he just was like, this is just business, just doing my job. It only came out later, after his death. And second, the reason he did the right thing was not because he was some sort of liberal, or some sort of, you know, peace-and-love type. It was partly because he had been the captain on another submarine that had a nuclear reactor meltdown, and it was his heroism that helped contain it. That's why he died of cancer later, also. He had seen many of his crew members die, and I think that gave him this gut feeling that, you know, if there's a nuclear war between the US and the Soviet Union, the whole world is going to go through what I saw my dear crew members suffer. It wasn't just an abstract thing for him; I think it was real. And then, not just the gut but also the mind, right? He was, for some reason, a very level-headed personality and a very smart guy, which is exactly what we want our best fighter pilots to be also, right? I'll never forget Neil Armstrong, when he's landing on the moon and almost running out of fuel, and they say 30 seconds, and he doesn't even change the tone of his voice — just keeps going. Arkhipov, I think, was just like that.
So when the explosions start going off and his captain is screaming, we should nuke them, and all that, he's like, I don't think the Americans are trying to sink us. I think they're trying to send us a message.
That's pretty badass.
Yes, coolness. Because he said, if they wanted to sink us, they would have sunk us. And he said, listen, listen, it's alternating: one loud explosion on the left, one on the right, one on the left, one on the right. He was the only one to notice this pattern. And he was like, I think this is them trying to send us a signal that they want us to surface, and they're not going to sink us. And somehow, this is how he then, ultimately, with this combination of gut and also just cool analytical thinking, was able to deescalate the whole thing.
And, yeah, so this is the best in humanity. I guess, coming back to what we talked about earlier, it's the combination of the neural network, the instinctive — you know, I'm tearing up here, getting emotional — but he is just one where the gut, the heart, and the mind combined.
And especially in that time — I mean, in America, people are used to this kind of idea of being an individual, of thinking on your own.
Yeah.
I think in the Soviet Union, under communism, it was much harder to do that.
Oh, yeah. He didn't get any accolades either when he came back for this, right? They just wanted to hush the whole thing up.
Yeah. There are echoes of that which are noble. That's a really hopeful thing: that amidst big centralized powers, whether it's companies or states, there's still the power of the individual to think on their own, to act.
But I think we need to think of people like this not as a panacea we can always count on, but rather as a wake-up call, you know. Because of them, because of Arkhipov, we are alive to learn from this lesson — to learn from the fact that we shouldn't keep playing Russian roulette and almost having a nuclear war by mistake now and then, because relying on luck is not a good long-term strategy. If you keep playing Russian roulette over and over again, the probability of surviving just drops exponentially with time. And if you have some probability of having an accidental nuclear war every year, the probability of not having one also drops exponentially.
I think we can do better than that.
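The arithmetic behind that is simple: with some fixed annual probability of an accidental nuclear war, the chance of getting through many years unscathed decays exponentially. The numbers below are illustrative only, since the true annual risk is unknown.

```python
# Exponential decay of "getting lucky every year" at a fixed annual risk.
def p_survive(annual_risk: float, years: int) -> float:
    return (1.0 - annual_risk) ** years

# Purely illustrative: a 1% annual chance of an accidental nuclear war.
for years in (10, 50, 100, 500):
    print(years, round(p_survive(0.01, years), 3))
# 10 -> 0.904, 50 -> 0.605, 100 -> 0.366, 500 -> 0.007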
So I think the message is very clear: once in a while, shit happens, and there are a lot of very concrete things we can do to reduce the risk of things like that happening in the first place.
On the AI front, if we can just linger on that for a second: you're friends with Elon Musk, you often talk with him, you have a history together, you've done a lot of interesting things together. He has a set of fears about the future of artificial intelligence, AGI. We've already talked about the things we should be worried about with AI, but do you have a sense of the shape of his fears in particular — which subset of what we've talked about? Is it the unexplainable, unintelligible intelligence? And then, as a branch of that, is it the manipulation of that by big corporations, or individual evil people using it for destruction, or the unintentional consequences? Do you have a sense of where his thinking is on this?
From my many conversations with Elon, yeah, I certainly have a model of how
he thinks. It's actually very much like the way I think also. I'll elaborate on it a bit. I just
want to push back on when you said evil people. I don't think it's a very helpful concept. Evil people.
Sometimes people do very, very bad things,
but they usually do it because they think it's a good thing.
Yes. Because somehow other people have told them that that was a good thing or
given them incorrect information or whatever, right?
I believe in the fundamental goodness of humanity: that if we educate people well and they find out how things really are, people generally want to do good and be good.
Hence the value alignment. It's about information, about knowledge, and then once we have that, we'll likely be able to do good in ways that are aligned with everybody else.
Yeah, and it's not just the individual people we have to align. We don't just want people to be educated, to know the way things actually are, and to treat each other well; we also need to align other non-human entities. We talked about corporations — there also have to be institutions, so that what they do is actually good for the country they're in, and we should make sure that what countries do is actually good for the species as a whole, etc.
Coming back to Elon, yeah, my understanding of how Elon sees this is really quite similar to my own, which is one of the reasons I like him so much and enjoy talking with him so much. I feel he's quite different from most people in that he thinks much more than most people about the really big picture — not just what's going to happen in the next election cycle, but in millennia, millions and billions of years from now. And when you look from this more cosmic perspective, it's so obvious that we are gazing out into a universe that, as far as we can tell, is mostly dead, with life being an almost imperceptibly tiny perturbation, right? And he sees this enormous opportunity for our universe to come alive — first, for us to become an interplanetary species. Mars is obviously just the first stop on this cosmic journey. And precisely because he thinks more long-term, it's much more clear to him than to most people that what we're doing with this Russian roulette thing we keep playing with our nukes is a really poor strategy, a really reckless strategy. And also that we're just building these ever more powerful AI systems that we don't understand — that's also a really reckless strategy.
I feel Elon is a humanist in the sense
that he wants an awesome future for humanity.
He wants it to be us that control the machines,
rather than the machines that control us. Yes.
And why shouldn't we insist on that?
We're building them after all, right?
Why should we build things that just make us
into some little cog in the machinery
that has no further say in the matter, right?
It's not my idea of an inspiring future either.
Yeah, if you think on the cosmic scale
in terms of both time and space, so much is put into perspective.
Yeah. Whenever I have a bad day, that's what I think about. It immediately makes me feel better.
It makes me sad that for us individual humans, at least for now, the ride ends too quickly. We don't get to experience the cosmic scale.
Yeah, I mean, I think of our universe sometimes as an organism that has only begun to wake up a tiny bit — just like those very first little glimmers of consciousness you have in the morning when you start coming to, before the coffee, even before you get out of bed, before you even open your eyes. You start to wake up a little bit; there's something here. That's very much how I think of what we are.
All those galaxies out there — I think they're really beautiful. But why are they beautiful? They're beautiful because conscious entities are actually observing them, experiencing them through our telescopes. I define consciousness as subjective experience, whether it be colors or emotions or sounds; so beauty is an experience, meaning is an experience, purpose is an experience. If there were no conscious experience observing these galaxies, they wouldn't be beautiful. If we do something dumb with advanced AI in the future here, and Earth-originating life goes extinct, and that was it — if there is nothing else with telescopes in our universe — then it's kind of game over for beauty and meaning and purpose in our whole universe, and I think that would be just such an opportunity lost, frankly. And when Elon points this out, he gets very unfairly maligned in the media, for all the dumb media bias reasons we talked about, right? They want to print precisely the things about Elon, out of context, that are really clickbaity. Like, he has gotten so much flak for this "summoning the demon" statement.
But I happen to know exactly the context, because I was in the front row when he gave that talk. It was at MIT — you'll be pleased — at the AeroAstro anniversary. They had Buzz Aldrin there from the moon landing, a whole house, Kresge Auditorium, packed with MIT students. And he had this amazing Q&A that went on for an hour; they talked about rockets and Mars and everything. At the very end, this one student, who was actually in my class, asked him, what about AI? Elon makes this one comment about summoning the demon, and they take it out of context, print it, and it goes viral — and they try to cast him as some sort of doom-and-gloom dude. You know Elon — he's not a doom-and-gloom dude. He is such a positive visionary. And the whole reason he warns about this is because he realizes, more than most, what the opportunity cost of screwing up is — that there is so much awesomeness in the future that we and our descendants can enjoy if we don't screw up, right? I get so pissed off when people try to cast him as some sort of technophobic Luddite.
And at this point, it's kind of ludicrous when I hear people say that people who worry about artificial general intelligence are Luddites, because of course, if you look more closely, some of the most outspoken people making these warnings are people like Professor Stuart Russell from Berkeley, who's written the best-selling AI textbook, you know. So claiming that he is a Luddite who doesn't understand AI — the joke is really on the people who say it. But I think, more broadly, this message has really not sunk in at all, of what it is that people actually worry about. They think that Elon and Stuart Russell and others are worried about the dancing robots picking up an AR-15 and going on a rampage, right? They think they're worried about robots turning evil. They're not. I'm not. The risk is not malice, it's competence. The risk is just that we build some systems that are incredibly competent, which means they're always gonna get their goals accomplished, even if they clash with our goals. That's the risk.
Why did we humans drive the West African black rhino extinct? Is it because we're malicious, evil rhinoceros haters? No, it's just because our goals didn't align with the goals of those rhinos, and tough luck for the rhinos. So the point is, we don't want to put ourselves in the position of those rhinos by creating things more powerful than us if we haven't first figured out how to align the goals.
And I am optimistic. I think we could do it if we worked really hard on it, because I spent a lot of time around intelligent entities that were more intelligent than me — my mom and my dad — when I was little, and that was fine, because their goals were actually aligned with mine quite well. But we've seen today many examples where the goals of our powerful systems are not so aligned. Those click-through optimization algorithms that polarized social media, right? They were actually pretty poorly aligned with what was good for democracy, it turned out. And again, almost all the problems we've had with machine learning so far came not from malice but from poor alignment. And that's exactly why we should be concerned about it in the future.
Do you think it's possible that with systems like Neuralink and brain-computer interfaces — again, thinking on the cosmic scale; Elon's talked about this, but others have as well throughout history — we could figure out the exact mechanism of how to achieve that kind of alignment? So one of them is having a symbiosis with AI, coming up with clever ways where we're stuck together in this weird relationship, whether it's biological or in some other way. Do you think there's a possibility of having that kind of symbiosis? Or do we want to instead focus on distinct entities — us humans talking to these intelligible, self-doubting AIs, maybe like Stuart Russell thinks about it: we're self-doubting and full of uncertainty, our AI systems are full of uncertainty, and we communicate back and forth and in that way achieve alignment?
I honestly don't know. I would say that we don't know for sure which, if any, of our ideas will work. But I'm pretty convinced that if we don't get any of these things to work and just barge ahead, then our species is, you know, probably going to go extinct this century.
You think it's this century? You think we're facing this crisis as a 21st-century crisis? Like, this century will be remembered —
On a hard drive somewhere.
On a hard drive somewhere, or maybe by future generations as, like — there will be Future of Life Institute awards for people that have done something about AI.
It can also go even worse, where we're not superseded by leaving any AI behind either — we're just totally wiped out, you know, like on Easter Island. Our century is long; there are still 79 years left of it.
Right? And think about how far we've come just in the last 30 years. So we can talk more about what might go wrong, but you asked me this really good question about what the best strategy is. Is it Neuralink, or Russell's approach, or whatever? I think, you know, when we did the Manhattan Project, we didn't know if any of our four ideas for enriching uranium and getting out the uranium-235 were going to work, but we felt this was really important to get it before Hitler did. So you know what we did? We tried all four of them. Here, I think it's analogous. This is the greatest threat that's ever faced our species — and of course US national security, by implication. We don't have any method that's guaranteed to work, but we have a lot of ideas, so we should invest pretty heavily in pursuing all of them with an open mind, in the hope that at least one of them works. The good news is the century is long, you know, and it might take decades until we have artificial general intelligence, so we have some time, hopefully. But it takes a long time to solve these very, very difficult problems. It's actually going to be the most difficult problem we've ever tried to solve as a species, so we have to start now. We don't want to begin thinking about it the night before some people who've had too much Red Bull switch it on. And, coming back to your question, we have to pursue all of these different avenues and see.
If you were my investment advisor and I was trying to invest in the future, how do you think the
human species is most likely to destroy itself in this century? Yeah, so if many of the crises we're facing
are really before us within the next 100 years, how do we make the unknowns known and solve those problems
to avoid the biggest existential crises, starting with the biggest one?
So as your investment advisor, how are you planning to make money on us destroying
ourselves? I have to ask. I don't know. It might be the Russian origins.
Somehow it's involved.
At the micro level of detailed strategies, of course, these are unsolved problems.
For AI alignment, we can break it into three sub-problems that are all unsolved.
I think you want first to make machines understand our goals, then adopt our goals, and then retain our goals.
So let me hit on all three of them quickly.
The problem when Andreas Lubitz told his autopilot
to fly into the Alps was that the computer didn't even
understand anything about his goals, right?
It was too dumb.
It could have understood, actually.
But you would have had to put some effort in, as a system
designer, to tell it: don't fly into mountains.
So that's the first challenge.
How do you program human values,
human goals, into computers?
Rather than saying, oh, it's so hard, we should start with
the simple stuff, as I said: self-driving cars, airplanes. Just put in all the goals that we
all agree on already, and then make a habit of, whenever machines get smarter so they can
understand one level higher goals, putting those in too. The second challenge is getting
them to adopt the goals. It's easy for situations
like that. We just program it in, but when you have self-learning systems like children,
you know, any parent knows that there is a difference between getting our kids to understand
what we want them to do and getting them to actually adopt our goals. With children, fortunately,
they go through this phase: first, they're too dumb to understand what our
goals are, and then they have this period of some years when they're both smart enough
to understand them and malleable enough that we have a chance to raise them well. And
then they become teenagers, and it's too late. We have a similar window with machines; the
challenge is that machine intelligence might grow so fast that that window is pretty short. So
that's a research problem. The third one is how do you make sure they keep the goals
if they keep learning more and getting smarter? Many sci-fi movies are about how you have
something which initially was aligned, but then things kind of go off-key.
And, you know, my kids were very, very excited about their Legos when they were little.
Now they're just gathering dust in the basement.
If we create machines that are really on board with the goal of taking care of humanity,
we don't want them to get as bored with us as my kids got with their Legos.
So this is another research challenge:
How can you make some sort of recursively self-improving system retain certain basic goals?
That said, a lot of adult people still play with Legos, so maybe we succeeded with Legos.
I like your optimism.
So not all AI systems have to maintain the goals, right?
Just some fraction.
Yeah.
So there are a lot of talented AI researchers now who have heard of this and want to work on it.
Not so much funding for it yet.
Of the billions that go into making AI more powerful,
it's only a minuscule fraction
so far going into the safety research.
My attitude is generally we should not try to slow down the technology, but we should greatly
accelerate the investment in this sort of safety research.
And also make sure, this was very embarrassing last year, but the NSF decided to give out
six of these big institutes.
We got one of them, for AI and science,
the one you asked me about.
Another one was supposed to be for AI safety research,
and they gave it to people studying oceans and climate and stuff.
I'm all for studying oceans and climate, but we need to have some money that actually goes into AI safety research also, and doesn't just get grabbed by whatever else.
That's a fantastic investment.
And then at the higher level, you ask this question,
okay, what can we do, you know, what are the biggest risks? I think we cannot just consider this
to be only a technical problem, because even if you solve the technical problem...
Can I play with your robots?
Yes, please. If we can get our machines to just blindly obey the orders we give them,
so we can always trust that they will do what we want,
that might be great for the owner of the robot,
but it might not be so great for the rest of humanity
if that person is your least favorite world leader, or whoever you imagine, right?
So we also have to look at applying alignment
not just to machines, but to all the other powerful structures.
That's why it's so important to strengthen our democracy again.
As I said, to have institutions,
to make sure that the playing field is not rigged,
so that corporations are given the right incentives
to do things that both make a profit and are good for people,
and to make sure that countries have incentives to do things that are both good for their people and
don't screw up the rest of the world. And this is not just something for AI nerds, you know, to geek out on. This is an
interesting challenge for political scientists, economists, and so many other thinkers.
So one of the magical things that
perhaps makes this earth quite unique is that it's home to conscious beings.
So you mentioned consciousness.
Perhaps as a small aside, because we didn't really get specific about how we might do
the alignment.
Like you said, it's just a really important research problem.
But do you think engineering consciousness into AI systems is a possibility,
something that we might one day do? Or is there something about consciousness that is fundamental
to humans and humans only?
I think it's possible.
I think both consciousness and intelligence are information processing, certain types
of information processing, and that fundamentally it doesn't matter whether the information is processed by carbon atoms in neurons in brains or by silicon atoms and so on in our technology.
Some people disagree; this is what I think as a physicist.
You said consciousness is information processing. So meaning, you know,
I think you had a quote, something like, it's information knowing itself, that kind of thing.
I think consciousness is, yeah, the way information feels when it's being processed in certain complex ways. We don't know exactly what those complex ways are.
It's clear that most of the information processing in our brains
does not create an experience. We're not even aware of it. For example, you're not aware of your
heartbeat regulation right now, even though it's clearly being done by your body. It's just kind of
doing its own thing. When you go jogging, there's a lot of complicated stuff about how you put your
foot down. We know it's hard, that's why robots used to fall over so much.
But you're mostly unaware of it.
Your brain, your CEO consciousness module,
just sends an email: hey, you know,
I'm going to keep jogging along this path.
The rest is on autopilot, right?
Yes.
So most of it is not conscious,
but somehow some of the information processing is conscious, and we don't know exactly which. I think this is a science problem that I hope one day
we'll have some equation for, so we can build a consciousness detector and say, yeah,
here there is some consciousness, here there's not.
Oh, don't boil that lobster because it's feeling pain, or it's okay because it's not feeling pain.
Right now we treat this as sort of just metaphysics, but it will be very useful in emergency
rooms to know if a patient has locked in syndrome and is conscious or if they are actually just
out.
And in the future, if you build a very, very intelligent helper robot to take care of you,
I think you'd like to know if you should feel guilty about shutting it down
or if it's just like a zombie going through the motions like a fancy tape recorder, right?
And once we can make progress on the science of consciousness and figure out
what is conscious and what isn't, then we, assuming we want to create positive
experiences and not suffering, we'll probably choose to build some machines that are deliberately
unconscious, that do incredibly boring, repetitive jobs in an iron mine somewhere or whatever.
And maybe we'll choose to create helper robots for the elderly that are conscious so that
people don't just feel creeped out, that the robot is just faking it when it acts like
it's sad or happy.
Oh, okay, you said elderly.
I think everybody gets pretty deeply lonely in this world.
And so there's a place, I think, for everybody to have a connection with conscious beings,
whether they're human or otherwise.
But I know for sure that I would, if I had a robot, if I was going to develop any kind
of personal emotional connection with it, I would be very creeped out if I knew at
an intellectual level that the whole thing was just a fraud. Today you can buy a little talking doll for a kid, which will say things, and the little child
will often think that this is actually conscious, and even tell real secrets to it that then go
on the internet, with lots of creepy repercussions.
I would not want to be just hacked and tricked like this.
If I was going to be developing real
emotional connections with a robot, I would want to know that this is actually real.
It's acting conscious, acting happy because it actually feels it. And I think this is not
sci-fi. I think it's possible to measure, to come up with tools. I mean, after we understand
the science of consciousness, you're saying we'll be able to come up with tools that can measure consciousness and definitively say, like, this
thing is experiencing the things it says it's experiencing.
Yeah.
Kind of by definition. If it is a physical phenomenon, information processing, and
we know that some information processing is conscious and some isn't, well, then there
is something there to be discovered with the methods of science.
Giulio Tononi has stuck his neck out the farthest and written down some equations for a theory.
Maybe that's right, maybe it's wrong, we certainly don't know.
But I applaud that kind of effort,
to say this is not just something that philosophers can have a beer and
muse about, but something we can measure and study.
And bringing that back to us, I think what we would probably choose to do, as I said,
once we can figure this out, is to be quite mindful about what sort of consciousness,
if any, we put in the different machines that we have.
And certainly we shouldn't be making machines that
suffer without us even knowing it, right?
And if at any point someone decides to upload themselves, like Ray Kurzweil wants to
do, I don't know if you've had him on your show.
We agreed to, but then COVID happened.
Oh, we're waiting it out a little bit.
You know, suppose he uploads himself into this RoboRay, and it talks like him and acts
like him and laughs like him.
Before he powers off his biological body, he would probably be pretty disturbed if he
realized that there's no one home,
that this robot is not having any subjective experience.
If humanity gets replaced by descendants which do all these cool things
and build spaceships and go to intergalactic rock concerts, and it turns out that they
are all unconscious, just going through the motions,
wouldn't that be like the ultimate robot zombie apocalypse?
Just a play for empty benches.
Yeah.
I have a sense that there are some kind of, once we understand consciousness,
better, we'll understand that there's some kind of continuum. And it would be a
greater appreciation. And we'll probably understand, just like you said, that it'd be
unfortunate if it's a trick. We'll probably understand that love is
indeed a trick that we play on each other, that we humans convince ourselves we're
conscious, but we're really, you know, the same kind of consciousness as ostriches and dolphins.
Can I try to cheer you up a little bit with the philosophical thought here about the love
part?
Yes, let's do it.
You know, you might say, okay, yeah, love is just a collaboration enabler.
And then maybe you can go and get depressed about that.
But I think that would be the wrong conclusion, actually.
I know that the only reason I enjoy food
is because my genes hacked me and they
don't want me to starve to death, not because they care about me
consciously enjoying succulent delights
of pistachio ice cream; they just
want me to make copies of them.
So in a sense, the whole enjoyment of food is also a scam like this.
But does that mean I shouldn't take pleasure in this pistachio ice cream?
I love pistachio ice cream and I can tell you, I know this is an experimental fact.
I enjoy pistachio ice cream every bit as much, even though I
scientifically know exactly what kind of scam this is.
Your genes really appreciate that you like the pistachio ice cream.
Well, but I, my mind appreciates it too, you know, and I have a
conscious experience right now. Ultimately, all of my brain is
also just something the genes built to copy themselves.
But so what?
I'm grateful, yeah, thanks, genes, for doing this.
But now it's my brain that's in charge here.
And I'm going to enjoy my conscious experience.
Thank you very much.
And not just the pistachio ice cream,
but also the love.
I feel for my amazing wife.
And all the other delights of being conscious.
Actually, Richard Feynman, I think, said this so well.
He is also the guy who really got me into physics. Some artist friend of his said that, oh, science is the
party pooper, it kind of ruins the fun: you have a beautiful flower, says the artist,
and then the scientist is going to deconstruct that
into just a blob of quarks and electrons.
And Feynman pushed back on that,
in such a beautiful way,
which I think also can be used to push back
and make you not feel guilty about falling in love.
So here's what Feynman basically said.
He said to his friend, you know,
yeah, I can also, as a scientist, see that this is a beautiful flower.
Thank you very much.
Maybe I can't draw as good a painting as you,
because I'm not as talented an artist,
but yeah, I can really see the beauty in it.
And it just, it also looks beautiful to me.
But in addition to that, Feynman said,
as a scientist, I see even more beauty,
that the artist did not see, right?
Suppose this is a flower on a blossoming apple tree. You
could say this tree has more beauty in it than just the colors and the fragrance.
This tree is made of air, Feynman wrote. This is one of my favorite Feynman quotes ever.
It took the carbon out of the air and bound it in, using the flaming heat of the sun,
you know, to turn the air into tree. And when you burn
logs in your fireplace, it's really beautiful to think that this is being reversed. Now the
tree is going, the wood is going back into the air, and in this flaming, beautiful dance
of the fire that the artist can see is the flaming light of the sun that was bound in to turn
the air into tree. And then the ashes are the little residue
that didn't come from the air, that the tree sucked out of the ground.
these are beautiful things and science just adds. It doesn't subtract and I feel exactly that way
about love, and about pistachio ice cream also. I can understand that there is even
more nuance to the whole thing, right?
At this very visceral level, you can fall in love just as much as someone who knows nothing about neuroscience.
But you can also appreciate this even greater beauty in it.
It's like, isn't it remarkable that it came about from this completely lifeless universe, just a hot blob of plasma expanding.
And then over the eons, gradually first the strong nuclear force decided to combine quarks
together into nuclei and then the electric force bound in electrons and made atoms and then they
clustered from gravity and you got planets and stars and this and that, and then natural selection came along
and the genes had their little thing, and what went from seeming like a
completely pointless universe that was just trying to increase entropy and approach heat death
turned into something that looked more goal-oriented. Isn't that kind of beautiful? And then this goal-orientedness
through evolution got ever more sophisticated, and you started getting this thing which is kind of like DeepMind's MuZero on steroids. The ultimate
self-play is not what DeepMind's AI does against itself to get better at Go. It's what
all these little quark blobs did against each other in the game of survival of the
fittest. You know, when you had really dumb bacteria living
in a simple environment, there wasn't much incentive
to get intelligent, but then life made the environment
more complex.
And then there was more incentive to get even smarter.
And that gave the other organisms more
of incentive to also get smarter.
And then here we are now, just like MuZero learned to become world master at Go and chess by playing against itself.
Just by playing against itself, all the quark
blobs here on our planet have created giraffes and elephants and humans and The Scream.
I just find that really beautiful,
and to me that just adds to the enjoyment of love.
It doesn't subtract anything.
Do you feel a little more cheerful now?
I feel more cheerful now, way better.
That was incredible.
So this self-play of quarks,
taking us back to the beginning of our conversation a little bit,
there's so many exciting possibilities
about artificial intelligence, understanding the basic laws
of physics.
Do you think AI will help us unlock that?
There's been quite a bit of excitement throughout
the history of physics of coming up with more
and more general simple laws that explain
the nature of our reality.
And then the ultimate of that will be
a theory of everything that combines everything together. Do you think it's possible that,
one, we humans, but perhaps AI systems will figure out a theory of physics that unifies all the
laws of physics? Yeah, I think it's absolutely possible.
I think it's very clear that we're
going to see a great boost to science.
We're already seeing a boost actually
from machine learning, helping science.
AlphaFold was an example:
it tackled the decades-old protein-folding problem.
And gradually, yeah, unless we go extinct by doing something
dumb like we discussed, I think it's
very likely that our understanding of physics will become so good that our technology will
no longer be limited by human intelligence, but instead be limited by the laws of physics.
So our tech today is limited
by what we've been able to invent, right? I think as AI progresses, it'll just be limited
by the speed of light and other physical limits, which will mean it's just dramatically
beyond where we are now.
Do you think it's a fundamentally mathematical pursuit, trying to understand the laws that
govern our universe from a mathematical perspective? So almost like, if it's AI, it's
exploring the space of, like, theorems and all those kinds of things?
Or are there some other, more computational, more sort of empirical ideas?
It's both, I would say. It's really interesting to look at the landscape of everything
we call science today. So here we come with this big new hammer that says machine
learning on it and ask, you know, where are there some nails here
that you can hammer? Ultimately, if machine learning gets to the point that it can do
everything better than us, it will be able to help
across the whole space of science.
But maybe we can anchor it by starting a little bit right now
near-term and see how we kind of move forward.
So, right now, first of all, you have a lot of big data science
where, for example, with telescopes, we are
able to collect way more data every hour than a grad student can pore over, like in
the old times, right?
And machine learning is already being used very effectively, even at MIT,
to find planets around other stars, to detect exciting new signatures of new
particle physics in the sky, to detect
the ripples in the fabric of spacetime that we call gravitational waves, caused by enormous
black holes crashing into each other halfway across the observable universe. Machine learning is running
and ticking right now, you know, doing all these things, and it's really helping all these
experimental fields. There is a separate front of physics,
computational physics, which is getting
an enormous boost also.
We used to have to do all our computations by hand, right?
People would have these giant books with tables of logarithms,
and oh my god, it pains me to even think
how long it would have taken to do simple stuff.
Then we started to get little calculators and computers to do some basic math for us.
Now what we're starting to see is kind of a shift from GOFAI computational physics to
neural network computational physics. What I mean by that is that
most computational physics
would be done by humans programming
the intelligence of how to do the computation into the computer,
just as when Garry Kasparov got his posterior kicked by IBM's Deep Blue at chess: humans had programmed in exactly how to play chess.
The intelligence came from the humans. It wasn't learned, right?
MuZero can beat not only Kasparov at chess,
but also Stockfish, which is the best
sort of GOFAI chess program, by learning.
And we're seeing more of that now,
that shift beginning to happen in physics.
So let me give you an example.
So lattice QCD is an area of physics whose goal is basically to take the periodic table
and just compute the whole thing from first principles.
This is not the search for theory of everything.
We already know the theory that's supposed to produce as output the periodic table, which
atoms are stable, how heavy they are, all that good stuff, their spectral lines.
It's a theory, lattice QCD; you can put it on your t-shirt. Our colleague Frank Wilczek
got the Nobel Prize for working on it.
But the math is just too hard for us to solve.
We have not been able to start with these equations and solve them to the extent that we can predict.
Oh, yeah.
And then there is carbon.
And this is what the spectrum of the carbon atom looks like.
But awesome people are building these supercomputer simulations
where you just put in these equations.
And you make a big cubic lattice of space,
or actually it's a very small lattice
because you're going at the subatomic scale.
And you try to solve this,
but it's just so computationally expensive
that we still haven't been able to calculate things
as accurately as we measure them in many cases.
And now machine learning is really revolutionizing this.
So my colleague Phiala Shanahan at MIT, for example,
she's been using this really cool machine learning technique
called normalizing flows, where she can actually
speed up the calculation dramatically by having
the AI learn how to do things faster.
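To give a rough sense of the idea (a toy sketch only, not Shanahan's actual architecture or code), a normalizing flow stacks invertible transformations like the affine coupling layer below, so you can both draw samples and compute exact densities; in the lattice application, the flow is trained so its samples approximate the target field distribution, and any residual mismatch can be corrected by reweighting.

```python
# Toy illustration of one building block of a normalizing flow
# (an affine coupling layer), assuming PyTorch is available.
# This is a generic sketch, not the actual lattice-QCD code.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.half = dim // 2
        # Small network predicting a scale and shift for the second half
        # of the variables, conditioned on the first half.
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(s) + t          # invertible scale-and-shift
        log_det = s.sum(dim=-1)             # log |det Jacobian| stays tractable
        return torch.cat([x1, y2], dim=-1), log_det

# Usage sketch: push easy-to-sample Gaussian noise toward a target distribution.
layer = AffineCoupling(dim=8)
z = torch.randn(4, 8)                       # samples from the base distribution
x, log_det = layer(z)                       # transformed samples + density correction
```

Stacking many such layers and training them against the known physics gives a sampler whose proposals are far cheaper than grinding through the full traditional calculation, which is the general flavor of the speed-up being described.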
Another area like this where we suck up an enormous amount of supercomputer time to do
physics is black hole collisions.
So now that we've done this sexy stuff of detecting a bunch of these, with LIGO and other experiments,
we want to be able to know what we're seeing.
And so it's a very simple conceptual problem.
It's the two-body problem.
Newton solved it for classical gravity,
but for Einstein's gravity from 100 years ago, the two-body problem is still not fully solved.
For black holes.
Yes, and the nice thing is, in this gravity, they won't just orbit each other forever anymore.
The two things give off gravitational waves, which make sure they crash into each other.
And the game, what you want to do, is to figure out,
okay, what kind of wave comes out
as a function of the masses of the two black holes,
as a function of how they're spinning
relative to each other, et cetera.
And that is so hard, it can take months
of supercomputer time,
and massive amounts of computation, of course, to do it, you know.
Wouldn't it be great if you could use machine learning to
greatly speed that up? Now you can use the expensive old GOFAI calculation as the ground truth,
and then see if machine learning can figure out a smarter, faster way of getting the right answer.
Yet another area of computational physics, and these are probably the big three that suck up the most supercomputer time: lattice QCD, black hole
collisions, and cosmological simulations, where you take not a subatomic thing and try to
figure out the mass of the proton, but you take something enormous and try to look
at how all the galaxies get formed in it.
Yeah.
There again, there are a lot of very cool ideas
right now about how you can use machine learning
to do this sort of stuff better.
The difference between this and the big data case
is that you kind of make the data yourself.
And then finally, we're looking over the physics landscape and seeing what we can hammer
with machine learning. So we talked about experimental data, big data, discovering cool stuff
that we humans then look more closely at. Then we talked about taking the expensive computations
we're doing now and figuring out how to do them much faster and better with AI. And finally, let's go really theoretical.
So things like discovering equations,
having the fundamental insights.
This is the thing closest
to what I've been doing in my group.
We talked earlier about the whole AI Feynman project,
where if you just have some data, how
do you automatically discover equations that seem to describe this well, that you can then go back as a human
and work with and test and explore? And you asked a really good question also about if this
is sort of a search problem in some sense, that's very deep actually what you said, because it is.
Suppose I ask you to prove some mathematical theorem.
What is a proof in math?
It's just a long string of steps, logical steps,
that you can write out with symbols.
And once you find it, it's very easy
to write a program to check whether it's a valid proof or not.
So why is it so hard to prove it then?
Well, because there are ridiculously
many possible candidate proofs you could write down,
right?
If the proof contains 10,000 symbols, even if there are only 10 options for what each symbol
could be, that's 10 to the power of 10,000 possible proofs, which is way more than there are atoms
in our universe.
Right.
So you could say it's trivial to prove these things.
You just write a computer program to generate all strings, and then check: is this a valid proof?
Nope.
Is this a valid proof?
Nope.
And then you just keep doing this forever.
But it is fundamentally a search problem.
You just want to search the space of all strings of symbols to find one that is the proof, right?
And there's a whole area of machine learning called search.
How do you search through some giant space to find the needle in the haystack?
It's easier in cases where there's a clear measure of good, where you're not just right or wrong, but this is better and this is worse, so you can maybe get some hints at which direction to go
in. That's why, as we talked about, neural networks work so well.
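Just to make the brute-force picture concrete, here is a minimal sketch in Python, purely illustrative: the checker `is_valid_proof` is a hypothetical placeholder, since no actual proof system is specified here, but the enumerate-and-check loop shows why unguided search blows up combinatorially, and why a learned sense of which direction is warmer is so valuable.

```python
# Minimal sketch of unguided proof search: enumerate every string of symbols
# and ask a checker whether it is a valid proof. The checker itself is a
# hypothetical placeholder; the point is the combinatorial explosion.
from itertools import product

SYMBOLS = list("0123456789")  # say 10 options per symbol, as in the example

def is_valid_proof(candidate: str) -> bool:
    # Placeholder: a real system would parse the string and verify each
    # logical step. Checking one candidate is easy; finding one is the hard part.
    return False

def brute_force_search(max_length: int):
    for length in range(1, max_length + 1):
        # 10**length candidates at this length; at length 10,000 that is
        # 10**10000, vastly more than the number of atoms in the universe.
        for symbols in product(SYMBOLS, repeat=length):
            candidate = "".join(symbols)
            if is_valid_proof(candidate):
                return candidate
    return None

print(brute_force_search(max_length=4))  # feasible only for tiny lengths
```

A guided search replaces the blind loop with a scoring function that says which partial candidates look more promising, which is exactly the role intuition, or a learned value function, plays.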
I mean, that's such a human thing of that moment of genius of figuring out the intuition
of good, essentially. I mean, we thought that that was, maybe it's not right. We thought
that about chess, right? That the ability to see like 10, 15, sometimes 20 steps ahead
was not a calculation that humans were performing. It was some kind of weird intuition about
different patterns of board positions, the relative positions of pieces, somehow stitching stuff together, and a lot of it is just, like, intuition.
But then you have AlphaZero be the first one that did it.
Through the self-play mechanism, it was able to learn this kind of intuition. Exactly. But just like you said, it's so fascinating to think
whether, in the space of totally new ideas, that can be done in developing theorems.
We know it can be done by neural networks because we did it with the neural networks in the
craniums of the great mathematicians of humanity, right? And I'm so glad you brought up Alpha
Zero because that's the counter example.
It turned out we were flattering ourselves
when we said intuition is something different,
that only humans can do it,
that it's not information processing.
It used to be that way.
Again, it's really instructive, I think,
to compare the chess computer Deep Blue,
that beat Kasparov,
with AlphaZero, that beat Lee Sedol at Go. Because for Deep Blue, there was no intuition.
Well, humans had programmed in some intuition. After humans had played a lot of games,
they told the computer, you know, count the pawn as one point, the bishop as three points,
the rook as five points,
and so on. You add it all up, and then you add some extra points
for passed pawns and subtract if the opponent has them, and blah, blah, blah.
And then what Deep Blue did was just search.
Just very brute force, trying many, many moves ahead,
all these combinations, and pruning the search.
And it could think much faster than Kasparov, and it won, right?
And that, I think, inflated our egos in a way it shouldn't have, because people started
to say, yeah, yeah, it's just brute force search, but it has no intuition.
AlphaZero really popped our bubble there, because what AlphaZero does, yes, it does also do some of that search.
But it also has this intuition module, which in Geek speak is called a value function,
where it just looks at the board and comes up with a number for how good is that position.
The difference was no human told it how good the position is. It just learned it.
And MuZero is the coolest, or scariest, of all, depending on your mood.
Because the same basic AI system will learn what a good board position is,
regardless of whether it's chess or Go or shogi, or Pac-Man or Ms. Pac-Man or Breakout or Space Invaders, or any of a
bunch of other games. You don't tell it anything, and it gets this intuition
after a while for what's good. So this is very hopeful for science, I think,
because if it can get intuition for what's a good position there,
maybe it can also get intuition for what are some good
directions to go if you're trying to prove something.
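As a rough illustration of what a value function is (a toy sketch only, nothing like DeepMind's actual networks), it is just a learned function that maps a board position to a single number estimating how good that position is, trained from the outcomes of the system's own games rather than from human-written rules:

```python
# Toy value function: a small neural network that maps a board position to a
# single "how good is this for me" score in [-1, 1]. Purely illustrative;
# AlphaZero/MuZero use much larger networks plus tree search and self-play.
import torch
import torch.nn as nn

BOARD_SIZE = 8 * 8  # a flattened 8x8 board as a simple stand-in

value_net = nn.Sequential(
    nn.Linear(BOARD_SIZE, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
    nn.Tanh(),  # squash to [-1, 1]: -1 = losing, +1 = winning
)

def evaluate(position: torch.Tensor) -> float:
    """Return the network's estimate of how good a position is."""
    with torch.no_grad():
        return value_net(position.flatten().float().unsqueeze(0)).item()

# In self-play training, positions from finished games are paired with the
# final result, and the network is regressed toward those outcomes, so the
# "intuition" is learned rather than hand-coded.
example_position = torch.zeros(8, 8)
print(evaluate(example_position))
```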
One of the more fun things
in my science career is when I've been able to prove
some theorem about something, and it's very heavily
intuition-guided, of course.
I don't sit and try all random strings.
I have a hunch that, you know, this reminds me a little bit
of this other proof I've seen for this other
thing, so maybe first, what if I try this? No, that didn't work out. But the way this failed reminds me
of that. So combining the intuition with all these
brute force capabilities, I think, is going to be able to help physics too. Do you think there'll be a day when an AI system,
being the primary contributor, let's say 90% plus,
wins the Nobel Prize in physics?
Obviously they'll give it to the humans,
because we humans don't like to give prizes to machines.
They'll give it to the humans behind the system.
You could argue that AI has already been involved in some Nobel Prize probably, maybe
some of the black holes and stuff like that.
Yeah, we don't like giving prizes to other life forms.
If someone wins a horse racing contest, they don't give the prize to the horse either.
That's true.
But do you think we might be able to see something like that in our lifetimes,
with AI? So like the first
system, I would say, that makes us think about a Nobel Prize seriously is AlphaFold, which is making
us think about a Nobel Prize in medicine or physiology, perhaps for discoveries that are a direct
result of something that's discovered by AlphaFold. Do you think in physics we might be able to see that
in our lifetimes?
I think what's probably going to happen
is more of a blurring of the distinctions.
Today, if somebody uses a computer to do a computation
that wins them the Nobel Prize,
nobody's going to dream of giving the prize to the computer.
They're going to be like, that was just a tool.
I think for these things, also, people are just going to, for a long time,
view the computer as a tool.
But what's going to change is the ubiquity of machine learning.
I think at some point in my lifetime, finding a human physicist who knows nothing about machine learning is going to be almost as hard as it is today to find a human physicist who says, oh, I don't know anything about computers, or I don't use math.
It would just be a ridiculous concept.
The thing is, there is a magic moment though, like with Alpha Zero, when the system surprises us in a way where the best people in the world truly learn something from the system in
a way where you feel like it's another entity.
Like the way Magnus Carlsen, the way certain people are looking at the work of AlphaZero, it's like, it
truly is no longer a tool in the sense that it doesn't feel like a tool.
It feels like some other entity.
So there's a magic difference where you're like, you know, if an AI system is able to
come up with an insight that surprises everybody in some major way, that's
a phase shift in our understanding of some particular science or some particular aspect
of physics,
I feel like that is no longer a tool and then you can start to say that like it perhaps
deserves a prize. So for sure, the more important and the more fundamental transformation of the 21st century
science is exactly what you're saying, which is probably everybody will be doing machine
learning.
To some degree, like, if you want to be successful at unlocking the mysteries of science,
you should be doing machine learning.
But it's just exciting to think about,
like whether there will be one that comes along
that's super surprising and they'll make us question
like who the real inventors are in this world.
Yeah.
Yeah, I think the question
isn't if it's going to happen, but when.
But it's important, also, in my mind: the time when that happens is also more or less the same time when we get artificial
general intelligence.
And then we have a lot bigger things to worry about than whether we get the Nobel
Prize or not, right?
Yeah.
Because when you have machines that can outperform our best scientists at science,
they can probably outperform us at a lot of other stuff as well,
which can at a minimum make them incredibly powerful agents in the world.
And I think it's a mistake to think we only have to start worrying
about loss of control when machines get to AGI across the board,
where they can do everything, all our jobs,
long before that, they'll be hugely influential.
We talked at length about how the hacking of our minds
with algorithms trying to get us glued to our screens,
right, has already had a big impact on society.
That was an incredibly dumb algorithm in the grand scheme of things, right?
Just supervised machine learning, yet it had huge impact.
So I just don't want us to be lulled into a false sense of security and think there won't
be any societal impact until things reach human level, because it's happening already. I was just thinking the other week, you know, when I see some scaremonger going, oh, the
robots are coming, the implication is always that they're coming to kill us. Yeah. And
maybe you should have worried about that if you were in Nagorno-Karabakh during
the recent war there, but more seriously,
the robots are coming right now, but they're mainly not coming to kill us. They're coming to hack us.
They're coming to hack our minds into buying things that maybe we didn't need,
or into voting for people who may not have our best interests in mind.
And it's kind of humbling, I think, actually as a human being to admit that it turns out
that our minds are actually much more hackable
than we thought.
And the ultimate insult is that we are actually getting hacked
by the machine learning algorithms that are in some objective
sense much dumber than us, you know.
But maybe we shouldn't be so surprised because, you know,
how do you feel about the cute puppies?
Love them. So you know, you would probably argue that in some across-the-board
measure you're more intelligent than they are, but boy, are cute puppies good at
hacking us, right? They move into our house, persuade us to feed them and do all
these things, and what do they ever do for us?
Yeah, other than being cute and
making us feel good, right?
So if puppies can hack us, maybe we shouldn't be so surprised
if pretty dumb machine learning algorithms can hack us too.
Not to speak of cats, which is another level.
Oh, and to counter your previous point that there are no evil creatures in this world,
I think we can all agree that cats are as close to objective evil as we can get. But that's just me saying that.
Okay. So have you seen the cartoon? I think it's maybe from The Onion, where there's an incredibly
cute kitten, and it just says underneath, something like: thinks about murder all day.
Exactly.
That's accurate.
You mentioned offline that there might be a link between post-biological AGI and SETI.
So last time we talked, you've talked about this intuition that we humans might be quite unique in our galactic neighborhood. Perhaps our galaxy,
perhaps the entirety of the observable universe: we might be the only intelligent civilization here,
and you argue pretty well for that thought. So I have a few little questions around this. One, a scientific
question: if you were wrong in that intuition, in which way do
you think you would be surprised? Like, why would you be wrong, if we find out that you ended up being wrong?
Like in which dimension?
So like is it because we can't see them?
Is it because the nature of their intelligence or the nature of their life is totally different
than we can possibly imagine?
Is it because of, I mean, something about the
great filters and surviving them? Or maybe because we're being protected from signals? All
those explanations for why we haven't heard a big, loud signal that says, we're here.
Yeah.
So there are actually two separate things there that I could be wrong about,
two separate claims that I made, right?
One of them is, I made the claim, I think,
most civilizations,
going from simple bacteria like things to space colonizing civilizations,
they spend only a very, very tiny fraction of their life being where we are.
That I could be wrong about.
The other one I could be wrong about is the quite different statement that I think, that actually I'm guessing,
that we are the only
civilization in our observable universe, from which light has reached us so far,
that has actually gotten far enough to invent telescopes.
So let's talk about maybe both of them in turn because they really are different.
The first one:
if you look at the N equals one, the data point we have on this planet, right?
So we spent four and a half billion years f***ing around on this planet with life, right?
And most of it was pretty lame stuff from an intelligence perspective. You know,
bacteria, and then things gradually accelerated.
Then the dinosaurs spent over a hundred million years stomping around here without even
inventing smartphones.
And then very recently, we've only spent 400 years going from Newton to us, right, in terms
of technology.
And look at what we've done: even, you know, when I was a little kid,
there was no internet. So I think it's pretty likely, in the case of this planet,
that we're either going to really get our act together and start spreading life into space this
century, and doing all sorts of great things, or we're going to wipe ourselves out. It's a little hard to tell.
I could be wrong in the sense that maybe what happened on this Earth is very atypical,
and for some reason, what's more common on other planets is that they spend an enormously
long time fudging around with ham radio and things, but they just never really take
it to the next level, for reasons I haven't understood.
I'm humble and open to that. But I would bet at least 10 to one that our situation is more typical because the
whole thing with Moore's law and accelerating technology, it's pretty obvious why it's happening.
Everything that grows exponentially, we call it an explosion, whether it's a population explosion
or a nuclear explosion, is always caused by the same thing. It's that the next step triggers a step after that.
So today's technology enables tomorrow's technology, and that enables the next level.
And because the technology is always better, of course, the steps can come faster and faster.
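Just to spell out that reasoning step in symbols (a standard textbook relation, not something stated in the conversation): whenever the growth rate of a quantity is proportional to the quantity itself, you get exponential growth,

$$\frac{dx}{dt} = k\,x \quad\Longrightarrow\quad x(t) = x_0\, e^{k t},$$

which is why each step enabling the next step, whether in populations, chain reactions, or technology, reads as an explosion when you zoom out.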
On the other question that I might be wrong about, that's the much more controversial one, I think.
But before we close out on this thing about,
if the first one, if it's true that most civilizations
spend only a very short amount of their total time
in the stage, say, between inventing telescopes
or mastering electricity, and doing space travel.
If that's actually generally true, then that should apply also elsewhere out there.
So we should be very, very surprised if we find some random civilization and we happen
to catch them exactly in that very, very short stage.
It's much more likely that we find this planet full of bacteria.
Yes.
Or that we find some civilization that's already post-biological and has done some really
cool galactic construction projects in their galaxy.
Would we be able to recognize them, do you think?
Is it possible that we just can't? I mean, could this post-biological world
just exist in some other dimension? It could all be just a virtual reality game for
them or something, I don't know, something that changes things completely such that we won't be able to detect them.
We have to be honestly very humble about this. I think I said earlier the number one
principle of being a scientist is you have to be
humble and willing to acknowledge that everything we guess might be totally wrong. Of course,
you can imagine some civilization where they all decide to become Buddhists and very inward looking
and just move into their little virtual reality and not disturb the flora and fauna around them and
and we might not notice them. But this is a numbers game, right? If you have
millions of civilizations out there or billions of them, all it takes is one with a more ambitious
mentality that decides, hey, we are going to go out and settle a bunch of other solar systems
and maybe galaxies. And then it doesn't matter if the rest are quiet Buddhists; we're still going to notice that expansionist one. And it seems
like quite a stretch to assume otherwise. We know even in our own galaxy that there
are probably a billion or more planets that are pretty Earth-like, and many of
them were formed over a billion years before ours, so they had a big head start.
So if you also assume that life happens kind of automatically
on an Earth-like planet, I think it's quite the stretch
to then go and say, okay, so there are another billion civilizations out there
that also have our level of tech,
and they all decided to become Buddhists, and not a single one decided to go Hitler on the galaxy and say, we need to go out and colonize, and not a single one decided for more benevolent reasons to go out and get more resources.
That seems like a bit of a stretch, frankly. And this leads into the second thing you challenged me on, that I might be wrong about: how rare or common is life? So Frank Drake, when he wrote down the Drake equation, multiplied
together a number of factors and said, we don't know any of them. So we know even
less about the product you get when you multiply them all together.
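For reference, the standard textbook form of the Drake equation (the usual formulation, not something stated in the conversation) multiplies those factors as

$$N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L,$$

where $N$ is the number of detectable civilizations in our galaxy, $R_*$ the star formation rate, $f_p$ the fraction of stars with planets, $n_e$ the number of habitable planets per such system, $f_l$, $f_i$, $f_c$ the fractions that go on to develop life, intelligence, and detectable communication, and $L$ the lifetime of a communicating civilization.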
Since then, a lot of those factors have become much better known. One of his big uncertainties was how common is it that a solar system even has a planet?
Right.
Well, now we know it's very common.
Earth-like planets, we now know there are
many, many of them, even in our galaxy.
At the same time, we have, thanks to the SETI project and its
cousins (I'm a big supporter, and I think we should keep doing this),
learned a lot.
But so far,
all we have is still unconvincing hints, nothing more.
And there are certainly many scenarios where it would
be dead obvious.
If there were 100 million other human-like civilizations
in our galaxy, it would not be that hard to notice some of them with today's technology,
and we haven't, right?
So, what we can say is, well, okay.
We can rule out that there is a human-level civilization on the moon,
and in fact in many nearby solar systems,
whereas we cannot rule out, of course,
that there is something like
Earth sitting in a galaxy five billion light-years away.
But we've ruled out a lot, and that's already kind of shocking, given that there are all
these planets there, you know, so like, where are they?
Where are they all?
That's the classic Fermi paradox.
And so my argument, which might very well be wrong,
is very simple, really.
It just goes like this.
OK, we have no clue about this.
The probability of getting life on a random planet,
it could be 10 to the minus 1, a priori,
or 10 to the minus 10, 10 to the minus 20, 10 to the minus 30,
10 to the minus 40.
Basically, every order of magnitude
is about equally likely.
When you then do the math and ask how close
is our nearest neighbor, it's again equally likely
that it's 10 to the 10 meters away,
10 to the 20 meters away, 10 to the 30 meters away.
We have some nerdy ways of talking about this
with Bayesian statistics and a uniform log prior,
but that's irrelevant.
This is the simple basic argument.
And now comes the data.
So we can say, okay, of all these orders of magnitude:
10 to the 26 meters away.
There's the edge of our observable universe.
If it's farther than that, light hasn't even reached us yet.
If it's less than 10 to the 16 meters away, well, that's right in our own neighborhood,
no farther away than the nearest stars.
We can definitely rule that out, you know.
So I think about it like this: a priori, before we looked through telescopes, you know, it could
be 10 to the 10 meters, 10 to the 20, 10 to the 30, 10 to the 40, 10 to the 50, 10 to the
blah, blah, blah.
Equally likely anywhere here.
And now we've ruled out like this chunk. Yeah.
And most of it is outside.
And here is the edge of our observable universe.
Yes, yep.
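To make that argument concrete with a toy calculation (the prior bounds here are illustrative placeholders, not numbers from the conversation): under a log-uniform prior on the distance $d$ to our nearest civilization, the probability of landing in the window between what we have already ruled out and the edge of the observable universe is just the fraction of orders of magnitude it covers, for example

$$P\!\left(10^{16}\,\mathrm{m} \le d \le 10^{26}\,\mathrm{m}\right) \;=\; \frac{26 - 16}{\log_{10} d_{\max} - \log_{10} d_{\min}} \;=\; \frac{10}{40 - 10} \;\approx\; 0.33,$$

taking illustrative prior bounds of $d_{\min} = 10^{10}$ m and $d_{\max} = 10^{40}$ m. The wider you allow the prior range to be, and the a priori range spans a huge number of orders of magnitude, the smaller that window's share becomes, which is the sense in which the nearest neighbor probably does not sit in the narrow band where we could ever see it.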
So I'm certainly not saying I don't think
there's any life elsewhere in space.
If space is infinite, then you're basically
100% guaranteed that there is.
But the probability that the nearest neighbor happens to be in this little region between
where we would have seen it already and where we will never see it, that's actually
significantly less than one, I think. And I think there's a moral lesson from this, which is
really important, which is to be good stewards of this planet in this shot we've had.
It can be very dangerous to say, oh, it's fine if we nuke our planet or ruin the climate
or mess it up with unaligned AI, because I know there is this nice Star Trek fleet out
there.
They're going to swoop in and take over where we failed.
Just like it wasn't a big deal that the Easter Islanders wiped themselves out.
That's a dangerous way of lulling yourself into a false sense of security.
If it's actually the case that it might be up to us and only us, the whole future of
intelligent life in our observable universe, then I think it really puts
a lot of responsibility on our shoulders. It's a little bit terrifying,
but it's also inspiring. And it's empowering, I think, most of all, because the
biggest problem today, I see this even when I teach, right, is that so many people feel
that it doesn't matter what they do, we feel disempowered. Oh, it makes no difference.
This is about as far from that as you can come. When we realize that what we do
on our little spinning ball here in our lifetime, you know, could make the difference
for the entire future of life in our universe, you know, how empowering is that? Yes, survival of consciousness. On the other hand, a very similar kind of empowering aspect of
the Drake equation is: say there is a huge number of intelligent civilizations that spring up everywhere,
but because of the last factor in the Drake equation, the lifetime of a civilization, maybe many of them hit a wall.
And just like you said, it's clear that for us,
one possible great filter
seems to be coming in the next 100 years.
So it's also empowering to say, okay, well,
we have a chance to make it through.
I mean, the way great filters work, they just get most of them.
Exactly.
Nick Bostrom has articulated this really beautifully too.
Every time, yet another search for life on Mars comes back negative or something, I'm
like, yes!
Yes!
Our odds of surviving just got better.
You already made the argument in broad brush there,
right, but just to unpack it, right?
The point is, we already know there is a crap ton of planets
out there that are Earth-like, and we also know that most of them
do not seem to have anything like our kind of life on them.
So what went wrong?
There's clearly at least one filter, a roadblock, somewhere along the evolutionary path in going
from no life to space-faring life. And where is it? Is it in
front of us, or is it behind us, right? If there's no
filter behind us, and we keep finding, say, little mice
on Mars or whatever, right, that's actually very depressing,
because that makes it much more likely that the filter is in front of us, and that what's actually
going on is, like, the ultimate dark joke: that whenever a civilization invents sufficiently powerful
tech, the clock starts ticking, and then after a while it goes poof, for one reason or another, and wipes itself out.
Wouldn't that be utterly depressing, if we're actually doomed?
Whereas if it turns out that there is a great filter early on, that
for whatever reason it seems to be really hard to get to the stage of sexually reproducing
organisms, or even the first ribosome, or whatever, right? Or
maybe you have lots of planets with dinosaurs and cows, but for some reason they tend to
get stuck there and never invent smartphones. All of those are huge boosts for our own odds,
because... been there, done that, you know. It doesn't matter how unlikely it was
that we got past that roadblock, because we already did. And then that makes it likely that the
future is in our own hands; we're not doomed. So that's why I think the fact that life is rare
in the universe is not just something that there is some evidence
for, but also something we should actually hope for.
So that's the mortality, the death of human civilization, that we've been discussing,
and life maybe prospering beyond any kind of great filter.
Do you think about your own death?
Does it make you sad that you may not witness some of the,
you know, you lead a research group
on working some of the biggest questions
in the universe actually, both on the physics and the AI side?
Does it make you sad that you may not be able to see some
of these exciting things come to fruition
that we've been talking about?
Of course, of course it sucks.
The fact that I'm going to die.
I remember once when I was much younger, my dad made this remark that life is fundamentally
tragic.
And I'm like, what are you talking about?
And then many years later, now I feel like I totally understand what he meant.
We grow up, we're little kids, and everything is infinite, and it's so cool.
And then suddenly we find out that actually, you know,
you only get a certain amount, and it's game over at some point.
So of course it's something that's sad.
Are you afraid?
Not in the sense that I think anything terrible is going to happen after I die or anything like that. No, I think it's really going to be a game over. But it's more that it makes me very acutely aware
of what a wonderful gift it is that I get to be alive right now,
and it's a steady reminder to just live life to the fullest and really enjoy it, because it is finite, you know. And I think actually we all get
the regular reminders, when someone near and dear to us dies, that,
you know, one day it's going to be our turn.
It adds this kind of focus.
I wonder what it would feel like actually
to be an immortal being,
if they might even enjoy some of the wonderful things
of life a little bit less,
just because there isn't that...
Finiteness.
Do you think that could be a feature, not a bug,
the fact that we beings are finite?
Maybe there are lessons for engineering artificial intelligence systems as well, ones that are conscious.
Like, is it possible that the reason pistachio ice cream is delicious
is the fact that you're going to die one day, and you will not have all the pistachio ice cream that you could eat, because of that fact?
Well, let me say two things. First of all, what you're saying is actually quite profound:
I do think I appreciate the pistachio ice cream a lot more knowing that there's
only a finite number of times I get to enjoy it, and that I can only remember a finite number of times in the past. And moreover, my life is not so long that it just starts to
feel like things are repeating themselves; in general, it's still new and fresh. I also think,
though, that death is a little bit overrated in the sense that it comes from a sort of outdated view of physics and what life actually is.
Because if you ask, okay, what is it that's going to die exactly? What am I, really,
when I say I feel sad about the idea of myself dying? Am I really sad that this skin cell here is going to die? Of course not,
because it's going to die next week anyway, and I'll grow a new one. And it's not any of my cells
that I'm associating with who I really am, nor is it any of my atoms or quarks or electrons.
In fact, basically all of my atoms get replaced on a regular basis, right?
So what is it that's really me?
From a more modern physics perspective, it's the information
processing that is me. That's my memories, that's my values, my dreams,
my passion, my love. That's what's really fundamentally me. And frankly, not all of that
will die when my body dies. Richard Feynman, for example, his body died of cancer. But many of his
ideas that he felt made him very much him actually live on.
This is my own little personal tribute to Richard Feynman, right? I try to keep a little bit of him
alive in myself. I've even quoted him today, right? Yeah, he almost came alive for a brief moment
in this conversation, yeah. Yeah, and this honestly gives me some solace, When I work as a teacher, I feel,
if I can actually share a bit of myself
that my students feel is worthy enough to copy and adopt
as part of the things that they know,
or believe, or aspire to,
now I live on also a little bit in them, right?
And so being a teacher, that's something also that contributes to making me a little teeny bit less
mortal, right? Because at least I'm not all going to die all at once, right?
And I find that a beautiful tribute to people we respect: if we can remember them
and carry in us the things that we felt were the most awesome about them, right?
Then they live on.
And I'm getting a bit emotional here, but it's a very beautiful idea you bring up there.
I think we should stop with this old-fashioned materialism, just equating who we are with
our quarks and electrons.
There's no scientific basis for that really.
And it's also very uninspiring.
Now if you look a little bit towards the future, right, one thing which really sucks about
humans dying is that even though some of their teachings and memories and stories and ethics
and so on will be copied by those around them, hopefully, a lot of it can't be copied and
just dies with their brain.
And that really sucks.
That's the fundamental reason why we find it so tragic when someone goes from having all this information there to it being just
gone, ruined, right? With more post-biological intelligence, that's going to shift a lot,
right? The only reason it's so hard to make a backup of your brain in its entirety is exactly because it wasn't built for that, right?
If you have a future machine intelligence, there's no reason for why it has to die at all.
If you want to copy it, whatever it is, into some other quark blob, right?
You can copy it, not just some of it, but all of it, right? And so, in that sense,
you can get immortality
because all the information can be copied out of any individual entity.
And it's not just mortality that will change if we get more post-biological life. It's also, with that, I think, very much the whole individualism we have now, right? The reason that we make such a big difference between me and you is
exactly because we're a little bit limited in how much we can copy. Like, I would just
love to go like this and copy your Russian skills, your Russian speaking skills. Wouldn't it be
awesome? But I can't.
I have to actually work for years if I'm going to get better at it.
But if we were robots, you know, and could just copy and paste freely,
then that loses its meaning completely.
It washes away the sense of what immortality is.
And also individuality a little bit, right?
We would start feeling much more...
Maybe we would feel much more collaborative with each other if we can just say, hey, you know, you give me your Russian and I'll give you something else, and suddenly you can speak Swedish.
Maybe that's a less good trade for you, but whatever else you want from my brain, right?
There've been a lot of sci-fi stories about hive minds and so on, where experiences
can be more broadly shared.
And I think we don't pretend to know what it would feel like to be a super intelligent
machine, but I'm quite confident that however it feels about mortality
and individuality will be very, very different from how it is for us.
Well, for us, mortality and finiteness seem to be pretty important at this particular moment,
and so all good things must come to an end, just like this conversation, Max. I just saw that coming.
So the world's worst transition.
I could talk to you forever.
It's such a huge honor that you spent time with me.
The honor is mine.
Thank you so much for getting me essentially to start this podcast by doing the first conversation,
and making me fall in love with conversation itself.
And thank you so much for inspiring so many people in the world with your books,
with your research, with your talking, and with this ripple effect of friends,
including Elon and everybody else that you inspire. So thank you so much for talking today.
Thank you. I feel so fortunate that you're doing this podcast and getting so many interesting
voices out there into the ether and not just the five-second sound bites, but so many of
the interviews I've watched you do. You really let people go into depth in a way which
we certainly need in this day and age. And that I got to be number one, I feel super honored.
Yeah, you started it. Thank you so much, Max.
Thanks for listening to this conversation with Max Tegmark.
And thank you to our sponsors, the Jordan Harbinger Show,
Four Sigmatic mushroom coffee, BetterHelp online therapy, and ExpressVPN.
So the choice is wisdom, caffeine,
sanity, or privacy. Choose wisely, my friends, and if you wish, click the sponsor
links below to get a discount and to support this podcast. And now, let me leave you
with some words from Max Tegmark: If consciousness is the way that information
feels when it's processed in certain ways,
then it must be substrate independent.
It's only the structure of information processing that matters, not the structure of the matter
doing the information processing.
Thank you.