Lex Fridman Podcast - Steven Pinker: AI in the Age of Reason
Episode Date: October 17, 2018
Steven Pinker is a professor at Harvard and before that was a professor at MIT. He is the author of many books, several of which have had a big impact on the way I see the world for the better. In particular, The Better Angels of Our Nature and Enlightenment Now have instilled in me a sense of optimism grounded in data, science, and reason. Video version is available on YouTube. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube where you can watch the video versions of these conversations.
Transcript
Welcome to the Artificial Intelligence Podcast.
My name is Lex Fridman.
I'm a research scientist at MIT.
Today is a conversation with Steven Pinker.
He's a professor at Harvard, and before that
was a professor at MIT.
He's the author of many books, some of which
had a big impact on the way I see the world for the better.
In particular, The Better Angels of Our Nature
and his latest book, Enlightenment Now, have instilled
in me a sense of optimism. Optimism grounded in data, science, and reason. I really enjoyed
this conversation, I hope you do as well. You've studied the human mind, cognition, language, vision, evolution, and psychology, from
childhood, from the level of the individual to the level of our entire civilization. So I feel like I can start with a simple multiple-choice question.
What is the meaning of life?
Is it A, to attain knowledge, as Plato said,
B, to attain power, as Nietzsche said,
C, to escape death, as Ernest Becker said,
D, to propagate our genes, as Darwin and others have said,
E, there is no meaning, as the nihilists have said,
F, knowing the meaning of life is beyond our cognitive capabilities, as Steven Pinker said, based on my interpretation 20 years ago,
and G, none of the above.
I'd say A comes closest, but I would amend that to attaining not only knowledge,
but fulfillment more generally. That is, life, health, stimulation, access to the living,
cultural, and social world. Now, this is our meaning of life. It's not the meaning of life,
if you were to ask our genes. Their meaning is to propagate copies of themselves, but that is distinct from the meaning that
the brain that they lead to sets for itself.
So do you think knowledge is a small subset or a large subset?
It's a large subset, but it's not the entirety of human striving, because
we also want to interact with people. We want to experience beauty. We want to experience
the richness of the natural world. But understanding what makes the universe tick is way up there.
For some of us more than others, certainly for me, that's one of the top five.
So is that a fundamental aspect, or are you just describing your own preference? Is this
a fundamental aspect of human nature, to seek knowledge? In your latest book, you
talk about the power, the usefulness of rationality and reason, and so on. Is that a fundamental part of human nature,
or is it something we should just strive for?
It's both.
It is, we're capable of striving for it,
because it is one of the things that
make us what we are, homo sapiens, wise man.
We are unusual among animals in the degree to which we acquire knowledge and use it to survive.
We make tools, we strike agreements via language, we extract poisons, we predict the behavior of animals,
we try to get at the workings of plants.
And when I say we, I don't just mean we in the modern West, but we as a species everywhere,
which is how we've managed to occupy every niche on the planet, how we've managed to drive
other animals to extinction. And the refinement of reason in pursuit of human well-being, of
health, happiness, social richness, cultural richness, is our main challenge in the present. That is,
using our intellect, using our knowledge to figure out how the world works, how we work
in order to make discoveries and strike agreements that make us all better off in the long run.
Right, and you do that almost undeniably and in a data-driven way in your recent book, but I'd like to focus on
the artificial intelligence aspect of things and not just artificial intelligence but natural
intelligence too.
So, 20 years ago in the book you've written, How the Mind Works, you conjecture, am I right to interpret things?
You can correct me if I'm wrong, but you conjecture that human thought in the brain may be a result of a massive network of
highly interconnected neurons. So from this interconnectivity,
emerges thought. Compared to the artificial neural networks we use for machine
learning today, is there something fundamentally more complex, mysterious,
even magical about the biological neural networks
versus the ones we've been starting to use over the past 60 years, and that have come to success
in the past 10?
There is something a little bit mysterious about the human neural networks, which is that
each one of us who is a neural network knows that we ourselves are conscious.
Conscious, not in the sense of registering our surroundings or even registering our internal state,
but in having subjective first-person, present tense experience.
That is when I see red, it's not just different from green,
but there's a redness to it that I feel.
Whether an artificial system would experience that or not,
I don't know and I don't think I can know. That's why it's mysterious.
If we had a perfectly lifelike robot that was behaviorally indistinguishable from a human, would we attribute consciousness to it,
or ought we to attribute consciousness to it? And that's something that's
very hard to know. But putting that aside, putting aside that largely philosophical question, the question
is, is there some difference between the human neural network and the ones that we're
building in artificial intelligence that will mean that we're, on the current trajectory, not
going to reach the point where we've got a lifelike robot indistinguishable from a
human, because the way
their so-called neural networks are organized
is different from the way ours are organized?
I think there's overlap, but I think
there are some big differences that the current neural networks,
current so-called deep learning systems
are in reality not all that deep, that is, they are very good
at extracting
high order statistical regularities, but most of the systems don't have a semantic level
of actual understanding of who did what to whom, why, where, how things work, what causes
what else.
Do you think that kind of thing can emerge as it does? So, artificial neural networks are
much smaller, the number of connections and so on,
than the current human biological networks,
but do you think sort of to go to consciousness
or to go to this higher level semantic reasoning
about things, do you think that can emerge
with just a larger network,
with a more richly, weirdly interconnected network?
Let's separate out consciousness,
because consciousness isn't
even a matter of complexity.
A really good, hard one.
Yeah, you could sensibly ask the question of whether
shrimp are conscious, for example, they're not terribly
complex, but maybe they feel pain.
So let's just put that part of it aside.
But I think sheer size of a neural network is not enough
to give it structure and knowledge.
But if it's suitably engineered, then why not?
That is, we're neural networks.
Natural selection did a kind of equivalent of engineering of our brains.
So I don't think there's anything mysterious in the sense that no system made of silicon
could ever do what a human brain can do.
I think it's possible in principle.
Whether it'll ever happen depends not only on how clever we are in engineering these systems,
but whether we even want to, whether that's even a sensible goal.
That is, you can ask the question, is there any locomotion system that is as good as a human?
Well, we kind of want to do better than a human, ultimately,
in terms of legged locomotion.
There's no reason that humans should be our benchmark.
There are tools that might be better in some ways.
It may just be not as...
It may be that we can't duplicate a natural system
because at some point it's so much cheaper
to use a natural system that we're not going to invest more brain power and resources.
So for example, we don't really have an exact substitute for wood.
We still build houses out of wood, we still build furniture out of wood. We like the
look, we like the feel, it has certain properties that synthetics don't.
It's not that there's anything magical or mysterious about wood. It's just that the extra steps of duplicating everything about wood is something we just
haven't bothered with, because we have wood.
Likewise, say, cotton.
I mean, I'm wearing cotton clothing now.
Feels much better than polyester.
It's not that cotton has something magic in it.
And it's not that we couldn't ever synthesize something exactly like cotton,
but at some point it's just not worth it.
We've got cotton.
And likewise, in the case of human intelligence, the goal of making an artificial system
that is exactly like the human brain is a goal that probably no one is going to pursue
to the bitter end, I suspect, because if you want tools that do things better
than humans, you're not going to care whether it does something like humans.
So for example, if you're diagnosing cancer or predicting the weather, why set humans as
your benchmark?
But in general, I suspect you also believe that even if humans should not be the benchmark,
and we don't want to imitate humans in these systems,
there's a lot to be learned about how to create
an artificial intelligence system
by studying the human.
Yeah, I think that's right.
In the same way that to build flying machines,
we want to understand the laws of aerodynamics,
including as they apply to birds, but not mimic the birds.
Right.
They're the same laws.
You have a view on AI artificial intelligence and safety
that from my perspective is refreshingly rational or perhaps more importantly has elements of
positivity to it which I think can be inspiring and empowering as opposed to
paralyzing.
For many people, including AI researchers, the eventual existential threat of AI is obvious,
not only possible, but obvious.
And for many others, including AI researchers, the threat is not obvious.
So Elon Musk is famously in the highly-concerned-about-AI camp, saying
things like AI is far more dangerous than nuclear weapons, and that AI will likely destroy
human civilization. And in February, you said that if Elon was really serious about the
threat of AI, he would stop building self-driving cars,
which he's doing very successfully as part of Tesla.
Then he said, wow, if even Pinker doesn't understand
the difference between narrow AI, like a car and general AI,
when the latter literally has a million times more compute
power and an open-ended utility function,
humanity is in deep trouble.
So first, what did you mean by the statement about Elon Musk should stop building
self-driving cars if he's deeply concerned?
Not the last time that Elon Musk has fired off an intemperate tweet.
Well, we live in a world where Twitter has power.
Yes.
Yeah, I think the...
There are two kinds of existential threat that have been discussed in connection
with artificial intelligence, and I think that they're both incoherent.
One of them is a vague fear of AI takeover, that just as we subjugated animals and less
technologically advanced peoples, so if we build something that's more advanced than us,
it will inevitably turn us into pets or slaves
or domesticated animal equivalents.
I think this confuses intelligence with a will to power.
It so happens that in the intelligence system
we are most familiar with, namely, Homo sapiens,
we are products of natural selection,
which is a competitive process.
And so bundled together with our problem solving capacity,
are a number of nasty traits like dominance and exploitation
and maximization of power and glory and resources
and influence.
There's no reason to think that sheer problem solving
capability will set that as one of its goals.
Its goals will be whatever we set its goals as, and as long as someone isn't building a megalomaniacal
artificial intelligence, there's no reason to think that it would naturally evolve in that direction.
Now you might say, well, what if we gave it the goal of maximizing its own power source?
Well, that's a pretty stupid goal to give an autonomous
system. You don't give it that goal. I mean, that's just self-evidently idiotic.
So if you look at the history of the world, there's been a lot of opportunities where engineers
could instill in a system destructive power and they choose not to because that's the natural
process of engineering. Well, if you're building a weapon, its goal is to destroy people.
And so I think there are good reasons to not build certain kinds of weapons.
I think building nuclear weapons was a massive mistake.
You do?
You think, so maybe pause on that, because that is one of the serious threats.
Do you think that it was a mistake, in the sense that it should have
been stopped early on, or do you think it's just an unfortunate event of invention, that
this was invented?
Was it possible to stop, I guess, is the question I'm asking.
It's hard to rewind the clock because, of course, it was invented in the context of World
War II and the fear that the Nazis might develop one first. Then once it was initiated for that reason, it was hard to turn off,
especially since winning the war against the Japanese and the Nazis was such an
overwhelming goal of every responsible person that there was just nothing that people wouldn't have
done then to ensure victory. It's quite possible if World War II hadn't happened that
nuclear weapons wouldn't have been invented.
We can't know.
But I don't think it was, by any means, a necessity.
Any more than some of the other weapon systems
that were envisioned but never implemented,
like planes that would disperse poison gas
over cities like crop dusters or systems
to create earthquakes and tsunamis in enemy countries to weaponize
the weather, weaponize solar flares, all kinds of crazy schemes that we thought the
better of.
I think analogies between nuclear weapons and artificial intelligence are fundamentally
misguided because the whole point of nuclear weapons is to destroy things.
The point of artificial intelligence is not to destroy things. So the analogy is misleading.
So for the two existential threats from artificial intelligence you mentioned, the first one, I guess...
The first one was the highly intelligent AI takeover. Right, yeah. For the system
that we design ourselves, we give it the goals. Goals are external to the means
to attain the goals. If we don't design an artificially intelligent system to maximize dominance, then it won't
maximize dominance.
It's just that we're so familiar with homo sapiens where these two traits come bundled together,
particularly in men, that we are apt to confuse high intelligence with a will to power, but that's just an error.
The other fear is that we'll be collateral damage that will give artificial intelligence
a goal, like make paper clips, and it will pursue that goal so brilliantly that before we
can stop it, it turns us into paper clips.
We'll give it the goal of curing cancer, and it will turn us into guinea pigs
for lethal experiments.
Or give it the goal of world peace,
and it's conception of world peace is no people,
therefore no fighting, and so will kill us all.
Now, I think these are utterly fanciful.
In fact, I think they're actually self-defeating.
They, first of all, assume that we're going to be so brilliant
that we can design an artificial intelligence that can cure cancer.
But so stupid that we don't specify what we mean by curing cancer in enough detail that
it won't kill us in the process.
And it assumes that the system will be so smart that it can cure cancer.
But so idiotic that it can't figure out that what we mean by curing cancer is not
killing everyone.
So I think that the collateral damage scenario, the value alignment problem, is also based
on a misconception.
So one of the challenges, of course, is that we don't know how to build either system currently,
nor are we even close to knowing.
Of course, those things can change overnight, but at this time, theorizing about it is very
challenging in either direction.
So that's probably at the core of the problem: without the ability to reason about the
real engineering things here at hand, your imagination runs away with things.
Exactly.
But let me sort of ask, what do you think was the motivation and the thought process of
Elon Musk?
I built autonomous vehicles, I studied autonomous vehicles, I study Tesla autopilot.
I think it is one of the greatest currently large scale applications of artificial intelligence
in the world.
It has a potentially very positive impact on society.
So, how does a person who is creating this very good, quote unquote, narrow AI system also seem to be so concerned about
this other, general AI? What do you think is the motivation there? What do you think is the thinking process?
Well, you probably have to ask him, but he is
notoriously
flamboyant,
impulsive, as we have just seen, to the detriment of his own goals
of the health of the company.
So I don't know what's going on in his mind.
You probably have to ask him, but I don't think the distinction between
special-purpose AI and so-called general AI is relevant. In the same way, special-purpose
AI is not going to do anything conceivable
in order to attain a goal. All engineering systems are designed to trade off across
multiple goals. When we built cars in the first place, we didn't forget to install brakes,
because the goal of a car is to go fast. It occurred to people, yes, you want to go fast,
but not always. So you build in brakes, too.
Likewise, if a car is going to be autonomous and we program it to take the shortest
route to the airport,
it's not going to take the diagonal and mow down people and trees and fences, because that's
the shortest route.
That's not what we mean by the shortest route when we program it, and that's just what
an intelligent system is by definition.
It takes into account multiple constraints. The same is true, in fact, even more true of so-called
general intelligence, that is, if it's genuinely intelligent, it's not going to pursue some goal
single-mindedly, omitting every other consideration and collateral effect.
That's not artificial general intelligence,
that's artificial stupidity.
I agree with you by the way,
on the promise of autonomous vehicles
for improving human welfare, I think it's spectacular.
And I'm surprised at how little press coverage
notes that in the United States alone,
something like 40,000 people die every year on the highways,
vastly more than are killed by terrorists.
And we spent a trillion dollars on a war to combat deaths by terrorism, which kills about half
a dozen people a year.
Whereas year in and year out, 40,000 people are massacred on the highways, which could
be brought down to very close to zero.
So I'm with you on the humanitarian benefit.
Let me just mention that as a person who's building these cars, it is a little bit offensive
to me to say that engineers would be clueless enough not to engineer safety into systems.
I often stay up at night thinking about those 40,000 people that are dying and everything
I try to engineer is to save those people's lives. So every new invention that I'm super excited about,
everything new in all the deep learning literature
and CVPR conferences and NIPS,
everything I'm super excited about
is all grounded in making it safe and helping people.
So I just don't see how that trajectory
can all of a sudden slip into a situation
where intelligence will be highly negative.
So you and I certainly agree on that.
And I think that's only the beginning of the potential humanitarian benefits of artificial
intelligence.
There's been enormous attention to what are we going to do with the people whose jobs are
made obsolete by artificial intelligence.
But very little attention given to the fact that the jobs that are going to be made obsolete are horrible jobs.
The fact that people aren't going to be picking crops and making beds and driving trucks and mining coal.
These are, you know, soul-deadening jobs. We have a whole literature sympathizing with the people stuck in these menial, mind-deadening, dangerous jobs.
If we can eliminate them,
this is a fantastic boom to humanity.
Now granted, you solve one problem and there's another one,
namely, how do we give these people a decent income?
But if we're smart enough to invent machines
that can make beds and put away dishes
and handle hospital patients,
well, I think we're smart enough to figure out how to redistribute income, to apportion
some of the vast economic savings to the human beings who will no longer be needed to make
beds.
Okay.
Sam Harris says that it's obvious that eventually AI will be an existential risk.
He's one of the people who says it's obvious.
We don't know when the claim goes, but eventually it's obvious.
And because we don't know when, we should worry about it now.
There's a very interesting argument in my eyes.
So how do we think about timescale?
How do we think about existential threats when we
know so little about the threat,
unlike nuclear weapons, perhaps, about this particular threat?
It could happen tomorrow, right? But very likely it won't. It will likely be a hundred years away.
So do we ignore it? How do we talk about it?
Do we worry about it? How do we think about those things?
What is it?
A threat that we can imagine, it's within the limits of our imagination, but not within
our limits of understanding to actually predict it.
But what is the "it" that we're afraid of?
AI, sorry, AI being the existential threat.
AI, how?
By, like, enslaving us or turning us into paperclips.
I think the most compelling, from the Sam Harris perspective,
would be the paperclip situation.
Yeah, I mean, I just think it's totally fanciful.
I mean, it is, don't build a system.
Don't give... first of all, the code of engineering is,
you don't implement a system with massive control
before testing it.
Now, perhaps the culture of engineering will radically change; then I would worry.
But I don't see any signs that engineers will suddenly do idiotic things,
like put an electrical power plant in control of a system that they haven't tested first.
Also, all of these scenarios not only imagine an almost magically powered intelligence,
including things like cure cancer,
which is probably an incoherent goal,
because there's so many different kinds of cancer,
or bring about world peace.
I mean, how do you even specify that as a goal?
But the scenarios also imagine some degree of control
of every molecule in the universe, which
not only is itself unlikely, but we would not start to connect these systems to infrastructure
without testing as we would any kind of engineering system.
Now maybe some engineers will be irresponsible, and we need regulatory and legal responsibility implemented
so that engineers don't do things that are stupid by their own standards.
But I've never seen enough of a plausible scenario of existential threat to devote large
amounts of brain power to forestall it. So you believe in the power, en masse, of the engineering of reason, as you argue in your latest
book of reason and science, to sort of be the very thing that guides the development of
new technology so it's safe and also keeps it safe.
Yeah, granted the same culture of safety that currently is part of the engineering
mindset for airplanes, for example.
So yeah, I don't think that that should be thrown out the window and that untested,
all-powerful systems should be suddenly implemented.
But there's no reason to think they are.
And in fact, if you look at the progress of artificial intelligence, it's been, you know,
it's been impressive, especially in the last 10 years or so.
But the idea that suddenly there'll be a step function
that all of a sudden, before we know it,
it will be all powerful.
It'll be some kind of recursive self-improvement,
some kind of "foom," is also fanciful.
Certainly not by the technology that now impresses us, such as deep learning, where you train something on
hundreds of thousands or millions of examples. There are not hundreds of thousands of problems of which
curing cancer is a typical example. And so the kind of techniques that have allowed AI to
improve in the last five years are not the kind that are going to lead to this fantasy of exponential sudden self-improvement.
So I think it's kind of magical thinking.
It's not based on our understanding of how AI actually works.
Now give me a chance here.
So you said fanciful magical thinking.
And in his TED Talk, Sam Harris says that thinking about AI killing all of human civilization is
somehow fun, intellectually.
Now I have to say, as a scientist and engineer, I don't find it fun.
But when I'm having a beer with my non-AI friends, there is indeed something fun and appealing
about it, like talking about an episode of Black Mirror, or considering if a large
meteor is headed towards Earth, like, we were just told a large meteor is headed towards Earth,
something like this. So can you relate to this sense of fun? And do you understand the psychology of it?
That's right, good question. I personally don't find it fun. I find it kind of actually a waste of time
because there are genuine threats that we ought to be thinking about,
like pandemics, like cyber security vulnerabilities,
like the possibility of nuclear war,
and certainly climate change.
This is enough to fill many conversations without that. And I think Sam did put his finger on something,
namely that there is a community, sometimes called the rationality community, that delights
in using its brain power to come up with scenarios that would not occur to mere mortals, to less
cerebral people.
So there is a kind of intellectual thrill
in finding new things to worry about
that no one has worried about yet.
I actually think, though, that
not only is it a kind of fun
that doesn't give me particular pleasure,
but I think there can be a pernicious side to it,
namely that you overcome people with such dread,
such fatalism, that there's so many ways to die,
to annihilate our civilization, that we may as well enjoy life while we can. There's nothing
we can do about it. If climate change doesn't do us in, then runaway robots will. So let's enjoy
ourselves now. We've got to prioritize. We have to look at threats that are close to certainty,
such as climate change, and distinguish those from ones that are merely imaginable,
but with infinitesimal probabilities. And we have to take into account people's worry budget.
You can't worry about everything. And if you sow dread and fear and terror and fatalism, it can lead to a kind of numbness.
Well, these problems are overwhelming and the engineers are just going to kill us all.
So let's either destroy the entire infrastructure of science, technology, or let's just enjoy
life while we can.
So there's a certain line of worry, which I'm worried about a lot of things that you're
hearing.
There's a certain line of worry when you cross.
You'll all have to cross that it becomes paralyzing fear as opposed to productive fear.
And that's kind of what you're highlighting.
Exactly right.
And we've seen that. We know that human effort is not well calibrated against risk,
in that a basic tenet of cognitive psychology is that the perception of risk, and
hence the perception of fear, is driven by imaginability, not by data.
And so we misallocate vast amounts of resources to avoiding terrorism, which kills on average about six Americans a year, with the one exception of 9/11.
We invade countries, we invent an entire new department of government, with massive, massive expenditure of resources and life, to defend ourselves against a trivial risk,
whereas guaranteed risks, you mentioned
traffic fatalities as one of them,
and even risks that are not here,
but are plausible enough to worry about,
like pandemics, like nuclear war,
receive far too little attention.
In presidential debates,
there's no discussion of how to minimize the risk of nuclear war,
but lots of discussion of terrorism, for example.
And so I think it's essential to calibrate our budget of fear, worry, concern, planning
to the actual probability of harm.
Yep. So let me ask this question.
So speaking of imaginability, you said that it's important to think about reason.
And one of my favorite people, who likes to dip into the outskirts of reason through
fascinating explorations of his imagination, is Joe Rogan.
Oh yes.
He used to believe a lot of conspiracies, and through reason
has stripped away a lot of his beliefs in that way.
So it's fascinating actually to watch him through rationality, kind of throw away the ideas
of Bigfoot and 9/11 conspiracies.
I'm not sure exactly.
Chemtrails. I don't know what he believes in.
Yes, okay.
But he no longer believes in them.
That's right.
No, he's become a real force for good.
Yep.
So you were on the Joe Rogan podcast in February and had a fascinating conversation, but
as far as I remember, didn't talk much about artificial intelligence.
I will be on his podcast in a couple of weeks.
Joe is very much concerned about existential threat of AI.
I'm not sure if you're aware of that, which is why I was hoping that you'd get into that topic.
And in this way, he represents quite a lot of people who look at the topic of AI from
10,000 foot level.
So as an exercise of communication, you said it's important to be rational and reason about
these things.
Let me ask, if you were to coach me
as an AI researcher about how to speak to Joe
and the general public about AI, what would you advise?
Well, the short answer would be to read the sections
that I wrote in Enlightenment Now where I think about AI.
But a longer answer would be, I think, to emphasize,
and I think you're very well positioned as an engineer,
to remind people about the culture of engineering, that it really is safety-oriented. In another discussion in Enlightenment Now, I plot
rates of accidental death from various causes: plane crashes, car crashes, occupational accidents,
even death by lightning strikes, and they all plummet.
Because the culture of engineering is how do you squeeze out the lethal risks?
Death by fire, death by drowning, death by asphyxiation, all of them drastically declined
because of advances in engineering.
But I gotta say, I did not appreciate it until I saw those graphs.
And it is because, exactly, of people like you who stay up at night
thinking, oh my god, is what I'm inventing likely to hurt people, and who deploy ingenuity
to prevent that from happening. Now, I'm not an engineer, although I spent 22 years at MIT,
so I know something about the culture of engineering. My understanding is that this is
the way you think if you're an engineer.
And it's essential that that culture not
be suddenly switched off when it comes
to artificial intelligence.
So I mean, that could be a problem,
but is there any reason to think it would be switched off?
I don't think so.
And one problem is, there are not enough engineers speaking up
for this way, for the excitement,
for the positive view of human nature, of what you're trying to create,
the positivity.
Like, everything we try to invent is trying to do good for the world.
But let me ask you about the psychology of negativity.
It seems just objectively, not considering the topic.
It seems that being negative about the future makes you sound smarter than being positive
about the future.
In regard to this topic, am I correct in this observation
and if so, why do you think that is?
Yeah, I think there is that phenomenon.
That, as Tom Lehrer, the satirist,
said, always predict the worst and you'll be hailed as a prophet.
It may be part of our overall negativity bias.
We are, as a species, more attuned to the negative
than the positive.
We dread losses more than we enjoy gains.
And that might open up a space for prophets
to remind us of harms and risks and losses
that we may have overlooked.
So I think there is that asymmetry.
So you've written some of my favorite books, all over the place,
starting from Enlightenment Now to The Better Angels of Our Nature,
The Blank Slate, How the Mind Works, the one about language,
The Language Instinct. Bill Gates, a big fan too, said of your most recent book that it's
his new favorite book of all time.
So for you as an author,
what was the book early on in your life
that had a profound impact on the way you saw the world?
Certainly this book, Enlightenment Now, was influenced
by David Deutsch's The Beginning of Infinity.
There's a rather deep reflection on knowledge
and the power of knowledge to improve the human condition.
It ends with bits of wisdom such as that problems
are inevitable, but problems are solvable,
given the right knowledge,
and that solutions create new problems
that have to be solved in their turn.
That's, I think, a kind of wisdom
about the human condition that influenced the writing of this book.
There's some books that are excellent but obscure, some of which I have on my
page on my website. I read a book called A History of Force, self-published by
a political scientist named James Payne, on the historical decline of violence,
and that was one of the inspirations for The Better Angels of Our Nature.
What about early on? Looking back, when you were maybe a teenager?
I loved a book called One, Two, Three... Infinity. When I was a young adult, I read that book by George
Gamow, the physicist, which had very accessible and
humorous explanations of relativity, of number theory, of dimensionality, of high multi-dimensional spaces, in a way that I think is still delightful 70 years after it was published.
I also liked the Time Life Science series.
These were books that would arrive every month that my mother subscribed to.
Each one on a different topic, one would be on electricity, one would be on forests, one
would be on evolution, and then one was on the mind.
And I was just intrigued that there could be a science of mind, and that book I would
cite as an influence as well.
That's when you fell in love with the idea of studying the mind.
That's one thing that grabbed you.
It was one of the things, I would say.
I read as a college student the book Reflections on Language by
Noam Chomsky.
He spent most of his career here at MIT.
Richard Dawkins' two books, The Blind Watchmaker and The Selfish
Gene, were enormously influential.
Partly for the content, but also for the writing
style, the ability to explain abstract concepts in lively prose. Stephen Jay Gould's first collection,
Ever Since Darwin, also an excellent example of lively writing. George Miller, a psychologist
that most psychologists are familiar with, came up with the
idea that human memory has a capacity of 7 plus or minus 2 chunks. That's probably his biggest
claim to fame. But he wrote a couple of books on language and communication that I'd read as an
undergraduate. Again, beautifully written and intellectually deep.
Wonderful. Steven, thank you so much for taking the time today.
My pleasure. Thanks a lot, Lex.