Factually! with Adam Conover - How Humans Took Over the World with Yuval Noah Harari
Episode Date: November 16, 2022. We've got a big one this week: best-selling historian and author of Sapiens and Unstoppable Us, Yuval Noah Harari, joins Adam to explain how humans took over the world, discuss his fears about A.I., and respond to some prominent critiques. Pick up his books at http://factuallypod.com/books Promo Code: FACTUALLY Vanity URL: hover.com/FACTUALLY Learn more about your ad choices. Visit megaphone.fm/adchoices See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Transcript
You know, I got to confess, I have always been a sucker for Japanese treats.
I love going down a little Tokyo, heading to a convenience store,
and grabbing all those brightly colored, fun-packaged boxes off of the shelf.
But you know what? I don't get the chance to go down there as often as I would like to.
And that is why I am so thrilled that Bokksu, a Japanese snack subscription box,
chose to sponsor this episode.
What's gotten me so excited about Bokksu is that these aren't just your run-of-the-mill grocery store finds.
Each box comes packed with 20 unique snacks that you can only find in Japan itself.
Plus, they throw in a handy guide filled with info about each snack and about Japanese culture.
And let me tell you something, you are going to need that guide because this box comes with a lot of snacks.
I just got this one today, direct from Bokksu, and look at all of these things.
We got some sort of seaweed snack here.
We've got a buttercream cookie. We've got a dolce. I don't, I'm going to have to read the
guide to figure out what this one is. It looks like some sort of sponge cake. Oh my gosh. This
one is, I think it's some kind of maybe fried banana chip. Let's try it out and see. Is that what it is? Nope, it's not banana. Maybe it's a cassava
potato chip. I should have read the guide. Ah, here they are. Iburigako smoky chips. Potato
chips made with rice flour, providing a lighter texture and satisfying crunch. Oh my gosh, this
is so much fun. You've got to get one of these for yourself. And get this: for the month of March,
Bokksu has a limited edition cherry blossom box, and 12-month subscribers get a free kimono-style
robe. And while you're wearing your new duds, you'll be learning fascinating things
about your tasty snacks.
You can also rest assured that you have helped to support small family run businesses in
Japan because Bokksu works with 200 plus small makers to get their snacks delivered straight
to your door.
So if all of that sounds good, if you want a big box of delicious snacks like this for yourself,
use the code factually for $15 off your first order at Bokksu.com.
That's code factually for $15 off your first order on Bokksu.com.
I don't know the truth. I don't know the way. I don't know what to think. I don't know what to say.
Yeah, but that's alright. Yeah, that's okay. I don't know anything.
Hello and welcome to Factually. Thank you so much for joining me once again as I talk to an incredible expert about all the amazing shit that they know that I don't know and that you might not know.
Both of our minds are going to get blown together and we are going to have a fantastic time doing it.
Now, if you're watching this on YouTube, I want to remind you that this is also an audio podcast, so search for Factually wherever you get your podcasts.
If you're listening on your podcast player, I want to let you know that this episode is available on
YouTube, if you want to hear me talk to our guest today in beautiful HD. And I also want to remind
you that I am on tour right now. If you live in or near Raleigh, North Carolina, please come see me
this weekend at Good Nights Comedy Club. I'm doing a brand new hour of stand-up, and I'd love to see you there.
And finally, if you want to support this show, please head to patreon.com slash adamconover.
For just five bucks a month, you get every episode of this show ad-free.
You can join our community Discord.
We even do a book club.
It is so much fun.
Hope to see you there, patreon.com slash adamconover.
Now, let's talk about this week's episode, because it is a big one. You know, one of my favorite things about doing this podcast
is that I get to talk to some of the most intelligent, prominent, influential thinkers
in the world, and today's episode is definitely an example of that. My guest today is Yuval Noah
Harari, one of the best-selling non-fiction authors in the world. His book, Sapiens, is this
incredibly entertaining, ambitious history of Homo sapiens all the way from thousands of years ago
to today. His work is chock full of big, interesting ideas that draw from a wide variety
of academic disciplines. He is an incredible communicator. Every single page of his books
is crammed with ideas that leap off the page and stick
in your mind for years to come.
I have had so much fun reading him over the years, and the ideas in his books have really
stuck with me.
But Harari is also willing and comfortable to make broad predictions about the future
of humanity, something that, you know, not a lot of historians generally do.
And this has made him beloved by Silicon Valley.
This guy's done Google Talks, TED Talks, he's even sat down with Mark Zuckerberg.
The techno elite love him.
And like anyone with such a high profile who addresses such a broad range of issues,
Harari has come in for some criticism from the academic community.
We've actually featured some of that criticism on this show
when David Wengrow came on to
talk about his book with David Graeber, The Dawn of Everything, which pushes back quite
a bit against some of the stories that Harari tells in Sapiens.
So after years of reading and enjoying Harari's work, and reading some of these critiques,
when I had the opportunity to interview him on this show, I knew I didn't want to just
do another, hey let's sit down with a great man and learn about how he sees the world type of interview. I wanted to push and prod at him a little bit to
get him to engage with some of these criticisms so that we could hear his responses to them.
And you know what? I am so gratified that he was game to have that conversation. This interview
ended up being so fascinating. I was a little bit skeptical about some of his claims. He blew my
mind plenty of times. We staked out some areas of agreement and disagreement and overall had an
incredibly fruitful, thought-provoking conversation that I'm going to be thinking about for a long
time to come. So without further ado, let's get to this interview with Yuval Noah Harari.
Yuval, thank you so much for being on the show.
It's good to be here.
It's an honor to have you.
I read your famous book, Sapiens, a number of years ago.
Influenced me quite a bit, as it has so many other people.
First, let's talk about the new book that you have out, which I do not have a physical copy of, but we'll throw some images on.
Tell me about it.
So it's called Unstoppable Us.
And it's aimed at children aged 8 to 12, telling the history of how humans took over the world.
Why do we control the world and not the chimpanzees or the elephants or the whales or any other animal?
And I think, you know, as a child, one of the most important questions that everybody asks themselves is, who am I? And where did we humans come from? And, you know, each of us
is made of thousands of bits and pieces that came from all over history and all over the world.
You think about the games we play, whether it's football or baseball or whatever,
it came from somewhere,
from some specific time and place in history.
The food that we eat, like you like chocolate.
So chocolate was first discovered by the Olmecs
in Central America about 4,000 years ago.
So every time you eat chocolate,
there is a tiny bit of Olmec in you.
And if you like sugar with your chocolate, so it's sweet.
So sugar was domesticated in New Guinea something like 8,000 years ago.
And that's also a very important part of who we are.
We like sugar.
And going all the way down to our most basic emotions and feelings,
you know, if you wake up as a child or as an adult in the middle of the night,
afraid that there is a monster under the bed. It still happens to me sometimes.
So this is actually a historical memory from millions of years ago when we lived or our
ancestors lived in the African savannah. And there were actually monsters in the night coming to eat
us like lions and cheetahs. And if you woke up
and ran away, you survived. If you didn't, then you didn't survive. So to understand who we are,
I think, means to understand the whole of human history.
Yeah. Now, why write this for kids? I mean, look, there's many adaptations of Sapiens,
so there's a TV show coming
out and things like that. So of course, it's part of capitalism: you have some success,
you adapt it into a number of different products. But in terms of making the story for kids,
what is the importance of that to you? And are you trying to correct anything that kids normally
hear about the story of humanity? Is there some mission behind the book in that way for you?
Absolutely.
Again, I mean, kids are part of this world.
They need to understand how it works.
In many countries, I'm not sure how it is here,
but in many countries they teach kids mostly the history of only their nation, their people,
which is important, but which is not enough.
Again, whether it's
chocolate or whether it's our fears or hopes, they usually come from all over the world. They
are like, I don't know, you love your parents. This was not something that was invented
by American culture or Israeli culture. Again, it comes from millions of years ago. All mammals have a deep connection to their parents.
This is what defines us as mammals.
You know, I don't know, a turtle or something,
the she-turtle just lays eggs on some beach at night,
covers the hole, goes to the ocean, and that's it.
That's the entire parenting.
But the baby ducks follow the mama duck around.
There are some other non-mammals.
Mammals, not just mammals, but mammals and birds.
Yes, they have a very strong connection there.
I mean, as a bird or as a mammal, you don't survive if you don't have a strong connection between parents and children.
Yeah.
So, again, if you try to understand this about us, this is not coming from American culture or Jewish culture or Chinese culture.
This is coming from millions of years of evolution and from human history.
And, you know, the other thing is that when we teach history to kids, we often kind of simplify it or avoid the difficult issues.
And the difficult issues are often the most important, like in writing the new book, Unstoppable
Us.
So we had a big discussion in the team about the issue of corporations.
Do we try to explain what a corporation is to 10-year-olds?
And the answer was, yes, it's very difficult to explain it.
It's a complicated concept, but it's essential because they meet corporations every day.
You know, you have all these books to kids about animals, elephants and sharks and whales, which is important.
Absolutely. I love them.
But how many times a day does an average 10-year-old kid in America meet an elephant or a whale?
It's quite rare.
But they meet corporations every day.
You know, Google is a corporation. TikTok is a corporation. McDonald's is a corporation.
Their favorite sports team is a corporation. So they need to understand it. It's a basic survival skill.
I mean, okay, when we lived in the savannah, you needed to understand lions and elephants.
You met them every day. You needed to know what happens if you meet a lion. We don't meet lions today. We meet corporations. And as a
10-year-old, you need to understand how to beware of corporations, how to treat them in order to
kind of survive your daily life. Well, and you give a very specific account of corporations that I find
really interesting and really valuable, because you describe them as
basically imaginary entities or abstract entities. And that's connected to an entire thread
in Sapiens and in the new one, about, you know, what makes humans
unique is our ability to think abstractly, to have abstract concepts, and to create imaginary
structures.
Yeah. And I think that is a really
interesting thing to tell kids, to tell them, hey, a corporation isn't, you know, created by God. It
isn't a perfect thing that was just, you know, bestowed like the animals. It's not a
natural entity.
Yeah. Humans created them in a very specific way, which also means that we can change them, change
the laws, the rules that govern them.
And this is really, I think, maybe the most important message of the whole book, is that
the world in which we live, with its corporations and nations and religions and its wars and
everything, this is a world we created from our imagination.
And if there is something that you don't like about this world,
if there is something that you find unfair, you can change it.
You know, the laws of physics, you can't do much about them.
The laws of biology, it's not in our hands.
But the laws of a country or the laws that define the rights and duties of corporations, for instance, we imagined these things.
So if you don't like it, it's not easy, of course, to change it.
It's very difficult, but it's doable.
It's been done many times before.
You can do it again.
Well, let's talk about this title, Unstoppable Us.
I find that to be an interesting title because it sounds very positive, right? Or it's very
like, it sounds a little go team, you
know? It's a double meaning.
Yeah, well
I was going to say a bunch of the book
is about humans have caused extinctions
throughout the earth. That's one of the many
ideas from Sapiens that you elaborate on
in the book for kids. And so I find
there a tension there between
the title and what actually we have done.
I mean, the title has a tension built into it
because on the one hand,
we humans are unstoppable
in the sense that we are so powerful
that no other animal can stop us.
Not the elephants, not the alligators,
not the whales, nobody can stop us.
On the other hand,
it also has a more sinister implication, yeah, that
we cannot stop ourselves. No matter what we achieve, we always want more. You have a million
dollars, you want two. You have two million dollars, you want ten. And we have conquered the whole
world, and very often we are causing destruction around the world, not just to other
animals but also to ourselves. And there is no kind of force out there, some responsible adult in charge,
that can intervene and stop us if we're crossing a dangerous line, if we are doing something we
shouldn't do, if we are doing something self-destructive.
So we are really, it's our responsibility to stop ourselves, to draw our own lines,
to draw our own limits.
Nobody else will do it for us.
Well, there are some limits on humanity in terms of, like, what, you know, if we outstrip
the basic resources of the earth or if we cause an ecological calamity
that, you know, causes our own downfall,
you know, there are certain mathematical limits perhaps,
the number of people who fit on earth,
and that, you know, we can,
it's not as though we can overcome
every obstacle in our path, right?
Usually, I mean, even things that we think about as kind of absolute limits,
if people really get together and think about it,
they often find ways around it.
Like you think about the resources of the earth.
They seem to be finite, but every time, we discover new
sources of energy, new sources of material. Like when I think about the ecological threat we are facing,
so some people understand it in terms of limited resources. One day the oil will run out,
or one day we won't have enough, whatever. And this is almost never the case.
200 years ago, nobody cared about oil at all.
People didn't even understand that this is a potentially important source of energy.
Nobody cared about stuff like aluminum or like plastic.
We didn't know how to produce them.
When you look at history,
you see that the resources at our disposal
actually increase all the time.
We use more and more, and as if by magic, there are even more resources available, because we discover new resources or new ways to extract them.
What's really the problem is that in doing so, we destabilize the ecological system.
So something like climate change, it's not a lack of resources.
It's actually the result of humans being able to discover too many resources, energy and so forth,
and using them in a way which destabilizes the whole system. But what's important to appreciate
is that we also have the power to solve this problem. Climate change is not
kind of, again, an external crisis resulting from the laws of physics, which we are just helpless
in the face of. No, we actually have the scientific knowledge and even the economic resources to solve climate change.
It's estimated that by investing something like 2% of the human budget, we can solve climate change.
We need to invest it in developing new and better energy sources, infrastructure and so forth.
But it is within our power.
It's not an unsolvable problem.
And that's, again, another key message of Unstoppable Us is realize how much power we
have, not in order to develop pride, but in order to develop responsibility.
Yeah.
That as a human being, simply by being a human being, you're already far more powerful than any lion or elephant or whale.
One thing that I see today a lot in the world, in the way that we educate kids, also in the
way people think about themselves, is that too often people tell themselves
stories of victimhood.
Like, build their identity around a narrative of victimhood, like how I personally, or the group I belong to,
are victims. And you see it everywhere. Even the most powerful
groups in the world tell themselves this story of victimhood. Now, of course,
some people have a much better justification than others. But I think that for anybody,
building your identity around that, I think it's unhealthy psychologically. It's also to some
extent irresponsible. Because when you view yourself primarily as a victim, the reason it is attractive to people is because it relieves them of responsibility.
If I'm a victim, it means that the big problems of the world, again, like climate change or like global inequality or war or whatever, I didn't cause them.
It's the big powers that cause them.
I'm a victim.
I'm not the cause of the problem.
Secondly, if I don't have much power myself because I'm a victim, it's not my
responsibility to solve these problems. One day,
when I become powerful, then I'll do something, but not now. And again, there are groups in society
which have far better justification for holding this kind of view.
But when you see increasingly everybody,
everybody now wants to be a victim.
Even those in positions of power.
Even those who have enormous power.
Look at entire countries.
Like, I don't know, I listen now to the Russian propaganda about the invasion of Ukraine.
And they present themselves, the biggest country in the world,
one of the most powerful countries in the world:
We are victims. We are threatened. Look what they did to us here. Look what they did to
us there. It's a powerful justification for doing whatever you want. Exactly. It's a kind of blank
check. And if everybody adopts this position, it is extremely dangerous. So one of the messages of the book is that
we have to acknowledge our enormous power
just by being human beings
we are far more powerful
than anybody else on the planet
so it's on us
I love that message
and I'm very fascinated to hear you say it
because one of the strains
I detected in reading Sapiens
and some of your past work and I've heard other people comment on your work is that you often seem like somewhat of a fatalist or somewhat of a pessimist in your work.
Like I think about passages like the depiction of the agricultural revolution as being a trap that was sprung upon us by wheat, right?
That wheat has trapped us into this
and we are now sort of stuck with it.
Or you describe at some points in Sapiens,
you know, these imaginary institutions
that we've built for ourselves as being,
well, we can't just imagine them away now,
now that we're within them,
which to me sort of seems like it removes agency from us
a bit as a species.
The agricultural revolution piece particularly,
it's like, oh, this is something that occurred to humanity
rather than a choice that we made
and that we might have made otherwise
that we might make again differently in the future.
I'm curious if you've felt,
if that depiction is correct,
if you think I'm misrepresenting you
or if you think you've evolved over the years or...
I think two things have to be said about it. I mean, first of all, that many of these things,
it's because it's hard for us to understand the consequences of what we do. You look at the
agricultural revolution. So most of the inventions or most of the new things that people created,
domesticated plants and animals and learning agriculture,
they didn't see the consequences.
They didn't realize what would be the consequences of what they are doing.
They thought, oh, this is a great idea.
We domesticate wheat.
We have a lot more food.
Wonderful.
What could possibly go wrong? And they couldn't see all the things that could go wrong.
For instance, that this would lead to huge inequality within human society.
So yes, there is a lot more food, there is a lot more power, but it is monopolized by a very small percentage of the population,
the kings and the priests and the aristocrats.
And the life of the average person actually became far worse
after the agricultural revolution compared to before.
Similarly, nobody saw the epidemic diseases that this will cause.
I mean, and it's difficult.
I mean, what's the connection between domesticating wheat and having more infectious diseases?
The connection is that when you switch to agriculture, it means you switch to living
permanently in a village or town with a lot more people around because there is a lot
more food and also with all the domesticated animals, the goats, the sheep, the chickens.
And this is how you get epidemics.
Hunter-gatherers moving around in small bands of, say, 20 people or 50 people every night sleeping in a different place, having no domesticated animals, they were almost completely free of epidemics.
Most germs, they come from some animal.
They infect a person.
Then the person infects everybody around.
You have an epidemic.
Now, hunter-gatherers, they don't have goats and sheep. And even if
somebody is infected by some, I don't know, some wild bat, then he or she can infect maybe 10,
20 other people. That's it. No epidemic. But once you have agriculture, so, you know, you imagine
that the first towns and cities in history, people built them as a kind of paradise for humans.
It turned out they are paradise for germs.
If you're a human, it's not so good.
You're living in this crowded place.
If you're a germ, it's fantastic.
Yeah, if it's a germ, fantastic.
Like you jump from a goat to a human.
There are like 2,000 other humans around with all the garbage and all the sewage.
They don't move anywhere.
They can't escape you. You have an epidemic. Nobody saw this coming. And it's good to understand the mechanism, because
we can try to avoid doing the same thing now with new technologies like artificial
intelligence. We are facing the same problems of unintended consequences. AI is going to give us enormous power.
The big question is, will this power again be monopolized by a very small elite, the new priests, the new kings, which will use this to exploit and dominate everybody else?
That's a very big question we should ask about this dangerous technology.
Similarly, okay, it doesn't cause epidemics of organic viruses, but look at our information system.
We now have epidemics of viral information.
I look at the United States.
It has the best information technology in human history.
Yeah.
And suddenly people are unable to hold the conversation.
Like, I don't know, Democrats and Republicans.
Sure.
Again, they can't hold the conversation.
Sure.
What's happening? And it's not that the ideological differences are bigger than in the past.
If you compare the 1960s to what's happening today, the ideological
differences were much bigger in the
60s. But people still
could have a conversation.
And now it's breaking down.
So it seems that
in a kind of analogous
way to the agricultural revolution,
a technology which was
supposed to make things better
had all these unintended consequences that actually made things worse.
Now, this doesn't mean we have no agency.
As humans, our agency comes from the ability to understand what's happening, like with infectious diseases.
So, okay, we now live in a big city.
Everybody crowded together.
Let's have a sewage system. Let's have better hygiene. It took about a thousand years or two to figure out how to build
those, but we managed to eradicate cholera and things like that. And similarly now with AI. So my message
as a historian or as a philosopher, I do tend to focus on the negative scenarios.
Why?
Because you have enough people focusing on the positive scenarios.
The people who develop AI, for instance, the people who develop social media, they naturally focus on the positive scenarios.
They don't need me as a historian coming to again talk about the positive scenarios.
It's my job to say, wait a minute, there are also some dangerous scenarios. I'm not saying that they
are inevitable. They're like a prophecy of doom. If it is inevitable, what's the point of talking
about it? It will happen anyway. The idea is let's point a finger at some of the more dangerous
scenarios, like, I don't know, AI destroying
human privacy with surveillance technology and so forth, and let's think together about how to prevent
this dangerous scenario.
Well, I want to press you about this a little bit, but we have to take a
really quick break. We'll be right back with Yuval Noah Harari.
Yeah, I want to talk more about technology, because I do think that we talk a lot about technology, a lot about artificial intelligence and the sort of prophets of it on this show.
And one of the things I've learned to be skeptical of is the premises that
you hear from the people who are inventing the technologies. And some of that is about what the
dangers are, you know. So, for instance, you know, when you have self-driving cars, you know, if we
accept the premises of the people who are trying to invent the self-driving cars, then we might
worry about a certain set of outcomes and we might redesign, you know, our... You heard people
pitching things like, oh, let's make it impossible for a
pedestrian to cross the street except for when the light allows them, because you don't want to
confuse the AI too much, et cetera, et cetera. And in recent years, it's become clear, unfortunately, that the people who were,
you know, presenting, okay, here are the possible futures self-driving cars will bring us,
and here are the things we should be worried about, well, their premises were entirely wrong
to begin with. Self-driving cars are not very close at all; you know, the technology is
fundamentally very flawed, and humans are actually much better drivers than AI for a
great many reasons, and the technology is probably far off in the future.
And that, you know, the problems that lead to death in our transportation system are
human problems.
They're problems of how we designed our roads, how we designed our transportation system.
And similarly, when I speak to AI critics,
they're not worried about, oh, AI taking over the world
or that sort of thing.
They're worried about the people creating AI
doing boring old human stuff, you know?
For instance, the problem with AI art
isn't that it's going to replace artists.
The problem is that the people who are creating
the AI art algorithms are ripping off artists currently
by putting all their art into a giant database that is able to just create these sort of weird simulacrums of it, etc.
And I know that you as a thinker are very beloved by the Silicon Valley folks.
And I worry sometimes that that may be because... do you feel that you accept their premises of, this is the world that we're going towards?
I accept one premise, which is that AI is going to completely change the future.
That everything, economics, politics, society, you know, AI is a real thing, not as a science fiction scenario, not as some kind of abstract theory, but the real
thing, it's just a few years old. It's just taking its very first baby steps in the world. We haven't
seen anything yet. It should be very, very clear. And the thing about AI, it's really different
from every previous invention in human history. Every previous invention gave humans more power.
This is the first invention in history which threatens to take power away from humans.
Why? Because it's the first invention that can make decisions by itself.
You think about every previous invention, whether it's a spear point in the Stone Age, whether it's an atom bomb.
Ultimately, it's humans who have to take the decision how to use it.
An atom bomb cannot decide by itself who to bomb.
It's always a human.
AI is the first invention that can take decisions by itself about its own usage,
like an autonomous weapon system deciding who to shoot,
and also to take decisions about humans.
Like already today, you apply to a bank to get a loan.
Increasingly, it's an AI, not a human being, that decides whether to give you a loan.
And that's new.
And this is only going to increase.
And that's taking power away from us in a dangerous way.
Another thing about AI is that it threatens to completely annihilate human privacy in a way which was just technically
impossible in previous eras. Now, you always had human actors that want to kind of monitor people,
control people, follow people. You had all these dictators and kings and popes who want
to follow everybody, see what they do. They couldn't do it. If you think about, I don't know,
a totalitarian regime like the Soviet Union in the 20th century. So it's technically impossible
for the KGB to follow everybody all the time. They just don't have enough agents.
You have 200 million Soviet citizens.
You don't have 200 million KGB agents.
And even if you did,
you don't have analysts to analyze the data.
Let's say there is an agent
following everybody 24 hours a day.
At the end of the day,
they write a paper report.
They send it to headquarters in Moscow.
Every day, imagine headquarters of KGB in Moscow getting 200 million paper reports about each person.
It's worthless.
It's just a mountain of paper.
Somebody needs to analyze it, and there is not enough people to do it.
Now AI solves both these problems potentially, and it's very frightening.
First of all, you don't need 200 million agents
to follow everybody around.
You have cameras and microphones and smartphones.
You carry the agent in your pocket.
You paid for it with your own money.
You take it everywhere.
Again, a KGB agent didn't go with you to your toilet.
If you had sex, it wasn't there on the bed with you.
But the smartphone is there.
And secondly, all this data going to the cloud, you don't need human analysts to read it and make sense of it.
This is what AI does, machine learning.
So it's now very frightening that for the first time in history, it's just technically possible to annihilate human privacy completely. And you see countries which are going in that direction. You see what
is happening in China in the last few years with the social credit system. Now, with the whole COVID thing, it's going there. I look at my own country, Israel: we are building a kind of total surveillance regime in the occupied
territories to monitor all the Palestinians. Technology is now a key element in how Israel controls the Palestinians. And this is extremely frightening. Again, this is not
necessarily what the people in Silicon Valley have in mind when they think about developing the technology.
But as a historian, I think it's a good practice for anybody who develops a new technology, whether it's AI or bioengineering or anything.
Take a few minutes.
Just imagine the politician you most hate in the world or you most fear in the world.
What would they do with the technology you are developing? I think that's a great principle, but I'm still not sure that I see what is so entirely
new about these technologies.
I mean, we have previously invented technologies that can operate without human intervention.
The thermostat, you know, a simple dumb thermostat, right, it detects the temperature in my home
and I've set it at a point and I've told it to operate, you know,
to turn on at such and such a temperature.
And I'm not sure I see a distinction of kind between...
Look, I'm more frightened by an autonomous robot
that can decide to take a shot at a person.
I'm concerned about that future.
But I would say that's still a tool
that a human has designed,
and a human is setting up to use.
There is one big difference.
The thermostat does... Unless it's faulty, it does what you instructed it to do.
Yeah.
If the temperature reaches 25 degrees, you start cooling, something like that.
Yeah.
The thing about AI, it learns by itself.
That's the whole thing about machine learning.
It can learn by itself.
It can learn things that no human being ever programmed it
to do. That's why it's so attractive, say, to banks. Yeah. That you give the AI just this
enormous amount of data, and it finds patterns in the data that no human being was able to find.
And this is already now, this is not science fiction about centuries from now.
Already now,
AI is learning to do things
that no human programmed it.
You see it in chess.
That today,
AI is so much better than us at chess
that you have all these problems
with human actors
when they play against other humans
and all the time the danger is
how do we know
they are not cheating and getting advice from an AI? You know how they know? If the human is doing
something extremely creative, all the red lights go on. No, no, no, no, no. That cannot be a human
move. That's an AI move. Yes, but at the end of the day, I mean, yes, the AI algorithm can come up with a decision-making process from the humans who started it in motion to begin with, who decided to set it up.
I mean, an example of this is that Facebook has made a great big deal in their policymaking about like, oh, no, the algorithms might be racist.
We need to make sure the algorithms aren't racist.
How do we make sure?
And without ever looking at the social structure
and the capitalist structure of Facebook
that caused them to create algorithms that were racist.
And what about Facebook is causing racist outcomes?
Not, oh, we made some algorithms and accidentally, oh, we don't know what the algorithms are doing.
No, you guys made the racist thing.
You're part of a racist society, you know, et cetera.
Like that value is embedded in what you're doing.
When I look at the bank, you know, that might make decisions that way.
Well, banks have been discriminatory against people for, you know, hundreds of years, right?
So I understand that technology can make that worse, but I wonder if instead what we need to focus on
is our social technology, our social structures that are leading to these problems. Like,
you know, at the end of the day, humans are going to send people with guns or robots with guns to go shoot
people in other countries. They'll do that whether or not they have an AI that helps
them do it. And so which is really our concern?
I completely agree that we should not forget about all the old problems of, you know, old
boring human beings being racists or whatever. But part of the problem with the new technology with AI,
it makes it much more difficult for us to understand what is happening in the world.
What is the source of the decisions? That's true. Like when you have a human banker,
which systematically discriminates against black people or against women or against Jews,
you can understand, okay, he's racist or she's racist.
This is what is happening here.
Or he's serving at the whims of racist forces.
Yeah.
The thing with AI is that it makes decisions in a completely different way than human beings.
And we find it increasingly difficult to understand its decisions.
When you ask the bank, why did the algorithm decide not
to give me a loan? The bank says, we don't know. I mean, we just trust our algorithm.
And you have some places, like the European Union with the new GDPR rules, where there is a huge, huge problem of explainability.
They have a law saying if an algorithm makes a decision about a
human, the human has a
right to an explanation.
Why didn't you give me a loan? Now,
here's the problem.
Humans consciously
tend to make decisions based on just
two or three data points. That's it.
We can't hold a lot
of different
factors in our mind at the same time and calculate and calibrate them.
Usually, even the biggest decisions in our life, we tend to take on the basis of just one or two factors.
If you missed your best friend's wedding and the best friend asks you,
why didn't you come to my wedding?
And you tell them, oh, my mother was in hospital in a very serious condition. This sounds plausible to us. But if you start enumerating a thousand different reasons,
you know, my dog was a little sick and I had to take him to the vet. And no, no, it's not just
that. I also had this and I also had that. And it was raining. Nobody would accept this kind of explanation. But this is the way that AI makes decisions. It doesn't take just one or two factors; it is able to take together thousands of factors. Oh, why didn't we give you a loan? Well, one reason is that you applied on a Friday, and the AI, based on millions of previous cases, has detected a pattern
that people who apply for a loan on Friday have 0.2% less chance of returning the loan
than people who apply on Monday. And you go, what? This is the reason you didn't give me a loan?
If you told me, I would have applied on Monday. And they say, no, no, no, no, no. It's not the
only reason. We have 99 other reasons.
It's also because of this and this and this and this and this. And this is the way that AI makes
decisions. It's a way that we fundamentally can't understand. If the bank really wants to give you
an explanation, the bank would send you a huge book, thousands of pages of data points.
And the algorithm just went over all this data and, calibrating all these different reasons, decided not to give you a loan. So is it racist? Is it? I mean, we don't even have the conceptual tools to describe what kind of a decision that is. It's really alien to the way that we function. And, you know, so in the U.S., it could be about a loan.
In a place like China, it could be whether you are declared an enemy of the people, an enemy of the state. So why am I now an enemy of the state and can't do this and can't do that and whatever?
The algorithm said you're an enemy of the state.
And go argue with it.
I mean, why?
Well, one reason is that, I don't know, one of your friends has been declared enemy of the state.
So this also influences your social credit system.
And why is he an enemy of the state?
Oh, one reason is because he read a foreign book.
Now, it's not that immediately anybody who reads a foreign book
is an enemy of the state.
It's just one of thousands of different reasons
that human beings just aren't capable of understanding such a system
of making decisions about them.
But this is increasingly what is happening.
And in this sense, I mean, AI is sometimes kind of imagined
as an alien intelligence, like invaders from outer space.
Like it's an alien way of understanding the world and making decisions.
But this alien way is increasingly being used in more and more fields.
And that's, I think, really an existential danger to humanity,
not in the science fiction sense that you have now robots running in the streets killing people.
It's really undermining our humanity.
If we are not careful, we'll end up in a world where human beings can no longer understand their lives.
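An aside for readers: the "thousands of tiny factors" pattern Harari describes can be sketched in a few lines of code. Everything below, the feature count and the weights, is invented for illustration; it is not any real bank's model.

```python
import random

random.seed(0)

# Hypothetical loan model: 1,000 features, each with a tiny learned weight.
# The numbers are made up; the point is only that no single factor decides.
NUM_FEATURES = 1000
weights = [random.uniform(-0.01, 0.01) for _ in range(NUM_FEATURES)]
applicant = [random.uniform(0.0, 1.0) for _ in range(NUM_FEATURES)]

# Each factor nudges the score a little, like "applied on a Friday: -0.2%".
contributions = [w * x for w, x in zip(weights, applicant)]
score = sum(contributions)

largest = max(abs(c) for c in contributions)
print(f"score={score:.4f}, largest single contribution={largest:.4f}")
```

A full "explanation" of this decision is the list of all 1,000 contributions, which is exactly the thousands-of-pages book Harari imagines the bank sending.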
See, for me, what helps me wrap my mind around it, because it's true that AI creates these outputs that you don't know where they came from.
It's a black box that even the people who created it might not know its exact workings.
But I look at the past and I see other things that sort of operate similar to a black box, like the health insurance system.
Why is one person's case denied? Why is one person's premium this and another is that?
What helps me make sense of both cases is when I look back at the humans who put the thing in
place. If I was in China and I was randomly declared an enemy of the state, I wouldn't be
so interested in what was the AI criteria. I'd say, why am I in a situation where some people
are being declared enemies of the state?
Like, that's the fundamental problem.
I don't care if it's a bunch of guys doing it with, you know, ink and paper or if they did it with an AI.
So that's my own sort of remedy to that is to go back and look at the social structures that caused that in the first place.
I'm curious what yours is, though.
Like, what is your remedy to the threat of AI? Well, I mean, it's still within our power to regulate it.
So, you know, there are many kinds of principles that we should adopt. For instance, that your private information, which is being collected, should be used to help you and not to manipulate you.
We have this rule in many other fields of life, like in medicine.
My private physician has a lot of very private, sensitive information about me.
He or she is using it to help me.
They are not allowed to use it to manipulate me, to sell it to a third party.
It's very, very clear in the case of our physicians.
So why shouldn't it be the same with, say, the tech giants and social media?
Another principle is never allow all the information to be concentrated in just one place.
This is the high road to a dictatorship.
It doesn't matter if it's called a government or a corporation.
If somebody has all
the information, they are
effectively a dictator.
So we should prevent this. We should
avoid this. Another very important
principle is that whenever you
increase surveillance
of individuals, you must simultaneously
increase surveillance of the government
and the big corporations. If they know more about us, that could be fine, if at the same time we also know more about them.
Wait, but hold on a second. If we're surveilling the government and the
corporations, who is doing the surveilling? Because I would say, you know, ideally,
if the people get together and put together a mechanism to surveil the corporations,
well, that is de facto a government because under a democracy, that's where a government comes from.
So who is the we who are doing it?
To give a concrete example: you have an app, and you're buying something. You want to know about the corporation whose product you're buying. You want to know, for instance, whether it's paid its taxes honestly or not, how much tax these people paid. Today you need a lot of research to do that. With the same technology of AI and all these databases, you can have a very easy answer that you can immediately get. So it's not necessarily the government supervising them. It's that we need to know more about, again, the taxation.
It's transparency, so anybody at any moment can go look at that information.
Yeah.
And one other very important principle is that whenever you have an algorithm making decisions about people or concerning people, it must always leave room for humans to change.
It's easiest to make algorithms that assume that people don't change.
If you think, for instance, about the media, like if I'm Netflix and I'm building an algorithm
that recommends movies and TV series and so forth, it's easier if I assume that your taste doesn't change.
Then it's easier to predict what you would like to see.
It also has very important kind of other business applications.
I know what kind of new movies to produce.
I invest millions in new movies.
If I can know what people want to see, it makes my job easier.
So there is an incentive there to create a situation where people don't develop new artistic tastes.
It makes my life easier.
But it makes human life impoverished.
We actually want the algorithms to encourage people to develop new artistic tastes
and not lock them into some kind of chamber or bubble.
Now, again, this would make the life of the corporation more difficult.
If I use my enormous power to recommend to people new artistic genres, then it's more difficult for me to predict what they would like to see. Which means I might invest, for instance, millions in a new kind of superhero movie because this is what people wanted to see last year.
But suddenly everybody or some people want to see these film noir
detective stories.
So I'm kind of making
my life harder, but I'm
making human life better.
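The design choice Harari is arguing for, deliberately recommending outside a user's known tastes, can be sketched as a simple exploration rate. The genre names and the 20% figure below are invented for illustration, not any real service's settings.

```python
import random

random.seed(1)

# A recommender that reserves a fixed share of slots for genres outside
# the user's history, so their tastes have room to change.
KNOWN_TASTES = ["superhero", "action"]
ALL_GENRES = ["superhero", "action", "film noir", "documentary", "romance"]
EXPLORE_RATE = 0.2  # one in five picks comes from outside the bubble

def recommend(n_slots):
    picks = []
    outside_pool = [g for g in ALL_GENRES if g not in KNOWN_TASTES]
    for _ in range(n_slots):
        if random.random() < EXPLORE_RATE:
            picks.append(random.choice(outside_pool))  # exploration slot
        else:
            picks.append(random.choice(KNOWN_TASTES))  # familiar slot
    return picks

picks = recommend(1000)
outside_share = sum(g not in KNOWN_TASTES for g in picks) / len(picks)
print(f"out-of-bubble share: {outside_share:.2f}")
```

The trade-off Harari names is visible in the knob itself: raising `EXPLORE_RATE` makes the user's future taste harder to predict, and that unpredictability is the cost the platform would have to accept.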
And this is true in so many other cases.
Even if you look at the enemy-of-the-state example,
it's easier to
build a system that assumes people
don't change. If 10
years ago or 20 years ago,
you supported some kind of extremist position,
it still defines who you are today.
You can't change.
For the police, it's easier.
For the person, it's...
Part of what is happening today in so many areas
is that you can think about the whole of life
becoming like one long
job interview. Like if you now apply to a job, something you did 10 years ago, 20 years ago,
it suddenly pops up there and defines who you are today. You told a racist joke 10 years ago,
you yelled at somebody 15 years ago, it pops up: hey, that's who you are. We know who you are.
You're not getting this job.
And no, I mean, the more data we accumulate on people,
the more powerful all these AI tools are,
the more important it is to bake into the system the value of change, that people can change.
Otherwise, again, like
you are 20 years old, you go to some party,
you have to think,
this can come back to meet me down the road in 20 years when I want to be, I don't know, a Supreme Court justice.
Or when I want to, I'm applying
to this job. Well, that can happen today.
I mean that, you know, we've had many
Supreme Court Justices who had their confirmation
hearings, things from when they were children came up.
Exactly.
And when you connect this to new technology, which never forgets anything.
I mean, in the past, you also had this problem, but at least not all your life was a job interview because most of what you did was not recorded anywhere.
So you had some time off from the job interview.
Now, all of life, if we are not careful,
all of life becomes this.
There is not a moment which is kind of a pause.
You can do now anything you want.
It won't be on the record.
No, everything is on the record.
So it's also kind of like a competition for status.
So all animals have competition for status, but it's not that their entire life, every moment, they compete for status. Now, with all these new social media and likes and all that, anything you do at any moment is still part of the status competition. This is extremely stressful. You don't have any moment you can just relax.
I guess the thing that strikes me, though, is everything you're saying makes sense, but it presupposes that AI as it develops will be powerful and good and useful. And where I come from, and again, I'm a bit of a skeptic on this point, all the AI that I see sucks ass.
It's fucking terrible.
I mean, talk about the Netflix recommendation algorithm.
They're very proud of it.
It's, you know, very advanced.
It's terrible.
It recommends me things that I hate.
You know, and I say this as someone who has a show on Netflix.
Everyone I've talked to who has a show on Netflix said,
I made an adult animated show
that fit these characteristics.
It didn't show it to people
who watch the other shows like that.
YouTube has an algorithm,
but we know that it pushes people towards shitty,
very divisive, very misinformation-filled content.
I'm on TikTok.
But the question you need to ask is,
what is the aim of the algorithm?
If the algorithm's aim, like in many cases, is to keep you as long as possible on the platform, it doesn't care that it pushes you to see all this shitty content and conspiracy theories.
And true, maybe by Netflix's.
And that's the danger.
The algorithm, again, it wasn't given the job of dividing society against itself.
It wasn't given the job of making people more extreme in their opinions.
It was given the job of making people stay longer on the platform.
And the algorithm discovered largely by itself the weaknesses of human psychology and the weaknesses of individual psychology.
The algorithm, just by trial and error discovered that rage is
good. And
you know, kind of middle of the way
opinions, they are less good
for attracting attention.
And the algorithm
also managed to discover the individual
weaknesses of each person. So if
one person likes to watch, I don't
know, car accidents,
they will show him more and more car accidents.
It gets to know you personally.
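What "discovered by trial and error" means mechanically can be sketched as a multi-armed bandit: the algorithm is rewarded only for engagement and drifts toward whatever engages most. The content labels and engagement probabilities below are invented; nothing here models any real platform's system.

```python
import random

random.seed(2)

# Epsilon-greedy bandit: pick a content type, observe whether the user
# engages, and favor whatever has engaged most so far.
ENGAGE_PROB = {"calm": 0.2, "funny": 0.4, "rage": 0.7}  # invented rates
counts = {k: 0 for k in ENGAGE_PROB}
wins = {k: 0 for k in ENGAGE_PROB}

def pick(eps=0.1):
    untried = [k for k in counts if counts[k] == 0]
    if untried:
        return random.choice(untried)       # try each type at least once
    if random.random() < eps:
        return random.choice(list(ENGAGE_PROB))  # occasional exploration
    return max(counts, key=lambda k: wins[k] / counts[k])  # exploit best

for _ in range(5000):
    choice = pick()
    counts[choice] += 1
    wins[choice] += random.random() < ENGAGE_PROB[choice]

# Nobody programmed "show rage"; the feedback loop alone pushed it there.
print(counts)
```

The objective handed to the code never mentions rage or division, which is exactly Harari's point: those emerge from optimizing engagement, not from anyone's stated intent.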
And that's a new thing in history.
I mean, throughout history, you had these kinds of politicians and leaders and so forth who are trying to grab people's attention by, for instance, inspiring rage or fear or whatever.
But they usually aimed for a kind of lowest common denominator.
They couldn't fit their message to different people individually.
Now the algorithms get to know you personally.
I don't care about all the other people, just you.
What would grab your attention? And this is a kind of
power that we never encountered before in history. You know, there were always people who knew us
very well, but they are like our immediate family members. Like my mother knows well my weaknesses,
but she's just my mother. She doesn't know the weaknesses of all
the other millions of people around. Sure. But the algorithms, to the extent that they know us,
know us very shallowly. I don't feel, for instance, everyone says, I'm on TikTok, I post on TikTok,
people scroll it, and people say, oh, it knows me so well. I feel that's an illusion. I don't feel
that it actually knows anything about me.
I think it's very narrowly optimized towards something,
and it might be well optimized towards that,
for that one company's business purposes.
But I don't work for that company.
I don't know what its business purposes are.
And a lot of companies are developing algorithms and AI
that aren't even good at what they want them to do.
And so I hear you talk about this and I don't think, I think we have more points
of agreement than disagreement, but I'm a little bit more worried about a world where
we accept all of the, you know, technologists' claims of AI is coming, it's going to be super
powerful, we can't stop it, we need to redesign our society to take account for it. We need to
regulate it in all these particular ways that they propose. And then what they actually build
ends up being just shitty. Like just the algorithms don't work that well. The self-driving cars don't
drive that well. And the rest of us are, you know, sort of compensating in order to, you know,
avoid the worst excesses where we're putting bars up at the crosswalk
so that we can't cross into the street anymore,
when in fact everybody would have been better off
if we had all kept driving our own cars,
or better yet, built public transportation,
which is a low-tech solution
that doesn't require
any sort of algorithmic
voodoo to be done about it.
I don't know how any of that strikes you.
I think, first of all,
that we haven't seen anything yet. I mean the technology
is just making its really
first baby steps.
So I wouldn't kind of write it off yet. They've been working on AI for like 50 to 70 years or so. That's a very short time in historical terms, and even in those 50 to 70 years, for most of the time it was just theoretical.
I mean, the actual applications out there, it's really just the last 10 years that it
has become a kind of major force.
And that's kind of, you know, looking from the long-term historical perspective, you
think about the Industrial Revolution.
So it's like coming to somebody in 1800 and saying, you know, this industrial revolution, it's just hype.
OK, so you have this kind of steam engines with coal.
Do you really think that will replace horses with something which runs on steam?
Really?
I mean, so it takes time.
And of course, you can say of the Industrial Revolution: eventually, we figured it out. With all the doomsday prophecies about industry, that it will replace people in the workplace and will create this problem and that problem, we eventually figured
it out. It never replaced people. It just put people to work in the factories doing different
things under worse conditions. I have that same concern about when you hear, oh, AI will replace everyone's jobs.
Well, I think they're going to figure out some way to pay people minimum wage to do
bullshit no matter what.
They'll just be doing it maybe under AI supervision or something along those lines.
So that's the problem, I think, that over time we learn how to kind of adjust to these
new technologies, but the tuition
fees can be extremely high
and we don't have a lot of margins
of error. If you look at the industrial revolution
so yes, in the end
we kind of figured out how
to harness the power of industry
for the benefit of most
people, not just a tiny elite.
But on the way there
people experimented with
things like communism, with things like fascism, with things like imperialism. Communism, fascism
and imperialism were three kind of experiments in how to build an industrial society, which cost
the lives of hundreds of millions of people.
And the same danger is with the new technologies of the 21st century, that we again experiment with new totalitarian regimes, with new types of imperialism.
Like we had the old imperialism of the 19th century.
Now we increasingly have this kind of data imperialism.
In the 21st century, to control a country, you no longer need to send in the soldiers.
We increasingly just need to take out the data. Imagine a country somewhere in the world in 20
years when somebody in Beijing or in San Francisco has the entire personal data of every politician and journalist and judge and military officer in that country.
Is it still an independent country or did it become a data colony?
And similarly, you think about controlling people's attention.
So it's not just the algorithms.
Even forget about the algorithms.
It's just the ability to kind of control people's attention. Like, I don't know, China invades Taiwan,
say in 2025. You're in a country where half the people get the news from TikTok and the government
needs to decide whether it is with the Chinese or with the Americans. And half the population, which are on TikTok,
they see these very catchy images of crowds in Taiwan waving Chinese flags
and being very happy about that the Chinese are coming to liberate them.
And you have all these explanations about how Taiwan was always part of China
and things like that.
The other part of the population, which gets news from, I don't know, Facebook,
they see completely different images.
Like, I don't know, the Taiwanese sinking Chinese ships with torpedoes
or crowds in Taipei vowing to fight to the death.
And this is now a power that most countries no longer have: they are not able to kind of control their information space, either data being taken out or information being sent in.
So this is a completely new kind of imperialism.
Yeah.
And I'm not saying that this is deterministic.
It's bound to happen.
I'm not saying that we can't regulate it, but we need to kind of, okay, everything we know about imperialism in the 20th century, it's good to know that.
But imperialism in the 21st century may look very different because of the new technologies.
And the technologies will never be perfect the same way that the technologies of the 19th and 20th century were not perfect.
But they completely changed human life, very often in unexpected ways. Like, again, going back to the agricultural revolution, nobody realized that domesticating wheat and goats would lead to epidemics.
What's the connection?
So it's the same kind of issues that we
will be facing in the 21st century.
I'm really glad you brought it back to the agricultural revolution
because I want to ask you a little bit more about that.
But let's put a pin in AI and
take another quick break. We'll be right back with more Yuval Noah Harari.
So we're back with Yuval Noah Harari. Look, this book made a big impact on me. I read it a number of years ago, and so much of what you wrote about really stuck with me and planted questions in my mind that only paid off years later. Your work has actually come up in prior podcast episodes that we've done over the last couple of years. One I'm really excited to get your reaction to: we had David Wengrow on, who wrote with the late David Graeber
a book called The Dawn of Everything.
A wonderful book. I really, really enjoyed it.
I really loved this book, and I
loved the
sort of contrarian position they take towards
not just your book, but many other folks
who have written histories of humanity.
And one specifically was about the
agricultural revolution, which you depict, as do many others,
as a trap that wheat lured us into by tricking us into propagating itself and building our
societies around it.
And that that was a big inflection point in human history.
The account that they give, and you've read the book, but just for folks who haven't heard the interview, is that there was no clear inflection point; that humans for thousands of years would experiment with agriculture, do it some parts of the year, then abandon it. There were societies that would begin farming and do it for a couple hundred years and then stop and go back to hunting and gathering, and so on. And their overall critique, I think, is that
contrary to the accounts we're often given, of, you know, humanity was like this: humanity was in happy bands of hunter-gatherers, then this happened, and then we were stuck this way, and history is a line, it goes in one direction, and we're sort of stuck where we are. Their overall critique is that humans have experimented with many different forms of organization; sometimes they will create a state and then abolish it, or create hierarchies and abolish them. We're not so trapped as we think we are, and we can have greater imagination about the way that we build the world. And they have a theory for why, at present, we feel more trapped in these states. But I'm curious what your reaction to that argument was, reading it.
I think what they say is less different
than what most researchers said previously.
Nobody that I know imagined the agricultural revolution
as this single moment when everything changed.
I mean, it is a process.
It took thousands of years.
There was indeed a lot of experimentation.
But I do think that, beyond a certain point, there is no going back. It's not that everybody
adopted agriculture.
It's that some
of the societies that adopted agriculture
became so much
more powerful that
even if other societies said
no, we don't want that, or we want to go back,
it became impossible, because there were just too many farmers, and they're just too powerful.
So it was like, either join us or disappear. And we saw it happening all over the world.
And with regard to the question of, you know, humans experimenting with different things, one of the things that struck me about the book was that they kind of argued that humans can have more free societies than we currently have. But
what was lacking for me was the actual historical examples. Because when they say there
can be really free societies, I expected them to start with, you know, just look at New Zealand,
look at Canada, look at Denmark. But they said, no, no, no, no, no. These are actually oppressive
societies. It's just a veneer of freedom. Actually, Denmark is an extremely oppressive regime.
We don't want Denmark.
So I said, OK, that's interesting.
So what is your example of a large-scale free society?
Because when you look at small-scale societies, again, like whether it's hunter-gatherers or whether it's small agricultural villages, I get that.
It's easier when it's 5,000 people as opposed to 5 million.
For me as a historian, the big question is always about scale.
Many things that work well on a small scale, beyond a certain level, they change their
character.
So the kibbutz was quite nice.
The Soviet Union, less so.
And so I was wondering, what is their big example of a really free, large-scale society?
Large-scale, I'm talking about millions of people living together.
And when they gave examples from ancient history, so again, what was striking is the usual suspects are not there.
They don't talk about Athens. They don't talk about Republican Rome. And I get it because in Athens, you had just 10% of the population having
political rights and you have all the slaves and all the women that don't have rights. And so,
and the same is in Rome. So, okay. They're anthropologists and those are the classical
examples and they're wanting to go further afield and tell
us about all the civilizations we don't know
about that haven't been so well covered.
This was like, I was waiting for, so what's the example?
And then they came up
with a list of examples
which, for me, were extremely
unconvincing because
they just don't have the data, and
they replaced the data with imagination.
So, they give us an example, Teotihuacan, which, again, is not a small tribe or a small agricultural village.
It's a huge city with a huge territory.
So this should be a good example.
But we don't actually know what Teotihuacan society looked like.
So they give archaeological evidence that in Teotihuacan there was public housing.
Now, we don't know that for sure, because we have the kind of plan of the houses, and they all look kind of the same, so it gives the impression it was built as a kind of massive
public effort, and not every person building their own house.
Okay, I'll go with that.
But we had the same thing in the Soviet Union.
Like if somebody in 2,000 years said the Soviet Union was this amazing, wonderful free society because they had archaeological excavations in Moscow and Leningrad and discovered all this public housing. No, no, no, no, no, that doesn't cut it.
And we don't really know
what was the political system of Teotihuacan.
We are not sure if they had slaves, for instance, or not.
So instead of taking an example we know a lot about
from ancient history like Athens,
which is problematic because when you look at the details,
you know, if Denmark is not a free society, so you think ancient Athens was a free society?
No, no, no.
So they go to Teotihuacan, or they go to Uruk in the pre-dynastic period.
Now, we know very little about how Uruk actually functioned in the pre-dynastic period.
So it's a kind of,
you have enough details
that it's not just your imagination.
You're not inventing something
completely from scratch.
But then you don't have,
you have just a few bones
that you can kind of reconstruct
a society around,
but most of it I felt
was not convincing enough.
So I agree with many of their criticisms of previous theories. But the kind of crown jewel of the book is the claim that humans can construct large-scale societies which are really free,
and not these kinds of ersatz freedom like in Denmark or in Canada.
I found this, if they're making it as a philosophical claim about a possible future,
you know, anything is possible.
But if they're making it as a historical claim, that humans experimented with different systems, then we don't have any example of what they define as a really free society.
We have the Denmarks of the world.
Yeah, that's about as free as we can get.
But we also know that, I mean, Denmark living under capitalism,
living under even democracy,
there's a certain amount of despotism still under those systems.
You're not perfectly free.
Every person must take orders from other people.
You're not free to create your own society, etc.
But part of, I mean, again, as a historian, I'm very suspicious of utopian thinking.
They look nice.
Often in history, they tend to lead to very dark places.
And you see it many times in history. People come with these ideas of freedom and love and whatever,
and somehow it gets to gulags and concentration camps and guillotines and mass murder.
And there is a reason for that.
Because if you have a vision that a utopian society is possible
and we need to find a way to get there,
then you start seeing your political rivals not simply as people who think differently from you; you start seeing them as evil.
Because they don't just have some other political views.
They are preventing the establishment of utopia.
We could have paradise on earth if it wasn't for these people. And especially
when things don't turn out as you expect, again, like in the Soviet Union, then you
start blaming these enemies, these evil people more and more for the failure to achieve utopia.
I mean, if you try to do a revolution and it fails, then one thing.
But you do the revolution.
You gain power,
and yet you fail to create
this wonderful utopian society
that you promised.
So what do you do?
You don't say,
oh, we made a mistake.
We can't do it.
You say, no, it's these people.
And because you have the power,
you can send these people now to the gulag.
Yeah.
And, you know, the same thing. Look at Christianity.
So you have these Christians in the early centuries and they are this tiny oppressed minority.
And they say, we have the recipe for kind of the kingdom of heaven on earth.
And there will be no more oppression and no more hatred, only love.
And then they gain power in the Roman Empire.
The worst thing that ever happened to them.
They actually conquered the empire.
Yeah.
And then you don't get the kingdom of God on earth.
You don't get this wonderful place full of love and compassion.
Yeah.
You get the Inquisition.
Look, I've been to Salt Lake City.
It's not my favorite city on earth, even though
that's the Mormon version of heaven on earth.
You know? And the thing is,
it's not so...
You, if you accept
that utopia is impossible,
then you
can start looking at a place like Salt Lake
City and say, well, you know, compared to most places, if I had to choose where to be born as a human being throughout human history,
and one of the choices is Salt Lake City in the early 21st century, I would take it.
It's probably better than most places in history to be born. But it's not utopia.
And that's part of being human, and part of accepting history, that there are very few laws in history.
But one of the laws of history is that there is no redemption.
It's absolutely, as far as we know, not possible to build a perfect society on earth.
Even the Christians and many of the other religions,
they got it.
And this is why they kind of postponed it.
They kind of outsourced it to the afterlife.
I mean, the early Christians,
they promised heaven on earth.
But eventually, they realized this is not working.
So they said, no, no, no, it's not going to happen on this planet, in this life. It's later. It's after you die, this is when you get it.
Yeah. And we understand that, or at least I understand that, as being somewhat of an inimical worldview, because it means you don't ever improve the world that you currently live in, because you're always worried about the world later.
And look, I agree with you about utopian thinking
and exactly that problem.
But what, and by the way,
I also won't get into it with you
about the historical accuracy
or the evidence in Graeber and Wengro's book
because I'm not an anthropologist or historian.
And I know there's factual criticisms of that book
as there are of, you know,
plenty of other works in this world.
Just one thing.
Sure. I mean, the book is wonderful. Yeah. It has a wealth of information. It has a wealth of very new and fresh thinking. It's one of the best books that I've read in recent years, and I would definitely recommend it. I mean, I said this also about my book: you don't have to agree with the main thesis of a book in order for it to be a good book.
Very often it's the questions that are raised which are more important than the specific answers being given.
And the question that it raised for me made me think about your work, or at least my reading of Sapiens, differently.
I didn't see it as a utopian book. I saw it as
questioning the view of history that things had to happen as they did.
Nobody argued. I mean, I think this is one of the things that I didn't like so much about the book.
They adopt this very contrarian view. And so they build strong straw men and then attack them when
this is not what previous researchers actually said.
I mean, I can certainly tell you about myself that I never had a deterministic view of history,
that things had to happen the way that they did and there was no other way. I don't think that
about the agricultural revolution. I don't think that about the rise of Christianity. I don't
think that about the communist revolution. Just the opposite.
What about the issue of complexity?
You said when societies get large and complex enough, it tends to lead to, how did you put it earlier, a certain lack of freedom in them.
Yes.
I mean, again, I don't think you have.
That's a bit of determinism, isn't it?
I don't think you have.
I mean, in this sense of determinism that large scale societies are not utopian, then yes, this is a determinism that I own too.
But there are so many different ways to build a large scale society.
Again, it can be Denmark.
It can be the Soviet Union.
It can be communist or capitalist.
It can be Christian or Buddhist or secular.
And this is not deterministic.
People have choices about these things.
Many of the actual turns of events are extremely unexpected.
Like the fact that Christianity took over the Roman Empire, I think, is an extremely unlikely event.
Yeah, for real.
That if you kind of rewind the movie of history, like you rewind it and press play, you rewind it and press play a hundred times, maybe twice, the Christians take over the Roman Empire and then spread to becomeā¦
A Jew in a colony establishes a religion that goes all the way back to the seat of the empire and takes over.
It's pretty far-fetched.
Yes, and there are so many other options.
I mean, the Roman Empire in the second, third century, it was a bit like California today.
So many kind of Eastern gurus and mystery cults and conspiracy theories.
And these people believe in that goddess.
And these people have this kind of meditation and whatever.
Such a huge supermarket of beliefs and religions and cults.
And out of all this soup that the Christians would take over,
I don't think it was deterministic.
I don't think that the communist revolution
was deterministic.
I don't think that the American revolution
was deterministic.
You could still have the King of England.
I mean, they could have failed.
I mean...
Yeah.
Well, and that's what I really appreciate,
especially about the ending of your new book, which you write a lot.
You end talking about animals, right, in the book, and how humans are so responsible for the extinction of so many animals, which I imagine is maybe a little bit of an upsetting message to give to kids, that it's our fault that so many animals went extinct.
It's a responsible message, because, I mean, basically, you think the lions and whales are big? You're much bigger than them. You're responsible for them.
Yeah, it's true. And you end on a message, though, of our agency and our ability to shape the world that we live in, and our responsibility. I mean, yeah, can you talk about that a little bit more?
Yes. I mean, again, the key message is that we created the world in which we live,
not the laws of physics, not the laws of biology,
but politics, economics, religion, all these things are our creation.
We made the crucial choices.
And since people made the world what it is, people can change it. If there is something unfair in the world, if there is something dangerous in the world, like the ecological crisis we are facing, it's not the laws of nature, it's us. And we have the responsibility and also the power to make better decisions.
And one thing very important to emphasize, I don't think it's a responsibility of kids.
I think there is a dangerous tendency to put like...
Greta Thunberg and the kids are going to change the world.
Which is abdicating our responsibilities as adults.
Yeah.
The responsibility of kids is to be kids, to learn, to go to school, to make friends, to read books, to play.
If they also want to have a political impact, if they want to have demonstrations and write petitions, and absolutely, they should have an opportunity to do that.
And we should listen to what they have to say
But it's not their responsibility
It is our responsibility as adults
To solve the big problems of the world
When the kids are older
When they are 20, 40, 60
Then yes, then it will be on them
So part of our job as adults
Is to educate children, to give them not just the information
because they are flooded with information, but to give them a kind of broad picture of
the world, of history, of who they are. Because, you know, if you give them the wrong messages, then it's so complicated afterwards to kind of unlearn
all these things.
Like, I think about myself as a kid, so growing up in Israel, so I got all these messages
that Israel is the greatest country in the world, of course, and Jews are the greatest
people in the world and we are superior to everybody else and so forth and so on.
And it took me years
to understand that this is not true.
Well, it's not.
It's not. No, that's because in the United
States I learned that we're the greatest people.
So it can't be true that you were.
So, I mean,
it's not such a simple thing, but it takes
years to unlearn
these kinds of wrong messages.
Yeah.
And so it's part of our responsibility as adults to give kids a better and more balanced and more scientific view of the world.
Yeah.
So let's end here.
You're not a utopian.
You've made very clear.
But you do believe we have a responsibility and the power to change our society and change
the world, right?
It goes hand in hand with not being a utopian.
Yeah. Well, do you have a vision, if we take that message seriously, if we educate the children that way, of a world that we can move towards in the near-term future that is freer or less dangerous? Do you have a vision of what you feel we can move towards?
Yeah, I mean, one very important thing
is to have a more balanced understanding
of the relation between our identity
as belonging to a certain group of people,
like a certain nation,
and our identity as belonging to the entire species,
to humanity.
You have a lot of politicians who are telling you that it's a choice.
You can't have both.
That, you know, either you're a patriot and you're loyal to your country and nation,
or you're a globalist.
Yeah.
That you're unpatriotic and you're a traitor and whatever.
That's a slur, as people use it.
Yes.
Oh, globalist.
Yes, and this is complete nonsense.
It's a kind of categorical mistake.
There is no contradiction between patriotism and globalism,
between being loyal to your nation and being loyal to humanity as a whole,
the world as a whole.
Because patriotism is not about hating foreigners.
Patriotism is about loving your compatriots.
And there are many situations when in order to take good care of your compatriots,
you must cooperate with foreigners.
Climate change is an obvious example.
You want to protect the people in your country
from the disastrous consequences of climate change.
The only way to do it is to cooperate with other countries.
Similarly, you have a pandemic.
The only way to really prevent the next pandemic or to deal with a pandemic is by cooperating with other countries.
This doesn't mean that you need to establish now a world government and kind of abandon all local traditions.
No, absolutely not. You still have all the nation states. Most of your taxes, they still go to provide
health care and education for people in your country. You are not obliged to accept, I don't
know, unlimited immigration into your country. Absolutely not. It simply means that you're cooperating with the other
nations, with the other people in the world on problems which are common to all of us.
And at the end of the day, yes, we are Americans, we are Israelis, we are Chinese,
we are Ukrainians, we are Russians, but we are also homo sapiens. The things that we have in common, the problems that we have in common,
and especially given the immense power that we have in the 21st century,
really the only way to avoid destroying ourselves with all our power
is to learn to cooperate better.
And this is the thesis of, or at least part of the thesis of the new book,
is that humanity's superpower over other animals is just that: cooperation.
That's a wonderful message, and I'm really happy that you're giving it to the kids.
And thank you so much for coming on, for getting into it with me today.
Thank you.
Really wonderful having you.
Thanks so much.
Thank you.
Well, thank you once again to Yuval Noah Harari for coming on the show.
If you want to pick up his book, you can get a copy at factuallypod.com slash books.
That's factuallypod.com slash books.
And when you do, you'll be supporting not just this show, but your local bookstore as well.
And once again, I want to remind you, if you're watching this on YouTube,
please subscribe to the show in your favorite podcast player.
You can get it no matter what app you use.
I want to thank our producer, Sam Rodman,
our engineer, Kyle McGraw,
our editor, Noah Diamond,
and everybody who supports this show
at the $15 a month level on Patreon.
That's Adrian, Akira White, Alexey Batalov,
Allison Lipparato, Alan Liska,
Anne Slegel, Antonio LB,
Ashley, Aurelio Jimenez,
Benjamin Birdsall, Benjamin Frankart,
Benjamin Rice, Beth Brevik, Kamu and Lego, Charles Anderson, Chase Thompson, Bo, Chris Mullins, Chris Staley, Courtney Henderson,
Daniel Halsey, David Condry, David Conover, Devin Kim, Drill Bill, Dude with Games, Eben Lowe,
Ethan Jennings, Garner Malegis, Hillary Wolkin, Horrible Reeds, Jim Myers, Jim Shelton, Julia
Russell, Caitlin Dennis, Caitlin Flanagan, Kelly T, Kelly Casey, Kelly Lucas, Kevlar, KMP, Lacey. Thank you. If you want to join them,
head to patreon.com slash adamconover.
Just five bucks a month.
Gets you free podcast episodes
and everything else you could want to know. Thank you so much for listening or watching.
We'll see you next time on Factually.