Factually! with Adam Conover - The Problem with Mind-Uploading and Transhumanism with Susan Schneider
Episode Date: August 19, 2020. Is it really possible to upload your mind to the cloud, or is the entire idea fatally confused? Philosopher Susan Schneider joins Adam to explain how your brain differs from a computer, the limits of AI, and why philosophy must be an essential part of the study and development of artificial minds.
Transcript
You know, I got to confess, I have always been a sucker for Japanese treats.
I love going down a little Tokyo, heading to a convenience store,
and grabbing all those brightly colored, fun-packaged boxes off of the shelf.
But you know what? I don't get the chance to go down there as often as I would like to.
And that is why I am so thrilled that Bokksu, a Japanese snack subscription box,
chose to sponsor this episode.
What's gotten me so excited about Bokksu is that these aren't just your run-of-the-mill grocery store finds.
Each box comes packed with 20 unique snacks that you can only find in Japan itself.
Plus, they throw in a handy guide filled with info about each snack and about Japanese culture.
And let me tell you something, you are going to need that guide because this box comes with a lot of snacks.
I just got this one today, direct from Bokksu, and look at all of these things.
We got some sort of seaweed snack here.
We've got a buttercream cookie. We've got a dolce. I don't, I'm going to have to read the
guide to figure out what this one is. It looks like some sort of sponge cake. Oh my gosh. This
one is, I think it's some kind of maybe fried banana chip. Let's try it out and see. Is that what it is? Nope, it's not banana. Maybe it's a cassava
potato chip. I should have read the guide. Ah, here they are. Iburigako smoky chips. Potato
chips made with rice flour, providing a lighter texture and satisfying crunch. Oh my gosh, this
is so much fun. You've got to get one of these for yourself. And get this: for the month of March,
Bokksu has a limited edition cherry blossom box, and 12-month subscribers get a free kimono-style robe. And get this: while you're wearing your new duds, you'll be learning fascinating things
about your tasty snacks.
You can also rest assured that you have helped to support small family run businesses in
Japan because Bokksu works with 200 plus small makers to get their snacks delivered straight
to your door.
So if all of that sounds good, if you want a big box of delicious snacks like this for yourself,
use the code factually for $15 off your first order at Bokksu.com.
That's code factually for $15 off your first order on Bokksu.com.
I don't know the way. I don't know what to think. I don't know what to say. Yeah, but that's alright. Yeah, that's okay. I don't know anything.
Hello, welcome to Factually, I'm Adam Conover, and you know we hear all the time that the brain is a computer, right?
And so, just like a computer, supposedly, you'll one day be able to download your mind, your actual self, into the cloud.
Or maybe upload it into the cloud, I'm not really sure which way it's supposed to work.
And then you just use, I don't know, your estate, the interest that comes off your investments to pay Elon Musk or whoever for upkeep on the server farm where your brain will live forever and you'll achieve immortality.
Since your brain is hardware and your mind is software, according to this metaphor, downloading our brains is just a technical problem. So with enough RAM and the right cable plugged into your cerebellum, well, surely you can just port yourself from your brain
right into a hard drive somewhere,
the same way you'd port Minecraft from a PC to your iPad, right?
Well, this brain-computer metaphor is useful,
but it's just a little bit short of being entirely, uh, true.
Our brains are not really computers.
And this metaphor is really deep in our culture. So I know it might
be hard to believe, but let me try to break it down for you in a few different ways. Just going
to Wikipedia, let's just make it really basic. The definition of a computer is that it's a machine
that can be instructed to carry out any sequence of arithmetic or logical operations automatically
via computer programming. And your brain is not that.
It is simply not a piece of hardware that is designed to run any arbitrary piece of
computer code.
If it were, I could walk up to you and go, "10 PRINT HELLO WORLD, 20 GOTO 10," and get
you stuck in an infinite loop.
But good thing for you, I can't.
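For anyone following along on the page, here is the two-line BASIC program Adam recites, with a minimal Python equivalent; the message text is just his example:

```python
# Adam's quoted BASIC program:
#   10 PRINT "HELLO WORLD"
#   20 GOTO 10
# The Python equivalent loops forever, printing the same line:
while True:
    print("HELLO WORLD")
```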
Or let's put it a different way.
When computers are built, they're empty.
They can't do anything until a piece of software is installed on them, right?
But as the research psychologist Robert Epstein writes in Aeon,
as humans, we're born with attributes like reflexes, senses, and learning mechanisms,
all attributes that organisms use to operate.
But we're not born with attributes that computers use to run software,
things like processors, subroutines, encoders, and buffers.
Humans are not only not born with any of those things, we never even develop them.
And look, if you just think about it for a second, you'll realize that your brain is
not like a computer, okay?
If you think back on your life as a person, you might remember that you've never run a computer program on your brain. You've never installed more RAM. You've never upgraded your CPU. And your brain has never done things that a CPU does, like, for instance,
retrieve data from long-term storage and process it via an algorithm. Computers and brains both
have things that we call, colloquially, memories, but they operate entirely differently. Computers
store perfect copies of data,
whether the power is on or off, and they just sit there until retrieved. But your brain maintains an
ever and uniquely shifting recollection that's only there as long as you're alive,
and that constantly changes. And most importantly, our brain is simply not divided into physical hardware and abstract
software.
It is, to use a more specific term, wetware.
It's a massively complex physical organ in which both of those are irreparably entwined.
And the truth is that despite all the scientific advancements of the last century, we still
know surprisingly little about our brain.
As one neuroscientist puts it,
neuroscience still largely lacks organizing principles or a theoretical framework for converting brain data
into fundamental knowledge and understanding.
In other words, we currently stand at the pinnacle
of human knowledge about the brain,
and still, we don't really know shit about it.
And that's why we employ this reductive metaphor
of the computer to make the brain more intelligible.
I mean, after all, that's the point of metaphors.
They help us intuitively understand complex things.
But the truth is, throughout history, humans have been using whatever technology was lying around at the time as a metaphor for understanding our brains and minds.
Thousands of years ago, the development of hydraulic engineering led to the notion that our physical and mental states were the products of the balance of different fluids flowing through us,
different humors. And as mechanical engineering took off, Thomas Hobbes hypothesized that thinking
came from tiny mechanical motions in the brain. We've since had theories about the brain comparing
it to a telegraph or to an electric field before settling some decades ago
finally on the computer. But look, while metaphors like that can be useful, they can help us
understand and develop our intuitions, they can also be dangerous because they can obscure what
we actually don't know and they can provide seemingly easy answers that are actually flawed.
Like, let's return again to that idea of uploading your brain.
The computer metaphor makes things like uploading your mind to the cloud seem simple and intuitive
when the reality is that we lack answers to the fundamental questions that would tell us whether
or not that idea even makes any fucking sense. Questions like, what is consciousness? What is the self? And how does
our neurological makeup, how does the physical nature of our brains create the feeling of being
conscious or being a particular person? These ideas don't have easy answers, but philosophers
have been thinking about them for thousands of years. So if we want to make sense of ideas like
mind uploading, we need to employ philosophy.
Well, we have found the perfect guest to help us dive into that topic today. Susan Schneider is a
philosopher and cognitive scientist. She's at Florida Atlantic University, but she's worked
with the Library of Congress, the University of Connecticut, and with NASA, if you needed higher
qualifications than that. Her recent book, Artificial You, AI, and the Future of the Mind
is a wonderful exploration of these issues.
So look, let's jump right into the interview.
We are going to some mind-expanding places today.
Let me tell you, please welcome Susan Schneider.
Susan, thank you so much for being here.
Thanks for having me.
So what is transhumanism?
Let's start there.
What is this perspective or belief and how widespread is it?
Well, it's pretty widespread these days.
It's a social and cultural movement or set of views that says that science and technology will transform the world.
So I'm on board with that part.
In particular, transhumanists are super interested in radical longevity. So in principle, it would be great to hang around
until the heat death of the universe if possible. In the short term, it would be wonderful to use
medical innovations to give us radically longer lives. In addition, they appreciate the idea of cognitive and perceptual enhancements
to reach higher levels of consciousness,
to have better cognitive capacities,
and even new sensory capacities.
So if I can get a brain chip, for example,
that would enable me to have echolocation,
it's my right to do so.
And in the future, there will be...
You'll take my brain chip out of my cold, dead hands.
Right, right.
They like science fiction, just like I do.
I do too.
Yeah, and so for listeners who've read cyberpunk,
the future may be like a cyberpunk novel in which people have elective brain enhancements.
Now, often in cyberpunk, things get dystopian.
You know, we lose control of our brain chips.
I talk about these kinds of scenarios in my recent book because, you know, I'm always concerned about the way things can go wrong.
Something that people will now appreciate, you know, global pandemic.
We all appreciate how things can go wrong.
Very much so.
Yes.
But you're a you're a skeptic of transhumanism, as I think I am as well, or at least a lot
of times when I hear those ideas, they don't feel very well
thought through to me. Like, can we talk about, can we just start with the idea of uploading your
brain into a computer? Cause this is one of the most popular ones and you hear this one a lot.
And look, I, I said in the show before, I feel bad bringing this up all the time. I took four
years of philosophy in college, right? So I'm not, I don't know that much. You're a philosopher,
correct? Yes. We're both crazy. We I don't know that much. You're a philosopher, correct?
Yes, we're both crazy. We're philosophers. This is great. We're going to have a lot of fun.
But I've always felt like, OK, this is a fun idea. I love it in science fiction.
But it's but it's incoherent because like if you were to be sitting next to a computer, they say, OK, we're going to upload you into the computer now.
Right. And then they do it. They put the, uh, the colander on your head. Right. And they go, right.
You wouldn't suddenly wake up inside of a computer. You would just be sitting next to a computer
that would start going, Oh, hello, I'm Adam. And you'd be like, wait, no, I'm still here.
And now there's just a computer that thinks it's me. Um, and that's just at a basic level. It doesn't track this idea.
Yeah.
I mean, I hate to burst people's bubbles
like if they're transhumanists.
And let me just say,
I'm kind of, in broad strokes, a transhumanist.
Like the stuff I mentioned earlier
about like radical longevity
and enhanced cognition,
I'm good with that if it can be done. The question
is how to get there from where we are now and how to do it in a way that, you know,
respects individuals' rights to privacy and human flourishing. But you bring up, you know, yes,
the most popular form of how to get there from here, the answer for many transhumanists, is to pursue uploading technologies. Um, and so your listeners, uh, if they've seen the recent TV show Upload, which is pretty funny actually, or I think there was a Black Mirror episode on this, um, and also the movie Transcendence.
That wasn't very good.
This is one of the,
this is like an old idea in science fiction.
There's so many books and movies
that are based on this idea.
I think it's wrong-headed,
pardon the pun,
because in fact,
that's exactly what the problem is.
So, okay, so here's what uploading is.
It's like what you said, you know, say you decide.
Say, I'll use a story from Robert Sawyer's Mindscan, great novel.
He's a science fiction writer.
So his protagonist learns that he has a brain tumor and he only has a few months to live.
So he goes to this place called Immortex that claims that they will upload
his brain. And so what that means is they're going to sit him in a super scanner. Okay,
because this is science fiction. So sort of imagine an MRI on steroids, like, you know,
400 years into the future. And it's able to scan every detail of his neural configuration,
capture all the details about the temporal evolution of his brain, including like hormones
and whatnot. Sort of, you know, a future neuroscience delivers the details of how to do it.
Okay. And so he says, yes, I want to live, you know, until the heat death of the universe. And so he signs away his legal rights because he plans on uploading his brain and he doesn't think his brain or body, right,
on the scanner will still be him. He thinks he'll be transferred somewhere else. So he wants to make
sure he keeps all his money, is married to the same person and so on and he arranges to send
his old body and brain, because it's still working, to a sort of convalescent hospital on Mars.
Okay. All right, well, he goes in for the procedure, you know, and I guess it's just another case of having a shit lawyer and not taking philosophy. He basically wakes up after the procedure, but he's still on the scanning bed. He's like, what? What? Okay. Now, what went wrong, right? Okay, what went wrong is fairly obvious. I mean, how many of your listeners would say that there's a strong chance
that your earthly survival is intimately connected to the survival of your brain?
Right, I would say so.
I'm raising my hand.
Yeah, I am my brain for the most part.
Yeah, and I don't even need to make the strong claim that I'm identical
to my brain. I could even be religious and say I have a soul, but my earthly survival is intimately
connected to having a brain. Well, okay. So think about it. In the process of uploading,
that new being does not have your biological brain. Your biological brain has been copied.
Yeah. Okay. And maybe it is a high fidelity copy. Maybe that upload will be asserting it's you and
that it does deserve your money and it is married to your spouse. Okay. But I think Sawyer very vividly illustrates that we shouldn't believe that uploading works in the sense of being a means for our own survival.
We should be very, very cautious.
Yeah, it's the best case scenario.
Just to bring in another piece of science fiction is Christopher Nolan's The Prestige.
Have you seen that movie where the magician duplicates himself in the middle of a trick and then has to murder the clone of himself over and over again?
Have you ever seen this movie?
Yeah, I think a long time ago, and it's really good.
It's a very good movie.
I'm sorry for just spoiling.
I should have said I spoiled part of The Prestige.
Spoiler alert.
Sorry, this is not that kind of show.
I don't respect spoilers on this show.
But yeah, the best case scenario is that like your consciousness, there's now two of your
consciousnesses, right?
Which are, which both exist.
Like the idea that, and this is the thing, this is just a simple thought experiment,
right?
We haven't even gotten into the neurology of it or, you know, the psychology of it.
We're just talking about like the most basic thought experiment of like, where is your consciousness seems to violate like the claims that are being made by,
you said you hate to burst transhumanist bubbles. I love to burst bubbles. So, you know, it seems like a very easy bubble to burst, or it seems like these ideas are very poorly thought through.
Yeah.
I mean,
I guess I want to be nicer than that.
Like I want to be nice to my transhumanist friends. Cause I like them.
And you know,
I like the overall view,
but yeah,
I don't think that they should be very confident that uploading is a way for
one to survive.
Instead, I suggest a view I call metaphysical humility, which in other words means, bluntly, we don't know what the fuck is going on
when it comes to personal identity, right? So personal identity is a literature in the field
of philosophy, contemporary philosophy. It goes back actually, people have talked about it,
like, you know, Hume and Locke have influential views. You know, so it's in the field of
metaphysics in particular, which seeks to understand the fundamental nature of reality.
In the words of Morpheus, what is real? So, you know, that's a super cool thing to think about.
I love metaphysics. In fact, I am a metaphysician.
Don't ask me to read your horoscope.
I mean, like a true academic metaphysician.
I talk about the nature of reality.
Anyway, I can tell you, honestly, that literature is vexing.
There's no easy solution in sight.
In fact, that's how it is with philosophy in general.
Like, I mean, some of the most fun issues to consider, issues about whether we have free will, the nature of the mind, whether there's a God, all of these issues have absolutely
no uncontroversial solution.
And theories of the nature of the self or person and what allows us to survive, wow, right?
There's so much controversy.
I mean, can you really rule out the existence of some afterlife?
I mean, not really.
I mean, how do you find out?
But can you really prove it?
No, I don't think you can definitively from an intellectual standpoint prove God's existence.
But a sensible approach, which I think a lot of people agree with, is that our brain is essential to our survival on
the planet. Okay, at least on the planet. And if you're scooping out parts of the biological brain, as you would do with, like, inserting a ton of brain chips, which is another issue that transhumanists like.
Right, augmenting our brains with our technology.
Yeah, and this is the wave of the future.
You know, or if you're trying to upload the brain to survive.
I mean, again, you may, you know, 300 years from now, create a high-fidelity computational duplicate of you.
It may even, you know, have something like your memories
and your personality, but sadly, it's not you.
If you want to live a really long time,
stick to biological enhancements.
That's what I suggest.
Nanotech.
I mean, you know, and even robotics like nanoscale AI might work. But, you know, the wholesale replacement of your brain is akin to a brain transplant. And that is, in fact, a candidate for a Darwin Award. That's what I think.
Well, yeah, I mean, God, you said so many things there that I want to ask you more about. I mean, one is, yeah, what you're getting at is this question
of enhancing the human brain or mind or uploading it really cuts directly to the question of like,
what is the self, which, as you said, is one of the oldest philosophical questions. I mean, Buddhism has tangled with this question for thousands of years.
And that's one tradition that's entirely separate from the, you know, quote, Western tradition.
And, you know, is my self identical with my brain? Is my self a continuum of experiences that I have?
You know, where does consciousness arise from? Like,
these are, these are questions that multiple generations of the smartest people on the
planet have spent their lives trying to figure out. And we don't have, as you say, a consensus,
a consensus answer to them. Yeah. And so in a recent book I wrote called Artificial You, I go through some of the leading solutions to the nature of the self, like what constitutes survival over time.
And yeah, I talk about these different views.
And I'm glad you brought up some of the options here because, you know, according to some people, there is no such thing as a surviving self anyway.
Right. And that view has been attributed to the Buddha, as well as developed by a very well-known contemporary philosopher, Derek Parfit. And
I mean, honestly, I think that's a serious candidate. And in that case, I guess you could
upload in that case, if you believe that,
because who cares if you really survive or not, because there's no survival.
Yeah, this is the idea that your self changes from moment to moment and is like constantly
being permeated by the things around you and that there is no one unitary self that sort of all is
one at the end of the day. Yeah, it's a great view. And sometimes I think it's ambiguous between
two different positions that are actually important to distinguish. One is that there's
a self from minute to minute, but it's ever changing. In that case, there's a self. But
I think what Parfit might've meant was that there really is no such thing as a self at all.
Just bundles of impressions, perceptions. And, you know, what I really wonder here,
because there are about five or six, I think, serious contenders for theories of the nature of the self. I wonder how we're ever going to
determine what answer is correct, if any. It may be that it's a case of what philosophers talk
about as underdetermination by all the evidence. So, you know, we have different theories of the
nature of the self. And the evidence is perhaps
philosophical argumentation together with kind of physical facts about the universe.
And I don't think any of that, like high quality philosophical argumentation together
with facts about the universe, the laws of nature, whatnot, will uniquely pick out a theory of
personal identity. I've always been meaning to write a paper arguing that. My time is
underdetermined. But that really, like, where you fall on that question, like which of those five or six options you pick has really massive consequences for what transhumanist procedures you might want to undergo.
Yes, that's what I've been telling people.
I mean, I presented that to Congress.
I love that you talked to Congress about the nature of the self.
That's wonderful.
Yeah, I was hoping Trump could come so I could get really below the belt.
Like, you're not even a candidate for any of these theories.
No, I'm just kidding.
I'm sorry.
I shouldn't say political things on podcast.
Oh, say whatever you like.
Oh, good.
Okay, yeah.
But anyway, they were really receptive.
It was funny because it was before one of those big impeachment hearings
and there were police everywhere protecting the witnesses.
And I was like, where's my police?
They should be protecting me because I'm about to do it.
Yeah, exactly.
You're undermining the foundations of our fucking consciousness.
People must be trying to stop you.
Incredible.
So they were really nice. And I'm still working with some of them on philosophy.
They text me philosophical questions.
You get late night texts from AOC going like,
Hey,
so what's a,
what is consciousness though,
dude?
I get people asking me questions about consciousness.
Yeah.
That's what I was studying, you know, in my brief years doing it: what is the nature of consciousness? What is it? What is its connection to the brain? But I think that you're right
that there is obviously some connection between ourself and our consciousness and the brain
and like the literal physical thing of the brain. Yeah.
And you don't even need a theory of consciousness to say that.
Right.
I mean,
I wrote a piece.
It was so funny.
I wrote a piece for financial times.
They asked me to write an op-ed.
I'm like,
I'll give you guys an op-ed.
I'm going to tell you a story.
What's the story?
A bunch of bankers reading it.
So,
um, there's this science fiction writer Greg Egan who writes these short stories, and he writes this one, um, and it's about this society where people are getting chips implanted in their brains that back up all the details. The chip's called the jewel. And eventually, when they get to a certain age, the chip basically takes over and they scoop out their biological brains.
Wow. Yeah. And I say, Hey, guess what? Neuralink, guess what? Google, you know, Facebook, a lot of
these companies are working on, um, BMIs that literally aim to go inside the head
and peripheral nervous system. I'm like, if you start scooping out parts of the brain, at some point, the self there will no longer exist. And I call that phenomenal brain drain.
And that actually made world news. It got written up like a lot. Yeah, it was really funny. If you
like for that whole week, if you Googled Neuralink,
it came up first. Now I hope Elon's thinking because I'm all for chips, you know, like there
are people who have terrible deficits. Like there's an artificial hippocampus right now.
You know, it's a brain chip and it can help people lay down new memories it's in phase
two clinical trials in humans. Ted Berger has been working on it for like 15 years, and all the power to him. And these technologies can help locked-in individuals as well. DARPA's on it, right?
But only to help people, not to create super soldiers. Okay. Well, that's what they tell us, but all right. I'm joking.
I'm being sarcastic,
but yeah,
super soldier city.
but,
but anyway,
if those technologies go too far,
it will actually be,
someone will go to buy those technologies to get smarter and they'll be
killing themselves,
but they won't really know it.
Yeah.
Yeah.
Like they'll flip the switch and they'll suffer identity death, and they won't know that it's going to happen until it does. And then what, there's going to be a jewel piloting their bodies around going like, hello,
I'm Susan. But really it's, uh, now we're so stupid that we're creating all this AI and neurotechnology and we're going to
exterminate people for philosophical reasons because we're not interdisciplinary enough.
I can't even fucking believe it. That's incredible to me. It's crazy. It's so aggravating. I'm so
sick of talking to, I talk to these tech CEOs all day. Their publicists are calling me, you know, getting meetings set up and all this. And I don't know why. I think they want me to talk about their views or chat with them.
And I wonder what these views are based on, and a lot of it is science fiction, like not the really good science fiction you're talking about that takes the idea seriously.
But, you know, the more basic kind of science fiction, right, where, like, you know, just a basic movie where someone's brain gets uploaded into a computer, right?
Your average summer blockbuster and like, oh, I'm a computer now.
Robocop, right?
That kind of that kind of fantasy at root there.
There's a form of dualism there, right?
Which is the idea that which is just sort of the old folk idea of, hey, there's a soul
that's separate from your brain.
That's your consciousness.
And that's where that goes, right?
And so that can just be transplanted from one to another or like, it's like Freaky Friday,
right?
My consciousness goes to my mom's body, mom's consciousness goes to my body. And the weird thing is, dualism is like sort of the one position that, like, nobody really holds in philosophy. Like, really strict dualism, that those things are completely separate, is kind of incoherent. Or at least that was my experience. But you know much better than I do.
There's still some sophisticated dualisms out there, um, in both religious and non-religious guises. But I agree with you entirely. I think there is an implicit dualism in transhumanism, and there have been some papers on this too. Um, you know, in a way, like, if you read Kurzweil's articulation of transhumanism in his books, there's a new heaven.
Everything lights up, but it lights up with intelligence throughout the universe.
And it's a beautiful vision.
And I honestly see it as being part in an extended intellectual sense of the Judeo-Christian tradition.
Now, what I really think is going on in a little more detail, and I think this will
speak to a lot of your listeners who are technophiles, is the idea that the mind is like
a computer and that we survive because our program survives. And I take up that view in one of the last chapters of my book, and I say, I'm sorry, even though I'm a cognitive scientist, and my earlier book was on the computational structure of the brain, it was called The Language of Thought, A New Philosophical Direction, I can tell you this self or person is not a program.
A program is, if you think about it, okay, a computer program is like a mathematical equation. It's what philosophers call an abstract entity. So the number two is
strictly speaking, not anywhere. And it doesn't cause anything. The number two, you might say,
you might think of it as an inscription, like something written on the piece of paper,
but that's different. That's the inscription of the number. That's not the
same thing as the abstract number. Yeah. So anyway, your mind isn't abstract. It's not a
program. It's not lines of code. What it is, if anything related, is something like a program
instantiation or implementation of a program.
It's the thing that can be described as running a program.
But notice that that doesn't tell us anything about under what conditions we
survive because it begs the question, what are we?
What kind of thing is running the program?
That's the question theories of personal identity are meant to solve.
And so cognitive scientists,
they say the mind is a program.
I used to say it,
but don't say it.
And don't believe that if you have a chance to enhance your brain,
it's okay.
You can upload because you're a program.
That's bullshit.
Think of it.
Like if your mind's a program,
you'd be like Smith in the Matrix.
Okay.
You remember Smith?
There were like, you know, hundreds of Smith instantiations.
Which one is really Smith?
Well, clearly we're not like Smith in the Matrix.
There aren't like hundreds of us in principle.
There's only one of us that's going to survive. I want to see if I can chew up what you said and spit it back out to make sure
I'm understanding it correctly. Like what you're saying is, you know, there's this belief that the
mind is a program, just like a very simple program. One plus one equals two. I can do that on an
abacus or I can do it on a computer. Right. But it's like it's just a formula floating through the air. Right. And the same thing for a hello world software
program. I can do that on an old IBM or I can do it on my iPhone. Right. But you're saying that
like what our self is is not an abstract program like that. It's also a physical thing. Like the
brain is also physical. And so you can't just abstract it fully away in that way, because like the thing that it is physically in the brain is so specific.
It is. Am I getting that right?
That's exactly right.
You're not just a program. You're also meat.
Yeah, you're a concrete thing.
And concrete objects like my coffee cup, my brain, they're causal.
They cause events in the world.
We react to spatio-temporal events.
We're in space-time.
Programs like numbers, like equations, aren't causal by definition.
And they're not in space-time.
They're abstract.
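To make the abstract-versus-concrete point tangible, here's a toy sketch in Python. It's just an analogy, not Schneider's argument itself, and the class and its fields are made up: a perfect copy can be equal in every described detail (the "program" level) without being the same concrete thing (the instantiation).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MindState:
    # A made-up stand-in for a full neural snapshot; real brains are
    # obviously nothing this simple.
    memories: tuple
    personality: str

def upload(original: MindState) -> MindState:
    # A "high-fidelity copy": identical contents, brand-new concrete object.
    return MindState(original.memories, original.personality)

you = MindState(memories=("first day of school",), personality="curious")
copy = upload(you)

print(copy == you)  # True:  equal at the level of description (the "program")
print(copy is you)  # False: a numerically distinct instantiation -- not you
```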
I mean, that goes back to work in the field of philosophy of mathematics, which is a branch of metaphysics.
And it's an important distinction.
And people are just running over that distinction like crazy right now in physics as well.
So it's getting really confusing.
Like, you know, people are saying things like the universe is a program or the universe is an equation.
Like Max Tegmark, as much as I like him,
you know, intellectually, he's a great guy. And personally, he's a nice person.
But I don't think that it's right to say that the universe is an equation, okay? Because that is an
abstract entity and it doesn't explain why we're here. There's a universe of change. We can see it all around us just by
introspecting our conscious experience and by looking at the laws of nature. If the universe
is an equation, there's no way to talk about causation, space time, any of the good stuff that we need to explain. So there has to be something that drives the equation.
So Stephen Hawking asked a really important question.
What breathes fire into the equations?
That's what we need to figure out
if we want to figure out the nature of the self
or the fundamental laws of physics.
And I think it's being ignored.
Well, we got to take a really quick break.
I got to have so many more.
I could keep talking about this for so long,
but we got to take a really quick break.
And then we're going to pick this back up.
We're right back with Susan Schneider.
Okay, we're back with Susan Schneider. Um, so, we just left off talking. Often I use the break to switch to a new topic, but I want to stay on the same one this time, because I was really loving this. We were on the idea of the self not just as a program but as a physical thing, right? Um, and that makes sense to me, by the way, because when you're talking about a program, it's like, yeah, sure.
Like, iOS is a program, like Windows is a program, right?
But, like, it also matters what it's running on.
Like, is it running on an old iPhone, a new iPhone? Is Windows running on a gaming PC or on a Surface? How many monitors does it have? Like, where is it? You know, I'm talking to you on a dual-monitor setup on a desk, which is very different. Like, we forget that these things are physical, and until they're running, you know, they're not fully instantiated yet, right? Um, but I also wonder, what about the differences between our
brains and computers like it feels as though we've allowed this metaphor to really dominate the way that we think about our minds.
But, like, our brain is not a computer. It's not like any of the ones that we've built; it's not like a general-purpose Turing machine.
Right. It's like a very complex like network of neurons that is designed very differently from the physical
computer that I'm talking to you on. How much does that affect your views if it does at all?
Oh, that's a great question because I think there's so much confusion about whether the
brain is a computer because there are so many different notions of what a computer is.
So, all right. So one thing you might ask is, is the brain like my laptop or an ultra-sophisticated mainframe computer? And the answer is obviously no. Um, so, you know, the brain is not a von Neumann machine. Okay. But there is an extended sense in which the brain seems to be computational.
So in cognitive science, one of the elements of the computational paradigm,
I think, is a sort of explanatory approach to looking at cognitive capacities in terms of how they work causally by decomposing
them in terms of constituents and explaining the causal interrelation between the parts.
And that's called functional decomposition. It's an explanatory method. That's one sense in which the brain is computational.
Another sense is there's a whole field of neuroscience called computational neuroscience.
I mean, almost every college or university has classes in it.
And that's where you get into the computational nitty-gritty of how neurons operate.
And you can mathematically map out their behavior.
And you can take different neuroanatomical components of the brain,
and there are actually proprietary algorithms that they run.
And, you know, different parts are more amenable to computational description, but I think that's only because we know more about those parts. So we know a lot about the hippocampus right now.
So that's part of the brain that encodes new memories. And you can take apart the hippocampus and you can explain it in terms of little areas like area CA1. And it has a proprietary algorithm.
It's pretty well understood.
And you can kind of use proprietary algorithms
to build up the function and structure of the hippocampus.
So the brain's computational in that sense, I think.
All right?
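For a concrete taste of what "mathematically mapping out" a neuron's behavior looks like, here is a minimal Python sketch of the standard textbook leaky integrate-and-fire model. This is a generic teaching model with illustrative parameter values, not the proprietary hippocampal algorithms Schneider refers to:

```python
# Leaky integrate-and-fire neuron: membrane voltage leaks toward rest
# while integrating input current; crossing threshold produces a "spike."
def simulate_lif(input_current=1.5, dt=0.1, steps=500,
                 tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    v = v_rest
    spike_times = []
    for step in range(steps):
        # Discretized membrane equation: tau * dv/dt = -(v - v_rest) + I
        v += (-(v - v_rest) + input_current) * (dt / tau)
        if v >= v_threshold:               # threshold crossed: neuron fires
            spike_times.append(round(step * dt, 1))
            v = v_reset                    # voltage resets after the spike
    return spike_times

print(simulate_lif())  # spike times (ms) for a constant driving current
```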
But that doesn't mean the mind is a program,
to go back to that. I mean, first of all,
when people seamlessly go from talk about the brain to talk about the mind,
they're just being loose. Don't take it too seriously because when philosophers talk about
the mind and when religious people talk about the mind, they mean something
pretty deep, right? Yeah. Yeah. And it's not at all clear that the mind is the same thing as the
brain, right? Or that the mind is a program. So, but anyway, I do agree with the view that the
brain is computational, but I think I disagree with the idea that the mind is essentially a computational
device. Uh-huh. And that's for the reasons I talked about involving the program issue. Yeah. One thing
that really lights up for me is that a lot of times when people are having these conversations,
especially people without, you know, philosophical or psychological training, they're using these words so loosely.
Like when two people can be having a conversation and using the word mind and mean completely different things by that word or when they're using the word computer or using the word program.
And that's what I mean when I say that these are often these ideas are poorly thought out because they haven't even really defined the terms.
Like I would agree that, you know, the brain does computation.
But a lot of times when we use this word computer, we're literally talking about the brain being like the computer on our desktop, when it clearly is not. Like, my brain cannot run Minecraft.
Right. Like, a trait of a computer as we normally describe it is that it can run any computer program, right? There's like a famous thing in computing where people port Doom to every type of computer. You can play Doom on a graphing calculator, right? You can play Doom on a virtual computer inside of Minecraft, right? But I don't think anyone has ever ported Doom to the human brain. And I don't think
anyone ever could because it's not, it does computation, but it's not that type of computer,
right? Or am I wrong about that? Well, I suppose we could instantiate Doom if we had like some sort
of cool brain machine interface, but we'd be doing so much more. I mean, like, I mean, that,
I guess that's the thing, right? I mean, it would be more like a subroutine in a larger,
ultra-sophisticated program. It's not like when we were running Doom, we were also shutting off our
auditory cortex or, you know, other parts of the brain. Yeah. Yeah. So, I mean, philosophers like to talk about this
doctrine of multiple realizability. We've kind of been dancing around that a lot in this conversation.
Yeah. And, you know, the idea is supposed to be like, hey, you know, information can be
instantiated in so many different ways, right? Um, so consider if I invite you to a party. Oh, that's a thing of the past, sorry. But, um, you know, with a cell phone text or with a phone call: different formats, different physical implementations, right? But the information conveyed is the same. And the
idea is that, well, then isn't thinking like that? I mean, you can have the same thought
if you were Lieutenant Commander Data with a silicon brain, or if you had a biological brain.
And I mean, in a sense, that's right. And in another sense, it's wrong.
I could get into that.
But I mean, the important thing is that that alone doesn't support the idea that your mind is a program.
Right.
It just says that you're able to have similar thoughts or the same type of thoughts as some other sort of being.
Yeah.
And that's not the same thing as saying you,
your brain could survive a format change,
you know,
so that you went from being neurally based to being silicon based.
Yeah.
Even if we were to construct
an exact silicon duplicate of the brain
that duplicated every single neuron,
that doesn't mean that you could travel
from one to the other.
Yeah, sadly.
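Here's the party-invitation example as a minimal Python sketch of multiple realizability. The class names are made up for illustration: the same information is realized in two different "physical" formats, yet each realizer remains its own concrete thing, and nothing travels between them.

```python
class TextMessage:
    # One "physical" realization of the invitation.
    def deliver(self, info: str) -> str:
        return f"[SMS] {info}"

class PhoneCall:
    # A different realization of the very same information.
    def deliver(self, info: str) -> str:
        return f"[VOICE] {info}"

invitation = "Party at my place, 8pm"
for medium in (TextMessage(), PhoneCall()):
    # Same content, different realizers: that's multiple realizability.
    print(medium.deliver(invitation))
```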
See, I mean, this gets back to transhumanism
and their sort of technophile's route to immortality, right?
Yeah.
I mean, it's starting to feel like wishful thinking, like theology without a great argument to back it.
You know, not to diss theology in particular, but just, you know, different views about the afterlife that fail to have
justification, like an analytical justification. That's what I mean. And I know plenty of theologians
do try to offer analytical justifications, but the program view and the uploading view still
lacks it. And it bugs me out because the people advocating it are often really good philosophers,
but they just, they don't go into the details of their views.
They don't explain it.
Maybe the workability of their conception relies upon a highly unlikely view of personal identity. And that's somewhat disingenuous.
Yeah.
It's starting to seem to me that really what we've done is replace the old kind of dualism, the Judeo-Christian dualism of like there's a stuff called the soul and that just inhabits this physical meat and then later it'll go somewhere else.
Right.
We've replaced that dualism with a dualism based on computer slash program. Right. Where we've taken that computer metaphor and turned it into a new dualistic way to understand ourselves.
Yes. And as you can hear, Teddy, the poodle agrees.
And he also adds that some of the transhumanists are definitely implicit dualists.
And some of them actually think that there's no such thing as a self.
As we were talking about earlier, that view is held by some Buddhists as well as by the contemporary philosopher Derek Parfit.
And in that case, it's not that they are implicit dualists.
It's that they're like, there's no survival of this self.
There is no self, so go ahead and upload.
And I think some of the people, the transhumanists,
actually kind of dance between the different positions
without even telling us.
So, like, I suspect Bostrom, Nick Bostrom,
in his wonderful papers is sort of oftentimes thinking Parfit's
right. And the funny thing is they were roommates at Oxford, I hear. Yeah, that's what I hear,
housemates. Yeah. So I wonder if he ever talked about this, right? But I mean, these are issues
you can't relegate to a footnote or not talk about. Like they have to be upfront because otherwise, you know,
not all of us have time to sit around and read philosophy.
Like the members of Congress I work with are, you know,
believe it or not, well-meaning individuals, you know,
and they don't have time to sit around and read books all day.
And, you know, neither do busy tech CEOs or AI researchers.
And so I think it's really important so that they don't, you know, project the wrong picture of the future of the mind.
They're all very influential.
This is you're really presenting this as, you know, I've talked on this show before about
philosophy and its utility, and we've talked about, for instance, bioethics. For instance,
it's a classic field where philosophy has a lot of real-world things to say
that can affect what we choose to do in the real world. And you're really making the case that
this field of philosophy is as well when we're talking about
what our next technological
developments will be. Philosophy really has something to say here and is a necessary part
of this conversation. It is. And not just ethics either. I mean, some people call me an AI ethicist
and I'm happy to be called that. But honestly, the ideas I have come also from philosophy of mind, which I know you have background in, and metaphysics.
Yeah.
And I think all of these disciplines have so much to say.
And to this end, I'm changing jobs.
I've taken an endowed chair at Florida Atlantic University, and I will be opening a center based on these views called the Center for the Future Mind. So I'm really excited about that. Very cool. Yeah, yeah. And
a big part of that will be public engagement. Because first of all, I like working with the
public. I like working with people like yourself and, you know, tech CEOs and members of Congress, reporters. But I mean,
I think it's so important that if these issues do have a bearing on the future of humanity,
that we get out of the ivory tower. Yeah. Yeah, definitely.
Well, we've spent so much time talking about transhumanism. I just want to talk about AI a
little bit, because that's sort of an associated topic here that we haven't gotten into. What are your views on AI, very broadly? That's a huge
question. But where do you normally start? Where do you differ from what most people say about AI
in the media? Okay, yeah. And your listeners could turn to a recent Scientific American interview on
my work, if they want more detail or my book.
But I'll tell you real quick.
First of all, you know, there's a lot of talk right now about the scope and limits of artificial intelligence and, you know, whether we'll achieve artificial general intelligence.
And about super intelligence and whether we can control it.
About algorithms being discriminatory. I mean,
there's just a whole Pandora's box of issues. And yeah, I have positions on all of them, of course.
I mean, one thing I could just put out there is I think there are a lot of misunderstandings,
both with the public and also among AI researchers themselves. I mean, so one thing is, you know, we don't need to camp out on when we'll achieve artificial general intelligence.
I mean, generality, as I say in the Scientific American piece,
is a matter of degree, and it's almost here, you know,
in terms of just flexible algorithms
that are getting increasingly domain general.
So OpenAI's recent GPT-3,
for example, is very impressive. And I think it's just going to keep moving forward. That's not to
say we'll have human level AI that looks like Commander Data and, you know, acts like him
in the next 10 years. I don't bother with projections, but it's moving quickly. And I do think we need
to think about what I call savant systems before we even talk super intelligence. So savant systems
are systems that can do things way better than us in some ways, like savants compared to normals.
And they also have strange deficits.
And I think the first AGI systems
are not going to be,
you know, what philosophers talk about
as functional isomorphs of humans,
you know, identical to us
in brain structure and function.
Hey Susan, what's up?
I'm AI Bill.
Like, you know, hi.
Yeah, yeah.
I mean, they'll be good conversationalists,
by the way.
We're already seeing it.
Boy, the next-gen stuff
that's coming up, too.
But, I mean, there'll be
deficits. They'll be like savants. And they'll be way better than us at calculating and remembering and whatnot. Well, in some ways,
but, you know, in other ways, you know, their causal reasoning will be off. They won't have,
you know, theories about entire domains. I mean, we're seeing that kind of stuff already.
But there are different ways of fixing it, all compatible with the idea that the brain and, you know, intelligence is computational. So I think it's going to happen. We're going to see the development of artificial intelligence, but it will be more savant systems quicker than superintelligences. But here's a punch line: don't think they'll be conscious. That is an empirical question that we need to investigate, for zillions of reasons that I articulate in my book. And do think that they'll be as dangerous, if not more so, than superintelligences. What's worse than something intelligent in every respect?
Something powerful and intelligent in lots of ways and stupid as heck in others.
Yeah.
Stupidity coupled with power.
Oh,
gee,
that reminds me of someone I know.
That's a dangerous mix.
Yeah.
Or something that's hyper-focused on, you know, the classic thought experiment about, like, the AI that just wants to make paperclips.
All right. Do I have that right? Where it's focused on doing one thing and on optimizing that, and that's the only value that it's looking for.
Right. It's kind of like having the sole goal of winning an election.
Well, well, tell me about, you said something in there that I want to hear more about: consciousness being an empirical question that we would need to determine. Now, that to me is
sort of one of the core questions of consciousness, because like I look at my dog, right? And I'm like,
I think my dog has consciousness. She seems to dream. And that's a pretty good way that I have
to determine whether or not something is conscious; I dream too. And that sort of seems like there must be some kind of internal representation, right? Sometimes I can tell that she's thinking about something that's not in the room, and I'm like, okay,
there must be a little picture in her head. That sounds a lot like consciousness. So I think I can
guess that there's something it feels like to be my dog Annie. But how would I have any sort of,
I'm not sure if that's really empirical or not.
And how would I have any empirical test
for consciousness in an AI?
Okay, so there's two different tests that I've developed
because yeah, I think it's entirely an empirical issue.
I mean, I think it's conceptually possible,
like, you know, in the mind's eye
and thought
experiments that philosophers talk about for there to be, you know, uh, sentient machines.
Okay. Now, clearly, non-human animals are conscious. I mean, they have, you know, physiological similarity to us, and we can introspect and we can tell that we are. But when we're talking about devices that are not evolved, right, but are built to fool us into believing that they're friendly, personable, and even sentient, we have to be careful. Right. And I mean, careful: you don't want to assume something's conscious when it's not. Yeah. Okay. Just like you don't want to mess up and not think it is
because the reason that we protect humans and non-human animals from suffering is that we
judge that they can suffer. Yes. And if we start putting machines in that category
when they shouldn't be there,
there's going to be an inevitable trade-off.
Like philosophers like to talk about trolley problems.
So suppose on one side of the track,
you have three AIs that you mistakenly think are conscious,
but they really can't suffer. It doesn't feel
like anything to be them. And then on the other side, you have two kindergartners, and you have to pick a side, the train's rolling forward, which do you choose? Well,
if I believe machines are conscious and I'm wrong, I'm killing like kindergartners. Oh,
oh, oh, right. I mean, I'm killing them no matter what. What if you're trying to fight climate change, so you're like, I better turn my computer off at the end of the day to save energy, but then you've got a program on your computer saying, no, don't turn me off, you'll kill me, right? Like HAL 2000, right? Yeah, was it HAL 9000? Sorry, I forget.
But, uh, yeah, no, I mean, that's what's at stake. And so, okay, so tests, test-wise.
Okay, there are a couple ways we'll know. Okay. I mean, we won't know with certainty, but I think
we kind of in a looser sense of know, we'll have a sense. So one way is, this is so silly, but I
think it's the best we can do. If we do have natural language programs and they're starting to look really
smart, then we need to ask them questions actually from philosophy to see if they're
conscious or not. So, you know, do you think a mind can swap with a different body? Like,
can you, like Freaky Friday? Do you think the mind can leave the body? Do you think it's possible to
survive the death of your brain, your artificial brain? Now, these are all questions that presuppose
a lot of heavy lifting on the part of the AI. It presupposes the sense of self. It presupposes
natural language abilities. So you don't want to run it on all sorts of machines. You have to be
very careful. The other thing is, you only want to run that test at the R&D stage. Because, say, I'm thinking of Hanson Robotics' Sophia, I don't know if you've ever dealt with that, but I've been on TV shows with it, and it's got baked-in answers.
Yeah. And the viewers always think it's human, like, you know, and in reality, every time I'm filming with them, we have to turn the video off to fix it. Non-stop, it breaks. It's, yeah, programmed-in stuff. It's like The Wizard of Oz, there's someone in the back. It's a magic trick. It's fun to get people thinking, but it's somewhat disingenuous too,
you know, because we really can't bake in answers to questions about consciousness. So that's why it has to be done at the R&D stage, because you don't want to run it on Sophia when some programmer's written in the answer: I feel very sentient. Um, you know, so you have to be
careful. Okay. So there's another test though, which I think is
perhaps even more useful. And I call that the chip test. And I gave that, I elaborated on that
in a recent TED talk and in the book. And that one involves just simply watching what goes on
with brain chip innovation. So as we begin to replace parts of the human brain with chips,
like in the artificial hippocampus case,
when we get to the replacement of parts of the brain
that are part of the neural basis of conscious experience,
will we see strange deficits like in Oliver Sacks' stories?
Yeah.
You know, will people lose visual consciousness, for example?
And if again and again, our technology just keeps failing and we try all kinds of chips and all
kinds of architectures in the chip, I think we have reason to believe that machine consciousness might not work. That is to say
that machines that are built from those kinds of components don't have the right stuff for
consciousness. And I think that will happen. I think we'll get a better sense with that. Now,
it's not like an axiomatic proof. It's not what philosophers love, but I think it's useful.
It tells us something about what's technologically possible.
But one of the things that your answer there made me think about was how we are actually more
primed to believe that our machines are conscious than might actually be the case. Like you talk about
this robot, which so many people swallow, right? So many people see it and go, oh my God,
AI is here and it's going to kill us. Elon Musk goes and sits in some chair at some
conference and says, oh yes, we must be worried about artificial intelligence killing everybody.
And everyone's like, yes, it's true. And, uh, you know, we have this faith in our devices. I mean, look again at, you know, how many people are killed because they believe that, you know, self-driving features on cars work better than they actually do, right? Like, that's a problem in self-driving cars, that we believe them, we believe they have abilities that they actually don't. And so, uh, you know,
i also look at gpt3 which is that incredible text generation ai right that it's it's very
impressive but at the end of the day it's it's kind of a form of magnetic poetry right you give
it a bunch of input that you as a human decide to give it and it outputs some very credulous text
right um it's not it's not actually conscious? But it could fool a lot of people into
thinking it was written by a human because we are so, we're, we're very credulous about these
sorts of things. And so I wonder, yeah, how does that fit in?
It's a train wreck. First of all, thanks for reminding me not to let my Tesla drive me home from the bars.
I mean, people have died this way.
Oh yeah.
No, I've heard all about it. I mean, it's always in the headlines, right? Like these people who have to be pulled over because they're passed out behind the wheel.
Right.
Yeah. The cops have to get in front of the car. The car's doing it: "No, Elon says the car's driving me home. It's safe."
And I was at the bar with my GPT-3 program,
and it was sounding really sexy.
Yeah, God, people are going to be so confused.
And I'm really freaked out, too, as a professor,
because I'm just going to get so many papers from GPT-3 now.
I don't know how I'm going to tell.
It's a train wreck.
And so I call this the cute and fluffy
fallacy. And I think it's going to be just a train wreck. I mean, I don't know how bad it's
going to be because, you know, goodness knows we didn't predict the situation we're in now.
But the problem is this. So if it's human-like in some dimension, we tend to attribute
So if it's human-like in some dimension, we tend to attribute all sorts of human-like capacities to it.
So that's what happened with Sophia.
If it looks like a cute female, like look at the Japanese androids, oh my God.
Yeah.
You know, oh, it's got to be conscious.
I mean, look at the film Her, you know; it had a sexy voice.
You know, look at Rachael in Blade Runner.
So we think it has a cluster of other features.
And I think the unschooled human is fodder for being fooled.
And there are going to be all kinds of people marrying their computers.
God.
Well, you can just imagine, for instance, that artificial pets can't be too far off; that they'll be able to create, you know, a little robot that is cute and fluffy and has enough variability in how it interacts, and seems to interact enough, that people actually do form an attachment to it, right? The same way that they might to an animal.
Sherry Turkle has been writing about this for years, you know, because there already are these little devices and have
been, and they don't even need to be very sophisticated for people to be fooled. And
maybe that's right in certain cases. I mean, if you're lonely and alone and you can't take care
of a dog, like, you know, a lot of older people don't want
dogs because they don't want to be responsible, you know, should they pass away? You know,
maybe that's the right thing, but I just think it's nice that people understand the stakes and,
you know, Blade Runner, the old version depicted a world full of artificial pets.
Yes.
And this may be cultural too, right? Because viewers in North America found it sort of sad: like, oh, it's not a real dog. But honestly, you know, in Japan, people don't feel that way about their artificial devices. So, you know, part of this is
cultural. But I do think these kinds of cultural dialogues are important. And there are costs,
like I've indicated, in assuming something is sentient when it's not, because if we legally protect non-sentient computers, there will be tradeoffs with sentient beings.
Well, so let's end on this question.
I do think that we need regulation about the kind of AI that's developed.
Do you think we need more education on this issue?
What are your views in terms of, you know, protecting us from bad actors in this space? Like, it seems like neither of us are too scared about, you know, the Elon Musk version of AI, right?
I'm scared. No, I'm scared.
Oh, okay. Oh yeah, totally. Oh, I'm sorry, I don't mean to put words in your mouth.
No, no, no, it's okay. It's just, it goes back to those savant systems. So I'm amplifying his worries. I'm saying, like, it can happen even sooner, like the control problem, like the paperclip factories, because of the savant systems, you know. So we need to be super careful. But it's not at all to say that this is around the corner, or to say that it will look like the Terminator. I think that, you know, the unfortunate reality is a lot of the media stories on this topic use Terminator pics.
Right.
Yeah, and the AI companies were like, oh, my God, you know.
So we do need to separate robotics and the issue of killer robots, you know, which the military is trying to develop, sure.
Oh, yeah, they're developing them.
That's another conversation. I mean, hey, a superintelligence could be a disembodied system; it doesn't need to be a robot.
And it's not to suggest it will be a doomsday scenario. I mean, I'd like to end on a positive
note. So, you know, to answer your question, I think we have to do everything we can to have
conversations about these issues and to facilitate research,
not just in AI, but in AI ethics and other philosophical areas of AI. Pay attention to the whole picture and ask: how do we move forward with AI for human flourishing? And that's why I'm working with Congress on legislative guardrails. So yes, I do think, as we move forward, we don't want unfettered corporate control of our future, especially because these involve issues like elections, privacy, thought data, to go back to, you know, what I've been talking about: the possibility of having chips in the head that detect our thoughts, or even biometric wearables like wristbands or shirts. Like, we really want to make sure that our lives, or our children's lives, or grandchildren's lives are still safe and good. The technology should be there for us, not to control us.
Yeah. You know, what you make me think of is that, you know, this writer, James Bridle,
I don't know if you're familiar with him. He wrote a wonderful book called New Dark Age.
And one of the theses was, you know, we're building technologies that are so complex
that we don't fully understand their behavior; we can't always predict what they're going to do. And we also trust them too much. You know, it's the example of the person who follows their GPS directions right into a lake.
Right. Oh yeah, I know. God, I've seen it.
Yeah. And it's those sorts of things, or the facial recognition algorithm, right, that ends up being discriminatory, or ends up cutting people off from opportunities, or ends up falsely implicating people in the criminal justice system.
Those are the nightmares that we should actually have. And they're already happening.
And yeah. And luckily, I am happy to see so much concern.
Yeah. About all of these issues in Congress.
You know, nobody wants algorithmic discrimination.
And we certainly don't want AIs that amplify the discrimination that's already there, you know, in the socioeconomic system,
implicit biases and whatnot.
And the good news is I think we can carefully navigate that.
I'm pretty optimistic about that. I'm more concerned about the robotization of warfare. And I know the military is too, right? I mean, they're concerned about the use of AI as a dead-hand device. I mean, there are a whole bunch of issues involving war and
autonomous weapons. And I'm concerned about the possibilities of savant systems being dangerous.
Another thing which we didn't get to, but which I have great concerns about, is technological unemployment.
Yeah.
Yeah. And I think we need to plan for that. And GPT-3 is suggesting to us
that it's around the corner.
Yeah. Because of what it's able to do. I mean, I've already used it; there's a wonderful app called AI Dungeon, which is, uh, basically a text adventure that GPT-3 generates, you know; it generates text for you that you interact with, right?
Oh, sweet.
Have you tried this?
No, but I'm going to do a show on this with David Chalmers, who we were just talking about.
Oh, cool.
And with GPT-3, and yeah, so, cool city. I'll
be playing with that app.
You have to try this. It's called AI Dungeon, and it says, you know, "You are a wizard, and you arrive at the castle. What do you do?" It's like a text adventure. You say, "I enter the castle," and it generates text for you that really feels like a text adventure from 1985, right? And I have spent hours playing with it because it's really fun. And it is, you know, still created by humans; like, a human is employed in making this game, right? Okay. But a game writer was not. And there are things that it can't provide that a real game writer can. But it is a case where, you know, this is changing the nature of work in games, potentially. I could see how that could be the case in the future, because this thing can generate effectively infinite text, right?
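(To make that concrete: here is a rough sketch of the prompt-and-continue loop behind a game like the one Adam describes, under the assumption of a generic large-language-model completion API. The `complete` function below is a hypothetical placeholder, not a real library call.)

```python
# A rough sketch of a GPT-3-style text adventure loop. The running story
# doubles as the model's prompt, so each continuation is conditioned on
# everything that came before.

def complete(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to a language model and
    return its continuation. Wire this up to a real completion API."""
    raise NotImplementedError

def play() -> None:
    story = "You are a wizard. You arrive at the castle. What do you do?\n"
    print(story)
    while True:
        action = input("> ")            # e.g. "I enter the castle"
        if action.lower() in ("quit", "exit"):
            break
        story += "> " + action + "\n"
        continuation = complete(story)  # the model writes the next beat
        story += continuation + "\n"
        print(continuation)

if __name__ == "__main__":
    play()
```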
Yeah, yeah. And I mean, when you have seamless, or close-to-seamless, conversationalists, you can outmode a whole bunch of workers. When you have grocery checkouts... I mean, it's all coming.
And so, you know, I don't know if Andrew Yang's proposal of universal basic income is the solution.
But I think we need to have conversations about it because if you think political divisiveness
is bad now, wait till... And I hate to be grim because we're already dealing with so much, but
we can see what happens with wide-scale unemployment and the instability it can create. And we want
people to feel valued and we can really do this right. I mean, it could be a situation where
people are able to take really long summer vacations, shorten the work week. It can make
their jobs better if we do it right. But these are the things that everybody thought about, you know, all technological progress,
right?
This is what Keynes thought we'd have, what, a 10-hour work week or whatever because of
technology.
And we're actually working more hours than ever.
We're just chained to computers now.
It's hard to know. Like, it turns out that those are not technological problems at the end of the day. They're social problems that we need social solutions to.
That's right.
That's right.
Our technological innovations often outpace the maturity of a society.
But there are models of democracies that have done it right in terms of how to treat the worker.
I mean, like, you know, notice how German companies give so much time
off, I believe it's six weeks a year, to their employees. So, you know, we just need to change our priorities, right? Instead of fighting over some things that I think are very divisive and just shouldn't be discussed constantly, over and over, we need to turn to, you know, finding common ground. And I think everybody thinks employment is important, a future for their children. We should come together. I
mean, maybe AI will bring us together.
Maybe it will become an us versus them thing. People seem to like that, right? I mean,
we're seeing so much of that. Except, presumably, the most likely thing is that a human has created the AI and they're invested in keeping it around. You know, like, the "us versus them" might be not all of humanity against the AI, but all of humanity against the humans that created and ran the AI and are going nuts with it. Just like we currently are, you know: we're tussling with Google and Facebook and all these companies, trying to get them to, like, rein in your fucking technology that's hurting everybody.
I agree. And I think we need to consider that the new reality in 20 years
could be that the most powerful organization on the planet
is not a nation state, but Google.
And we need to think about what we want and how to do it, because when you look at the means of production, not to sound like a Marxist or something...
Oh, feel free.
No, I mean, I lived in a communist country when I was young and I'm not at all a Marxist.
But I just want to say that, you know, in the future, who owns the capital when we're talking about algorithms? We're talking about
a really subtle issue involving intellectual property. It's really hard to patent an algorithm,
but whoever owns that algorithm owns the world. And it could be one person. And I don't know whether the machines will need us. I just... once all human abilities can be taken over, I'm not sure what our role would be.
So that's why we need guardrails set up right now. We need to start.
And that's why we need people like you actually thinking about these things and we need
philosophy to be part of this conversation.
I think so. I mean, you know, I've been very controversial today, but it keeps people listening. And, you know, feel free to obviously disagree with lots of what I've said,
but I think it is important that we have these kinds of conversations
about what we want the shape of the future to be.
Yeah. Well, thank you so much for coming on to talk to us about it.
It's been an incredible conversation.
Oh, I've had so much fun. Thank you so much for having me. It was great.
Well, thank you once again to Susan Schneider for coming on the show. If you liked that episode, please, please, please leave us a rating or a
review wherever you subscribe. I know every podcast host says this, but we say it because
it's so important. It really, really does help us out. Okay. I want to thank our producers,
Dana Wickens and Sam Roudman,
our engineers, Brett Morris and Ryan Conner, Andrew WK for our theme song.
You can find me on my website, adamconover.net or at Adam Conover,
wherever you get your social media.
And hey, just to remind you, I do stream on Twitch on occasion,
so give me a follow there if you feel like it.
And until next week, we'll see you next time on Factually.
Thanks so much for listening and stay curious.