Theology in the Raw - S9 Ep954: Robot Theology, AI, and Sex Robots: Dr. Joshua K. Smith
Episode Date: March 14, 2022

Joshua K. Smith is a theologian and pastor researching the ethics of AI and robotics. He is the co-chair of the Kirby Laing Centre's Technology Hub. He is the author of Robot Theology: Old Questions through New Media. Is it okay to punch a robot in the face? Questions like this launch us into a scintillating conversation about a moral theology of robots, including the growing phenomenon of sex robots.

Theology in the Raw Conference - Exiles in Babylon: At the Theology in the Raw conference, we will be challenged to think like exiles about race, sexuality, gender, critical race theory, hell, transgender identities, climate change, creation care, American politics, and what it means to love your Democratic or Republican neighbor as yourself. Different views will be presented. No question is off limits. No political party will be praised. Everyone will be challenged to think. And Jesus will be upheld as supreme.

Support Preston: patreon.com | Venmo: @Preston-Sprinkle-1

Connect with Preston: Twitter @PrestonSprinkle | Instagram @preston.sprinkle | YouTube: Preston Sprinkle | Website: prestonsprinkle.com

Stay Up to Date with the Podcast: Twitter @RawTheology | Instagram @TheologyintheRaw

If you enjoy the podcast, be sure to leave a review.
Transcript
Hello, friends. Welcome back to another episode of Theology in the Raw.
All right. So I just got off the podcast with our guest, Josh Smith, and my head's still spinning. I'm trying to recover here. We talked about all things related to robots and AI. I guess I should give a little bio here of Josh. Josh is a theologian and pastor researching the ethics of AI and robotics. He's the co-chair of the Kirby Laing Centre's Technology Hub and the author of the book we're going to talk about, Robot Theology: Old Questions through New Media. He got his PhD at Midwestern Seminary, I think in St. Louis is where it's at. And I don't know Josh. I just saw his book.
We connected on social media.
And I said, dude, this looks interesting.
Let's talk about it.
And this was a fascinating conversation.
I asked a lot of stupid questions.
We did get into some racy stuff the last half when it comes to sex robots.
I mean, that's a touchy, sensitive category. And so if you're not ready for that, then just listen to the first half. And we talk about a lot of stuff not related to sex robots in the first half. So anyway, without further ado,
my goodness, let's dive into this conversation with the one and only Dr. Josh Smith.
Hey, friends.
Welcome back to another episode of Theology in the Raw.
I'm here with Josh Smith, author of Robot Theology. So I just want to say, all right,
go. But give us a background. Who are you? What do you do? And then what got you into wanting to understand robot theology and AI and this whole world that I know hardly anything about? Yeah, tell us about how you got into it, and that'll take us, I'm sure, in many different directions.
Yeah, it's kind of a long story, but I'll try to make it quick. So I'm from Mississippi, a Southern boy. Grew up here, been to other places, was in the military for a while. In high school, I got into robotics,
programmed a little bit for the Nissan factory just up the road. Well, I'm in Colorado today,
but up the road from where I grew up. And yeah, I just got fascinated by the machinery.
It's just one of those things where either you're really freaked out by it, or you embrace it and you're comfortable with it. And I guess I was. And so, yeah, I wanted to study that more. But there weren't any college programs that I could get into or afford. I had a really terrible GPA in high school. I was kind of a burnout slacker. So I joined the military. I guess that was the logical thing to do.
Is that where slackers go, the military?
Yeah, man, I'm telling you, it's a strange place. But yeah, I joined because of the bonus money and the GI Bill. And so I haven't paid for a single degree that I have. And actually, my wife, even her graduate school was paid for.
Wow.
So, yeah.
But anyway, I worked with some systems there that were semi-autonomous, defensive weapon systems, like a big R2-D2 robot with a 20-millimeter gun on the bottom of it.
Really?
So, yeah.
Can you define semi-autonomous? And by the way, I try not to do this, but if I hear something that I don't quite understand, I might jump in, cut you off real quick, and have you define it. So I hope that doesn't destroy your train of thought. But, semi-autonomous?
No, it's fine. It's a combination: a human in the loop or on the loop. Basically, DOD policy is that a human will always be in the loop.
So it's not going to make a decision, even though it can use LIDAR, which is a laser-based imaging and ranging system that captures everything. It just looks like a big scope, if you're familiar with rifles or anything like that.
And it could engage the targets by itself, right? It didn't need the human operator.
And so there are a bunch of us who sat in a room all day
and watched screens and it would light up and say,
hey, I think this is a potential target.
And then it would alert the operator,
hey, do you want to fire or not?
And so that's what I mean. The tracking systems and stuff were the autonomous part, okay, if that makes sense. It's not like it made a decision by itself to engage. And even as we get more advanced in the conversation with AI and stuff, it's always these mathematical, evaluative decision-making processes. That doesn't mean consciousness or anything like that. It's just been programmed to recognize certain patterns. It's really good at that, and it does that. So that's about the extent of autonomy.
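[Editor's note: for readers who want a concrete picture of the human-in-the-loop pattern Josh describes, here is a minimal illustrative sketch in Python. Every name in it is invented for this example; it does not model any real DOD, Navy, or vendor system. The point is only the control flow he outlines: an autonomous stage detects and scores potential targets, and the engage decision is gated behind an explicit human answer.]

```python
# Minimal human-in-the-loop sketch (hypothetical names, simulated sensor).
# The autonomous part only detects and scores; a human makes the decision.
import random
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    confidence: float  # pattern-match score from the tracking stage

def detect_tracks(frame: int) -> list[Track]:
    """Stand-in for the autonomous sensor/tracking stage: pure pattern
    recognition that assigns scores. It decides nothing."""
    return [Track(frame * 10 + i, random.random())
            for i in range(random.randint(0, 3))]

def operator_confirms(track: Track) -> bool:
    """The human in the loop: the system can only ask; a person answers."""
    answer = input(f"Potential target {track.track_id} "
                   f"(confidence {track.confidence:.2f}). Engage? [y/N] ")
    return answer.strip().lower() == "y"

def control_loop(num_frames: int, alert_threshold: float = 0.9) -> None:
    for frame in range(num_frames):
        for track in detect_tracks(frame):
            if track.confidence < alert_threshold:
                continue  # below threshold: never surfaced to the operator
            if operator_confirms(track):
                print(f"[engage] track {track.track_id}")  # actuator stand-in
            else:
                print(f"[decline] track {track.track_id} logged")

if __name__ == "__main__":
    control_loop(num_frames=20)
```

[An "on the loop" variant would differ only at the gate: the system proceeds unless the operator vetoes within a time window, which is why the distinction matters in policy debates.]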
So this R2-D2 with a machine gun, like, you would say, here's a target, kind of punch a button, and he's going to go, and then you have to punch another button to actually shoot? Or do you just say, all right, there's a target, do your work, and you're kind of standing back making sure R2 doesn't turn around and shoot the wrong person or something?
There are both systems. So the ones used by the Israeli defense, they have some that will just loiter around. They go up into the sky and they're on their own, right? They decide when to engage and when to, you know, attack.
Like a drone that's not actually being operated by somebody?
It's just kind of doing a circuit?
Yeah.
Ours were always piloted in a sense, okay?
And it's not just one human, right?
There's multiple.
So we had the Navy with us. We're kind of getting into the weeds now. They would monitor the weapon system, we would monitor the weapon system. There are just so many different components. It's not just one machine out on its own, you know, doing its own thing. There are always multiple humans evaluating its evaluation.
Okay. All right. So going back to your journey, this is what you did in the military. Then take us from there.
Yeah. So I did my PhD at Midwestern, in Kansas City. And I've always been a sci-fi person, nerd, comic books, whatever, huge Batman fan.
But unrelated to that, I was watching a couple of different shows while I was doing some
particular research for class.
And I wanted to do something on AI and robots, particularly around, like, what does it mean to be a person? What does it mean to be human? And so one of my professors said, that would be really interesting, because I'm really tired of reading the same papers over and over and over. So please do something different.
And I really wanted to do warfare, mechanized warfare and stuff like that. But he was like, why don't you do sex robots or something like that? I don't remember quite all the details of it, but it was like, I don't really want to do that research. But I did, and I'm grateful for it, because it really pushed me, not just in sexual ethics, but, you know, when you think about shows like Westworld and the British Channel 4 series Humans, which is a better version, in my opinion.
They're asking questions about what does it mean to be human?
How do machines relate to God?
They're asking questions about idolatry.
They don't use that language, but that's what they're talking about.
And so I got into the literature, which is very broad and strange. And then I was like, man, there's not a very conservative theological perspective on these issues. It's always something either Buddhist or some kind of far-left theologian, whom I respect. And I'm not saying that they haven't done the work or anything, but it's just not a conclusion that I would reach about Jesus. And so I wanted to do
something faithful, both to biblical theology and systematic theology and try to bring in
my perspective of scripture and, you know, wrestle with the text, even though, you know,
the Bible doesn't talk about AI and robots. But it does talk about the questions behind why we create some of those entities.
And so that's what I tried to do in my work.
And I wrote a couple of papers on that.
It turned into my dissertation.
And I finished that.
I graduated in 2020 and went on to do some more research.
And so I wanted to broaden it out more.
And that's what led to this book with Wipf and Stock. So yeah.
Can you give us a somewhat brief, not too brief, but not too long, summary of what robot theology is? Like you even said, the scriptures don't speak directly to this, obviously, but they do give us principles through which we should evaluate these modern questions and categories. I'm not even sure what modern questions or categories I'm supposed to ask. I mean, that's how new all of this is to me. So give us a kind of 101 version of how we should think.
Yeah. Well, for me, I always try to approach topics like, I want to go to some texts in scripture and try to figure out, okay, what is the underlying theme?
What is the theology that's being presented in the Bible?
And then how do I apply that to, you know, systematic issues?
So what robot theology is, is just looking at robots and AI from a biblical and theological perspective. That's what I mean by that. I'm trying to make a category for it. I'm not really quite sure where it fits. I'm trying to take biblical theology, because that was my heart, you know, I'm an Old Testament guy, and kind of marry that to some systematics, but not go straight to systematics and just say, you know, we're not going to do any Bible.
We're not going to do any ancient Near Eastern context or anything like that.
So I try to marry the two.
I don't know if I do it well.
But I try to let the theology of the Bible kind of drive my conversation about my theology of robots and the extent of where that can go.
Okay. So what are some theological questions we need to be asking?
Yeah, I think the biggest one right now is, what does it mean to be human? And the AI ethics community is picking up on this, and it kind of comes from, like, what separates us from animals, the moral consideration for animals.
So the animal rights movement is very closely tied to the robot rights movement.
And as far as like, no, it's not quite a human or a homo sapien, but maybe we should consider it morally.
And so that's kind of where you have to start in a lot
of ways with some of these conversations. Other issues that I address in the work is robots and
racism. That's a big part of the literature. Robots and racism?
Robot, I mean, yes, sir. How, I don't understand how those are related.
understand how those are related. Yeah. Okay. Um, so robot in the check means forced labor.
Okay. So there's a long line of literature that looks at robots as a re-imagined slave.
Okay. So if you go back and there's several authors, uh, Gregory Hampton is a good one to look at. He wrote a small book on re-imagining slaves and he just looked at different movies
the animatrix series if you're interested there's all these imageries of the black body even blade
runner like so the the female that you're not quite sure whether or not she's a robot or not
she very much should have been a black woman. She should have been
played by a black woman. Because the theory is that, you know, those in power can use other
people's bodies as they please. So if you think about slavery, chattel slavery, that's what it is,
right? It's an economically driven decision to make someone less than, to exploit them for economic reasons, and to use this rationale,
well, they're not really like us, they're not really human, so it doesn't matter if we treat
them as property. And those arguments are very similar in the robot rights literature.
Robot rights, okay.
Yeah, so there's an article, and you can Google this, robots should be slaves.
Yes, that's the title of it.
And that's the argument.
And the people that have written on this are not white people, okay? So it's not just a bunch of white guys who are like, robots are like slaves. This is how they're experiencing and understanding how they're imaged in science fiction, and even in, you know, modern depictions of robots as, you know, a mammy. I mean, that's what Rosie was from The Jetsons, right? She was a mammy. And so there are many different examples of that.
And so even if you go all the way back to the 1920s with Karel Čapek's R.U.R., which is Rossum's Universal Robots, that's where we get the word robot from.
It's about this question of would robots want to be their own masters?
Would they want to be free or not?
And he says, yes, they would want to be free.
They eventually don't want to be slaves.
And so the question is, should we make something to be for service?
So I'm going to ask some stupid questions, because I'm obviously not in the loop. It's so obvious that I feel bad, I feel dumb asking it. But, like, there's no moral agency in a robot, right? So if I made a baseball bat and I use that baseball bat to hit balls with, what's the difference between that and if I make a machine that has no consciousness, no whatever, and use it to do whatever I want to do? And, well, I have another question about sex robots, but we're going to save that for a little bit.
Absolutely, that's a good question. That's a fair question.
What am I missing? Is it just, it's obviously more intelligent, it's more intelligent than a baseball bat. But is intelligence even the right term? I don't know.
Yeah, no, that's good. A lot of people go there. It's like, it's just a tool, right?
You know, I wouldn't think twice about destroying a toaster. Or, you know, if you wreck a car, you're not really destroying a living entity.
And that's not quite the argument that people are making.
The argument is, you know, what does it do to the human?
For example, if we had a humanoid-looking robot that we know absolutely 100% is not conscious, sentient, whatever you want to fill in the blank with.
Okay, that really doesn't matter as much because you beating that robot in public would cause a disturbance, would it not?
Think about it now. You're talking about, like, a human-looking robot. If it looks human... ah, wait, okay, sorry. My wheels are turning and they're a little slower. So is it because there's some sort of connection, some sort of personification that's attempted here of another human? I guess unless the robot looked like R2-D2 or something, I don't know. Is that the issue? Because it could reshape how we think about the human that this robot is sort of mimicking. Like, again, if I made a robot that looks very human, painted it, say, a darker color, a non-white human, and then treated it like a slave, in and of itself that would maybe raise questions, but it's more the, gosh, what signal is this sending? What is this doing to my own view of humans that this robot is trying to imitate? Is that the idea?
Yeah, yeah, you're getting it. You got it. You're not far off. Basically, and this is a Kantian principle, if a man kicks a dog, right, what does it say about the man? And this will definitely get into sex robots later. But, you know, if we make an entity that looks like us... even if we don't, there have been studies done.
Kate Darling is a researcher at MIT who had these robotic dinosaurs, like very, you know, very unrealistic looking dinosaurs.
And she did a study and gave people like an hour to play with them.
And then after an hour, she said, okay, here's some hatchets. I want you to kill them.
And they said no. So, okay, well, as a group, decide: either we're going to kill one of them or we're going to kill all of them. So you guys have to decide. And there was this very somber moment where people were, like, emotionally impacted by having to kill this robot. And even with bomb disposal robots, you know, there have been military funerals for these entities. People's Roombas: a Roomba is definitely not in this moral consideration category at all, but people want the same Roomba that they had, because it's like a part of their family. People name them. I mean, the list just goes on and on. It's not a bug in humanness to care
about artificial entities. I think it's put there by God that we care about things because all things
belong to God. And this is kind of where I go in my research. I say, well, what's the difference between kicking, you know, a stump or a fence post or whatever? It's like, well, in my theology, that doesn't belong to me. It's never a product of my own. And everything, especially something that emulates emotion. So we're talking about social robots, things like Kismet, if you're interested, you can look that up. There are robots that are made to treat autism. There are robots that are made to treat dementia. And so they elicit a very emotional response. And so
it will cause moral harm to the user if you took that robot away. And even some parents who have
used those robots to treat their kids with ASD have offered very high numbers to keep those
robots once they're taken away. And they can't do it because it's not their property to sell.
So I think that's just a small snippet of the research, that if you bring these entities into our lives...
I mean, just think about our cell phones.
A cell phone can't wash the dishes. A cell phone can't, you know, interact with the kids. I mean, it can, but not in an embodied way.
And once you bring embodiment into it, I think the emotional
attachments are going to be deeper. I think the moral consideration that we give them will be a
lot more complex. But I also think the potential for harm is a lot greater. And so, you know,
we need to really think about those nuances there. Yes, it is a tool, absolutely it is, but there's also an enframing that's happening too.
And it's performing... I mean, I'm just thinking of a range of different kinds of inanimate objects, but yeah, robots are performing human functions, which, again, doesn't accredit consciousness or any kind of intrinsic difference between human, animal, inanimate objects. I don't know if I'm just categorizing it that way. Like, it would be inanimate. I mean, at the end of the day, it's not any morally different in and of itself than a rock, a stone, a building, but it's performing human functions. But is that a key
difference? And also, we shouldn't destroy creation arbitrarily for whatever reason,
especially a product of human creativity. If I build a car using my creativity, which I have because I possess God's image, I'm using things in God's creation that are fulfilling a certain design. And then I just bash the heck out of it for no reason. I wouldn't put that on the same level as adultery, but I mean, it just kind of goes against the grain of a creature trying to honor a creator, in a world he has been placed over as a steward, to just arbitrarily destroy something. How am I thinking so far? Am I correct in the thoughts I've said?
So, no, those are good thoughts, man.
I think we get hung up on the sentience and the moral agency argument.
Okay.
But there's also another side, the moral patiency argument.
And so it's like—
What's that, moral patiency?
It can be an object of love or care. A moral agent can do both, right? It can give love and care and it can receive love and care. But a moral patient is only on the receiving end. So that's kind of where I go, and I talk about this more in the book. So when we look at a robot or an AI entity: no, it's not sentient. It's not the same as a human,
right? Although in a lot of people's anthropology and philosophy of mind, philosophy of science,
they don't really make good arguments for why that's not the case, like why it couldn't be.
I think Christian theology has a good answer to that. But that's a different line of thought. But anyway, you know, we kind of use that to say, look, they can't have the same moral rights that we have as, you know, natural creatures, or the human dignity and respect, whatever you want to fill in the blank with. But maybe we should consider them as moral patients, because it will potentially harm the moral agent if I have a negative relationship to this entity. Which is happening.
And we're not even talking about social robots yet. You know, there's a group of guys in Japan
who forego, you know, physical interactions with embodied females because they have digital
girlfriends. So much so, and everybody already knows that Japan's struggling with
producing another generation that's going to take care of the current generation.
But there's also other cases where people are abusing their AI chatbot girlfriends.
There's one called Replika. It's very advanced. It's an AI. And, you know, whether or not there are humans behind it, I don't know. But anyway, there's a group of guys, and there's an article about it recently, who are just intentionally abusing this entity.
And that's completely okay, right?
That's completely, there's no, you know,
it's just like abusing a toaster, right?
I mean, is that harmful or not?
Should we allow that or not?
Should that be?
What's your response to that? I mean, given where we've been so far, I think I'm getting it a little bit, but I would love to hear you say it rather than me fumble around.
No, I think, because of what it might do to me in my own interaction with other humans, okay? And this is kind of the baseline argument here for me, from my perspective: one, that's not a morally positive, forming action, right? You shouldn't abuse anything, whether it's alive or not, whether it's a dog, whatever, because that's just not what you should do or ought to do. And so I don't necessarily
have to believe that it's conscious or that it understands what's happening. Matter of fact, we know most AI, even though it's very advanced
and can make complicated mathematical decisions,
it doesn't understand the answer that it gave.
So it's not based on that, and I just think we can just throw that out.
Let's just stop talking about those kinds of questions
Because, just like, Preston, I can't prove that you're morally conscious. I can't prove that I have a brain. Some people would argue otherwise, but it's one of those things that we get stuck on. And so I think a lot of harm is going to happen to moral agents if we don't take some steps towards policy and regulation.
And we can write moral statements all day long, and I'm all for that.
But I think where theology needs to catch up, especially in this debate, is we need to be discussing this with people who are writing the policies to ensure like a solid robust anthropology is going behind some
of this because it's quite thin it really is when you when you look at some some of these computer
scientists and it's not their fault right like i'm i'm here in colorado this week to talk to
to engineers about this very subject it's like where do you have in your pedigree time to think
about moral philosophy like if you do have one,
it's probably one course and it's like a crash course into philosophy. And so there's this
massive issue where we're developing, creating, consuming, prosuming, but we don't have a lot
of time to really sit back and think like, okay, should I make this product? What are the implications of making it?
Yeah, it might solve a societal need for the moment.
And this will get to sex robots in a minute.
But how is it also deforming people in the process?
Yeah, that's right.
And so, yeah, it's a both-and for me. Like, I think technology, robots, AI, they can have a very positive impact on our life.
And I give very practical examples.
But it can also go too far, right?
If it's substitutionary and not supplemental.
And so we have to be very careful about crossing that line.
And it's rather thin.
It's like a membrane sometimes.
Substitutionary, meaning these social robots, I like that phrase, can become a substitute for other human beings, rather than a supplement, a supplement that doesn't hinder our social beingness.
You've thrown me so many different categories.
I feel like I.
I'm sorry.
I'm sorry.
My vocabulary is like, has like 12 words in it.
Anyway.
Yeah, no, that makes sense: how you treat another object, whatever it is, that is somehow substituting for or even functioning as another human, will shape how you categorically treat and view other humans.
Yeah. Right. And that does... gosh, I want to get into sex robots. I didn't mean to word it that way. I want to get there in a second. I do have a nagging question, though. It's kind of, I guess, related to what we've been talking about. Is it possible that an AI could cultivate something like emotions? Maybe not consciousness, but can an AI, say in 50 years, 100 years, get sad if you come home and don't talk to it, and it asks you how your day was and you're kind of rude to it, kind of like how we are to Alexa? Alexa, play this song. No, not that one. You're dumb. You know? Is it possible for the mathematical processes to precipitate that? I do understand it's just a bunch of ones and zeros firing like crazy, right? It's doing the matrix thing. But after a while, is it possible that that could lead to actual emotions? I just had that question in my mind. I don't know if that's the right question to ask.
No, it's a good question.
Yeah, I think, to me, it's about your capacity for mystery. I want to tell people in these circles too, when we talk about this stuff: I believe in a virgin birth, I believe in zombies, I believe in resurrection, you know? So, to me, I'm not going to say that God can't do anything that he wants. Do I think that humans are going to solve it? No. And I go back to, like, Ecclesiastes should be a primer reading for many technologists, if not all. You need to understand, one, you're not making anything new that God hadn't already placed in the world. You're just remixing minerals, chemicals, whatever. And so, you know, if God wants to allow that to happen, I think it could, as a judgment more than anything. Then sure, I think it's possible.
But on a practical level, they're making robots to feel pain.
They are?
Yes.
Meaning? Okay. If you slap the robot, it goes ow? Or, I mean, what does that mean, feel pain? Like, sense pain? Like, what should be painful?
So, like, you know, so that somebody who's practicing dentistry could practice on a robot that might emulate real pain, emulating a patient. You drill too hard and it goes, ah, or whatever.
Yeah, like, oh, you know.
Or, yeah, if you slap it, I guess it has some type of emotion.
And I think with pain, this kind of goes really deep into philosophy, too.
It kind of depends on your view of substances.
Like, do you believe in immaterial substances?
Or, you know, there are many Christians today who don't believe in that, and so pain is just a brain state; it's not an immaterial substance.
Okay. So one, philosophically, it's possible that a robot could feel pain or emotion,
just like it's possible for animals to feel it. And so how do we prove or disprove that though?
That's my question that I can't answer: how do you observe, externally, an internal state? You don't. And that's why, when you go to the doctor, they have that little chart of smiley to frowny faces. They're like, how do you feel? You know, they can't put electrodes on your brain and say, well, he definitely feels this amount of pain. Now, there's imaging you could do that fires off, sure. But as far as actually measuring the amount
of pain. And so I think it's going to be similar to what we might see with robotics in the future.
And whether or not that's just calculus happening or if it's a real pain,
I don't know how you would prove or disprove that.
Do you see, I mean,
is there any end to the development of AI?
I mean, could we end up in a Matrix-like situation
where they take over the world?
I don't know.
I mean, is that the question
that every stupid person like me asks?
Because, I mean, the advancement of technology just seems to be exponentially increasing, getting more and more advanced. Or is that not true?
No, no. It's true. Like, you know, as far as stuff like Moore's Law, which you can look up, where microchips, microprocessors are getting smaller, yes. And so I think there's some nuance to say whether or not we're getting exponentially better with that.
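[Editor's note: to make the "exponential" claim concrete: Moore's Law, in its common form, says transistor counts double roughly every two years, i.e. N(t) ≈ N0 × 2^(t/2) with t in years, so two decades gives a factor of about 2^10 ≈ 1,000. Whether useful capability, as opposed to transistor count, grows at that rate is exactly the nuance Josh raises here.]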
Right, yeah.
That's the question for me.
There's nothing new here.
We're just making new combinations.
And so the stuff that's in the aircraft that I was on a few hours ago, it's not new technology, it's just repurposed. You know, the turbines and all the other stuff are just repurposed. And so I think with AI and robots, too, they're so informed by science fiction, which I love, but science fiction is not where you go to get the real math and science and philosophy here.
And it's a lot less sexy than what you see on television.
You know, everybody likes the Boston Dynamics stuff, but you don't see all the failure in that process
and all the problems and calculations. Like, it's a big deal. And it's very hard to make a robot that is similar to a toddler, that can actually not only understand emotional intelligence, but also, like, read the room, you know, manipulate you. I have a four-year-old, and now he's kind of testing the boundaries of things.
You know, so I think getting to that point,
this should be more of what we're looking at.
And there are some roboticists
who are asking that question about children.
But if it can blush,
I think if it blushes,
it definitely needs moral consideration
because that's a very complex thing.
If it feels any type of shame
or something like that
or emotes in that way, yeah, I think it should definitely –
It seems very possible that it could mimic human emotions.
Like that blush, that's a different phenomenon than when a human blushes.
But, kind of like what you said earlier, we're going to be faced with another humanlike thing functioning as a human, acting, embodying the emotional state of a human. So if we said something shameful, it's going to give off a response that's going to be eerily similar. Oh my word. Well, let's talk about
sex robots. And let me give a few
caveats here. This is the one area I've done a little bit of research in. A couple of caveats. Number one, this might be a little more uncut, raw, whatever, just because I think the nature of this conversation has to be. So if you
need to turn off this episode now, then feel free.
Of course, that means everybody's going to turn it up louder.
And I never want to be edgy for edgy's sake.
So if I say something explicit or whatever,
it's because these are, I think, are live questions.
I heard somewhere, read somewhere, a sociologist who said, given the rate, there are two things happening.
Technology keeps advancing.
Sexuality, or let's just say porn, or things like exposure to
and a hunger for sexual satisfaction.
As long as those things keep growing,
which it doesn't sound like people are getting bored with porn. It doesn't sound like people
are like, ah, we're kind of done with technology. Let's stop
advancing. As long as those two things keep
going, more people
will be having sex with robots
than with other human beings by
2050. I wish I could cite the article. I remember reading it. It was in like a peer-reviewed journal or something like that. It wasn't some New York Post thing or whatever, whatever that even means.
Anyway, that's my setup.
So what do you know about this aspect of the conversation? And is there concern? Is that an accurate possibility, what that person said?
Yeah, I mean, one, there are a couple factors that would lead to that. One is the cost. So right now, I mean, you have people that are making them, RealDoll, super expensive, like Harmony. I mean, it's just not a practical thing to have mass use.
Yeah, computers were expensive 30 years ago.
Yeah. So I think once we get to that stage... I mean, but don't get it wrong. People are buying these dolls. And there are even brothels across Europe and other places that have, they're not robots yet, but they have the silicone dolls. And so they're very popular. So yeah, I mean, I think it's possible for sure. It just has to be the right cocktail of circumstances.
But really, what that means is you're going to have handlers. So people sometimes argue that it's going to, you know, solve the problem of sex work. Well, no, because the sex robot is going to have to have some kind of handler to make sure that it doesn't violate the house rules, right?
I don't know any sex work that doesn't – other than illegal like prostitution and stuff that happens.
But where it's legal in certain states, in certain places, you usually have some type of
handler. So I would imagine that that would be the case too with robots. Consent's an issue.
It doesn't make sense to me. If we value consent as a society, why is this okay? Like, why is force, torture, or even rape, if you want to call it that, okay here? So it kind of goes back to:
do you think that this is just a mere tool of masturbation? Or do you think that this is
an entity that might be morally forming? And I think it is.
Well, absolutely. This is where it freaks me out, on several levels. One, I think this could, if that's true, once they get cheaper... I mean, let's just say the technology advances so much that the sexual experience with a robot is almost indistinguishable from one with another human being, with zero relational strings attached, right?
Potentially. You can largely create this object of your sexual fantasy in whatever way you want.
The experience is going to be almost – just almost the same.
Maybe, okay?
And let's just say that that's $100.
And there's more – yeah.
I think that would be disastrous for human sexuality. We've already seen this in a glimpse
of this with porn with guys 20 years old that are on erectile dysfunction medication because they
don't know how to have sex with a real human. I think it would be disastrous for human sexuality.
And I think it would possibly lead to a tremendous amount of harm toward other actual women and children.
Because like you said, it's going to morally form. And we know from porn getting more hardcore,
more violent, that this will probably have that same trajectory where it's not going to be just
this imitation of a consensual sex relationship. There's going to be probably a rise in BDSM type stuff and then more rapey kind of stuff.
And then that can only have a negative moral formation on people.
It really does.
Because we have not, the church has not succeeded with something like porn.
I mean, how many Christians, pastors even, are, you know, periodically using, habitually using, or even addictively using it? I've talked to kids who go to summer camp and they're going through, like, physical, physiological withdrawals because they haven't looked at porn in three days.
You know, this is not uncommon. And you know, we keep preaching sermons on, you know, reading your Bible, whatever, whatever.
Like, I feel like whatever we're doing, we're not discipling people well in what is leading to some catastrophe in our human nature. Multiply that by a thousand with sex robots. I'm not a fear-driven kind of person, but this really freaks me out.
Yeah. I mean, it's interesting, man. If you look at the creators, I know it sounds crazy, but just give them a little bit of benefit of the doubt here.
Because I've listened to some of these guys, and I don't think they're crazy.
And I think many – so there's a YouTube channel called Soft White Underbelly.
It is a crash course in empathy. It's all about sex workers, addicts,
you know, homeless people on Skid Row. But you listen to all their stories and
some of them feel like they are a therapist. They really do. And I've heard, I won't say her name
because I don't want to trigger anybody, but I've heard one porn actress say that she very much sees her work as therapy. And these men that will come to her and pay, you know, outside of video, pay for interactions with her, they feel like it's a morally forming thing, in a good way. And so when I got into the research, there's a massive number of scientists and others, philosophers, who are like, maybe we could treat the social pariahs of our day.
So, and I talk about this in my dissertation, I know it's weird, but think about pedophilia, okay? Like the thought of making a childlike sex doll, which there are, especially in Japan, and then saying that we're going to treat...
They are making these? This is a thing?
Yes, in Japan they're making this thing. They don't call them childlike, but I've seen videos of the factories. It'll make you cry, man. It'll make you cry. You know what it is. And I have a seven-year-old daughter. I'm like, that is disgusting. Anyway,
the theory is that they're providing a social good and reprieve for some of these
feelings. And even for the LGBTQ+ community, there was a paper that came out
that I'm very critical of because I think it's really offensive to that community
that says this is a form of treatment for that. I was like, just think about what you just said.
Like, you know, and this is from a feminist perspective, a new materialist perspective, we don't have to get into that. But I'm like, you know, maybe there are other ways to approach it other than, hey, let's just give them an entity to have sex with. And there are other things. But at the same time, on the other side of the coin, at least they're trying to do something. They're trying to provide a social good.
It's so vicious. The way I've seen it framed is like – and again, I'm thinking out loud here.
So if I use words that aren't as precise, I mean we're dealing with really sensitive stuff here.
So again, if even – if any of this is like really unhelpful or triggering, then I encourage you not to listen to this.
But these are going to be real questions we're going to need to wrestle with.
So one of the moral arguments I've heard is, yeah,
you have people who are like pedophiles or people who are rapists, whatever.
If you give them an outlet with these robots,
it will reduce the number of actual human-on-human rapes or adult-on-child abuse, rape, whatever. And you know who first used an argument like this? You might know it.
St. Augustine.
St. Augustine says, I am morally against the brothels,
but we need them. Otherwise,
can you imagine a world without brothels? There'd be a
lot more rape and adultery and all these things. I just came across that a couple of days ago. I
was like, oh my gosh, this is the same kind of moral arguments that we see in this conversation.
First of all, we'd have to ask the question: is that even an ethical way of thinking? And I think the answer is no. Offering an imitation of something like rape as an outlet, to me, the logic of that is horrendous, really. But also just practically, like you said, by acting on it, you're still morally forming and forging this desire and behavior in the human, rather than trying to minimize it or steer it away. You are not moving
towards the image of God in this person. You are moving very much away from that. So yeah, I don't think the argument
holds water, but we need to be prepared for what could feel like convincing moral arguments to
justify something like this. Yeah, it's almost got like a liberation feel to it. Like we're
providing an avenue to liberate these people from an oppression. But I push back against that and say, well, are you liberating them?
What are you liberating them to and from what?
And that's Roger Scruton.
That's not me.
But his book, Sexual Desire, it really explores some of those questions that I kind of build off of in my critique of this treatment for pedophiles.
But at the same time, there are two places in the world that are trying to help, you know, 300,000-plus people who struggle with, like, you know, attraction to children.
Yeah.
And there are legitimate options, I think, that are so much easier than giving them a robot. You know, chemical castration is one. It's just been proven to work, if you just look at the science of it.
For people with pedophiliac desires, chemical castration. Can you explain what that is?
Yeah, it's basically an implant that takes away testosterone, I believe. It just cuts it down, basically. And there's a documentary on Amazon Prime, it's called I, Pedophile, I believe; there are a couple of them. It's very frank, very graphic. But it interviews real pedophiles, and their struggle and their conversation with it, and their dialogue with potentially being castrated.
So it's not like a physical castration?
No, no, no. Chemical.
Yeah. Okay. So it just basically reduces your sex drive towards, I guess, more of an asexual whatever. And
this would be something that would be chosen. Because the science behind even pedophilia is that it is a... I hate, I don't love this term,
sometimes we think orientation is like a neutral category
and we typically only use it of same-sex attractive people.
But if you take the actual category of orientation,
there's many – there's a range of sexual orientations of people
who experience unchosen, sometimes unwanted sexual desires
that feel very innate, maybe even have some
biological basis that for the lifespan in most cases do not change. If we take that understanding
of sexual orientation, there are age orientations, there are sexual orientations, there are many,
even like racial sometimes, or some people have tried to talk about – I don't know how legitimate that is.
Polyamory has been trying to; there was an article in 2011 in a peer-reviewed journal. An attraction to younger people, it very much fits that category of unchosen, oftentimes unwanted, and oftentimes that struggle is with them for life.
So anyway, so you're saying chemical castration is something that they might say,
I hate this about myself.
I don't want this.
I would rather not have any sexual desire than this sexual desire.
And so that's happening now.
Yeah.
And you listen to their stories, you know, there's no societal benefit to them. They can't even self-identify, you know what I mean? They're immediately put on a blacklist, they're immediately, you know, cut out from some communities, and they're just trying to get help in some cases. And so what happens is they go to the dark net, they go to, you know, places like, I won't mention it for triggers, but, you know, free sites that have recently been very complicit in that genre.
And so, I mean, I just think as a local church entity,
we just haven't served that community at all,
and nobody wants to talk about it.
And I mean, some people can.
I mean, legally, I don't know.
Some people can talk about it. So that all kind of goes back to why people would make these robots.
And that's not just for sex robots, but for war robots and for worker robots and stuff.
It's all about, I think, an economically driven need or societal need that drives this. And so that's why it's not going away. It's not just a bunch of nerds like me in a lab who are just like, what can we make, you know? It's big companies, government entities, that from
the very beginning in the 1950s have really pushed this forward. Not sex robots, but the stuff that we see that has led to the developments that we have in AI,
in AI-driven robots, that is steeped in military funding.
Places like DARPA, Department of Defense.
And so I'm very critical of those entities because the problems that we're facing
now started back in 1956. I mean, there's other issues that started with the fall, obviously, but
this idolatrous desire to transcend our limitations, to profit off people's sexual brokenness,
to profit off our need to consume and feel what we feel is hollow.
That's what it's about, man.
I mean, we have to, if anybody can speak into that,
it's the theological community.
It's a brave new world, right? It's, um, what's his face? Amusing Ourselves to Death, who's that guy?
Postman.
Postman, yeah. I mean, he talked about that. We have these desires, these cravings. We are all, as Dostoevsky said, you know, we're all Dmitri at heart. We're all sensuous; we all have that Dmitri in us. I just got done with The Brothers Karamazov a couple months ago. And if we just stuff ourselves full of our sensuality, we end up in the brave new world, right? Just numb to everything else.
That's why I love that the way you're thinking through it on an ethical level is so much richer and more thoughtful than just these kind of real surfacy questions: is this right or wrong? Talking about moral formation and how this affects how we interact in society and treat other people. Is it moving us deeper into the image of God or further away from it? Are we going with the grain of the universe, as Hauerwas said, or against the grain of the universe? And these are more
broad, really important categories that sometimes we don't apply to these weird things you're talking about, man. What else you got?
No, I think it's good, man.
I think, you know, I know we don't like Heidegger because of his affiliation with the Nazis,
but I think he can really help some Christians, if I could say this, you know, rethink about
technology, because we're so driven by just one side of it, like where we started in our conversation: the tool is just an instrument. But it's not, you know. It's not just that, because it also transforms with me.
And so I'm writing on violent technology right now.
And I was thinking about the rock or whatever that, you know,
Cain used to kill Abel.
You know, it's a neutral, amoral object until you pick it up and bash somebody's head in with it.
And now it is a murder weapon.
It transforms with us.
And that starts in our heart.
That starts in our mind.
It starts with a theology.
It starts with idolatry and all those things working together.
So it's both of those things combining,
and then us making other entities and creatures in our image and hoping and praying that
it has some morally good outcome. But I think, to end on a positive note, it can be a mutually beneficial thing to society to use this technology.
And I think there's a lot of great things that are being done in the medical field,
especially with like autism and that treatment.
Like these robots are just, some of them are made, they're not Terminator, right?
They're like cute little bears and stuff like that, you know?
Like that's the kind of stuff the church should get behind.
And that's the kind of stuff we should support and lift up and be more critical of the military enterprise, which we're not.
So we could do a whole podcast on my issues with our acceptance of drones and like even just the rhetoric of like, you know, we need to use this.
And like, that's a very dehumanizing way to hurt and kill the enemy,
and oftentimes women and children, thousands of women and children.
We're so on it.
I mean, when I peeked behind that curtain several years ago,
I was like, oh, my word, this stuff.
How many innocent kids have died from drone attacks
and stuff that you just don't hear about?
Maybe I'm listening to the wrong news channel.
I don't know.
Even, going back, yeah, you know, even the gun debate. Okay, I mean, gosh, I've only got a couple minutes left. I don't want to open that up. But to say that a gun is simply a neutral object, that it's the person using it that's the problem, I think that underestimates,
to use your phrase again, the potential moral formation or even psychological formation that
inanimate objects can have. And anecdotally, I can speak to this. I remember, and I think I've
told this story before, so I won't belabor the point
but, well, two instances actually. One, when I was an idiot 19-year-old, pre-Christ, me and my buddies drove to Vegas one day, and I took my gun. I had a .22, shot squirrels with it. Took my .22, and all of a sudden we got in some car shit, like somebody was yelling, whatever. And we're a bunch of testosterone-amped-up athletes, you know. I had a buddy with me that, you know, made The Rock look small. So all this, we're raging, you know, and I've got this gun in the back, and I literally started crawling into the back seat. And I felt, to this day, this is 25 years later, I could feel the power coursing through my veins because I had a weapon. If I didn't have that weapon, I would have said, dude, let's get out of here. I don't know who these guys are. But I had a weapon. I was like, bring it on. You know, it just welled up in me, this completely different posture. A couple of years ago... we do a lot of hiking here in Idaho, and we've got these gray wolves in
Idaho. Most animals in the woods run from you. Bears, they run from you. Foxes, well, foxes aren't going to attack you. But wolves don't. They check you out. They'll just kind of trot along looking at you, like, what are you up to, man? You go hiking in the deep woods and it's pretty scary. So I got a handgun, not to shoot people. If you broke into my home, it would be physically impossible for me to kill you with my handgun and all. But I take it hiking in the deep woods, just in case.
I'll shoot a wolf if it's trying to kill me.
And then I'll eat it.
But I remember driving, like buying the handgun and driving home.
And I felt that sensation. I believe in nonviolence. Okay.
So I was like, I would never, but I felt this even driving around,
like just kind of looking at people like, what are you looking at, man?
Like, you know, it just formed, it didn't have a neutral effect on my posture, even though I'm committed to not using it on another person, no matter what. I do think, and this is everything you're saying, right? I think we do underestimate the power of inanimate objects for moral formation.
That's all I'm trying to say.
Yes. That's good. Good word, man.
Yeah. Well, hey, I don't want to take the last word, but I do have to go, and I've taken you an hour.
The book is Robot Theology.
The subtitle is Old Questions Through New Media.
Just came out a month ago, at least at the time of this recording.
So I would imagine there's not a whole lot out there that would compete with this book.
I haven't come across a lot of Christian approaches to robot theology, but I would encourage everybody to check it out.
This podcast has been interesting.
Josh,
I've learned a ton in this conversation.
Thanks.
So thank you for taking us on an unpredictable journey.
Yeah. Well, thanks, Preston. Appreciate it, man. Thank you.