Hidden Brain - Could You Kill A Robot?
Episode Date: July 11, 2017
Will we one day create machines that are essentially just like us? People have been wrestling with that question since the advent of robotics. But maybe we're missing another, even more intriguing question: what can robots teach us about ourselves? We ponder that question with Kate Darling of the MIT Media Lab in a special taping at the Aspen Ideas Festival.
Transcript
This is Hidden Brain, I'm Shankar Vedantam.
Have you ever talked to your computer, cursed it for making a mistake?
PC load letter.
Is that me?
Have you ever argued with the traffic directions you get from Google Maps or Waze?
Starting route to Grover's Mill Road.
Have you ever looked at a Roomba cleaning the floor on the other side of the room and told it, please come over to this side?
Turn left.
Left.
It just ran itself right off the edge.
Robots and artificial intelligence are playing an ever-larger role in all of our lives.
Of course, this is not the role that science fiction once imagined.
It doesn't feel pity or remorse or fear.
Robots bent on our destruction remain the stuff of movies like Terminator, and robot sentience is still an idea that's far off in the future. But there's a lot we're
learning about smart machines, and there's a lot that smart machines are teaching us about how we connect with the world around us and with each other.
This week on Hidden Brain, can robots teach us what it means to be human?
My guest today has spent a lot of time thinking about how we interact with smart machines and how those interactions might change the way we relate to one another.
Kate Darling is a research specialist at the MIT Media Lab. She joined us recently in front of a green robot dinosaur about the size of a small dog, known as a Pleo. It's going to be part of this conversation, but before we get to that, here's Kate.
Kate, welcome to Hidden Brain. Thank you for having me.
You found that there is an interesting point
in the relationship between humans and machines.
And that point comes when we give a machine a name.
I understand that you have three of these play
or dinosaurs at your home.
Can you tell me some of the names that you have given
to your robots?
Yes.
So the very first one I bought, I named Yochai after Yochai Benkler, who's a Harvard professor who's done some work in intellectual property and other areas that I've always admired.
And the second one I adopted after I filmed a Canadian documentary where the show host
had to name the robot.
And he gave the robot the same name he had, which was Peter.
So the second one has a boring name. And then the third one is named Mr. Spaghetti. I don't know if people outside of Boston are familiar with this, but the Boston public transportation system wanted to crowdsource a name for their mascot dog. And the internet decided that the dog should be named Mr. Spaghetti.
And of course, they refused to do that and named the dog Hunter.
So Mr. Spaghetti became a big thing in Boston for a while.
It was, people were very outraged about this.
And so I named my Pleo, my third one, Mr. Spaghetti.
I understand that companies actually have found that if you sell a robot with the name of the robot on the box, it changes the way people interact with that robot, compared to if you just said, this is a dinosaur.
So, yeah, I don't have any data on this, but yes, I have talked to companies who feel that it helps with adoption and trust of the technology, even for very, very simple robots, like boxes on wheels that deliver medicine in hospitals.
If you give them a little nameplate that says Betsy, their understanding is that people
are a little bit more forgiving of the robot, so instead of this stupid machine doesn't
work, they'll say, oh, Betsy made a mistake.
And I'm wondering if you spent time thinking about why this happens.
At some level, if I came up to you at home and I said,
Kate, is Mr. Spaghetti alive?
You would almost certainly tell me no, Mr. Spaghetti is not alive.
I assume you don't think Mr. Spaghetti is alive, right?
No.
OK.
So given that you know that Mr. Spaghetti is not alive,
why do you think giving him a name changes your relationship to him?
With robots in particular, it's combined with just our general tendency to anthropomorphize these things, and we're also primed by science fiction and pop culture to give robots names and view them as entities with personalities. And it's more than just the name, right?
I mean robots move around in a way that seems autonomous to us.
We respond to that type of physical movement.
Our brains will project intent onto it.
So I think robots are in the perfect mixture of something that we will very willingly treat
with human qualities or lifelike qualities.
I understand from one of your papers
that there have been examples, especially in military settings,
where robots have been assigned to do very dangerous tasks,
because you don't want human beings to go out and do those tasks.
But eventually, the military folks who are running the robots
start to relate to them as if they were actually fellow soldiers.
There has been some research on this.
So Julie Carpenter has done some research.
And there's just countless stories online on Reddit
everywhere about soldiers becoming emotionally attached
to the bomb disposal robots that they work with.
They'll give them names.
They'll give them medals of honor.
They'll have funerals for them.
And it's also kind of interesting
because the robots aren't special robots.
They're just kind of sticks on wheels,
but I think just also the situation
that the soldiers find themselves in
where the robot is basically risking its life
to save human lives might also lead them
to become very attached to these devices.
You had a wonderful example some time ago looking at a colonel, I believe. I don't know if this was your work or you were citing someone else's work. Of a colonel who was supervising a robot that was doing a sort of mine disposal. Tell us the story about the mine disposal robot.
Oh, yeah, this is incredible. It was an article in the Washington Post, I believe, back in 2007. And the United States military was testing this new robot that could walk over a field with landmines and defuse them by stepping on them and blowing them up.
And the robot itself was shaped like a stick insect so it would walk around on six legs.
And every time it stepped on a landmine, one of the legs would blow up.
And then it would just continue on the remaining legs. And so they were testing this, and the colonel
who was in charge of this exercise ended up calling it off
because he said it was too inhumane
to watch this thing drag itself across the field
on its remaining legs.
And so this raises sort of an interesting question, because on the one hand, we can understand how this desire to anthropomorphize robots is very human in some ways. But in some ways, in this case, that defeats the whole point of having a robot that's trying to find the mines.
It does.
I mean, it's really anything from inefficient to dangerous to anthropomorphize robots in these contexts where we want them to be strictly used as tools.
And it's a very difficult design challenge as well because how do you create a device that people want to use but don't like too much?
All right, so we have this wonderful little prop in front of us. It's a Pleo dinosaur. I want you to tell me a little bit about the Pleo dinosaur, how it works, and how you came to own three of them, Kate. What is the dinosaur? What does it do?
It's basically an expensive toy.
I bought the first one, I think, in 2007.
There we go, it's awake.
They have a lot of motors and touch sensors,
and they have an infrared camera and microphones.
So they're pretty cool pieces of technology for a toy,
and that's initially
why I bought one, because I was fascinated by everything that it can do. Like, if it starts walking around, it can walk to the edge of the table, it can look down, measure the distance to the floor, it knows that there's a drop, and it'll get scared and walk backwards. And then they go through different life phases, adolescent and fully grown.
And it'll have moods.
So I think what we should do, we
bought the robot at Hidden Brain a couple of weeks ago.
We haven't had the chance to give it a name yet.
And I thought we should actually reserve the honors
for this evening, where we're talking to Kate, and see if Kate wants to try and name this dinosaur, since she cares about dinosaurs so much. I was looking at Kate's Twitter feed this morning. I understand that you're going to have a baby soon. Congratulations.
Yes, we don't have a name for that either.
Okay. Just FYI, she sometimes refers to the baby as baby bot, so just for whatever that's worth.
And one retweet that you have on your Twitter feed cracked me
up, it said, you don't really know how many people you don't
like until you start trying to pick baby names.
Yeah, that's a quote from my husband.
So, you apparently haven't yet picked your baby's name. Do you have any top choices? Is there a name, a spare name, that you might care to give the dinosaur?
Well the problem is we've had a girl's name picked out for years and now we're having
a boy and we just can't, we don't even have any contenders.
No contenders.
What would have been your favorite girl's name if you had had a girl?
Well, so when I first started dating my now husband,
he at some point said, if I ever had a daughter,
I already know what I would name her.
And I was like, oh, really?
Oh, my God.
We're going to fight about this one.
And he said, yeah, I would name her Samantha and Sam
for short, because Sam is kind of gender neutral.
And I was like, oh, I really love that.
So that one was picked out very easily.
Alright, since you're not having a girl, you're going to have a boy, would you mind considering naming the dinosaur Samantha?
How would you feel about that?
Oh, that would be awesome. We should name the dinosaur Samantha.
Alright, so henceforth this dinosaur will be called Samantha, or Sam for short. Now, some time ago, Kate conducted a very interesting experiment with the Pleo dinosaurs, and to sort of show how this works, I have a second prop here which is under the table. It's a hammer.
A large hammer, which we borrowed from the hotel.
Now, as you all know, the dinosaur is obviously not alive.
It's just cloth and plastic and a battery and wires.
It has a name, of course, Samantha.
But it isn't alive in any sense of the term.
And so, Kate, I'm going to actually give you the hammer.
Oh no.
Kate, would you consider destroying Samantha?
No.
It's just a machine.
I only make other people do that.
I don't do it myself.
You wouldn't even consider harming the dinosaur?
Well, so my problem is that I already know the results of our research and
that would say something about me as a person, so I'm going to say no, I'm not willing
to do it. Tell me about the experiment. So you had volunteers come up and you basically
introduced them to these lovable dinosaurs and then you gave them a hammer like this and
you told them to do what?
Well, OK, so this was the workshop part
that we used the dinosaurs for.
They're a little too expensive to do an experiment
with 100 participants.
So the workshop that we did in a non-scientific setting,
we had five of these robot dinosaurs.
We gave them to the groups of people
and had them name them, interact with them, play with them.
We had them personify them a little bit by doing a little fashion show with a fashion
contest.
And then after about an hour, we asked them to torture and kill them.
And we had a variety of instruments.
We had a hammer, a hatchet, and I forget what else.
But even though we tried to make it dramatic,
it turned out to be a little bit more dramatic
than we expected it to be,
and they really refused to even hit the things.
And so we had to kind of start playing mind games with them
and we said, okay, you can save your group's dinosaur
if you hit another group's dinosaur with the hammer.
And they tried and they couldn't do that either.
This one woman was standing over the thing trying, and she just couldn't. She ended up petting it instead.
And then finally we said, okay, well we're gonna destroy
all of the robots unless someone takes a hatchet to one of them.
And finally someone did.
Wait, so you said unless one of you kills one of them,
we are gonna kill all of them?
Yeah, I think this might have been my partner's idea. So I did this with a friend named Hannes Gassert. We did this at a conference called LIFT in Geneva, and we had to improvise, because people really didn't want to do it. So we threatened them.
I think she doesn't want you to harm her.
Yeah, clearly, clearly.
So what do you think is going on?
I mean, at a rational level, the dinosaur obviously is not alive.
Why do you think we have such reluctance to harm the dinosaur?
In fact, I might have the battery removed so the dinosaur stops making noise.
Well, I mean, it behaves in a really lifelike way. I mean, we have over a century of animation expertise in creating compelling characters that are very lifelike, that people will automatically project life onto.
I mean, look at Pixar movies, for example.
It's incredible.
And I know that a lot of social roboticists actually work with animators to create these compelling characters.
And so, you know, it's very hard to not see this
as some sort of living entity,
even though you know perfectly well,
that it's just a machine,
because it's moving in this way
that we automatically subconsciously associate
with states of mind.
And so I just think it's really uncomfortable for people, particularly for robots like this that can display, you know, a simulation of pain or discomfort, to have to watch that. I mean, it's just not comfortable.
What did you find in terms of who was willing to do it and who wasn't?
I mean, when you looked at the people who were willing to destroy a dinosaur, a dinosaur like the Pleo, you found that there were certain characteristics that were attached to people who were more or less likely to do the deed.
So the follow-up study that we did, not with the dinosaurs,
we did with hex bugs, which are a very simple toy
that moves around like an insect.
And there, we were looking at people's hesitation
to hit the hex bug, and whether they would hesitate more if we gave it a name,
and whether they would hesitate more if they
had natural tendencies for empathy, for empathic concern.
And we found that people with low empathic concern
for other people, they didn't much care about the hex bug,
and would hit it much more quickly.
And people with high empathic concern would hesitate more, and some even refused to hit the hex bugs.
So in many ways what you're saying is that potentially the way we relate to these inanimate
objects might actually say something about us at a deeper level than just our relationship
to the machine.
Yes, possibly. I mean, we know now, or we have some indication
that we can measure people's empathy using robots,
which is pretty interesting.
You know, my colleagues and I were discussing ahead
of this interview, whether you would actually destroy
the dinosaur, and we were torn, because we said,
on the one hand, you of all people should know
that these are just machines
and that it's an irrational belief to project
life-like values on them.
But on the other hand, I said, you know,
it's really unlikely she's going to do it
because she's going to look like a really bad person
if she smashes the dinosaur in front of 200 people.
I mean, I don't know if you've been watching Westworld at all,
but the people who don't hesitate to shoot the robots, they
seem pretty callous to us.
And I think maybe there is something to it.
Of course, we can rationalize it.
Of course, if I had to, I could take the hammer and smash the robot, and I wouldn't have
nightmares about it.
But I think that perhaps overriding that basic instinct to hesitate might be more harmful than just going with it.
I want to talk about the most important line we draw between machines and humans, and it's not intelligence, it's consciousness.
I want to play a little clip from Star Trek.
Now tell me, Commander, what is Data?
I don't understand.
What is he? A machine?
Is he? Are you sure?
Yes.
You see, he's met two of your three criteria for sentience. So what if he meets the third, consciousness, in even the smallest degree? What is he then? I don't know. Do you? Do you?
So this has been a perennial concern in science fiction,
which is the idea that at some point,
machines will become conscious and sentient.
And very often, it's in the context of,
the machines will rise up and harm the humans and destroy us.
But as I read your research, I actually found myself thinking, is our desire to believe that machines can become conscious actually just an extension of what we've been talking about for the last 20 minutes, which is that we project sentience onto machines all the time?
And so when we imagine what they're going to be like in the future, the first thing that
pops in our head is, they're going to become conscious.
Yeah, I think there's a lot of projection happening there. I also think that
before we get to the question of robot rights and consciousness, you know, we have to ask ourselves
how do robots fit into our lives when we perceive them as conscious? Because I think that's when
it starts to get morally messy and not when they actually inherently have some sort of consciousness.
There's a lot that's morally messy about how humans interact with robots.
When we come back, we're going to delve into some of those moral and ethical issues,
including the deeply troubling case of a Japanese company that builds sex robots
designed to look like children.
Stay with us.
This is Hidden Brain, I'm Shankar Vedantam. If humans have a tendency to anthropomorphize
machines to see them as human, it isn't surprising that we're also willing to bring all the
biases we have toward our fellow human beings into the machine world.
Many of the intelligent assistants being built by major companies, Siri or Alexa, are being
given women's names.
Many of the genius machines are often given men's names: HAL, or Watson.
Now, you can say, Siri and Alexa aren't people.
Why should we care?
Why should we care if people sexually harass their virtual assistants, as has been shown
to sometimes happen?
MIT's Kate Darling says we should care because the way we treat robots may have implications
for the way we treat other human beings.
It might.
We don't know, but it might.
One example with the virtual assistants you just
mentioned is children.
So parents have started observing, and this is anecdotal,
but they've started observing that their kids
adopt behavioral patterns based on how
they're interacting with these devices
and how they're conversing with them.
And there are some cool stories. Like, there was a story in the New York Times a few years ago
where a mother was talking about how her autistic son
had developed a relationship with Siri, the voice assistant.
And she said this was awesome because Siri is very patient.
She will answer questions repeatedly and consistently.
And apparently this is really important for autistic kids. But also, because her voice recognition is so bad, he learned to articulate his words really clearly, and it improved his communication with others.
Now, that's great, but these things aren't designed with autistic kids in mind, right? That's kind of more of a coincidence than anything. And so there are also perhaps some unintended effects
that are more negative.
And so one guy wrote a blog post a while back where he said, Amazon's Echo is magical, but it's turning my child into an echo, because Alexa doesn't require please or thank you or any of the standard politeness that you want your kids to learn when they're conversing and demanding things of you. So it starts there. But I think that as this technology improves and gets better at mimicking real conversations or lifelike behavior, you have to wonder to what extent that gets muddled in our subconscious, and not just in children's subconscious, but maybe even in our own.
Do you think it's a coincidence that most of the virtual assistants are given female names and female identities?
I think it's a combination of market research, but also just people not thinking.
I mean, I visited IBM Watson in Austin and there is a room that you can go into
and you can talk to Watson and he has this deep,
booming male voice and you can ask questions.
And at the time I went there,
there was a second AI in the room that turned
on the lights and greeted the visitors
and that one had a female voice.
And I pointed that out and it seemed like they hadn't
really considered that.
So it's a mixture of people thinking, oh, this is going to sell better, and people just
not thinking at all, because the teams that are building this technology are predominantly
young, white, and male, and they have these blind spots where they don't even consider
what biases they might perpetuate through the design of these systems.
So which brings us to the question of the sex robots.
I'm curious what you make of this,
and there are really complicated arguments
on both sides of this question.
Should we use machines as sexual companions?
Should we use them in ways that could potentially
satisfy the needs of groups of people
who in some ways we are troubled by?
Yeah, so, you know, referring to pedophilia specifically,
this is a very difficult area because we know almost nothing about pedophilia generally,
and we have absolutely no idea what the effects of technologies like this could be
if they provide kind of an immersive sexual experience.
I mean, it could be that this is a very useful outlet to use therapeutically, basically something that ends up preventing real child abuse.
And on the other hand, it could be that this is something that normalizes and perpetuates
certain behaviors.
And we literally have no idea what direction this goes in.
And I think this is a question that we're
going to be facing pretty soon.
I mean, like you said, there are companies that make the dolls.
There's legal cases already about this.
And there's a lot of moral panic about sex technology.
But also, I mean, in this case, very understandable emotional responses when it comes to child abuse.
So, it's very difficult, and you can't research it in the US at least.
So, you're sometimes called a robot ethicist, and you've sometimes said we might need to establish a limited legal status for robots. What do you mean by that?
So yeah, it's a little bit of a provocation,
but my sense is that if we have evidence
that behaving violently towards very lifelike objects
not only tells us something about you as a person,
but can also change people and desensitize them
to that behavior in other contexts.
So if you're used to kicking a robot dog,
you know, are you more likely to kick a real dog?
Then that might actually be an argument, if that's the case,
to give robots certain legal protections the same way
that we give animals protections, but for a different reason.
We like to tell ourselves that we give animals protection from abuse because they actually experience pain and suffering.
I actually don't think that's the only reason we do it,
but for robots, the idea would be not
that they experience anything, but rather that it's
desensitizing to us.
And it has a negative effect on our behavior
to be abusive towards the robots.
So here's a thing that's worth pondering for a moment.
If you hear, for example, that someone owns
a bunch of chickens in their farm, right?
So it's their farm, their chickens, they own the chickens.
And they're really mistreating the chickens,
torturing them, harming them.
You could sort of make a property rights argument
and say they can do whatever they want with that property.
But I think many of us would say, even though the chicken belongs to you,
there are certain things you can and cannot do with the chicken.
And I'm not sure it's just about our concern that if you mistreat the chicken,
that means you will turn into the kind of person who might mistreat other people.
There is a certain moral level at which I think the idea of abusing animals is offensive to us.
And I'm wondering whether the same thing is true with machines as well, which is it's not just
the case that it might be that people who harm machines are also willing to harm humans,
but just the act of harming things that look and feel and sound sentient is morally offensive
in some way.
Yeah, so I think that's absolutely how we've approached
most animal protections, because it's very clear
that we care more about certain animals than others
and not based on any biological criteria.
So I think that we just find it morally offensive,
for example, to torture cats, or in the United States,
we don't like the idea of eating horses,
but in Europe, they're like,
what's the difference between a horse and a cow?
They're both delicious.
So that's definitely how we tend to operate
and how we tend to pass these laws.
And I don't see why that couldn't also apply to machines
once they get to a more advanced level
where we really do perceive them as
lifelike, and it is really offensive to us to see them be abused.
The devil's advocate side of that argument, of course, is that people would then say pressing a switch and turning off a machine is unethical, because you're essentially killing the robot.
But we don't protect animals from being killed. We just protect them from being
treated unnecessarily cruelly.
So I actually think animal abuse laws
are a pretty good parallel here.
You've argued that robots might one day expand
the boundaries of how humans relate to one another.
I want to play a short clip from the movie Her, where a man falls in love with his operating system,
but then discovers something about her.
Are you in love with anyone else?
What makes you ask that?
I don't know. Are you?
I've been trying to figure out how to talk to you about this.
How many others?
641.
What? What are you talking about? That's insane. That's f***ing insane.
So I'm wondering, is it possible that as we start to relate to machines, they will change the ways we relate to one another? In other words, is it possible they'll expand how we think about relationships themselves?
Maybe, but I think it's important to remember... The thing that I couldn't get out of my mind when I was watching Her specifically is that there is a company that makes this operating system. And if I were the company, I would program it to say,
oh yes, I'm seeing 641 other people,
but for $20,000, you can get exclusively me.
Like, that's the direction it's going to go in, right?
It's not that the machines are going
to become conscious and develop their own forms of relationship.
You mentioned Westworld some moments ago, and I want to play you a clip from Westworld. For those of you who haven't seen Westworld: humans interact with robots that are extremely lifelike, so lifelike that it's sometimes difficult to tell whether you're talking to a robot or you're talking to a human.
In the scene that I'm about to play you, a man named William interacts with a woman
who may or may not be a robot.
You want to ask.
So ask.
Are you real?
Well, if you can't tell, does it matter?
So as I watched the scene and as I read your work, I actually had a thought, and I wanted to run a thought experiment by you. On one end of the spectrum, we have these machines that are increasingly becoming lifelike, human-like; they respond in very intelligent ways, they seem as if they're alive.
And on the other hand, we're learning all kinds of things about human beings that show
us that even the most complex aspects of our minds
are governed by a set of rules and laws, and in some ways our minds function a little bit like machines.
And I'm wondering, is there really a huge distinction? Is the real question not so much, can machines become more human-like, but is it actually possible that humans are just highly evolved machines?
I have no doubt that we are highly evolved machines.
I don't think we understand how we work yet, and I don't think we're going to get to that understanding anytime soon. But yeah, I do think that we follow a set of rules and that we're essentially programmed. I don't distinguish between entities with souls and entities without souls.
And so it's much easier for me to say, yeah,
it's probably all the same.
But I can see that other people would find
that distinction difficult.
Do you ever talk about this? Do you ever run this by other people? Do you tell your husband, for example, I like you very much, but I think you're a really intelligent machine that I love?
I haven't explicitly said that to him, but...
When you go home from this trip.
Yeah, we'll see how that goes.
Kate Darling is a research specialist at the MIT Media Lab.
Our conversation today was taped before a live audience
at the Hotel Jerome in Aspen, Colorado
as part of the Aspen Ideas Festival.
Kate, thank you for joining me today on Hidden Brain.
Thank you so much.
This week's show was produced by Tara Boyle and Renee Klahr. Our team includes Jenny Schmidt, Maggie Penman, Rhaina Cohen, and Parth Shah.
Our unsung heroes this week are the staff of the Annabelle Inn in Aspen, Colorado.
They kindly let us take over that conference center for several interviews with researchers
who were in town for the Aspen Ideas Festival.
They even turned off the fountains in their outdoor seating area so that we'd have studio
quality sound while recording.
Marie Kassenova, Doug Parks, Mike Clemens, thanks so much for your hospitality and your
willingness to help us make a little bit of audio magic.
You can find photos and a video of Samantha, our Pleo dinosaur, on
our Instagram page. We're also on Facebook and Twitter and if you enjoyed this week's show,
we'd love it if you shared this episode with friends on social media. I'm Shankar Vedantam, and this is NPR.
There we go. It just mooed like a cow.