The Infinite Monkey Cage - Artificial Intelligence
Episode Date: January 12, 2016
Artificial Intelligence
Brian Cox and Robin Ince return for a new series of their award-winning science/comedy show. Tonight the infinite monkeys are joined on stage by comedian Jo Brand, neuroscientist Anil Seth, and robotics expert Alan Winfield to discuss artificial intelligence. How close are we to creating a truly intelligent machine, how do we define intelligence anyway, and what are the moral and ethical issues that the development of intelligent machines might bring?
Producer: Alexandra Feachem.
Transcript
Hello, I'm Robin Ince. And I'm Brian Cox. And welcome to the podcast version of the
Infinite Monkey Cage, which contains extra material that wasn't considered good enough
for the radio. Enjoy it. Hello, I'm Robin Ince. And I'm Brian Cox. Now, one of the questions
I am most commonly asked is,
is Professor Brian Cox actually a replicant?
I've seen things you wouldn't believe.
Chemists on fire in a Ford Orion.
Proton beams colliding in the dark under the shores of Lake Geneva.
All those moments will be lost.
Well, actually, they won't be.
They'll probably be on iPlayer and
relentless repeats on BBC Four.
Time to dye. Your hair!
Because it has gone grey
and no one believes that you...
People actually think you dye your hair grey
to make yourself appear more human.
Authoritative.
Anyway, so we decided we'd start off with a little bit of a Blade Runner
parody and our producer went,
it's too niche, our audience won't get it.
And she was 50% right.
And by the way, Brian is not a replicant
because he passed the Voight-Kampff test.
That was too nerdy.
Which is, of course, the Voight-Kampff test is
do you cry during Toy Story 3?
The reason for all this is that today's show
is about artificial intelligence.
Could we create conscious machines that think and feel?
What do we mean by the word conscious?
How intelligent are our machines today?
And should we be concerned about the machines of tomorrow?
So, I love the way you do that.
That really... That's proper science.
And should we be concerned about the machines of tomorrow?
It's a bit Clarkson, isn't it? No, it's really good.
It wasn't Clarkson at all. Was it? No.
Professional.
Are you thinking you're a beta male all of a sudden?
So, to help us understand the science, philosophy and ethics
of artificial intelligence, we are joined by three panellists,
one of whom is possibly a robot.
You have to work out which one it might be.
And they are...
Hello, I'm Professor Alan Winfield, Bristol Robotics Lab, University of the West of England.
And my favourite robot, in fact, I've brought two of them along, it's this e-puck, and we
do swarm intelligence research with these robots.
I'm Anil Seth, I'm Professor of Cognitive and Computational Neuroscience at the University
of Sussex. And my favourite intelligent robot,
continuing the Blade Runner theme,
is Roy Batty, the replicant.
And not just for his wonderful death monologue
so beautifully paraphrased by Brian,
but because I think Roy Batty makes us think about
what will happen when AIs really care about themselves.
Very different to his sister Nora.
This is a strange audience, you see, because you don't get the Blade Runner references, but Last of the Summer Wine... Not our usual crowd.
Oh, hello, I am Dr Dr Dr Dr Dr Dr Dr Dr Dr Brand.
Indeed, I have nine honorary doctorates,
and this is the only chance I'm ever going to get to show off about it,
because where else do they ask you to introduce yourself in that manner?
And when anyone ever says to me on a plane,
is there a doctor, I always say yes.
And a lot of people have died.
But my favourite robot is HAL from 2001: A Space Odyssey, because he turned.
And this is our panel.
Well, actually, before we get on to the main... When we were in the green room, we were talking about...
It seems at the moment there's a lot of interest
in terms of, in pop culture, about artificial intelligence.
It's something that's been in science fiction a lot.
And Channel 4 had a very popular series called Humans,
and when we brought that up, both of you kind of went...
So I thought, just to start off,
what were your problems in terms of...
in the scientific research you work on,
in what you felt about Humans?
My problem, essentially, is that it's scientifically implausible,
because in Humans, what you have are super-advanced robots
that, in real life, will not exist,
probably for 500 years,
parachuted into essentially present-day society,
which is crazy.
Anil?
Well, my problem is I only saw the first two episodes.
So that gives me a lack of authority on it.
But I also think that, yeah, I agree with Alan,
but in contrast to Blade Runner, back to Blade Runner already,
where not only you had these amazing replicant robots,
but society had also changed, so it was more plausible,
it was more interesting, it was more dramatically powerful
rather than just taking one thing and saying,
OK, it's done, now what?
Jo, did you see humans?
Thankfully, no.
That's that covered, then.
Although I did see a picture in the paper,
and it was a very attractive robot, wasn't it?
I'm slightly disappointed it wasn't a fat, annoying one.
And that's what would do me very well,
making a robot that was slovenly and unpleasant.
But what about Hitchhiker's Guide?
Remember Marvin, the paranoid android, is a bit slovenly and grumpy.
I haven't done Hitchhiker's Guide either.
Oh, my God.
Oh, my God!
They're turning!
Tell them you love Star Wars.
Just something.
Say you're more a fan of the Dirk Gently series.
Can I just say,
I genuinely haven't seen Star Wars.
I'm so sorry.
To bring this back on track, how do we define intelligence?
Not as what intelligence tests test.
That's what a lot of people think intelligence is.
What happens when you fill out one of those IQ tests?
Intelligence is
basically doing the right thing at the right time, doing it in a flexible way, in a way that helps
you keep alive, helps you sustain yourself. And there are various different kinds of intelligence
beyond that. There is this intellectual, more rational variety of intelligence, which is needed
for playing chess, but also for solving complex problems and thinking about the future,
thinking about the past.
But there are also things like social intelligence,
being able to behave appropriately in front of an audience or with other people.
There's emotional intelligence, being able to understand what other people are feeling
as well as what they might be thinking.
And each of these varieties of intelligence,
we generally have an experience as a whole,
but that doesn't mean they can't be separated and understood separately and perhaps even built separately when we think about AI.
I think William H. Calvin, in his book How the Mind Works, he said that his definition of
intelligence is what we use when we don't know what to do. So it's something that goes beyond
the kind of the hardwired or genetic programming. Would you see, do you see something in that?
That's interesting, yeah. A lot of our behaviours, a lot of the things that we do, we do automatically.
There are reactions, there are instincts, there are reflexes. We don't generally think we need to be
smart to do them, but our brains need to be smart to do them. A lot of what we do is reflexive, is
automatic. We sometimes don't even need to be conscious of these things.
But our brains are still doing smart things.
In fact, this is, we'll get on to this, I'm sure,
but it's very, very difficult to programme robots to do some of the things that we find very easy,
that we don't actually have to think about.
And a lot of the issue with AI is they started with the stuff
that seems, for us, very hard,
like playing chess to beat the Grandmaster,
thinking that the other stuff that we don't think about
is going to be really easy.
Alan, would you define a chess-playing computer as intelligent?
I would, yes, absolutely, Brian,
but it's a narrow kind of intelligence.
In fact, we use that exact word, narrow intelligence.
And the kind of everyday intelligence
that we're all kind of good at without really
realising it, you know, the intelligence of being able to go and make a cup of tea in someone else's
kitchen, for instance, without even thinking about it, that's really, really hard for AI.
So we've now accepted, after 60 years of AI, that the things that we originally thought were easy
are actually very hard, and the things we originally thought were very hard, like playing chess, are actually relatively easy.
And just to clear up the term artificial intelligence, because we were talking about it earlier, we thought artificial
is a strange word, isn't it, because it can be, you know, an artificial table or something, or an artificial...
It seems like a substandard word. Is it the right word, artificial intelligence?
Well, I mean, John McCarthy, who coined the phrase,
actually has said, you know, that he made a mistake.
He actually, you know, in retrospect, it was not a good idea
because it kind of set the level of expectation too high
for artificial intelligence.
But, you know, we're stuck with it.
But you're quite right that, I mean, really,
artificial intelligence simply just means, you know,
synthesising, modelling, mimicking natural intelligence.
So, you know, people like Anil, myself,
although I'm a professional engineer,
I'm actually an amateur biologist,
you know, an amateur at lots of other things.
And, you know, we try and understand natural intelligence
in order to make artificial intelligence.
You know, I tend to be at the kind of bottom end of the spectrum,
very simple animals, so doing the right thing at the right time.
Even the simplest animals have to do that,
because if they don't, they either starve, get eaten,
or miss the opportunity to mate.
It's as simple as that.
Can I just...
That definition, doing the right thing at the right time,
I mean, surely doesn't that involve some sort of value judgment?
Because if I stick my foot out and trip my husband up,
to me, that's doing the right thing.
Whereas to him, it's not.
Yeah, the answer is no, it doesn't,
because, you know, most animals,
you know, right down to single-celled organisms,
have to do the right thing at the right time.
It may be a simple thing, but they still have to do it.
And it's clear that they don't have the cognitive machinery
to have value judgments.
And really, of course, it's only us scientists
looking down on them through a microscope
or, you know, the microscope of robotics.
We're, in a sense, making a judgment
about what was right or wrong for that organism.
It doesn't. It either survives or it doesn't.
But I agree with Jo, actually,
because even the simplest creatures,
they might not have a conscious or an explicit value judgment system.
OK, yeah, it's a good idea to trip my fellow rat down the rat stairs.
But evolution has provided them with an innate value system,
an innate set of value judgments,
which will define doing A rather than B
will make me more likely to survive.
But I think the problem with that definition is it's too general.
As you said, it applies to anything.
It applies to a worm, It applies to some bacteria.
So then intelligence becomes a bit vacuous.
So I do think we need something more.
I think we need something that underpins this idea
that intelligence is something online,
something that organisms do that is over and above
what's provided by the immediate constraints
and affordances and opportunities of the environment.
And I think, Robin, that's a bit what you said as well,
that intelligence is being more flexible
than just what our reactions determine.
I mean, Jo, how do you feel about artificial intelligence?
I mean, do you look forward to a day
where a robot can trip over your husband and you can just relax?
I really do.
Well, I kind of look forward to the day
when I can just read a book
and a robot can do everything that I would have done
if I wasn't lying down on the sofa
with a massive box of chocolates and an 18-year-old man.
I don't really mean that. I'm well past that.
I'm not even interested any more.
We'll be covering that in another episode of this series.
Do not worry.
I won't be back.
Alan, can we go...
Because you mentioned the e-puck a couple of times there.
If you could just explain to the listeners what you have.
So, of course, they're not smart enough to be able to move,
well, very much, on anything other than a perfectly smooth surface.
But in the lab, we have around 50 of these robots.
And, yes, they're on the radio.
They've been on the radio before.
And we do, essentially, experiments where we model
the kind of social behaviours that are observed in ants
and, you know, social insects.
And using this swarm of robots as a kind of microscope,
if you like, to study swarm intelligence,
we've been able to figure out things
that the ant biologists haven't really worked out.
So here's an example.
So some ants do what we call division of labour.
So, in other words, if there's a few hundred thousand ants in the nest
and there's a lot of food out in the environment,
more ants will go out to forage.
If there's less food, if there's a famine,
then very few ants will go out to forage,
and they actually seem to adapt the ratio of foragers
to kind of resters or loafers in the nest somehow to balance, if you like, to track the amount of food in the environment.
So, you know, the question for the biologists is how do they do that?
Well, because it's very difficult to, you know, you can't do experiments with real ants.
It's really hard.
I mean, some people do. So actually what we do is we make models of that behaviour with a swarm of simple robots: we programme rules, behaviours that we conjecture are the correct behaviours, and then see if we get the same emergent or self-organising behaviour.
Can I just say something quickly for the listeners? If you want to know what they look like, they look like sort of see-through ramekins with lids on,
and they're lighting up.
And if you're a student and you don't know what a ramekin is,
it's an ashtray.
LAUGHTER
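To make the division-of-labour experiment Alan describes a little more concrete, here is a minimal, hypothetical sketch in Python. It is not the actual e-puck controllers used at the Bristol Robotics Lab; it simply illustrates the idea of giving every agent one conjectured rule and seeing whether the ratio of foragers to resters self-organises to track the amount of food in the environment. All names and parameters are illustrative.

```python
import random

# Toy model of division of labour: each agent reinforces or reduces its own
# propensity to forage depending on whether its last trip found food.
N_AGENTS = 50

def run(food_richness, steps=500):
    """food_richness = probability that a single foraging trip succeeds."""
    propensity = [0.5] * N_AGENTS          # each agent's tendency to go out
    for _ in range(steps):
        for i in range(N_AGENTS):
            if random.random() < propensity[i]:          # agent goes foraging
                if random.random() < food_richness:      # trip succeeds
                    propensity[i] = min(1.0, propensity[i] + 0.05)
                else:                                     # came back empty
                    propensity[i] = max(0.0, propensity[i] - 0.05)
    return sum(propensity) / N_AGENTS       # average tendency ~ fraction foraging

if __name__ == "__main__":
    for richness in (0.2, 0.5, 0.8):
        print(f"food richness {richness:.1f} -> foragers ~{run(richness):.2f}")
    # Plenty of food -> most agents forage; famine -> most rest.
```

The point of a model like this is the emergent part: no individual agent knows the global forager-to-rester ratio, yet the swarm as a whole tracks the food supply, which is the kind of conjectured rule one could then test against what real ants do.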
It's interesting.
The idea that...
So you talk there about emergent behaviour,
particularly in an ant colony,
where you get very sophisticated behaviours from the colony,
but the individuals are not particularly sophisticated.
I suppose that's an analogue for what we are, in a sense.
Or are we just that?
This is the question, I suppose.
We look at our intelligence.
Is it widely accepted that whatever it is,
it's emergent from the hardware that we have,
and therefore, in principle, we can imagine intelligence
emerging from some sophisticated piece of programming?
I mean, Anil may disagree, but I think that's still an open question
as to whether, you know, the extent to which human intelligence
is an emergent property of, you know, of our bodies and our minds,
or, if you like, our brains.
Marvin Minsky, who is one of the main figures in AI,
talked about the society of mind,
that our cognitive processes, our cognitive architecture
is not located in one particular place, it's not a thing by itself,
it's the collective behaviour of many different,
more fine-grained things that our minds and brains do.
So this is kind of an old idea, I think, in AI.
And in some sense, it's trivially the case
that our brains are massively complex networks,
about 90 billion neurons, a thousand times more connections.
So if you counted one each second,
it would take you about three million years
to finish counting the number of connections in any human brain.
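As a rough check on those numbers (assuming the commonly quoted figure of roughly 90 billion neurons and about a thousand times more connections), the arithmetic behind the "three million years" estimate is approximately:

```latex
\begin{align*}
  \text{connections} &\approx 9\times10^{10}\ \text{neurons} \times 10^{3} \approx 9\times10^{13},\\
  \text{time at one per second} &\approx \frac{9\times10^{13}\ \text{s}}{3.15\times10^{7}\ \text{s/year}} \approx 2.9\times10^{6}\ \text{years}.
\end{align*}
```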
They all speak to each other, all these neurons.
It's almost inconceivable that we don't appeal to something like emergence
to understand what's going on here.
Different groups of neurons start firing together.
They set up patterns within the whole brain
that then constrain what individual neurons do.
This is actually where physics might come in
because statistical physics can contribute a lot to
understanding collective behaviour and
emergence. In fact, we still don't really have
a good way of measuring how emergent
a phenomenon is in a physical system
and the brain is a physical system, of course.
I'll tell you what, Anil, you really know how to play
Brian because there's always a point in the show
where you can see him drifting off and as long as you say
statistical physics could play a part
he really lights up.
The implications
of this view are interesting.
I remember one of the philosophical
debates, one of the old ones,
Searle's Chinese room. The picture was,
when Searle wrote it down, that you could have a room
full of people
who would enact an algorithm.
So you could put in some question
and without knowing what was happening at all,
they would move around and follow the algorithm,
and out would come the answer.
And the suggestion was that that's essentially our brain.
So the first question, I suppose, is that what our brain does?
Is it something you could simulate in a computer?
Is it what we call a universal Turing machine in that sense?
It's an algorithm that we could run in principle on other hardware.
But isn't part of the Chinese room thing as well
the fact that what delivers the message
doesn't actually understand the message?
So that's quite an important part, isn't it?
The individual part.
The message is not actually understood by the person.
Right, so understanding is a property of the whole system.
This may be true of the human...
There are many things that come out in the Chinese room experiment.
It's most usually associated with this philosophical view of functionalism:
is intelligence, is consciousness, is mind,
something that you can simulate in another system and that will then have that property?
So you can simulate chess playing in a computer
and that computer is actually playing chess.
It might not be playing chess in the same way that a human plays chess,
but it's playing chess.
So it's not simulating chess, it is actually playing chess.
The difference is between simulation and instantiation.
On the other hand, if you're simulating a weather system
and you're trying to predict whether a hurricane's going to form or not,
nobody would expect it to get windy or wet inside the computer
that's simulating the weather.
So what about mind, what about intelligence? What about consciousness?
Is it something that when you simulate it, and you can because computers are wonderfully powerful
and flexible simulation engines, that's what you can use them for, is it a case of simulation or
is it a case of instantiation? I think for some things the answer is pretty clear that if it's an
overt behavior like chess playing or perhaps like making a cup of coffee,
then if the system can do it, it can do it. But is there anyone at home inside? Is there any
subjective experience going on along with that, as there is for me at least and for you too? That's
still the open question. That's the question that functionalism gets at. What's your guess? Because
Alan, you were shaking your head and nodding at the same time.
Oh, no, he's broken down. Get the next Alan.
I knew Alan Mark One wouldn't work properly. I told you.
Yeah, I mean, if a thing is a simulation of intelligence,
it would be absurd to say it's not really intelligent, in my view.
I mean, if you take a good example, Google Translate.
So Google Translate really does translate
from one language to another.
Now, you know, it's perfectly OK and probably right to say,
well, there's nobody at home, as it were,
inside the vast Googleplex, you know, computing
that actually is understanding what's going on.
But it's also true to say that no individual neuron in our brains,
or indeed probably entire networks of neurons,
know what's going on either.
So, you know, it's a kind of one of those slippery arguments.
I mean, you know, I used to be a simulationist
in the sense that I thought, no, if it's a simulation,
it's not the real thing. I've changed my mind.
So you think it is?
I think it really is.
I think if you make something that behaves as if it's intelligent
sufficiently well,
then I think you have to admit it really is intelligent.
So intelligence, then, is just...
It's the ability to do a specific task, in this case.
So Google...
Think of the chess machine.
I suppose we know that a chess machine has gone to a new level
if it gloats when it beats you.
Or if Google,
when it translates, when it actually looks
back and it notices that through the translation
the grammar, etc., is
inexact, it feels a level of shame.
And then we start to get...
Or throws in a quip at the end of its
translation. So the intelligence
we're talking about is just...
This is a specific function in this case.
It could be a very big function.
Yeah.
What did you feel?
Well, I think that sort of programming a machine to play chess
is a totally different sort of intelligence,
because surely there are a finite amount of possibilities,
and they're rational and logical.
So that, for example, if you're playing chess,
you wouldn't necessarily sacrifice a piece
unless there was some built-in intelligent reason for doing that, shall we say.
Whereas when you're speaking about human intelligence,
humans are kind of very irrational beings.
And so what are you basing
that intelligence on? Are you basing it on a particular sort of person? Is that person a man
or a woman? You know, you cannot predict people's reactions when things happen to them, you know.
I mean, just as an example, a friend of mine, when her boyfriend finished with her, we were at a party,
and she went and jumped in the lake.
Well, how... Yeah, she's all right, she didn't drown.
But how would that sort of thing be inbuilt?
Not that I'm saying that robots should particularly go and jump in lakes,
but, you know, there are so many infinite possibilities
if you're trying to
program a robot to be like a human, that I think it's just simply always going to be
impossible, because you're always going to be missing that final irrational bit of humanity
that we all have, which means no one can 100% say what we're going to do next.
I love the jumping in the lake thing,
the idea of the Terminator robot played by Arnold Schwarzenegger
just suddenly coming up with, what would Virginia Woolf do?
I suppose the question must be, though,
if not, if that irrationality of that human behaviour
is not an emergent property based on the laws of physics,
essentially operating in
this biological shell, then what is it?
Well, excuse me, because I'm going to use the word paradigm, and for that I apologise. But, for example, in science, when you have a scientific paradigm
and it moves from one paradigm to another, that leap is normally made for very irrational reasons.
It's a quantum leap and it's a piece of creativity in one person's head.
And what I'm saying is that if you're expecting robots to be intelligent,
how are you going to cater for leaps forward in progress
that you might want them to help you make
if they don't have an inbuilt, irrational piece of something in their brain
that enables them to make a quantum leap and be creative?
Don't look at me like I've gone mad.
No, I'm not. I'm just thinking that is definitely a 10th doctorate.
That is...
The creativity turns out to be one of the hardest problems to tackle in AI.
And it's almost the antithesis of the chess-playing thing,
because chess is creative when humans play it,
but the way a chess computer plays it, as you described,
is to enumerate a vast number of possibilities
and run through them extremely quickly,
with some heuristics that you can programme in as well.
That's clearly not what people do,
which is why when the first computer back in 1997
beat Garry Kasparov, there was a big hoo-ha,
but that didn't herald the new dawn
of robots taking over the world.
Far from it.
It just highlighted what the real problems are.
And they are controlling these complex bags of flesh
in unpredictable and unreliable environments,
given sensors that don't work properly,
motor actuators, arms and legs that fly all over the place,
hard to keep control over.
And we do behave irrationally,
but we behave predictably irrationally.
We make the right kinds of shortcuts at the right times,
and it's that ecological smartness
that we can try to understand in human brains,
and not just humans, other animals as well.
And I think that's a big pitfall to avoid,
thinking that AI won't have happened until we create the first machine indistinguishable from a human.
Other animals are extremely smart in their own way.
But there is this flexible deployment of strategies to deal and exploit unpredictability
that's core to intelligence.
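For contrast with the flexible, "ecological" intelligence Anil describes, here is a minimal sketch of the brute enumerate-and-evaluate strategy a chess computer uses: a plain minimax search to a fixed depth, scoring the leaves with a heuristic. It is a generic illustration, not any real engine; the game hooks (legal_moves, apply_move, evaluate) are hypothetical stand-ins.

```python
# Minimal sketch of "enumerate a vast number of possibilities + heuristics".
# Not a real chess engine; the game-specific hooks are hypothetical.

def minimax(state, depth, maximising, legal_moves, apply_move, evaluate):
    """Search the game tree to a fixed depth, scoring leaves with a heuristic."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None        # heuristic stands in for deeper search
    best_move = None
    best = float("-inf") if maximising else float("inf")
    for move in moves:
        score, _ = minimax(apply_move(state, move), depth - 1,
                           not maximising, legal_moves, apply_move, evaluate)
        if (maximising and score > best) or (not maximising and score < best):
            best, best_move = score, move
    return best, best_move

if __name__ == "__main__":
    # Tiny stand-in game: the "state" is a number, a move adds -1, 0 or +1,
    # and the heuristic prefers large numbers. Purely to show the machinery runs.
    score, move = minimax(
        0, depth=4, maximising=True,
        legal_moves=lambda s: [-1, 0, 1],
        apply_move=lambda s, m: s + m,
        evaluate=lambda s: s,
    )
    print(score, move)   # brute-force enumeration, nothing like human play
```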
Alan, it's a super hard problem,
and we just don't know how to make artificial creativity.
And curiously, I think in order to crack the problem,
I think we have to make robots that make mistakes.
In other words, make them unrobotic,
because actually that's how we create, essentially.
And I'm reminded...
So Arthur Koestler wrote an amazing book called The Act of Creation.
And one of the things he said in that book
is that actually the act of telling a joke
is very similar to the act of creation.
He said really the way to think about it
is that it's the small difference between ha-ha and ah-ha.
And, you know, actually, I think we probably...
I don't see any reason in principle
that we couldn't make a robot that is genuinely creative,
but it would be a very, very different kind of thing
to what we currently think of as robots.
And it may not be, you know, entirely, you know, satisfactory.
So this kind of crazy, random, stochastic robot
that crashes around and makes mistakes,
that may well be the price for it being created.
How much of artificial intelligence research
is concerned with understanding the human mind,
so understanding how we think,
and how much of it is separate from that?
I suppose the question would be,
if we do create something that we define as being intelligent,
some way, or conscious, let's say,
will it have been because we copied ourselves,
or will it have emerged in a different way?
So there's always two ways to look at an enterprise like AI.
You can look at it from an engineering perspective
where you want to just build a system to do something that you want it to do,
to be smart,
or you can build a system to help understand how natural systems work.
I'm a neuroscientist, so that's the way I look at AI,
build computer simulations of the brain
as a way of understanding how our natural intelligence,
our natural cognition and perhaps our natural consciousness arises... and useful machines. There's a good analogy here about the history of flight.
So if you just blithely copy something and think that,
OK, I want to build something to do X,
so a natural system does X, I'll just copy it,
this generally doesn't work.
So people who tried to build flying machines
initially started by building things that flapped.
That didn't really work.
But it helped, in a way, get a hold of the laws of aerodynamics,
which enabled people to build things that did fly.
And they're a little bit like birds, but not entirely like birds.
I think the same thing will happen with AI and psychology, cognitive science,
that simulations will help us understand what really matters,
and we may end up building things that are not direct replications
of humans or even
other animals but that work on the essential principles that we're striving to understand
But this really brings up the question of it's not just the brain, it's the body and the environment.
So we have this problem with AI, that sometimes we think it's just the brain: you
can take the brain out of the body, put it in a, whatever, replicate it in a computer and put it on a hard drive somewhere. Even if you've captured the individual connections
of your brain, of my brain, of any brain, those change over time in ways that depend completely
on being in this particular body, in this particular environment. And you realise that
you actually have to replicate the whole thing, the body, the environment, and the way all these things interact together.
And then you realise it's a much harder problem
and it's not just something you can simulate inside a computer.
So your consciousness is not just contained in your head,
is what you're saying.
It's a product of interaction between your body,
the environment, the brain.
In one way, it is.
I mean, in one way, I think at least,
in one way, the fact that you are conscious right now
of what you're conscious of
is a property of your brain,
the state of your brain right now.
However, to get that brain into that state right now
depends on your body, depends on the environment,
depends on your social environment,
depends on your past history and so on.
So right now, it's a property of the brain, but not in general.
But isn't it that you need...
If you have no input whatsoever, if there's no experience,
that everything that a creature learns,
it is, over time, it's through experience.
If you are without any experience whatsoever,
how does anything build?
How does personality, understanding or intelligence build?
I think that one group of robots that it would be quite easy to build
would be an England rugby team.
Because there's obviously not many connections going on there.
I mean, my interest really is,
are you trying to build an intelligent robot
that's like a human and reacts like a human?
No.
Because I think, what sort of robot do you build if you do it?
Do you build a Mother Teresa robot
or do you build a Donald Trump robot?
You know, it's just...
What sort of robot do you want?
And it would be impossible, I think, to pin down
what are the most appropriate emotional responses
for a robot that could do the best things for society, as it were. So in that case,
does that mean you're trying to build a robot that is able to carry out an almost infinite
number of tasks, but not turn on you or jump in a lake?
Well, my personal view is that we shouldn't be building
humanoid robots at all.
I think there are good ethical reasons for not building humanoid
and especially android robots,
in other words, those that are really high-fidelity,
kind of human-looking robots.
And, you know, why do I think it's a bad idea?
Well, I mean, firstly, there are lots of reasons.
Firstly, it's like a giant vanity project for humanity.
It's great in science fiction,
but actually 99.99% of all of the real-world things,
the useful things that don't put people out of jobs,
that we'd like robots to do in the world,
do not need a humanoid, human-like android body.
If you want a driverless car,
it would be absurd to build a humanoid robot
and get it to get into the car and, you know, and drive it like that.
Somewhere in this discussion, as Jo said earlier,
we're talking about what would we want these things to do that we create.
But, of course, I suppose, ultimately, the question is,
could we end up creating something that is, let's say, intelligent or conscious?
Maybe I shouldn't mix those two terms.
But could we end up creating it accidentally,
if, indeed, intelligence is an emergent property,
and therefore be in a position where we can't continue
to wish to control this thing,
it's how I've built this thing, it's going to do this job,
this job, and this job.
Do we get to a level of sophistication
where it emerges into this being
that I suppose you'd have to give rights to, etc., etc.?
Exactly. I mean, it's a deeply interesting question.
So this is the emergence of,
you might call it artificial subjectivity.
So in other words, the moment that the AI, in a sense, wakes up, it's no longer a zombie.
And there are some serious philosophers, Thomas Metzinger, for instance, a German philosopher,
who has written a lot about the, as it were, the ethics of artificial suffering.
So the question is,
if you've built this fabulous simulation of intelligence and you switch it on,
how do you know that that thing
is firstly experiencing artificial subjectivity,
in other words, kind of a phenomenal self,
and what do you do about it if it is?
Because, you know, there's a very...
I mean, I'm very persuaded by Metzinger's arguments.
There's a very strong probability
that that thing that you've just switched on
is not happy.
Seriously.
And so you've got the...
You know, you've got the...
What possibility, what percentage-ish possibility do you mean? I don't really understand how you can ascribe an emotion to a machine. Why is there a big possibility it's not happy?
Well, stuff may well emerge that we just didn't expect.
There's a fundamental asymmetry to this question.
If we accidentally build a robot that's happy and finds joy in everything,
well, no problem, really.
But if we accidentally build a robot that can suffer,
that's something we need to be much more worried about.
We face this not just with robots but with other animals, of course, as well.
Jeremy Bentham, the philosopher, put this back in the 18th century.
The question is not whether they can reason, nor whether they can talk, but whether they can suffer. And that's the fundamental question that prescribes our behaviour towards things that are not ourselves. And I've been thinking, for various reasons, about fish last week, and do fish...
And chips?
What? That's... that's next.
Chips suffer when I get my hands on them.
Definitely suffer.
But do fish feel pain?
We tend not to think about that
for precisely the reasons that we tend to inhabit this uncanny valley as well.
Fish are sufficiently different from us when we see them swimming about
that we don't tend to ascribe any of the states that we ourselves experience.
Yet fish have pain receptors, they have brains, they're quite different,
their behaviours are quite simple,
but the ability to feel pain and to suffer
is arguably the most fundamental of all emotional states
because it's about self-preservation.
It's not about anticipatory regret or shame or guilt,
which require much higher levels of reasoning and sophistication and social intelligence.
So there are very good reasons to at least pre-emptively think about,
if we understand how human pain and more generally suffering arises,
the general mechanisms.
We wouldn't necessarily just want to build them
and assume that they're just going to be simulations.
This is quite problematic, isn't it?
Because what you're suggesting is that we're not really in full control of this research.
I suppose not in the sense that we're going to create something that's going to destroy us.
But in a sense, as you said, there are emergent properties.
And as we get more sophisticated at building more and more sophisticated systems, then these issues may arise.
So do you think, therefore, you need a philosophical
framework before you proceed with the research at that level?
I think we do. I think we need an ethical framework underpinned by a philosophical one. And
I'm serious about this. I think that we're rapidly approaching the point where robotics
and AI research needs to have ethical approval
and have continuous ethical monitoring, if you like,
in exactly the same way that we do right now
for human subject research, clinical, medical research.
I like to think we need a worry budget.
We can worry about things.
There's a finite amount of worrying we can do.
There are these very catastrophic,
but not in principle impossible
things that might happen, like
accidentally building a robot that can suffer.
It might be as rare as accidentally
building a 747, but it might happen.
There's building robots that
might then build other smart robots
and enslave us all into the
depths of their singularity.
That might happen. It's very rare.
But there are many other things that we should worry about much more.
The thing is, it's not AI we need to worry about.
That's not the problem. It's real stupidity that's the problem. And we can already interface.
You look at Robin.
But one of the limitations of our minds
is seeing the consequences of our actions at a distance.
Our minds have evolved to deal with the very local
in time and space in small groups.
It's very hard for us to really feel,
it gets back to emotions again,
the consequences of our actions
over great distances of time and space.
So one thing I'm, for example, very concerned about
is the use of AI in military drones, where people can make actions happen at a distance.
I was just going to say that, because if you think about it, in some ways what emerges is going to be down to who's making it, you know. And let's say
someone managed to build a robot army that was indestructible and used it for sort of nefarious purposes.
And to me, the big problem is humans and what they're like, really,
because they have the power to create these things.
And we know that there's a pretty sizeable minority of humans
who want to do damage to other people in the world
and aren't nice like you two and want to protect sad robots, you know.
It's true, though.
Well, it is.
It's true.
I can see a lot of people who would go,
right, let's have a robot army and let's invade Europe,
or whatever it was, and that's the issue.
So it's kind of the whole internet debate as well, isn't it?
The internet is a marvellous thing, but you've got good and evil on it.
And is the evil too evil for it to exist?
And should we backpedal, which of course we can't now?
But I think as we go forward,
the same thing potentially might happen with robots as well,
which I think is scary and it's down to human nature.
Absolutely.
For me, this is why we need public engagement in robotics.
We need public debate,
because we as a supposedly democratic society
should decide not only the robots that we want in our lives and in society,
but the ones that we do not want.
You know, that's really important.
And in order to make that decision,
we need to all understand what's going on.
But I will just give a little wave of flag
for the international campaign for robot arms control.
It exists.
Find the website and you can sign up.
It's a really good movement against this very worrying militarisation of autonomous robotics.
The potential for AI to be a good thing, though, I think is more apparent when you think beyond just robots.
Robots are very important but very difficult.
AI is much more general than that and we're already seeing some enormous benefits of AI in society.
So search engines are very useful for us. That's AI working behind search engines. Driverless cars
are just around the corner. Medical diagnosis is an area, yes, medical diagnosis is an area where
better decisions are being made using machines that are able to do pattern
recognition in a way that complements humans and doesn't replace them. And it's when we focus on
what AI systems can do that complement rather than replace that I think we can see major benefits to
society. Driverless cars, I mean, they stand to reduce road traffic deaths almost to zero.
Why is it that so many science fiction filmmakers
in the 70s and 80s, and on TV as well,
decided that all robots would be a little bit prissy and camp?
You look and you see, you know, C-3PO, Zen from Blake's 7,
K-9 from Doctor Who, and it's like, why did they decide that?
Oh, no, don't do that, R2... which is also very similar to Richard Dawkins,
but it's like...
LAUGHTER
Dawkins is one! I knew it!
But that was an interesting kind of...
Do you think there is anything in that idea that they go,
let's try and make sure that this non-threatening idea...
Well, that's a new one for you, anyway.
I'm not saying there shouldn't be.
I'm not saying... I think that's great as well.
So, we asked our audience...
Don't you do it, Brian. I've got a robot that does my digging now.
Oh, I can't believe it. And he's broken the shovel.
Anyway, so the...
This year, you worry me, cos your skin doesn't age.
Why would that be?
Like ash from Alien.
So, we asked the audience,
tonight's show is about artificial intelligence.
What are you most looking forward to
when robots become our masters?
The inevitable retribution that
comes from having always opened the microwave
one second before the timer goes off.
I'm looking forward to having robot-only
friends and then turning them off when I've had enough.
Us getting ill and robot doctors telling them to turn us off
and then on again.
The invention of a Brian Cox bot to travel around the country
shouting, everything is physics.
I'm getting more Orville-like.
Yeah, yeah.
Having a sexy Brian Cox android.
A Brian Cox robot.
Giving a sex bot a dab.
Good Lord, no, I'm not reading that.
That's a...
Racy audience.
Well, I hope those answers have helped you know
where the public want you to go.
Thank you very much to our guests,
Anil Seth, Alan Winfield and Jo Brand.
I don't think we have actually got time for a listener's email.
Let's do that another time.
We have had plenty of emails, and if you do have any questions,
do send them in. We love receiving them.
There is just time, though, for a quick trail,
because we've been recording a couple of specials
about general relativity today,
and they'll be out three months ago.
Yep.
That's space, time and scheduling.
Space, time and scheduling.
A minor perturbation in the metric.
Scheduling is a minor perturbation in the metric.
You're not going to work for the Radio Times, are you?
Thank you very much for listening. Goodbye.
APPLAUSE
Thank you. 15 minutes? Shut up, it's your fault, you downloaded it. Anyway, there are other scientific programmes also that you can listen to. Yeah, there's that one with
Jimmy Alka-Seltzer.
Life Scientific. There's that one where his dad
discovered the atomic nucleus. Inside Science
All in the Mind with Claudia Hammond.
Richard Hammond's sister. Richard Hammond's sister, thank you
very much Brian. And also
Frontiers, a selection of science
documentaries on many, many different subjects.
These are some of the science programmes
that you can listen to.