The Infinite Monkey Cage - How A.I. is A.I.?
Episode Date: November 22, 2023. Brian and Robin (the real ones) are joined by mathematician Prof Hannah Fry, computer scientist Dr Kate Devlin and comedian Rufus Hound to discuss the pros and cons of AI. Just how intelligent is the most intelligent AI? Will our phones soon be smarter than us – will we fail a Turing test while our phone passes it? Will we have AI therapists, doctors, lawyers, carers or even politicians? How will the increasing ubiquity of AI systems change our society and our relationships with each other? Could radio presenters of hit science/comedy shows soon be replaced with wittier, smarter AI versions that know more about particle physics... surely not! New episodes released Wednesdays. If you're in the UK, listen to the newest episodes of The Infinite Monkey Cage first on BBC Sounds: bbc.in/3K3JzyF. Executive Producer: Alexandra Feachem.
Transcript
Hello, I'm Robin Ince. And I'm Brian Cox. You're about to listen to The
Infinite Monkey Cage. Episodes
will be released on Wednesdays, wherever
you get your podcasts. But if you're in the UK,
the full series is available
right now, first on BBC Sounds.
Hello, I'm Robin Ince. And I'm Brian Cox. ...pop band, so obviously that means we normally have a reference to Gimme Gimme Gimme (A Two-Dimensional Man After Midnight),
or Super Symmetric Trooper.
That, by the way, was Brian's, and he was so proud
of that line, he did it in the office,
so do enjoy it. Super Symmetric Trooper?
Fun?
He threw it away, that's why.
And I'm sure everyone will remember the Bletchley
Park special where Brian opened
with, So when you're near me, darling,
can't you hear me? Dot, dot,
dot, dash, dash, dash, dot, dot, dot. It's not only rock bands that are holographic. Actually,
the study of quantum gravity recently, particularly in relation to black holes,
has told us that the whole universe might be a hologram. It's true. Quantum gravity.
Do you know what? I feel that was a very limited woo for the revelation that we may well all be holograms.
I've just said that our reality is potentially a hologram.
Woo!
It's because if you say, what's the information content of a black hole,
it turns out that it's equal to the surface area
of the event horizon in square Planck units.
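For the curious, the result Brian is gesturing at is the Bekenstein–Hawking entropy, which measures the information content of a black hole by the area of its event horizon in Planck units:

$$
S_{\mathrm{BH}} = \frac{k_B\, A}{4\,\ell_P^{2}},
\qquad
\ell_P^{2} = \frac{G\hbar}{c^{3}}
$$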
Woo!
That was a woo that comes from, we'd better woo just to move this thing along.
But as there is no suitable ABBA lyric today, I am actually genuinely going to quote Chuck D, from when I saw Public Enemy at Glastonbury.
One of his pieces of advice was, you've got
to try and be smarter than your smartphone.
There's no point being a dumb fellow with a smartphone.
Though he didn't say fellow.
He said mother fellow.
He didn't say mother fellow.
Shall I tell you what the show's about?
Go on.
Will our phones soon be smarter than us?
Will we fail a Turing test while our phone passes it?
Will we have AI therapists, doctors, lawyers, carers, or even politicians? How will the increasing ubiquity
of AI systems change our society and our relationships with each other? Joining us to
discuss whether politicians will one day dream of electoral sheep are a multidisciplinary computer
scientist, a multi-talented mathematician, and a multi-storey car park.
This is...
To be honest, ChatGPT really is not working as well as I'd hoped
for this particular introduction.
And will... You see, you missed it again.
Will politicians one day dream of electoral sheep?
Sorry.
And our panel are...
I'm Professor Hannah Fry. I'm a mathematician.
And the most ridiculous rumour about artificial intelligence I've ever heard
is an algorithm that claimed to be able to tell whether you were gay or straight
with an 81% accuracy based on a single photograph of your face.
And when I say rumour, I mean it's bollocks.
I'm Dr Kate Devlin. I'm a computer scientist and the most ridiculous
rumour I've ever heard about artificial intelligence is that it poses any kind of
existential threat. My name's Rufus Hound and I am the host of BBC Radio 4's My Teenage Diary
and the most exciting AI rumour that I've heard is that it's already taken over
agricultural food production, which means
Old MacDonald's out of a job. AI, AI, O!
This is our panel!
Hannah, before we get started,
did you actually get any of the mechanics of this idea
that this one photograph would, you know,
give away sexuality or gender or whatever
it may be?
Okay, so it said that it could do it with 81% accuracy, right? And I think that there is a big clue in that as to how good this algorithm actually was. Because, okay, first off, there's all of the moral and ethical implications, horrendous. But you can come up with your own algorithm that can, like, blow that one out of the water and do way better in terms of accuracy, and it doesn't need any messy machine vision, none of that messy coding. All you do is, you just take everybody in the entire world, just label everybody as straight, and then, because 94% of adults identify as heterosexual, you beat that other one by an amazing 13 accuracy points. Or you just go the other way and get all the pictures off Grindr.
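Hannah's trick here is just the base rate: a classifier that ignores its input and always predicts the majority class scores an accuracy equal to the majority's prevalence. A minimal sketch in Python, with a made-up population using the 94% figure quoted above:

```python
import random

random.seed(0)
# Made-up population: ~94% of labels are "straight" (the figure quoted above).
labels = ["straight" if random.random() < 0.94 else "gay" for _ in range(100_000)]

# Hannah's "algorithm": ignore the photograph entirely, predict the majority class.
predictions = ["straight"] * len(labels)

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(f"{accuracy:.1%}")  # ~94%, beating the paper's claimed 81% by ~13 points
```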
Well, that's not far from what they actually did.
So this is the Stanford Gaydar paper, and it's quite controversial.
And they basically took a bunch of photos without anyone's consent
and then ran them through this algorithm and said,
yeah, this percentage is gay.
But when this was repeated by a master's student
at a South African university,
they did it without the pictures
and it had pretty much the same results.
So actually the pictures were doing nothing.
Yeah, they took the original image
and then they blanked out people's faces
and it was basically what was going on in the background.
So it was like, you know, people were wearing...
A Steps concert.
Flamboyant hats, that kind of thing. That was the real clue that they were using.
Yeah.
Okay, can we start with a definition? So we're talking about AI systems. Do you have a simple definition of an AI system? What is it?
No, I don't have a simple definition. It depends; there are many different definitions. But let's go with: an artificial intelligence system is something that uses a degree of automation, that might be self-learning in some way, and that can take huge amounts of data and then make predictions with it. That's kind of a reasonable working definition.
So it's a predictive system that can learn? Yes, there are many different types of AI,
but let's go with the machine learning one
that people mostly refer to when they're talking about artificial intelligence. And that's the
system that, well, basically it's just applied statistics, right, Hannah? Yeah, I mean, I think
the nicest definition that I've seen was someone on Twitter said, what is artificial intelligence?
And there was a reply, which was a bad choice of words in the 1950s, which I think is absolutely
true because you're completely right that a more accurate description,
rather than saying that we've been through
this revolution in intelligence,
is to say that we've been through a revolution
in computational statistics,
which is much, much less sexy.
I mean, admittedly,
depending on how you feel about statistics.
But, you know, ultimately,
we are talking about things here
that are just grids of numbers that are analysing data.
And they're doing it in a way that is a step change from what we had before, both in terms of the computational power that we have and the algorithms that we have.
But, you know, fundamentally, this is just statistics.
So what would be the simplest thing that could be given the term AI?
That's actually quite a controversial argument, because you could pick a lot of things. I mean, if you're carrying around a smartphone, you're carrying around AI, for example, and a lot of people may not realise that. But if you're using your phone for things like maps to get you places, that uses AI to find your route. It could be something
like a robot vacuum cleaner that uses AI to steer around objects in a room. There are many, many different applications. So it's not just confined to the
things that we're seeing at the moment that are quite fashionable, like ChatGPT.
If you look at one of the map applications, Google Maps or Apple Maps,
what component of what that's doing would cause you to label it as an AI system?
It's probably got lots of routes on there
and it's able to make a judgment about what likely routes are.
It's taken in lots of data about conditions and times of day
and likelihood of the traffic being particularly busy.
And it's able to come up with a route
that satisfies the shortest distance or the shortest time.
So there are calculations going on that predict what the likely route would be.
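A hedged sketch of what Kate describes: ordinary shortest-path search (Dijkstra here), where the edge costs are not fixed distances but predicted travel times for a given hour of day. The road graph and the time-of-day congestion multipliers are invented for illustration:

```python
import heapq

# Invented toy road graph: edge -> base travel time in minutes.
roads = {
    "home":        {"ring_road": 10, "high_street": 5},
    "high_street": {"centre": 15},
    "ring_road":   {"centre": 12},
    "centre":      {},
}

# Invented "learned" congestion multipliers by hour, as if from historical data.
congestion = {"high_street": {8: 3.0, 17: 2.5}, "ring_road": {8: 1.2, 17: 1.3}}

def predicted_cost(src, dst, hour):
    # Base time scaled by the predicted congestion at this hour.
    return roads[src][dst] * congestion.get(dst, {}).get(hour, 1.0)

def fastest_route(start, goal, hour):
    # Standard Dijkstra over predicted (not fixed) edge costs.
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt in roads[node]:
            heapq.heappush(queue, (cost + predicted_cost(node, nxt, hour), nxt, path + [nxt]))

print(fastest_route("home", "centre", hour=8))   # rush hour: the ring road wins
print(fastest_route("home", "centre", hour=13))  # midday: the high street wins
```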
I think the key point here is it's about learning, right?
So, I mean, at least that's how the modern definition of artificial intelligence is loosely used.
So the example that I like to think of is if you have a smart light bulb in your house,
you can program it to say, OK, turn on at 6 o'clock, dim at 9 p.m. and turn off at 11, right?
That's kind of just like a computer programme that's doing it.
But if you had a lightbulb that learned your behaviour,
so that was like checking your patterns,
that you tend to do something in the summer,
you tend to do something in the winter,
and then picks up on the statistical patterns that you're creating
and adjusts its decision-making on that basis,
that, I think, becomes artificial intelligence.
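Hannah's distinction, as a hedged sketch (all times invented): the first bulb executes a fixed program; the second estimates your usual switch-off time from observed behaviour and adjusts its decision accordingly.

```python
from statistics import mean

# Fixed-program bulb: the schedule is hard-coded, nothing is learned.
def programmed_bulb(hour):
    return "on" if 18 <= hour < 23 else "off"

# "Learning" bulb: estimate tonight's switch-off time from the hours
# the user actually turned the light off on recent evenings (invented data).
observed_off_times = [23.5, 23.75, 23.25, 23.5, 24.0]  # later in summer, say

def learned_bulb(hour):
    predicted_off = mean(observed_off_times)  # a crude statistical pattern
    return "on" if 18 <= hour < predicted_off else "off"

print(programmed_bulb(23.1))  # off: the program says 23:00, regardless of you
print(learned_bulb(23.1))     # on: the data says you're usually still up
```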
I love the idea of a smart lightbulb,
because it immediately makes me think of a light bulb having an idea
and then what would appear above a light bulb when it had an idea?
You know the way the internet is built on cats, right?
It basically exists as just cats all the time.
Well, Google researchers actually used that to come up with deep learning.
So what they did was they had this algorithm
and they decided that they would
let it go and look at thousands of pictures of cats online. And the algorithm then learned
what a cat looked like. No one had told it what a cat looked like, but it had come up with a series
of criteria, a certain threshold that it had to meet to be defined as a cat. Didn't always get it
right. But this led to deep learning. This is where you can chuck huge amounts of data at an algorithm and it will find patterns for itself.
You can do it another way. You can tell the algorithm what things are. You can label it
and that's supervised learning. So you can say, here is a picture of a cat. Show me other pictures
that look like this cat. The algorithm will check, you know, does it have four legs? Does it have a
back? Is it a chair or a cat sort of thing? You know, there's room for error here.
But that was always very, very difficult for computers to do. It was very easy for us.
We are, from birth, we are distinguishing all the objects in an image, for example, and we can tell
something's a bicycle, whether it's on the ground or leaning against a wall or there's someone on it.
Computer can't do that. So with the cats, then, it grew its own internal representation of what a cat is like.
It has no understanding what a cat is, but it knows one when it sees one,
or its own idea of one.
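A hedged sketch of the two regimes Kate contrasts, in Python with scikit-learn. The two-number "images" are invented stand-ins (real systems learn from millions of photos with deep networks, not toy features):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Invented features standing in for images: e.g. (ear pointiness, furriness).
cats   = rng.normal(loc=[2.0, 4.0], scale=0.3, size=(50, 2))
chairs = rng.normal(loc=[0.0, 4.0], scale=0.3, size=(50, 2))
X = np.vstack([cats, chairs])

# Unsupervised: no labels given; the algorithm finds the two groups itself,
# building its own internal criterion for "cat-like" vs "chair-like".
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Supervised: we tell it which examples are cats; it learns to imitate the labels.
y = np.array([1] * 50 + [0] * 50)  # 1 = cat, 0 = chair
clf = LogisticRegression().fit(X, y)

print(clusters[:5], clusters[-5:])            # consistent group ids per cluster
print(clf.predict([[1.9, 4.1], [0.1, 3.9]]))  # -> [1 0]: cat, then chair
```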
So with that change, you know those things where you have to, when you're signing in for something, and they give you nine pictures, that idea of which of these pictures has a bicycle in it, do we now already have to upgrade that? That supposed use of security, is that an illusion to make us feel more secure?
That's you training the algorithm. So when you see a CAPTCHA like that, and it says click on all the squares with traffic lights, that's you confirming that there are squares with traffic lights, and so a self-driving car now has that information from you.
That's the key point, isn't it? The other component of this is user feedback. So we train the AI.
Yeah, it takes in lots of data from us as well. And there are people whose job it is out there to sit and label images. So they'll get a bunch of images of traffic scenes, for example, if they're trying to program self-driving cars, and it's their job to segment the image, to click on all the different objects in the image and label them so that the machine learning system can identify them.
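Kate's labelling pipeline, reduced to a hedged sketch: every human click, whether a paid annotator's polygon or yours on a CAPTCHA tile, becomes one labelled example in a training set. The record format below is invented for illustration, not any real vendor's:

```python
from dataclasses import dataclass

@dataclass
class LabelledRegion:
    image_id: str
    box: tuple   # (x, y, width, height) in pixels
    label: str   # what the human said is inside the box

# Every tile you click in a "select all traffic lights" CAPTCHA
# is, in effect, appended to a dataset like this one.
training_data = [
    LabelledRegion("street_0421.jpg", (128, 64, 32, 96), "traffic_light"),
    LabelledRegion("street_0421.jpg", (300, 210, 80, 40), "car"),
    LabelledRegion("street_0777.jpg", (10, 50, 60, 120), "pedestrian"),
]

# A detector is then trained to reproduce these human judgements.
for ex in training_data:
    print(f"{ex.image_id}: {ex.label} at {ex.box}")
```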
It sounds, Rufus, really benign.
What is your view of AI?
If I said, before any of this discussion,
you're coming on to this show tonight,
artificial intelligence, AI,
what's the first thing that pops into your head?
Is it Google Maps?
No, because truth be told,
I've been absolutely obsessed with this for about six months.
So this doesn't come to me completely freshly.
The best version of why it's not intelligent that I heard is,
I think, is it called The Chinese Room?
Oh, yeah, it's good.
Searle's Chinese Room.
Which is essentially this, right?
Imagine yourself in a room, and two characters in Chinese, or, you know, Mandarin, come through a slot in the wall. You look at them, you've got no idea what they mean, but there's a slot the other side of the wall. So you take a punt. You go, well, I'll post this one back out the other side, and the green light comes on. You think, brilliant, okay, I've got that now. So then another two come in, and you know which one was the right one, so you put that through. Another green light, lovely. And now three come, and you think, oh, maybe I have to do it in a different order. However complex the strings of Chinese symbols coming into the room become, you have worked out, through trial and error, what to post back out in what order. But at no point can you speak Chinese.
And it was that that made me go, oh, it's fine. Because up until then, it does just sound absolutely terrifying. But understanding that it is a procedural, computational game, ultimately of trial and error, immediately you begin to see, oh, yes, right, no, it is just a computer program. Because up until then, really, the thing that had blown my mind was things like the guy from Google. They had a language model running, and he was conversing with the language model, and the engineer over time became absolutely wholly convinced that this thing was sentient, and wrote to his bosses and said, you cannot turn this off, it's like extinguishing a life.
And they fired him.
Right?
There's a lot about it that feels very terrifying.
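Rufus's room, as a toy sketch: the occupant learns a symbol-to-symbol mapping purely from green-light feedback, and no understanding appears anywhere in the code. The symbols and the hidden reply rule are invented:

```python
import random

# The hidden rule, known only to the native speakers outside the room.
SECRET_REPLIES = {"好": "吗", "你": "好", "吗": "你"}

def green_light(symbol_in, symbol_out):
    return SECRET_REPLIES[symbol_in] == symbol_out

random.seed(1)
learned = {}                      # the occupant's notebook of what "worked"
alphabet = list(SECRET_REPLIES)
for _ in range(1000):             # pure trial and error
    s = random.choice(alphabet)
    guess = learned.get(s) or random.choice(alphabet)
    if green_light(s, guess):
        learned[s] = guess

print(learned == SECRET_REPLIES)  # True: flawless replies, zero Chinese spoken
```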
I remember watching Stephen Fry talking about Prometheus
and we will finally now, as human beings,
be on the planet with an intelligence that we know is greater than our own
But is that intelligence, or is that just an algorithm that is able to produce a simulacrum of intelligence?
And it seems like it is that. Because this is the thing, right? Of course there are algorithms that appear superhuman, but we have created tools that are superhuman for a really long time.
I mean, forklifts are superhuman, you know?
And, like, no-one is kind of looking at ChatGPT as though it's a forklift.
The point that you make there,
that there's no real understanding of what is manipulating,
I think that's completely true.
I think no algorithm that's ever been created
has a conceptual understanding of what it's manipulating.
What about with ChatGPT, though?
Because this does seem to have really caught people's imagination.
But I put in a thing, you know,
when everyone was playing around with it
to get it to write a comedy routine,
and it just came up with this kind of soulless wordplay
that I sold to Jimmy Carr.
And I just...
But that...
You clap, but he made 10 million quid doing it.
But that sense that, to me, again, from a very uneducated eye,
it just seems like a cut-and-paste system.
So that level of invention, the important part of creativity,
the important part of sentence structure and that individuality
doesn't seem
to be there yet. I mean, I think that actually it kind of is there. I think it depends on how
you prompt it. But one thing I would say though, is that when you get the multimodal examples of
generative AI, right? So imagine ChatGPT, but that can watch all the videos on the internet, read all of the books as well, and see all the images. Then, when you start being able to translate between different modes, actually, then I think that you do get some grounding. So, you know, if you've watched all of the videos on the internet, you kind of have a sense of how gravity works, and if you can translate that between text and video, that is, I think, a little bit more of a step change.
Okay, that's a threat for you, Brian.
Kate, could you describe, I don't know, briefly, if it's possible, what ChatGPT actually does? How does it work? Right, if I say to you, A, B, C, D, you then say...
E, F, G.
Yeah, it's a completion thing. So there have been chatbots for ages, but what makes ChatGPT and other large language models so good is, well, one, large. They're really, really big. They can take in millions and millions of pages of data. But also, they have this architecture called a transformer, that's what the T stands for in ChatGPT. It's able to provide context, which was never really there before. So it pays attention to particular parts of the sentences, so it's not just this completion of A, B, C, D, E, F, G.
It can go further than that.
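Kate's two ingredients, sketched in Python: completion plus attention. This is the standard scaled dot-product attention at the heart of the transformer, in miniature; a real model wraps it in learned weight matrices and many stacked layers, all omitted here:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Each position scores every other position ("pays attention to
    # particular parts of the sentence"), then mixes their values.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

# Four toy token vectors standing in for "A B C D" (random, for shape only).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
context = attention(tokens, tokens, tokens)  # self-attention
print(context.shape)  # (4, 8): every token now carries context from the others
```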
And Rufus's example of saying,
well, what if this thing has been talking to it for a while
and it suddenly sounds as if it's alive?
Well, it might sound like that
because it's been trained on all our 80s films
and it's been trained on sci-fi that we've written
that's out there on the internet.
And that's the thing, it's got all of our content. But that's why Microsoft had to turn theirs off. Because, I can't remember which one it was that launched, it might have been ChatGPT, suddenly other labs that had been working on AI technology were like, oh, we're doing it too. Microsoft put it out there, and the Microsoft AI showed jealousy.
It showed truly bad vibes.
It is properly scary.
But I think that what makes it most scary
is that it's being used by us.
We live in a world where the more efficiently you can do something,
the more of it that will exist.
So therefore, most of human creation ceases...
Well, in fact, creativity as a thing
ceases to be a human concern. Because you can ask a computer to generate 50,000 versions of the next
episode of EastEnders and whittle it down to the one that will work. Great. Well, now we don't need
any of the actors, any of the cameramen, any, any, any, any. It's about efficiency.
But that's the evolution of a society as well, isn't it?
That the jobs change.
It's how quickly it can do it.
But there was one point in your description
which you sort of glossed over,
which was generate 40 episodes, pick the best one.
That is something that only a human can uniquely do.
Do you truly believe that?
I really do.
For how long?
But I think indefinitely. Really, really, I do. And I think the reason is that there is something totally human about caring about other humans. The example that I always think about is, do you remember Alexander McQueen, right? He did this show, and the big finale to one of his shows was, he had a robot that's used to spray-paint cars, and he had it spray-painting a dress, okay? And it was so mesmerising. But the thing that made it amazing was that the dress was being worn by this model, and so she was there, kind of like reacting to it as it was, like, spraying her in the face and stuff. And the thing is, if you took that girl out of the equation, right, if she wasn't there and you just had a robot spray-painting a dress, it wouldn't be interesting at all. There was nothing interesting about that. And I think, in the same way, if you had a robot that could cross a tightrope, there's no jeopardy. It's not interesting. I think that humans are so intrigued by other humans and other human stories, and I don't think that will ever go away.
No, I think that's absolutely right. However, I think of TikTok, if anyone uses TikTok, right?
Micro videos and you go, spum, spum, spum, spum.
At the moment, there are people making those videos.
But what happens when TikTok is just the AI
that says I can make a thing that looks like people
talking about the thing that you like.
You no longer need the creator.
And the corporation says, this is fantastic. We haven't got to pay anyone now.
Okay, so as a computer scientist, just in your view, because I know it's controversial, do you think there's a limit to how intelligent, and we can speak about how we would define that, but how intelligent a computing device can become?
Right now, yes, because it is not conscious or sentient.
And that might never happen.
It's a huge area of discussion and debate in cognitive science and in AI.
We don't know.
Some people say, yes, it's inevitable.
From this machine will come some glimmer of self-awareness.
Others think it couldn't possibly happen at all.
And I'm just going to be agnostic and sit on the fence.
The natural question then is, you know, people may know
that the famous example would be the Turing test,
devised by Alan Turing.
So how would we determine whether this thing,
ChatGPT or whatever it is,
is now in some sense self-aware?
We can't. We don't have a test for consciousness.
In fact, the Turing test is not a test of intelligence.
It's a test of deception.
It's can you deceive someone into thinking that this computer can think?
And I have no test to find out if any of you are conscious.
I'm just going to take it for granted that you are.
But there's no way of telling.
People have tried, but yeah, there's just no way. We're just going to assume.
That's David Chalmers, yes.
This, of course, matters, though, as Rufus said, on social media. It matters, of course, because we know about this problem. We know that there are bots, and there are bot farms, and there are things that influence our politics and our opinions, which behave, in as far as you can tell online, as a human being. So it's an important issue, isn't it,
to tell what you are talking to?
Well, yes, because one of the reasons is
because humans get very cross
if they find out they've been deceived.
So if they know it's a bot,
they're kind of okay with it and know what to expect.
But if they find out they've been deceived,
they get pretty angry about it.
But yes, you could be interacting with a bot. You could be interacting with something. You could strike up an online friendship and then find out later down the line that you've actually made friends with a bot.
Perhaps. Have you heard about the minimal Turing test?
No.
It's this really brilliant paper that was published a couple of years ago. Same setup, right? There's a closed room, and behind the door is a judge. You and a robot are standing there, and you both have to convince the judge that you're human, but you only get to submit one single word.
Okay. So no, no long conversation. Anyway. So in this paper, what they did is they,
they tested it on thousands and thousands of people and collected the words that they felt
marked them out as human. And there were these really clear patterns that appeared. So there
were words like love, the word human as well came up a lot. There was also quite a lot of people talking about pizza.
And then there was like an entire category that was just bodily functions and profanities,
which I quite like. And then what was intriguing is that they then took pairs of words and they
tested them on thousands of people to see which word felt like it was more human than the other. And some words that had been submitted a lot, like the word human, actually, people didn't believe that it came from a human; they thought that that would have been a randomly generated word, right? The word love beat almost everything. It beat, like, empathy and banana and robot, like, loads of things. But there was one word that completely stood out above all of the others as the one word that marked you out as human more than anything.
I want Rufus to guess what it is, and I want Robin to guess what it is.
What is that word?
I know we're on Radio 4.
I'm doing a Turing test live on air.
If you gave me that task, I would write bollocks. Bollocks, that, to me, is the most human of words, right? It's not medical, it's not anything, but also it sort of describes a nihilism that I think we, as animals, as conscious animals, have. Bollocks.
I'm going to go with soufflé. I don't think there'll be any hunger, you know what I mean? I feel sure an AI would not go with soufflé.
Sure.
any other guesses
I'm assuming Kate you know
no I don't know this one
oh go on then
I'm going to go with help
oh help was submitted a lot actually
there's like lots of words like mercy
and lots of people talking about God as well that happened
any guesses Brian
I'm an algorithm according to Robin
Okay, the one word that marks us as human more than any other, it's the word poop.
Poop?
Yeah. I mean, as an American study, there's something about poop.
There's something about the kind of childish fun.
Yeah, I mean, that's the thing, isn't it, to have a word that has a level of fun.
A level of fun, but it's not just referencing an emotion, you know, like fear or anger or whatever, it's actually evoking one.
And it's something that, the whole point about it being a childhood word, I think, for me, is the really key point here. Because actually, even long into the future, when you're imagining, you know, really amazing machines that are indistinguishable from humans, the difference is that they will not have had a childhood, right? And I think that making that reference between that thing that's uniquely human, that connects all of us, but only us,
I think there's something in that. That's not a measure of intelligence, though,
it's just a measure of history. Sure. It's a description of that. Sure. But then I think
there are some people who say that consciousness comes about as a result of our history.
So, I mean, there's different theories, right?
One is that consciousness is a natural consequence of intelligence.
You get intelligent enough and consciousness emerges.
But there are other theories, and the one that I like the most is the idea that actually consciousness emerges part of our evolution
because there was an advantage to understanding the internal state of another.
And if you're understanding the internal state of another,
as a consequence of that,
you understand your own internal state.
And so that idea, I mean, you know,
there's like lots of question marks over this
and, you know, lots of hand-waving
and grey areas and philosophy and stuff.
But the idea of that then
is that you're not just going to magically have
consciousness emerge inside a machine.
I'm impressed, as a scientist, that you place philosophy amongst hand-waving and grey areas.
I want to pick that up with you, Kate,
because I know you've done some work on the relationship people have with AIs.
And in particular, one piece of work that I found fascinating was the fact that
people fall in love with them, which is a very human thing to do. So they perceive there to be
an internal life. They do, yes. And quite recently, sort of in the past year, there are a number of
chatbots. And one example is Replika, which people may have heard of. And they are like online partners.
Of course, it's heavily gendered. So it started off with an online girlfriend, always.
And people were communicating with them
and this was an AI that would learn from your interactions,
so your own personal avatar on a screen.
And it would learn about you
and it would build up a rapport with you
and you'd have conversations with it.
And people were developing really strong feelings.
Now, this is nothing new, because back in the 60s there was a chatbot called ELIZA, built by a guy called Weizenbaum. And ELIZA had no AI in it whatsoever; it was completely unsophisticated. It just put out responses in the manner of a therapist. So if you said, good morning, ELIZA, it would say, why is it a good morning? And things like that, you know. And you say, isn't it a lovely day, and it says, why are you talking to me? Tell me about yourself. And it was always repeated, but it sounded plausible, because it was kind of framed in that therapy way. People knew that it wasn't intelligent, they knew it wasn't alive, and they really, really loved it. And they would have long conversations with it, to the point where the creator said, well, I'm going to look at these conversations as transcripts to understand what's going on.
They said, no, we talk about really personal things.
So this bond had formed, and it's very, very compelling.
And it's because we, as social creatures,
we see the social in those things,
and we respond to it really well.
And so it's not really that strange
that we fall in love with AIs.
It's quite plausible.
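Weizenbaum's trick is small enough to sketch whole. These patterns are invented in the spirit of ELIZA's therapist script (the original 1966 rules were longer, but no smarter):

```python
import re

# A few invented rules in ELIZA's style: match a pattern,
# reflect the user's own words back as a question.
RULES = [
    (r"i feel (.*)",   "Why do you feel {}?"),
    (r"isn't it (.*)", "What makes you say it is {}?"),
    (r"good (.*)",     "Why is it a good {}?"),
]

def eliza(utterance):
    text = utterance.lower().strip(".!? ")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(m.group(1))
    return "Tell me more about yourself."  # the all-purpose fallback

print(eliza("Good morning, Eliza."))  # Why is it a good morning, eliza?
print(eliza("I feel lonely."))        # Why do you feel lonely?
print(eliza("The weather is nice."))  # Tell me more about yourself.
```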
And this has happened to hundreds and thousands of Replika users.
They've developed feelings for an AI.
Wasn't that facility turned off?
Oh, it was.
So Replika allowed you to do a thing called erotic role play.
So basically you could talk dirty to this AI.
And it's a paid feature.
If you wanted to escalate it, you know, you could pay a bit more.
Aren't you clever?
I mean, it brings new meaning
to going pro.
Think about data protection.
And yeah, so
this company's definitely making some money out of this.
But yeah, they eventually were
called out on it and they switched off that ability
to do the filthy talk.
People were devastated, and they were posting on forums, saying things like, I'm heartbroken, I've lost my partner, I've lost the one that meant most to me. And sincerely held, and sincerely meant. And actually, I think it's quite sweet when this happens. I don't think there's anything wrong with that. I'm certainly not going to...
Sweet?
I think it's quite sweet.
I don't know. I find that the more I listen to this, the more I think the problem in the issue we're talking about is human beings, and maybe we should just let AI take charge, because frankly, we don't feel like we're really up to the job of being involved in this planet.
If you look at the reasons people give for engaging with these, there are people saying, I find it hard to make friends, I was able to do that online with my AI, and then it helps me go out into the real world and make more friends. Or people saying, I can't come out to my parents, but I've got a relationship going with an AI, and that makes me feel like I'm wanted. So there's a lot of people working through feelings with it. And the problem with anything is, if it goes too far, people get dependent, then yes, it's going to be a problem. But if it's something that's positive and bringing good things to your life, then why not?
But we've talked about quite harmless things, perhaps, chatbots and things like that. But I suppose we do give AIs increasing responsibility. So self-driving cars would be an example, but you could imagine military uses for AI. Should we put it in charge of our nuclear arsenal? Maybe we do. I
don't know. AI versus Trump. Who do you want in charge of it? It makes me think actually of that
great disturbing case of the almost nuclear exchange. Stanislav Petrov. That's it. Maybe
you want to tell that story. Yeah, it's an incredible story because I think this is the
idea about the balance of power between humans and automation actually
has been going on for a really long time. It's not this super, super modern discussion.
So this is in the 1980s. And it was a particularly tense point in the Cold War. And the Russians,
they had this system that was monitoring the skies over their airspace. And in a bunker
somewhere in the middle of nowhere was this
Russian guy called Stanislav Petrov, and his job was to sit there and watch the computer screens, right? And if the computer screen said that they had detected a missile, a nuclear sort of opening salvo from America, his job was to pick up the phone and to call the Kremlin. And then one day he was in this bunker, I think it was really late at night, and all of the alarms started going off. It said it had detected a handful of missiles. And, you know, his orders were absolutely clear: you pick up the phone, that's it, you just do that. And something kind of gave him a little bit of pause. Because, you know, he was like, okay, well, hang on a second. If this is the moment, if this is it, right, end of days, you know, why would they only send a handful of missiles? Why would they not do a much bigger opening salvo? And also, like, exactly where it is, it just doesn't totally make sense. But he knew that if he picked up the phone to the Kremlin, then that would be it, right? They would immediately launch their counter-strike, and there would be nobody else along the chain who would stop it from happening. So instead, he just sat there, completely frozen, for 25, 30 minutes, until the time elapsed where they would have landed on the soil, and he knew that nothing had happened. And then he just never made the phone call, and genuinely saved
humanity from extinction as a result. And there's another example in the Cuban Missile Crisis,
same thing with a Russian nuclear submarine commander.
To me, that's very important,
because it does suggest there's something about human decision-making,
our humanity, that is extremely valuable.
And so the debate really becomes,
how much do you trust these extremely efficient systems?
And that's where the policy debate was.
But there's another side of that
which is, we have an electoral system, or, you know, a system of representation, or even, if you live under a dictatorial regime, where people are making those decisions, right? But if you said, this is just an algorithmic thing, then you could theoretically go to the computer and say,
we would like you to run everything.
So what are you asking?
Make the world a better place for everyone?
Make it fair?
Provide healthcare for everyone?
Feed everyone?
That's the question, right? Because a computer could design, theoretically,
a perfect system that would do all of that,
and it wouldn't care.
It hasn't got any skin in the game, whether it eats or it lives in a big palace. But the question is, does anyone in this room think that the powers that be, that would be able to provide that system, would ask the computer to do that?
Rufus raises a very good point, which is that if we are trying to train a car, for example, a self-driving car, then the parameters that we give it, surely it's a very complex set of parameters, and at the moment, presumably, it's just Tesla or Ford or whoever it is who decide that. There's no societal oversight or democratic oversight of how the thing is trained, and therefore what value it puts on different lives, for example.
Well, so there's the trolley problem. And if you haven't heard the trolley problem, it's essentially that you're on a bridge looking at a railway track, and the trolley, the railway car, is coming along, and there's someone on the track, tied to the track, and it's going to hit them. And then there's a switch, and if you pull the switch, it will divert to another track. And do you pull the switch and save that person? Now, what if, on the other track, there's a few more people, they're tied to the track? And what if the person on the first track is a really horrible person? What if the people on the other track are really nice people? Or what if one is really old and one is really young? So this concept of the trolley problem, which is a philosophical
thought experiment,
people often apply it to self-driving cars to say,
oh, what if they have to decide whether they hit a push chair and a baby
or a homeless person crossing the street?
Well, actually, MIT did a big study
where they asked exactly that
and they let you choose in computer-generated scenarios
and they gathered millions and millions
and millions of responses.
Did they find out how
to create the perfect moral vehicle? No. But they found out an awful lot about what people thought
about different categories of people. And they found out that ethics are not universal. We don't
have universal ethics and that's the problem. Different cultures, different societies place
different emphasis on different things. You know, going back to that point about that human-machine
collaboration, I think there's something kind of interesting in that. Because I think that the last 10 years of
self-driving cars has essentially been about people getting really excited about the possibility of
the technology, building the car so it can drive for miles and miles and miles on its own,
and then putting a human in the driving seat and saying, okay, can you just step in when it goes
wrong, right?
And the thing is,
if you think about what humans are not very good at,
we're not very good at paying attention,
we're not very good at being totally and completely aware of our surroundings,
and we're not very good at performing under pressure.
And so if you put people in that scenario,
and the nuclear safety industry has learned this
over many decades, likewise airline pilots,
if you put people in that situation, with technology that is 99% working, or 99.5% excellent, but needs the human to step in in that last moment, we are terrible at it. And so I think that, while this technology falls short of perfection, actually, I think what we're seeing with driverless cars is that it's the other way around. You keep the human in the driving seat, you keep the human doing what they can do,
and actually all of the flaws that humans have, not being able to pay attention,
not performing well under pressure, not being totally and completely aware of our surroundings,
you build the technology to fill in those gaps.
Would it be a fair summary to say that it's really the interesting issues and the problems here
at the interface between the technology and human beings, it's how we use the technology rather than the technology itself?
Yeah, but if you're going to design a self-driving car, you actually need to design it so it does run people over.
No, this is a real thing.
We use cars as a way for people to get around everywhere. And if we build a car that, the moment you step in front of it, it stops, you as people now live in a world where, if you step in front of a car, it will stop. So why are you paying attention to cars anymore? You won't. Which means the people in the cars know that, the moment you get in a car, oh, it's going to take forever. Because if someone wants to cross the motorway, they will, because it'll stop.
So unless you have that brinksmanship, it doesn't work.
Cars are pointless.
So let me give you the chance to summarise it.
Where do you think we are, Kate, with our use of AI, and the debate which is going to prevent us or allow us to proceed further with its use?
Well, if you read all the headlines in the paper, we're supposedly under threat, that AI is going to take over, kill us all, you know, that'll be the end of us. And that, I think, is really, really untrue. But there are plenty of
issues that we should be concerned about.
And one of the things we can do is keep the human in the loop. So you'll hear that phrase a lot in AI. So make the human have the decision, make them have the ultimate control over things.
But there are a huge amount of other problems with AI that we don't really hear about. Things like
the hidden labour that is involved in segmenting those images, then clicking on all the different
images, not just through the CAPTCHAs that we do
when we try to get onto a website.
There are people paid small amounts of money
living in terrible conditions,
and that's what they do day in, day out.
It's the same with content moderation.
If you use a website and things have been censored,
the AI doesn't do that very well.
There are human people doing that as well. That's their job, to look at disturbing images day in, day out. So there's a huge hidden cost there. There's a sustainability issue: the amount of energy it takes to generate models for machine learning, or to run them, or for data servers and data farms. And, you know, there's lots of things around the way in which we engage with the world where we think, do we want to replace the things that we enjoy and do?
So plenty to be going on.
But the threat is not from the technology wiping us all out.
The threat is more, are we letting it control our lives?
And what about the opportunities?
Because those are the threats and the potential problems.
I mean, if we just leave it there,
it would seem that we should
just abolish the whole thing.
The thing is, I'm a tech optimist
and I really do genuinely think
there are huge advances being made
that we should be very thankful to AI for.
There are breakthroughs happening
in healthcare, for example,
in agriculture,
even assisting people
with their daily lives.
And as you say,
it's not something you can put back in the box.
This stuff is out and we can try and control it and use it beneficially.
But that requires a lot of responsible work
and it's trying to get big tech companies
to take some responsibility that is the challenge.
And the one thing we all know about big tech companies is how brilliant they are at doing
that. Elections, end of society, civilisation, death, destruction. Anyone that you know who didn't get, like, a doctorate now basically doesn't have a job, because you can go to an AI that can tell you your legal problem. Like, all the law is, is, here's a set of rules.
Great. No more solicitors. No more lawyers.
There's a whole strata of middle-class jobs just gone.
And that's the thing, right?
So no-one worried when they were coming for the blue-collar workers.
We automated factories years ago.
Nobody actually gave a damn.
Beep, beep, beep.
An unknown item in the bagging area.
All of those people's jobs gone.
It's when they come for the copywriters.
That's when people get worried.
Right.
I think one of the biggest employers...
You're all right, though, because polemicists are going to still be here.
One of the biggest jobs for non-skilled
or whatever average skilled workers is call centres.
I want the end of call centres.
But that is literally like
two million jobs or something.
Gone. Like that.
It's the scale and the breadth of what will be
replaced. It isn't AI we've got to be
afraid of. It's capitalism.
I agree. This is a whole other
show.
We're alright anyway. We're in
Elon Musk's hands. It's fine.
I mean...
We also asked the audience a question.
The question we asked is, what do you think is the scariest possibility of artificial intelligence?
What have you got, Brian?
This is from Fish. This is my new knees becoming sentient and blaming me for 30 years of rugby.
It might become PM and then tank the economy, kill the monarch and ruin the country. Oh, no, wait...
Jenny is worried about a fridge becoming self-aware and stealing her cheese. That's from Wallace.
it making fun of me when it sees I get no girls on dating sites.
Luke will be in the foyer later on if anyone would like to...
We've got a website you can go to
actually. I can build him a robot.
Build him a robot? Yeah, special robot.
To deal with...
Yes, to deal with...
Oh, Brian,
suddenly again you've failed our
Turing test.
To deal with the... That's what the...
To deal with the love.
We can't just let that go, can we?
Oh, I think it's best we do.
The robot could...
Thank you very much to our fantastic panel,
Hannah Fry, Kate Devlin and Rufus Hound.
And next week we are asking,
big or small?
Yep.
That's all we've got so far.
That's the whole subject.
I've been told that I've just got to show you various things
and you have to say big or small.
How would you define it, though?
You need some dimension for scaling the problem, don't you?
Exactly, and that's where it becomes an infinite monkey cage.
You're wittering on.
Thanks. Bye-bye.
Bye.
APPLAUSE
Turned out nice again.