SciShow Tangents - SciShow Tangents Classics - Artificial Intelligence

Episode Date: January 31, 2023

Original Airdate: February 19, 2019

It seems like everyone's using Artificial Intelligence these days, so we thought we'd dip back into our archives for a little refresher on just what the heck AI even... is. Plus, meet the character find of 2023: President Smartest Dog in the World!

Transcript
Starting point is 00:00:00 Hey everybody, it's Sam, and since you're hearing my voice at the beginning of the episode, you know what that means. We're taking a break this week and running a rerun. I've chosen to replay our episode about artificial intelligence because it kind of just seems to be everywhere these days. And in recognition of that, I got an AI bot to write an introduction to today's episode from the perspective of the smartest dog in the world, who is also the president of the United States and loves smoking cigars. Take it away, world's smartest dog. Greetings, fellow citizens. It's your favorite cigar-loving canine world leader, the world's smartest dog, here to introduce the rerun of the artificial intelligence episode of SciShow Tangents. Now, I may be the president, but let me tell you, being the smartest dog around and smoking cigars is the real dream. But in all
Starting point is 00:00:44 seriousness, I'm excited to dive into this episode and learn more about AI. Who knows? Maybe I'll pick up some tips to improve my leadership skills and even learn something new about the future of AI. So put on your thinking caps, light up a cigar, and join me as we listen to the latest advancements and discussions on AI and its impact on our world. Woof, woof. Hello and welcome to SciShow Tangents, the lightly competitive knowledge showcase starring some of the geniuses that make the YouTube series SciShow occur.
Starting point is 00:01:27 This week, joining me as always are Stefan Chin. Hello, I'm here. What's your tagline? I hate winter. Stefan, thank you for producing lots of SciShow and making it beautiful. You're welcome. I don't say that enough. And for tolerating the cold.
Starting point is 00:01:39 And for living in Montana. It's beautiful. That is a big part of working here. You just have to come here and live through it. You're from Montana, though. I know. Sam Schultz is also here. It's so much colder where I'm from, too.
Starting point is 00:01:52 Yeah. Is that your tagline? No, my tagline is I'm very sick, so I'm sorry, everybody. If I don't make any sense, that's why. We're also joined by Sari Riley, writer for various science communication things. Hi, Sari. How are you? Hello.
Starting point is 00:02:05 I'm good today. Whoa. I'm worried. Yeah. Yeah, wow. It's like a change of pace. We're both sad and you're good. Yeah.
Starting point is 00:02:14 Cool. I stole your energy. You did. Oh, no. Sari, what's your tagline? Wisdom teeth not included. And I'm Hank Green. My tagline is the creosote bush.
Starting point is 00:02:26 What is that? What is that? It's just a bush that grows in the desert. It's very toxic. You can't eat the leaves. It's a desert strategy generally. It's hard to make stuff in the desert. So anything comes by and like eats your leaves, you're like I need it. I can't make more. So things in the desert are hard to eat.
Starting point is 00:02:42 They have spines. They're tough and sometimes they're toxic. This podcast is not about the desert. This episode is not about desert stuff, though. Maybe next time, because that's a good topic.
Starting point is 00:02:51 That's a good idea. Yeah. But as a reminder to people, this is SciShow Tangents. Every week we get together, we try to amaze each other, we try to delight each other with science facts,
Starting point is 00:03:00 and we're playing for glory and we're playing for pride but we're also playing for Hank Bucks which are imaginary so I guess it's just more glory and pride. We do everything we can to stay on topic, but judging by previous conversations with this group, we will not be great at that. So if the team decides that the tangent we go on is unworthy, we will force that person who created that tangent to give up one of their Hank Bucks. So tangent with care. Now, as always, we're going to introduce this topic with a traditional science poem this week from Sari. So this poem was created
Starting point is 00:03:30 using a Wikipedia article and software. It's called Predicts How the Outputs. Jump to navigation, jump to search engines, such as the input. Data starts from a random guess, but then they may have unintended consequences. A common type of CAPTCHA generates a grade, that test, that assesses whether a computer can be. Media sites overtaking TV, images as a source of learning itself, more machine learning accidents
Starting point is 00:03:59 mean different mistakes than humans make. The simpler work, consider humans' algorithms. Does humanity want computers and humans apart? We begin the search for AI. So first, our topic of the day is artificial intelligence. Second, what was that? How did you make that? So I took the Wikipedia article for artificial intelligence, and I found an open-source software called JanusNode, which describes itself as many things, but also Photoshop for text. And so you can program in different rules of how to filter the page. I used a button that's WebPoem, which was very easy. You click it and it does a mixture of
Starting point is 00:04:39 randomizing the words. So I guess it assigns each word a value and then shuffles them around. And also some Markov chaining, which is where it uses probability of how words follow, like the probability of one word following another, and using that to generate strings that make more sense than just random words. So you're saying that your science poem, you didn't write it at all. Yeah, I did curate it a little bit. Like I cut out some of the things that made no sense at all and squished stanzas together to make it have some sort of meaning, but all the words are Wikipedia. Okay, well, I think you get a Hank Buck for that. Yeah, if for nothing else, the ingenuity. Yeah, of course, getting out of writing. Well, that's the whole thing with artificial intelligence, because ultimately, even when
Starting point is 00:05:23 something is created by a computer, a person created the computer program that created it. But also, that's true of me. I was also created by people. I liked the part in the poem where it said, computers make different mistakes than humans do. I was like, yeah, that's taken me somewhere. I can't tell you exactly where. So our topic is artificial intelligence. Sari, what is that? It's very broad, but very narrow. It's weird. People have tried to define it, but it gets philosophical.
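The Markov chaining Sari describes can be sketched in a few lines of Python: record which words are observed to follow each word, then generate new text by repeatedly sampling a successor. This is a generic illustration of the technique, not JanusNode's actual code, and the tiny corpus here is just a stand-in for the Wikipedia article.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain: from each word, randomly pick one of its observed successors."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:  # dead end: this word was never seen with a successor
            break
        word = rng.choice(successors)
        out.append(word)
    return " ".join(out)

corpus = ("artificial intelligence is intelligence demonstrated by machines "
          "as opposed to intelligence of humans and animals")
chain = build_chain(corpus)
print(generate(chain, "intelligence", 8, seed=1))
```

Because each step only looks at the previous word, the output tends to be locally plausible but globally nonsensical, which is exactly the texture of the poem above.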
Starting point is 00:05:51 In general, it's making a machine think, analyze, and make decisions. Approaching some sort of what I think it's called natural intelligence. It's what they consider humans and animals and organic stuff to have. One table that I found. I read a lot of textbooks for this because I'm not a computer scientist. How many textbooks did you read for a single episode of Tangents? Okay, I said plural textbooks. What I meant is pages of one textbook. And one of them defined it as a grid of like human-based versus rationality and then reasoning-based versus behavior-based.
Starting point is 00:06:28 So are we trying to design systems that think like humans, systems that act like humans, or systems that think rationally, or systems that act rationally? And then there's like the entire text is all a discussion of what is rationality, what is intelligence, what is acting like a human mean? And generally, we design artificial intelligence to solve problems in a way that either humans would, but faster and better than we would, or problems that humans can't solve on their own because our brains are limited in some way. In my probably less significant delving into this topic, I did find that there is sort of like a bunch of different categories of artificial intelligence, you know, like the kind of AI that we're using right now with artificial neural nets that can like tell whether there's a dog in a picture versus like the kind of artificial intelligence that's like 2 plus 2 equals 4 is a kind of artificial intelligence.
Starting point is 00:07:17 Because math is, you know, feels like intelligence to us. But we've had mechanical calculators for hundreds of years, even before computers. And then there's like artificial general intelligence. The singularity. Or yeah, when it's like this, you can just give this thing a task and it'll figure out how to do it on its own. Or strong AI when it's like, okay, now we're talking about things that are smart similarly
Starting point is 00:07:41 to or beyond that of people. It's called strong AI? Yeah. Which is the stuff that gives you the goosebumps. And you start thinking, well, maybe we're going to make ourselves obsolete. Like my phone from 10 years ago isn't that good. Is that how I'm going to feel about human beings? What is the difference between AI and machine learning?
Starting point is 00:08:00 Is that the same thing? Machine learning is a kind of AI. Okay. I stumbled across a thing somewhere that was saying that AI is often used for things, like when we're imagining super smart robots that mimic human intelligence or whatever, but then as soon as we figure out how to do a specific thing, that sort of falls, like we sort of remove the intelligence part of it, and we're just like, oh, that's object recognition or something. Or like, these are neural networks that can program this thing. My interpretation of it is, we now know how the sausage is made, and so we don't label it as intelligence anymore. Right. That is an interesting place to end up at, where you're like, oh, wait a second, are we going to
Starting point is 00:08:41 keep redefining this as not intelligent because we know how it works? Even beyond the point where we have created slave bots? No. See, I'm thinking once we figure out how our brains work, then we're like, oh, we're not intelligent either. I don't think humans will ever say we're not intelligent. No, I know.
Starting point is 00:09:01 Everyone has too much ego for that. The other thing is I don't think we'll ever really figure out how our brains work because I think we'll be able to create intelligent machines
Starting point is 00:09:09 more easily than we will be able to understand our own brains. Will they be able to tell us how our brains work? Probably not because like
Starting point is 00:09:16 getting in there and doing that measurement would be destructive. Though now that I've said that I'm like if they want to find out they can just be destructive. So now it's time to talk about this more in the framework of Trigger Fail, which is our segment when one of our panelists, this week it's me, has prepared three science facts for your
Starting point is 00:09:36 So now it's time to talk about this more in the framework of Truth or Fail, which is our segment when one of our panelists, this week it's me, has prepared three science facts for your education and enjoyment, but only one of them is real. The other panelists have to figure out, either by deduction or wild guess, which is the true fact. If they do, they get a Hank Buck. If they are tricked, I get the Hank Buck. All right, fact number one. Scientists have actually given IQ tests to artificial intelligence programs, and they found that Google's AI had a 47-point IQ, around the IQ of a six-year-old human. Google's AI a couple years before that had a 23-point IQ. So in the last two years, the IQ of that AI has more than doubled. If that happened again over the next two years, Google would be as smart as the average adult. Or fact number two, some AI scientists have theorized that a true general artificial intelligence, where a computer can handle tasks the way a human would, would require that the program be raised like a human child. And this was attempted by a researcher, Brian Zweiler, with a computer that operated inside
Starting point is 00:10:38 of a simulation where Brian interacted with the avatar of the AI quote-unquote child and allowed it to live and grow inside that simulation until Brian accidentally left the simulation running while he was on vacation. During those two weeks, the AI walked around the same circuit in the house tens of thousands of times and the behavior overwrote all of its previous learning and the experiment had to be restarted. Or number three. You okay, Sam? That one sounds like a ghost story. Mark Zuckerberg, in his spare time,
Starting point is 00:11:12 created an artificial intelligence system for his home and hired Morgan Freeman to be the voice of that system. That is not the fact because that is true. I'll just come out and tell you that that is not not true. That is 100% true. Here's the part that might not be true. I'll just come out and tell you that that is not not true. That is 100% true.
Starting point is 00:11:25 Here's the part that might not be true. He called Morgan Freeman and was like, can you please record all this stuff? And among the things that Mark Zuckerberg asked Morgan Freeman to record, your coffee is almost ready, Mark. Did you want me to turn out the lights in the garage, Mark? And I think you look fine today, Mark. Oh, I hate that. So those are my three facts. We've got scientists have given IQ tests to artificial intelligence,
Starting point is 00:11:52 and it's doubled for Google's AI in the last two years. Number two, Brian Zweiler accidentally left a simulation running, and it overrode its program by walking around the house for two weeks straight, tens of thousands of times. Or Mark Zuckerberg asked Morgan Freeman to record the line, I think you look fine today, Mark. They got increasingly weird. Like you started out being like, okay, Google's development is fine. Then demon Tamagotchi computer.
Starting point is 00:12:21 And then Mark Zuckerberg. It's not a demon Tamagotchi. It sounds like it is. How is it not? It just walked. Like it only had the one thing to do. Walk?
Starting point is 00:12:30 Yeah. Well, but when his friend was there, it had other stuff. It had other stuff to do. When a Tamagotchi is by itself, all it can
Starting point is 00:12:36 do is poop and then it dies. Just like this. Same thing. Yeah, I guess I qualitatively added the demon part because I assumed that was going
Starting point is 00:12:44 to seek vengeance. Yeah, just turn it qualitatively added the demon part because I assumed that was going to seek vengeance. Yeah, just turn it off, unplug it, delete program before it knows we did that. All right, quiz me. What do you think? I'm dubious about the first one because IQ tests, as far as I know, involve a lot of different aspects of things. So it's like verbal tests, spatial awareness. As they worked on natural language processing, then it would get better at like one chunk of the IQ test, which is how it could get better. But I think there are too many different things that an AI could just like
Starting point is 00:13:12 blanket improve. Do you not think that it could match a human's IQ on an IQ test? Like it could learn to take the test? I think it could learn to do problems. It could learn a specific problem and then like plug and chug versions of that problem. But if you give it a logic puzzle, then that's an entirely separate set of programming and conditions that seems extremely hard to do. It could Google the answer, you think? I guess if it had a database of all the answers. That's how Watson worked with Jeopardy, I think. That one also feels like it has too many holes
Starting point is 00:13:46 in it to me, I guess. Because of the reasons you just said. And more smarter ones that I won't say out loud. Sam had a bunch of smart ideas he's not going to tell us. Yeah, they're secrets.
Starting point is 00:13:55 I cut them all out of the episode. They took too much time. I really want the middle one to be true. Yeah, 100%. I don't want to justify it at all. I just want that one to be true.
Starting point is 00:14:04 Where Brian Zweiler's AI just walked around a house tens of thousands of times. I bet Brian felt bad when he got back. Right? I just like the idea that the only way to make a smart computer is to raise it like a human. Yeah. That is legitimately something that I will say out loud is a thing. One of the cover stories
Starting point is 00:14:20 in Scientific American recently was about how, to make an artificial human, you would have to raise a computer like a human. Could you not do it Blade Runner style? Or you just, like, give them all the memories? Well, what you would do
Starting point is 00:14:31 is you would raise the first one like a human and then you could just port the program to other bots. Seems like a bad idea. Yeah, so they'd just be like thousands and thousands
Starting point is 00:14:42 of one type of AI, and you would be like the dad who raised the thousand AIs. Ooh, sorry. Don't put that one in the episode. I want to write that story. Nobody thinks Mark Zuckerberg asked Morgan Freeman to? I mean, I just
Starting point is 00:14:57 don't care about that one. It doesn't tickle me at all. Demon Tamagotchi, that really hits a spot. It seems too perfect, though. I don't know now. I'm going with number two. You're going with Demon Tamagotchi?
Starting point is 00:15:10 Mm-hmm. I think I want to go with number two as well, because I want it to be true. Okay. Me too. Oh, everybody's going for number two? We can't all pick the same thing. You're all wrong! Yikes.
Starting point is 00:15:20 I knew that was maybe going to happen, but, you know. Why did you guys pick the same one? I made that one up completely. You did? No. You already wrote the third book. Yeah, it's true that people have theorized that you'd have to raise an AI like a human child. But Brian Zweiler was made up and nobody created a computer.
Starting point is 00:15:38 There's no Brian Zweiler? There's no Brian Zweiler. Sounds like a real name. It sounds like a real name, doesn't it? I even looked it up to make sure Zweiler was a real last name. It is. And Mark Zuckerberg did not ask Morgan Freeman to say
Starting point is 00:15:50 you look fine today, Mark. But he did ask Morgan Freeman to say a bunch of other stuff. I just made up some things that Morgan Freeman would say. And it is true that this group of researchers gave an IQ test. And I don't know what IQ test. And I looked, and the article
Starting point is 00:16:06 didn't tell me exactly what IQ test it was. And Google's AI performed best on the test of all the AIs it tested, like significantly better. Like Sari, sorry, was like in the 20s, and Google was 47. Siri. Siri. You meant Siri. God damn it.
Starting point is 00:16:23 Siri. We made eye contact. Siri. Siri. You meant Siri. God damn it. Siri. We made eye contact. It's hard. It's hard. So Siri had her IQ, I guess she's a she, in the 20s. And then the Microsoft one, Cortana, was in the 30s. One from a Chinese company was in the 30s. And then Google was in the 40s. Do we interact with this
Starting point is 00:16:46 AI in our day-to-day lives? I think that they're testing just the assistants that we have on our phones. Those things are that smart? Apparently. I don't know exactly how they asked them the questions and whether like Watson it was like going to like find an
Starting point is 00:17:02 answer and just doing natural language processing and being like, oh, I'll check the internet for that. That seems like cheating. It does seem like cheating. What it comes down to is like understanding, like they aren't understanding the question we're asking them. They are using systems to provide us the answer that is most likely to be true. Isn't that what we're all doing? That's the question. That's the point. Yeah. At what point, like, what is understanding? Yeah. So we've reached the point of the podcast where we're all feeling a tinge of existential crisis, so this is probably a good time to go to the advertisements. And we're back.
Starting point is 00:17:55 Hank Buck totals. One for Sari for the science poem. Three for me. Garbage. I don't think we've ever been dumb enough to all pick the same answer before. It was a really excellent lie, though. So I think you deserve it.
Starting point is 00:18:09 Those are good ones. And now it's time for our Fact Off, where Stefan and Sam have each brought facts to present to the others in an attempt to blow our minds. And we, me and Sari, each have a Hank Buck to award the fact that we like the most. So who goes first? It's the person who most recently had a banana. Oh my god. Did you watch me at my freaking desk just now?
Starting point is 00:18:33 I ate a banana right before I walked in here. I did not know that. That's so weird. I was trying to make myself feel better somehow. Give yourself some potassium. Well, one time I was playing Wii Fit and it said if you're ever
Starting point is 00:18:48 feeling tired, eat a banana, because it has as much energy as a cup of coffee, or something like that. And ever since then. What does that mean?
Starting point is 00:18:53 I don't know. But ever since then, whenever I feel bad, I eat a banana. It never works, though. We lied to you. Thanks for nothing, Wii Fit.
Starting point is 00:19:03 Yeah, some programmer at Nintendo was just like, man, I love bananas. I feel better every time I have a banana.
Starting point is 00:19:10 All right, well, I guess Sam's going first, unless Stefan, like, just, like, is eating a banana right now. Nope. When was your last banana?
Starting point is 00:19:16 I don't even know I went through a phase where I ate so many bananas and I haven't had a banana in months what is so many so many just ate so many bananas and I haven't had a banana in months. What is so many?
Starting point is 00:19:29 It's just like so many bananas. What is so many bananas? How many? Like multiple a day? Oh, yeah. Like a bunch a day. Like an actual bunch? Yeah, yeah, yeah.
Starting point is 00:19:38 Like you got to go to Costco and you can stock up because they're a little bit. You're going to eat like 15 bananas this week? No, I haven't been doing that. No, but there was a point when you did. There was a point where you were eating like. Yeah, sure. Why not?
Starting point is 00:19:48 What's the problem? That's a lot of bananas. That's a lot of bananas. How did everything turn out in the end? I feel great. Sam. I still have to go first, even though he's definitely eating more Lifetime Bananas.
Starting point is 00:20:02 That would be a good stat to know. Lifetime Bananas? Why doesn't good stat to know. Lifetime bananas? Why doesn't Steam tell me the important stats? Ready? Yep. The Serengeti National Park in Tanzania is about the size of New Hampshire, but overseen by only 150 rangers. So it's basically impossible for them to watch the entire park at once,
Starting point is 00:20:20 but they also have a huge poaching problem, where people are killing elephants for their ivory, and they can't really do anything about it for the most part, because people sneak into the park and they can't stop them. They have set up motion-sensing cameras that have been sort of helpful, but there's so much stuff going on on the savannah. Like, it could be a hyena sneaking around or it could be a person sneaking around, and the only way to know is to look at all this footage and see, oh, there's a person. But with a little bit of help from researchers and a generous grant from the Leonardo DiCaprio Foundation, there has been a new series of cameras developed called TrailGuard AI, which are being deployed in the parks right now.
Starting point is 00:21:02 And what they do is they can tell humans and cars from other kinds of animals. So instead of sending all the footage that they take to the guards to check out, they just send the footage of humans and vehicles going around. So then the park rangers get that right away and they can go do something about it. They've had an older version in the parks for a while that don't narrow it down quite as well. And they've caught like 20 poaching rings so far. So this new thing hopefully will be even better than that. Similar technology is being used to catch tomb raiders in China. And it's being used to, they send like scooters down with cameras on them to coral reefs. And the cameras on the scooters can tell what kind of animals and plants are in the coral
Starting point is 00:21:47 reef. So then they can tell the scientists if they're too far gone to bother helping with or if there's still a chance for them to like put money into it and try to save the coral reef. I was thinking about during that there's a spy plane that they have used for both like war and terrorism stuff but also for just crime in some places where it's way up in the sky and has extremely detailed photographs and it's just always filming and you can track not just where the people are when the crime happens but like track them back to where they came from and track them to where they went and it's very creepy but it seems like a sort of perfect application for that.
Starting point is 00:22:28 Yeah, they only have a couple hundred of these cameras, I think. So it still is like not necessarily covering the whole thing. But they just put them where they guess people will be able to get in. Sure. Are they disguised at all or are they just like? They're only the size of a pencil. So they just like shove them into trees and stuff. And they have like a Wi-Fi connection, like talk to the
Starting point is 00:22:47 cellular stuff? Yeah, they have some kind of chip in them that does the first pass of being able to tell what to send, and then I think they send it to a bigger computer that does like a second pass, and then it sends it to the people to take a look at. Awesome. Are they 360 cameras?
Starting point is 00:23:04 No. They're just like, they look like pencils and they're the size of pencils. All right. Stefan, what do you got for us? There have been some studies so far that have shown links between eye movement and personality, but all these studies are pretty much lab-based. They give them personality tests and then they try to predict things like, how many times is this person going to fixate on
Starting point is 00:23:25 this thing based on this personality trait? And so these researchers wanted to explore real-world eye movement and see if they could predict the personality traits from the movements using an artificial intelligence neural network. So they had 50 students and faculty of this university and tracked their eye movements while they walked around. They had to run an errand. So they had to walk around for 10 minutes and buy something from a store on campus. With like a thing on their heads? Yeah, yeah.
Starting point is 00:23:52 So I was like, maybe that's affecting their eye movements too. I don't know. I think if I had a weird thing on my head, my personality would change. They had head-mounted eye trackers and phones strapped to their chest so they could film what was in front of them. Hello, I am a normal human. I am here to buy eggs. I would like 15 bunches of bananas. Then when the people returned to the researchers, they had them take a bunch of different personality tests looking at the big five personality traits.
Starting point is 00:24:22 And then they trained a neural network and it did a pretty good job of predicting four out of five of the personality traits. It was best at predicting extroversion, which feels like that sort of makes sense to me. They didn't really go into detail, but I'm like, you know. Make a lot of eye contact. Yeah, you're like looking around at people. Introvert, you're like down at the ground. Please don't notice me in my goggles
Starting point is 00:24:45 and then it also did a pretty good job at neuroticism, agreeableness, and conscientiousness, but it was not good at predicting openness. And I thought it was interesting that they mentioned specifically that the pupil diameter was important for predicting neuroticism, but not useful for anything else. I need more research. This is a small sample size. Yeah, there needs to be more research. They don't know why these things are connected yet.
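The pipeline Stefan describes, boiling a gaze recording down to summary features and then predicting a trait from them, can be sketched roughly like this. Everything here is invented for illustration: the feature set, the numbers, and the nearest-centroid classifier standing in for the study's actual trained neural network. It only shows the shape of the approach.

```python
# Toy sketch: summarize a gaze recording into features, then classify a trait.
# All feature names, centroids, and values below are made up for illustration.

def gaze_features(fixations):
    """Summarize a list of (duration_ms, pupil_diameter_mm) fixations."""
    n = len(fixations)
    mean_duration = sum(d for d, _ in fixations) / n
    mean_pupil = sum(p for _, p in fixations) / n
    return (n, mean_duration, mean_pupil)

def nearest_centroid(sample, centroids):
    """Pick the label whose centroid is closest in feature space."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(sample, centroids[label]))

# Invented centroids standing in for a trained model:
# (fixation count, mean fixation duration in ms, mean pupil diameter in mm).
centroids = {
    "high extroversion": (40, 180.0, 3.5),
    "low extroversion": (25, 260.0, 3.1),
}

recording = [(200, 3.2), (240, 3.0), (300, 3.1)]  # three fixations
print(nearest_centroid(gaze_features(recording), centroids))
```

The real study fed many more gaze statistics into a neural network and scored each Big Five trait separately, but the core idea is the same: a recording becomes a feature vector, and the model maps feature vectors to trait levels.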
Starting point is 00:25:14 And they say it's not accurate enough yet to do practical things with it, but it is outperforming other baselines so far. And it corroborates the findings of all the previous studies. Well, get ready for robots that can know more about you than you do. That's what I thought was interesting was like what I thought when I read the headline was like, oh, if I see someone make an S shape with their eye, that means they're lying or something like that. But it's more they want to be able to design
Starting point is 00:25:39 robots that can read your expression really well and possibly mimic that itself, like if the robot has eyes, so that you can have a more meaningful computer-human interaction. Great, that's what I want. So it can advertise to you better, probably. So it can advertise to me better. So it can more effectively get me
Starting point is 00:25:56 to buy the correct bananas. You look like a 15-year-old banana guy. Yeah. I can tell by your eye movements. You've had a lot of bananas in your life, haven't you? You keep looking at bananas. Like what other application would there even be for it? Just so that an AI would be nicer to interact with.
Starting point is 00:26:16 If like I walk up to somebody and I can tell that they're shy, I will treat them differently than if I walked up to somebody and they are obviously outgoing. So like ideally, a computer or robot that interacts with people would also be able to do that. Right. If it's like doing caregiving activities or something. Okay. There's nothing that I think about more than where my eyes are. So it probably works. They're in your head. Now I'm thinking about it a lot.
Starting point is 00:26:42 But when I look at anything, I think, am I looking at this thing too long? Yes. Especially a person. Yeah, I don't know where to look on a person. Where do I look? At your nose? At your eyes? Are you freaking me out now?
Starting point is 00:26:54 Between your eyes, maybe. I think you are scoring high on neuroticism. Probably. Not surprising. I've consulted the algorithm. All right. I like both of those facts a lot. I do.
Starting point is 00:27:07 I'm going to go with Sam. It doesn't even matter who you give your money to. It's true. Ultimately, nobody's going to win this except for moi. I'm going to give mine to Stefan because I've never heard of that research before. And that seems really weird and scary. So I want to know more about it before it starts happening to us. Yeah. And now it's time for Ask the Science
Starting point is 00:27:28 Couch, where we ask listener questions to our couch of finely honed scientific minds. Adit Bhatia asks, which AI has done the best on the Turing test, and can we access it? I do not know the answer to this question, even a little bit. Can you talk about what the Turing test is, though? I can talk about
Starting point is 00:27:44 that. Basically, Alan Turing said, and I do not agree with him, and I don't think many people do anymore, that if a computer can convince you that it is a person, then it will be a person. And so we are sort of heading for the future in which you can have a conversation with a computer and not have any idea that it's not a person. And that would be passing the Turing test. And there, I think, have been situations where AIs have, quote, passed the Turing test. But we look at that now and are like, eh, you know, it's just doing a really good job of saying something that sounds like something someone might say in response to that particular question. And oftentimes very weird things. Like one of the weird examples, I think it was one of Google's natural language AIs. Somebody asked, what is immorality? And it responded, the fact that you have children.
Starting point is 00:28:38 It said the purpose of life was immortality. That sounds like a computer. Well, also it kind of sounds like people. Yeah, some people too. So the AI that's done the best on the Turing test, there are two that I found. The most controversial one was the chatbot named Eugene Goostman, who supposedly was designed to be a fake 13-year-old boy from Odessa, Ukraine, who doesn't speak English that well.
Starting point is 00:29:04 Right. Okay. So I see that you've made your way around the Turing test by being like, oh, well the kid doesn't understand what I'm saying very well. Yeah, and so it tricked 10 out of 30 of a panel of judges at the Royal Society in London into believing that he was a real boy.
Starting point is 00:29:19 And people were saying that that counts as the Turing test. As far as I can tell, I could not find Eugene Goostman online. I was looking for him and I can't find him. I don't think you can chat with this boy anymore. I want to chat with my fake Ukrainian boy. Did they just turn him off and he's
Starting point is 00:29:35 gone forever? He's just walking the same circuit around his room. Stored in a computer somewhere. Apparently his dad is a gynecologist and he has a pet gerbil or something. There are transcripts of his conversations with other people online, which I read a couple of.
Starting point is 00:29:51 It's not very good, but also I didn't do any of the work to program him, so I can be judgmental. The other one that seemed more legitimate to me: there was a festival in 2011 where a modified version of Cleverbot, which is, I don't know who designed it, but you can play it, tricked 59.3 percent of 1,334 votes, which included 30 judges and a general
Starting point is 00:30:16 audience. And so that passed the 50 percent threshold, which is generally described as part of the test. 50 percent is the threshold? Yeah. Because that seems low. The original premise was there are two rooms, and one of them has a machine and one of them has a human, and you talk to both and you have to decide which is which. Like, you have a one-in-two chance of deciding which is the machine. It just has to trick you. There's some question as to, like, the validity of these competitions, like who's playing them, in the case of over a thousand people playing them. Have people talked with a chatbot before? What conversations are they used to having? So the people judging whether it is
Starting point is 00:30:56 human or not are variable in the Turing test. There is a strong argument for the Turing test in some cases, because it is actually a very difficult thing to accomplish. You need natural language processing for it to understand what you're saying to it, you need to store information during the conversation, you have to have some sort of reasoning algorithm to generate the responses, and you have to have some degree of machine learning to, like, adapt and constantly learn from the conversation, store that, and create new responses to make everything make sense. When Turing was throwing out these ideas, AI was still a fairly new concept and fairly tied to philosophy.
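[Editor's aside: the pass rates discussed above are easy to check with a little arithmetic. Here is a minimal illustrative sketch, using only the numbers from the episode; the function and variable names are ours, and the 50 percent bar reflects Turing's two-room setup, where a judge guessing at random picks "human" half the time.]

```python
def pass_rate(fooled_judges, total_judges):
    """Fraction of judges who mistook the machine for a human."""
    return fooled_judges / total_judges

# A guessing judge is right one time in two, so 50% is the bar.
PASS_THRESHOLD = 0.5

# Eugene Goostman at the Royal Society: fooled 10 of 30 judges.
eugene = pass_rate(10, 30)
print(f"Eugene: {eugene:.1%}")        # Eugene: 33.3%
print(eugene > PASS_THRESHOLD)        # False

# Modified Cleverbot at the 2011 festival: 59.3% of 1,334 votes said "human".
cleverbot = pass_rate(round(0.593 * 1334), 1334)
print(f"Cleverbot: {cleverbot:.1%}")  # Cleverbot: 59.3%
print(cleverbot > PASS_THRESHOLD)     # True
```

By this bar, the Royal Society showing (about 33 percent) falls short of a strict pass, while the Cleverbot vote clears it, which is exactly the tension being described here.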
Starting point is 00:31:40 This was a way to try and attempt to answer, can machines think? What is the best way to do that? Language, I guess, maybe? The criticisms are really interesting nowadays. I think they fall into like three main-ish categories from what I can tell. One is that the people who are designing these things for Turing test competitions, they're all chatbots. The thing that they are designing for this is an AI that is extremely good at talking to people, which isn't what most of the researchers who are doing AI are interested in. Like people are doing so many different things with like image processing and self-driving cars and natural language processing
Starting point is 00:32:15 in different, more useful ways or more broadly applicable ways. I guess I don't want to put value on it, but, like, Siri is more useful than a 13-year-old Ukrainian child on the internet. Strong statement. The other thing that, like, keeps nibbling at my head is: are we asking the question, are these things thinking and are they alive? I can't get away from feeling like, I mean, if a bacteria is alive, then I think some of these computer programs are alive. And I have a hard time with that. But, like, first, bacteria are alive, but also, like, you know, I don't mind mass murdering them in my mouth every morning. But, like, where are we at, and at what point do I have to feel bad about turning off a computer program?
Starting point is 00:33:00 And, like, I legitimately think that's going to be a thing in my lifetime. Will it be alive when it won't let you turn it off? Is that the alive threshold? No, I don't like it. Like I could turn you off, man. Oh, yeah. Okay. That felt like pretty confrontational, me looking straight into your eyes while I said that.
Starting point is 00:33:21 But it's true. Yeah, I guess you're right. I wouldn't stand a chance. Not in your current state. You've only had one banana. There's also the question of intelligence, too, right? So we look at other intelligent animals, like dolphins, for instance.
Starting point is 00:33:38 They wouldn't pass a Turing test because they don't speak the same language as humans, but we still feel like, by other measures of animal intelligence, they got their shit together. Yeah, I don't want to go turn one off. Yeah, you have some moral obligation not to. Yeah.
Starting point is 00:33:53 Keep as many dolphins turned on as possible. I started down the path, I just kept going. Yeah. There's that dolphin that wanted to have sex with its trainer. They apparently are pretty into people, from what I've heard. Yes, also from what I've read. I'm glad that that's how that sentence ended. I'm taking away a Hank Buck for that one.
Starting point is 00:34:16 Was I on a tangent? I guess I kind of was. Dolphin sex? I wouldn't let you do it if I wasn't going to steal one. Why try to make something that thinks like a human being? Or is it possible to make something that doesn't think like a human being since we think like human beings? I think that's like a very current problem that AI researchers are trying to tackle. So there's this idea of what is intelligence and what is imitating intelligence versus what is actually
Starting point is 00:34:46 intelligence. And apparently, Noam Chomsky, the linguist, has pointed out that when we build machines to move in water, we don't necessarily make them swim like a human. We still think that a submarine is a very effective machine, because it's designed to do a task that we want it to, and we don't design it in the human image. And so there are probably a lot of branches of AI that we could explore that aren't just mimicking human language, for example. Like, there could be a way to process a large number of images that's completely different from how our eyes receive information and our brains process that retinal image and do things. As far as I know, those are, like, directions that AI research is going in. Like,
Starting point is 00:35:32 how can humans overlap with computers and how can our brains work similarly? But also how is artificial intelligence completely different and what directions can it go? Well, what I got out of that is that I'm as deep a thinker as Noam Chomsky is. Professional thinker. If you want to ask the science couch, you could tweet us your question using the hashtag AskSciShow. Thank you to PixieBlood32 and HLToler and everybody else who tweeted us your questions.
Starting point is 00:35:53 Now it's time for our final scores. Sari, you've got one point. I've got two because I went on a dolphin sex tangent. Sam, you've got one point. Stefan, you've got one point. I remain the winner. If you like this show and you want to help us out, it's super easy to do that. First, you can leave us a review wherever you listen, like KT Simon did.
Starting point is 00:36:12 Thank you. It's super helpful. It helps us know what you like about the show and also helps other people know what you like about the show. Second, tweet out your favorite moment from this episode so that we know what that was because I'd like to know. And finally, if you want to show your love for SciShow Tangents, you can just tell people about us. Thank you for joining us. I have been Hank Green.
Starting point is 00:36:32 I've been Sari Reilly. I've been Stefan Chin. And I've been Sam Schultz. SciShow Tangents is a co-production of Complexly and WNYC Studios. It's produced by all of us and Caitlin Hoffmeister. Our art is by Hiroko Matsushima, and our sound design is by Joseph Tuna-Medish. Our social media organizer is Victoria Bongiorno, and we couldn't have made any of this without our patrons on Patreon. Thank you. And remember, the mind is not a vessel to be filled, but a fire to be lighted. But one more thing.
Starting point is 00:37:12 Allow me to read you the title of this 2010 paper: Embedded Design of Intelligent Artificial Anus. So sometimes you don't have a rectum anymore because of disease. It's not fun, because you can't control bowel movements as easily, so you have to have something to handle that for you. But if you have an artificially intelligent anus, your anus can have a microcontroller and use pressure sensors to detect whether there is a need for excretion.
