Ologies with Alie Ward - Artificial Intelligence Ethicology (WILL A.I. CRASH OUT?) with Abeba Birhane

Episode Date: May 8, 2025

Who’s babysitting AI? Will it steal your job? What happens when you’re rude to a chatbot? Cognitive scientist, Trinity College professor and Artificial Intelligence Ethicologist Dr. Abeba Birhane lets me ask her not-smart questions about legislation around AI, auditing datasets, environmental impacts, booby traps, doorbell narcs, commonly used fallacies, how the “godfathers” of AI feel about their creation, robots doing your homework, and whether or not AI is actually the root of all evil. Also: bacon ice cream and why Siri is a girl.

Visit Dr. Birhane’s website and follow her on Bluesky and Google Scholar
A donation went to The Municipality of Gaza and UNRWA
More episode sources and links
Smologies (short, classroom-safe) episodes
Other episodes you may enjoy: Neurotechnology (AI + BRAIN TECH), Architectural Technology (COMPUTER PROGRAMMING), FIELD TRIP: A Hollywood Visit to the Writers Guild Strike Line, Futurology (THE FUTURE), Gizmology (ROBOTS), Genocidology (CRIMES OF ATROCITY)
Sponsors of Ologies
Transcripts and bleeped episodes
Become a patron of Ologies for as little as a buck a month
OlogiesMerch.com has hats, shirts, hoodies, totes!
Follow Ologies on Instagram and Bluesky
Follow Alie Ward on Instagram and TikTok
Editing by Mercedes Maitland of Maitland Audio Productions and Jake Chaffee
Managing Director: Susan Hale
Scheduling Producer: Noel Dilworth
Transcripts by Aveline Malek
Website by Kelly R. Dwyer
Theme song by Nick Thorburn

Transcript
Starting point is 00:00:00 Oh hey, it's the tag that you wish you'd cut out of your shirt, Alie Ward. And for every one of us that has seen AI become more and more present in our lives and wondered, is anyone driving this bus? I have here for you a chat with an expert who tells us exactly who is driving the bus and where it could be headed. Is AI evil? Does AI even care about us? Is it going to kill us?
Starting point is 00:00:23 Should we feel bad for it? Don't ask me, I'm not the ologist. We're gonna get to it. Now this expert is a senior fellow in trustworthy AI and an assistant professor at the School of Computer Science and Statistics at Trinity College in Dublin, Ireland. And they're a cognitive scientist. They research ethics in artificial intelligence
Starting point is 00:00:42 and they've published papers with such titles as The Forgotten Margins of AI Ethics, Toward Decolonizing Computational Sciences, The Unseen Black Faces of AI Algorithms, and The Values Encoded in Machine Learning Research. So they're on it and I got to sit down and visit and chat in person when I was in Ireland just last month. Also, just a pleasing aesthetic side note, they were born in Ethiopia but live in Ireland. And this expert has the most melodic cadence, just like Bjork. I was mesmerized. We're going to get to all that in a sec.
Starting point is 00:01:18 But first, if you ever need shorter kid-friendly episodes with no adult language, we have Smologies. They're episodes in their own feed wherever you get podcasts. Also linked in the show notes. That's Smologies. Also thank you to patrons for supporting Ologies and sending in questions ahead of time. You can join for as little as a dollar a month at patreon.com. Thanks also to everyone who's ever left a review for this show. They helped so much. And I read all of them weirdly, including this recent one by Remily who wrote, this podcast will expand your horizons and help you indulge in your hyper fixations, both known and yet to be discovered. Remily also
Starting point is 00:01:56 says sorry it has taken me over five years to finally write a review. Remily, anytime is a good time. Thanks for that. Okay, let's get right into artificial intelligence ethicology. It's the ethics of machine cognition. Is it cognition? We'll talk about it. What does chat GPT stand for? Why is Siri a lady? Can you ask a robot for a cheeseburger yet?
Starting point is 00:02:17 What happens when you're rude to a chatbot? Also, how do artists prevent getting ripped off? How much energy does AI take up? Will we all lose our jobs? Booby traps? Doorbell narcs? Commonly used fallacies? How the creators of AI feel about AI? What is hype?
Starting point is 00:02:34 And what is horror? What are the benefits of AI? What happens if you assign a chatbot your homework? And whether or not AI is the root of all evil or a pocket pal, with embodied cognitive scientist, professor, scholar, and artificial intelligence ethicologist Dr. Abeba Birhane. I'm asking questions that would take hours to unroll and they expect you to answer it in like one minute, two minute marks. They're rushing you to get off.
Starting point is 00:03:22 So no worries. I am Abeba Birhane. She/hers. Great. And AI. You have been an expert in this field for a while, but I haven't known about AI for that long. How long have you been studying it?
Starting point is 00:03:37 Technically speaking, I am a cognitive scientist. So I finished my PhD about three years ago in cognitive science. Um, so halfway through my PhD, then around the end of my second year, I left the cognitive science department and joined a lab where people do a lot of testing, evaluating and testing of, you know, chatbots and various AI models. And what is exactly cognitive science? Cognitive science is very broad. So traditional cognitive science tends to be about understanding cognition, understanding human behavior, understanding human interaction, and so on.
Starting point is 00:04:20 And cognitive science often is not taught at undergraduate level. It's either at a master's level or a PhD level, because cognitive science is really interdisciplinary. And Dr. Birhane says that cognitive science is actually a mishmash of disciplines, or what's sometimes called the cognitive hexagon, with sides representing philosophy, psychology, linguistics, neuroscience, artificial intelligence, and
Starting point is 00:04:46 anthropology, according to some institutions. And anthropology can also mean social sciences. Different institutions will phrase it their own way, but broadly speaking, cognitive science is a lot of stuff. So the idea is you will take, you know, important or helpful aspects from these various disciplines and cognitive science that allows you to synthesize, to combine these various theories, even computational models to understand human cognition. That's the idea. So from philosophy, for example, you will learn how to question assumptions. You will go down into the various questions around what's cognition, what's intelligence, what's human emotion.
Starting point is 00:05:31 So philosophy really lends you the analytic tools. Same from neuroscience. The idea is you get to learn how the brain works and use it in a way to synthesize from all these different disciplines. So that's traditional cognitive science. You can call cognitive science cog-sci if you want to. And she says that she's in a really niche specialty within cog-sci, and it's called
Starting point is 00:05:55 embodied cognitive science, which isn't just about understanding the human brain. Whereas embodied cognitive science is moving away from this idea of treating cognition in isolation, your cognition doesn't end at your brain and your sense of self doesn't end at the skin but rather it's extended into the tools you use. Anything you do, you do it as an embodied self, as an embodied person. So your body, your social circle, your history, your culture, even your gender and sexuality, all these are important factors that play into, you know, your understanding, your cognition, your intelligence, your emotion, and so on. And when you're studying cognitive science, how do you not use your brain to think about your brain all the time?
Starting point is 00:06:47 How do you get out of your brain? You do it. Yeah. Yeah. Yeah. I mean, you have to, you have to use your brain. So you are familiar, you know, with Descartes' famous quote, cogito ergo sum. So I think therefore I am. The idea is you can know who you are, you can know you are a thinking being, you can know that you are someone. So that is like using your brain to understand your brain, so to speak. Whereas again, the emphasis with embodied cognitive science is that, you know, all that is really very individualistic, even to confirm our existence. It's through conversations with others. So that's why embodied cognitive science emphasizes others in communities
Starting point is 00:07:42 in your body as really important factors in understanding cognition. So embodied cognitive science kind of goes downstairs a bit and it considers how a person's body experiences the world and how the environment shapes thinking and perception. And cognitive scientists have this thought experiment that tickled me and it envisions a little person in your brain interpreting inputs, but then who is in the brain inside that little person's brain? Does it have its own brain inside the little person's? It's like Russian nesting dolls and there are just infinity little humans inside humans'
Starting point is 00:08:18 brains to interpret what the brain inside the brain is braining. This is called endearingly the homunculus argument. And yes, embodied cognitive science is like it's more than a tiny person in your skull. And how can we truly understand artificial intelligence if we don't first grasp intelligence intelligently? And when we're thinking about who we interface with, with AI, way back in the day, there used to be a search engine called Ask Jeeves.
Starting point is 00:08:50 And it was like a butler who would find for you the answers to things. And now we have Siri and Alexa. Why do you think with AI they've gendered and they've personified these voices that are actually a huge network of artificial intelligence? Is that to help our brains understand? It's a little bit of like a marketing strategy but it's also a little bit of appealing to human nature. We tend to kind of gender and personify objects. So if you are interacting
Starting point is 00:09:28 with ChatGPT, for example, we tend to just naturally treat it as another person, another being, another entity. On the one hand, as you said, these chatbots, there is no intention, there is no understanding, there is no soul in these machines. They are just pure machines. But also the developers and vendors of these systems, they tend to market them as kind of personified entities, because it's much more appealing to think that you are interacting with another sentient thing. I always wonder if they made some of them female voices because we're more accepting, we're less threatened by females. We're socialized to have a mommy figure come and help us with something. It's not as threatening that it'll turn on us. And also it's like, oh, a lady
Starting point is 00:10:23 will get it for you, you know? Yeah, yeah. I mean, we naturally tend to think women are much more like nurturing and they have the role of helping you. So it is kind of related to the social norms that really dictate society. So it's really also to some extent leaning on that stereotype that if it's a woman, you know, it's approachable, it's there to help you. And yes, I am far from the first person to notice that digital assistants tend to be ladies. And some historians and media scholars think that it starts early by hearing a female voice while you're cooking in the womb, and that's why
Starting point is 00:11:05 people love a lady voice. Or that the first telephone operator in the late 1800s happened to be a woman with a great voice and then it just stuck. But as these AI avatars start to have faces and personalities and pastimes and Instagrams, why do we see mostly younger, hotter avatars? Well, there was a 2024 article published by Reuters Institute, and it reached out to this multimedia professor, April Newton, who noted that a gentle, well-modulated woman's voice is usually the default for AI assistants and avatars because, quote, we order those devices to do things for us, and we are very comfortable ordering women to do stuff for us. And also just a side note for me, did you know that the word robot, it comes from a
Starting point is 00:11:52 Slavic root for forced labor or slave? It's creepy. And historically, humans have loved capable servants, but not too capable or else you're just begging for an uprising. So the future, it's also the past. Am I rooting for the robots now? I don't know. When it comes to Siri or virtual assistants,
Starting point is 00:12:14 how is that different than the AI that's been ramped up in the last couple of years? Is Siri and Alexa, are those AIs or are those just search engines? Is Google an AI? We call some things AI and then other things just computing. What's the difference? Yeah, so what's really different between, say, traditional AI, whether it's implemented
Starting point is 00:12:37 as a chatbot or as a predictive system, generative AI that has really exploded over the past two years. Think of ChatGPT, you know, Gemini and Claude and so on. The fundamental difference is that, do I go into the technical details or remain clear of technical details? Yeah, no, you can give us some technical details. I'll try to keep it light. There is no way of explaining the difference without getting into reinforcement learning. A typical classification system is an algorithm that you give it massive amounts of data. Think of like a face identification system. And these days data sets have to be really big. You might even have, you know, trillions of tokens of images of faces and you train your algorithm like
Starting point is 00:13:27 This is a face. This is a face. This is not a face. This is a face. This is not a face. It's called machine learning. If you have succeeded in your training, then your AI should be able to tell you it's a face or it's not a face. So this is like a typical, you know, classification system. What we consider AI is a very broad term that encompasses so many different subcategories.
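Here is a minimal sketch of that face-or-not-face training idea in Python, using scikit-learn and completely made-up numbers: each "image" is just two numbers, and the labels, cluster positions, and test points are all invented for illustration. Real systems train deep networks on millions of labeled photos, but the shape of the job is the same: show the algorithm labeled examples, then ask it to label something new.

```python
# Toy "is this a face?" classifier. Each "image" is just two made-up numbers,
# so the whole idea fits on a screen; real systems use millions of real photos.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
faces = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))        # label 1: "face"
not_faces = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(100, 2))  # label 0: "not a face"

X = np.vstack([faces, not_faces])
y = np.array([1] * 100 + [0] * 100)

# "This is a face. This is not a face." -- the training step.
model = LogisticRegression().fit(X, y)

# Ask it about two new, unseen examples.
print(model.predict([[1.8, 2.1], [-1.9, -2.2]]))  # expected: [1 0]
```

Just side note, how alone am I in not knowing that Apple Virtual Assistant Siri stands for something? Speech Interpretation and Recognition Interface?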
Starting point is 00:14:01 Did you know Siri was an acronym? I didn't. Also, Apple is reportedly freaking out behind the scenes about ChatGPT for the last few years, being generative and having chatbot capabilities and longer conversations than Siri. So apparently, a lot of their efforts in self-driving cars got shuttled over to their AI division. And a lot of people at Apple are like, I can't even speak publicly about this. The other big subcategory under the broad umbrella of AI
Starting point is 00:14:29 is NLP, or Natural Language Processing. So this is the area that deals with human language. So you have audio data that you feed into the AI system. And the idea there is that you are building an AI system that even makes predictions about human language. NLP tools would learn, for example, predictive text. What the algorithm is doing is kind of predicting what the next word is likely to be. So it's just predicting the next token, the next token.
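A bare-bones way to picture that: count which word follows which in some text, then always guess the most common follower. The one-sentence training text below is invented for illustration, and real NLP systems work on subword tokens and learned weights rather than raw counts, but the core job is the same: guess the next token.

```python
# Tiny next-word predictor: count follower words, then predict the most common one.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    # The word seen most often right after `word` in the training text.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (follows 'the' twice; 'mat' and 'fish' only once)
print(predict_next("cat"))  # -> 'sat' ('sat' and 'ate' tie; the first one seen wins)
```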
Starting point is 00:15:04 So that's kind of traditional classification or predictive systems. And that was machine learning or NLP, which is natural language processing. And those deal with visual data turned into tokens, and they predict language, like when your phone knows you better than you know yourself. And it's heartwarming and scary. It's like love. Now, chat GPT, for example, I didn't know this, but the GPT stands for Generative Pre-trained Transformer.
Starting point is 00:15:32 And a transformer is this type of deep learning system and it converts info into tokens and can handle more complex processing from language to vision to games and audio generation. So it's definitely a step up from just simple prediction. Whereas over the past year with generative AI, these are AI systems that do more than classification, more than prediction, more than aggregation. They are called generative AI systems.
Starting point is 00:16:02 They produce something new. So image generators, for example, you can put in a text description as a prompt and the AI system will produce an image based on your description. The same with language systems. So ChatGPT, for example, you put in a prompt and it's able to produce new answers. So this is where the new, this is what's new with generative systems. And the systems also need to learn which of the words in a language model are the important ones,
Starting point is 00:16:34 which is part of the self-attention mechanism. And then they generate based on statistics. Like what is the most probable way, based on the data sets it's learned from, of completing a certain sentence or prompt? So the training data really has a really significant impact on what outputs, whether it's image or text, what outputs these systems can generate.
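Since self-attention just got name-dropped, here is roughly what that "which words matter here" arithmetic looks like, shrunk to toy size: invented 4-number vectors standing in for a three-token prompt, a score between every pair of tokens, and a weighted mix. Real transformers learn these vectors and stack many such layers; everything below is made up purely for illustration.

```python
# Toy scaled dot-product attention over a three-token "prompt".
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
Q = rng.normal(size=(3, 4))  # queries: one row per token
K = rng.normal(size=(3, 4))  # keys
V = rng.normal(size=(3, 4))  # values

scores = Q @ K.T / np.sqrt(4)  # how strongly each token relates to each other token
weights = softmax(scores)      # each row sums to 1: the attention weights
output = weights @ V           # each token's new representation: a weighted mix of values

print(weights.round(2))  # which tokens each token is "paying attention" to
```

Let's get to the juicy part.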
Starting point is 00:17:00 So with any tokens, would that data be in tokens? Like you were saying, facial recognition might have a trillion tokens. Does AI kind of scrub what we know of impressionist art and science fiction and anime? Does it scrub it and grab a bunch of tokens so then it can have those as reference points? So the training data sets is one of the most contested issues in AI because data is constantly harvested. So like your search history feeds into some kind of AI one way or another. I don't know if you are like me, if you signed up for some kind of bonus point, you go to
Starting point is 00:17:42 a grocery shop and you tap that. So that is kind of like, in the background, that's kind of collecting your behavioral data. So that data may breeze through infrastructure like Google and then just be on its merry way to a third party aggregator or a broker or an AI company itself. And they just yum, yum, yum, gobble up the data. So this kind of practice makes a lot of data gathering for the purpose of training AI systems either illegal or borderline illegal. Yes because you know first of all there is no consent, people are not even aware that
Starting point is 00:18:19 you know their data is being used to train AI. So that's just number one on the general level. Let's get to art. But you will also have noticed over the past year or two, the creative community, writers and artists are realizing that their work, their novels, their writing, their arts are being used to train AI systems again without any compensation or with very little compensation after the fact if they go and contest the use of their data.
Starting point is 00:18:55 Because large companies that are kind of developing AI, think of Google, DeepMind, Meta, OpenAI, Anthropic. They are businesses, they operate under the business model. They are commercial entities. Their objective is to maximize profit. My work is auditing, for example, large-scale training data sets. People don't have access to what kind of data set
Starting point is 00:19:24 these companies hold. So we don't have any mechanism to kind of take a look, to scrutinize what's in the data set. Where does it come from? How large is it? These are the kind of questions that simply there is just no mechanism for tech corporations to be transparent, to open up for us to have an objective understanding. So big companies are like, no, you can't see it.
Starting point is 00:19:50 But auditors like Dr. Birhane and colleagues can observe open source publicly available similar data sets as proxies or substitutes to kind of figure out what might be going on organizationally behind the locked Willy Wonka factory gates of the other big tech companies. So you do get an idea, but through proxy data sets. So to answer your question, I mean, if you are an artist, a writer, if you have produced
Starting point is 00:20:17 novels and so on, it's very, very likely that your work is being used to train AI, but there is very little legal mechanism to actually have a clear idea. Is it like they're stealing a bunch of ingredients and then making something, but they're like, you can't see our recipe and you're like, you stole the ingredients and they're like, you can't see our recipe. We made something with it. Is that sort of how it's going? Yeah. So the fact that they are protected by proprietary rights as a commercial entity means that there is virtually no mechanism to force these companies to open up their data sets. Of course, you can encourage them, you can kind of appeal to their good sides and so on.
Starting point is 00:21:03 Good luck. Yeah, right. But yeah, there is no legal mechanism. There is no law or regulation that says you have to open access or you have to share your data. And why not, you ask? Of course, if that is to happen, then all these AI companies would go out of business. As it is, a lot of them are under a lot of lawsuits. Yeah, from Meta to OpenAI. OpenAI itself is under a lot of
Starting point is 00:21:37 lawsuits, including from the New York Times. So the minute they open up their data, a lot of people's suspicions, especially in the creative and artist communities, would be confirmed, which is why they are going to court. Is there any recourse that artists or writers have? Is there anything they can do other than trying to open a really big expensive lawsuit? Yeah, so writers and creatives are organizing for class action suits. I know there are a bunch of class action lawsuits, both in the US and in the UK. Now last August, there was this landmark ruling, and it went in favor of artists, allowing claims that generative AI systems like Midjourney, DeviantArt's DreamUp, and Stability AI's Stable Diffusion
Starting point is 00:22:26 were violating copyright law by using data sets of billions of artistic examples scraped from the web. And Getty Images sued Stability AI over Stable Diffusion for copyright infringement, and the evidence would be almost funny if it weren't such a bummer. Apparently, the generative AI so relied on Getty Images that it started adding a blurry gray box to some of its AI output, which was learned from the iconic Getty Images watermark. It's just embarrassing. Now, in March of this year, a judge ordered that this lawsuit between the New York Times and OpenAI can proceed despite OpenAI begging it not to.
Starting point is 00:23:05 And the newspaper, New York Times, along with a few other journalism outlets, alleges that OpenAI scraped a lot of their work to train ChatGPT. So there are lawsuits, but there are also just huge glaring holes. But also in the UK, for example, they are considering a regulation that leaves a massive loophole for intellectual copyright issues that leaves artists and writers with absolutely no protection at all. So people are really organizing on the ground and they have massive amounts of petitions signed and so on. But also there are technical,
Starting point is 00:23:46 I don't wanna say solutions, technical kind of remedies. So you can use data poisoning tools, for example. You have, yeah, so you have- Like a booby trap? Like one of them is called Nightshade. So for images, for example, you would insert various adversarial attacks that will make the data unusable for machines, because it's maybe a tiny pixel that's been
Starting point is 00:24:14 altered, so that's not visible to the human eye, but kind of messes with the automated system or how machines kind of use these data sets. So there are various tools like that. Another type of AI booby trap is called a tar pit. And it sends AI crawlers into this infinite loop where they just get stuck and they can't escape. And I just love to think of like the AI system on the toilet, just scrolling, just not being able to exit and get back to work.
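To make "a tiny pixel that's been altered" concrete, here is the general flavor in Python: nudge pixel values by amounts no eye would notice. To be clear, this random noise is only a stand-in; tools like Nightshade compute carefully targeted perturbations, and naive noise like this will not actually fool a model.

```python
# Imperceptible pixel changes: the rough flavor of image "poisoning".
# (Illustration only -- random noise will NOT fool a real model.)
import numpy as np

rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)   # stand-in artwork

perturbation = rng.integers(-2, 3, size=image.shape)             # +/- 2 out of 255
poisoned = np.clip(image.astype(int) + perturbation, 0, 255).astype(np.uint8)

# Largest change to any pixel channel: at most 2 -- invisible to a human eye.
print(np.abs(poisoned.astype(int) - image.astype(int)).max())
```

Even if you find technical solutions now, the big companies are likely to come back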
Starting point is 00:24:49 with another solution that makes your own solution defunct. So you really have to be adapting to that constantly. So I think a viable solution has to come from the regulation from the legal space. And how do you feel about, is it Sam? Oh my gosh. Sam Altman. Thank you. I almost said Sam Edelman, which is a shoemaker.
Starting point is 00:25:12 What is wrong with me? How do you feel about kind of some recent changes, like last May, I believe, he went before Congress in the US saying like, hey, we better watch out. And now I think he was, like, at the inauguration. So we chatted about this in 2023's NeuroTechnology episode, because just a few months prior, in May 2023, Sam Altman was in the news a lot as a cautionary voice. And as we said in that episode, if you're wondering why this was a big deal, Sam Altman is the head of OpenAI, which invented ChatGPT. And in spring of 2023, he spoke at the Senate Judiciary Committee subcommittee on privacy, technology, and the law hearing
Starting point is 00:25:55 called Oversight of AI, Rules for Artificial Intelligence. He also signed a statement about trying to mitigate the risk of extinction. And he told the committee that, quote, AI could cause significant harm to the world. My worst fears are that we cause significant— we, the field, the technology, the industry cause significant harm to the world. I think that could happen in a lot of different ways. I think if this technology goes wrong,
Starting point is 00:26:22 it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening. And ultimately, Altman urged the committee to help establish a new framework for this new technology. And though in 2016, Altman declared that Donald Trump was terrible, he recently backpedaled on that. And Altman said that he's changed his mind and donated
Starting point is 00:26:45 $1 million to Trump's inaugural fund in late 2024. So Altman's thoughts on AI regulations likely have pivoted in the last few years since that hearing. It doesn't seem like regulations are going to happen very fast. Yeah. So this idea of, oh, we got to watch out because our AIs might become sentient and might be out of control, might cause existential risk and lead to human extinction. So unfortunately, this is a very popular and commonly disseminated worry emerging out of
Starting point is 00:27:24 AI, but there is very little scientific evidence for it. People have done a thorough analysis how such a possibility is just 0% likely. There it's just- Really? Yeah. But human beings can be so terrible and they're learning from us.
Starting point is 00:27:45 Yeah, so there is no intention, there is no wish or there is no desire to act on something, to do something. But at the end of the day, it really is a massive complex algorithm, of course, and that is to some extent unpredictable. But that doesn't mean AI systems as they develop further, all of a sudden develop intentionality or wishes or interests or needs. I mean, you and I are, as human beings, we do something and we get satisfaction out of it. I have a motivation for doing the research I do and if that doesn't happen, I feel disappointed, I can feel sad. I also feel accountable when I put out, you know, a research
Starting point is 00:28:33 paper. I know if there are errors in it, I know I'm the one responsible for it. So there is none of that. So when an AI system gives you an output, it's not because it might be worried about if it's an incorrect answer or it's because it wants to please you. It's just a chat bot that is designed to provide, you know, given answers again, based on prompt. So this idea of AI systems causing existential risk
Starting point is 00:29:05 leans on this huge leap of faith that requires you to believe that there is intention, there is emotion, there is motivation, all these human characteristics, all these things that make us human. But it's just not there. It can't emerge out of nowhere. That's part of what makes us different and unique
Starting point is 00:29:24 as individuals, as biological organisms. These are things that are hardwired in us. These are also things that make us human. This is why I fret still. Of course, we have to worry about powerful people using AI to do terrible things. And what worries me is over the past year, and especially since the rise of Trump and since the Trump administration came to power, you have a lot of large corporations really
Starting point is 00:29:54 abandoning their voluntary pledge to protect basic fundamental rights. So Meta has, for example, walked back their commitment to DEI, their commitment to fact check and monitor their social media platforms. And against hate speech too, which is, what? Exactly. What really is worrying also now is you have superpowers and powerful governments, like the US government, the UK government, even the European Union itself, using AI, moving into AI for surveillance, for military purposes, for warfare. And a lot of AI companies, starting from OpenAI, Meta, Amazon, Google, they had
Starting point is 00:30:41 voluntary principles not to use AI for military purposes, but over the past year, all of them have abandoned that. So even here across in the EU, you have a French AI company called Mistral announcing that they are open to working with European governments to provide military AI. So of course, we have to worry about governments using AI under the guise of national security, which really means monitoring and surveillance and squishing dissent. And really this is against fundamental rights for freedom of expression, freedom of movement and so on. So we have to worry about AI, but AI in the hands of powerful governments and people in position of power, rather than
Starting point is 00:31:26 the AI itself, because the AI can't do anything by itself. Is it sort of like the guns don't kill people, people kill people kind of a situation? Yeah, exactly. How is that working out for us in the US though? Well, death by firearm is the leading cause of mortality for teens and children in the US, according to the Pew Research Center. And over half of our nearly 50,000 gun deaths a year in the US are suicides. That's not going well.
Starting point is 00:31:55 And that's because the NRA slogan, guns don't kill people, people do, is what is known as bumper sticker logic or a misdirection, also called a false dichotomy or plainly speaking, a fallacy, according to philosophers. So giving AI a sense of technological neutrality is a bit misguided. The regulations being walked back is terrifying, especially trying to put trust in a government to stop things when a lot of our people in power don't know how to use their own printers. So some of the questions and the congressional hearings are like, how does this even work to...
Starting point is 00:32:34 Does Google track my movement? Does Google through this phone know that I have moved here and moved over to the left? Which is terrifying. So maybe they don't have morals and guilt and things like that and ambitions, but I was looking at some research showing that AI is being trained to become more and more sexist, more and more xenophobic, more and more racist, use more and more hate speech. And is it learning from the worst of humanity? Is it amplifying it? Is that just exposing how much hate is in the world? Yeah, so let's maybe walk back 20 years.
Starting point is 00:33:19 Okay. Because that's when real progress in AI started to emerge. We've had a lot of the core principles for AI since the 1950s, 60s, 70s, 80s. Some of the foundational papers about reinforcement learning, deep learning were written in the 1980s. So Geoff Hinton's famous paper on, I think it was Convolutional Learning, was written, you know, late 1980s. We don't have to go too deep into it, but I do want to tell you that Geoffrey Hinton is apparently considered the godfather of AI and a leading figure in the deep learning world.
Starting point is 00:34:03 And in 2024, he won the Nobel Prize for his work. He's also worked for Google Brain, and then he quit Google because he wanted to, quote, freely speak about the risks of AI. Quit Google so he could talk about it. Now in 2023, during a CBS Saturday Morning News segment, he warned about deliberate misuse by malicious actors, unemployment, and existential risk involving AI. He is very much in favor of research on the risks of what could become a monster that
Starting point is 00:34:34 he helped create. He's like, yo, we need some safety guidelines and regulations, buddies, and that is not really happening. But yes, he is among a few who over the last many decades drove these innovations. But what really made the AI revolution possible is the World Wide Web. With the emergence of the World Wide Web,
Starting point is 00:34:57 it became possible to kind of scrape, to kind of gather, harvest massive amounts of data from the World Wide Web, you know, through chat forums or domains like Wikipedia, they are really a core element of training AI, at least for text data. So that means that a lot of our training material for AI comes from the worldwide web, whether it's our digital traces, whether, you know, it's the pictures we put on social media, pictures of your kids, your dogs, yourselves, and so on. Or the kind of infrastructure, digital infrastructure, like Google is everywhere and has dominated, whether you want to email
Starting point is 00:35:36 or prepare a presentation or write a document, Google has provided the infrastructure. That means they have the infrastructure to constantly harvest training data. This means that a lot of the data that we are using for training reflects humanity's beauty, but also our cruelty and the ugliness of humanity. And just last week, a tech report released by Google admitted that its Gemini 2.5 flash model is more likely than its predecessor model 2.0 to generate results outside of its safety guidelines. And images are even worse than text at that. And I mentioned this in a 2023 episode we did with Dr. Nita Farhani about neurotechnology.
Starting point is 00:36:21 But around Juneteenth of that year, I saw this viral tweet about ChatGPT not acknowledging that the Texas and Oklahoma border, the panhandle, was in fact influenced by Texas desiring to stay a slave state, which is a fact that ChatGPT would not acknowledge. So Dr. Birhane notes that when an AI is built on racist, sexist, xenophobic, et cetera, data sets, the results, like history itself, are not kind to minoritized identities, she says. It reflects, you know, societal norms. It reflects, you know, historical injustices and so on.
Starting point is 00:37:00 Unless you really delve into the data set and ensure that you do a thorough job of cleaning the data set. We've audited numerous data sets and you find content that shouldn't be there. You find, you know, images of genocide, images of, you know, child rape. One of the early data sets we audited back in 2019 was a data set called 80 million tiny images. It was held by MIT and we found several thousands of images, really problematic images, images
Starting point is 00:37:36 of black people labeled with the N word, images of women labeled with the B word, the C word, and words I can't really say on air. So while the upside of AI is detecting cancer from scans earlier or predicting tornado patterns, there's also so much concern. Now Dr. Martin Luther King Jr. observed and proclaimed that the arc of the moral universe is long, but it bends toward justice. But I think we might consider that the arc of the internet is
Starting point is 00:38:11 short and it bends towards smut and hate. So you can assume any data you collect from the web is really horrible. And in one of the recent audits, actually, we found an overwhelming amount of women, women concepts really represented by images that come from the pornographic space. So massive amounts of the web is also really, you know, pornographic and really, you know, problematic content. So you have to do a lot of filtering. So as a result, this is why, you know, DEI initiatives, this is why obligations to audit your data set to ensure that, you know, toxic contents
Starting point is 00:38:53 have been removed and so on. This is why it's so critical. So an AI is only as ethical as its datasets. And the internet is a weird, dark place where people say things they would never say in person. So the datasets are feeding that. But as we are seeing now, a lot of these companies are abandoning their pledges
Starting point is 00:39:13 and we're really walking backward. But for any given AI system, whether it's a predictive system or classification or generating, you can assume that deeply-held societal injustices and norms will be reflected in how that AI performs in the kind of output the AI gives you. So that's the default. So we have to work backwards to ensure we are removing those biases. Let's say that some of the comments online, some of the hate online is AI generated comments,
Starting point is 00:39:45 which, sometimes I'll look at X now and I'll say, who are all these people? Why are comments getting meaner and meaner, with Facebook with a lack of fact checking, more and more sort of hateful speech? Does that mean that the next tokens and datasets pick up on that and say, oh, this is how people think? And then the next tokens and datasets pick up on that and say, oh, this is how people think? And then the next one, so does it get amplified like mercury toxicity in, like, a tuna fish? That's one way of putting it. Yeah.
Starting point is 00:40:15 Okay. Yes, yes. You are encoding those biases and you are exaggerating them. The technical drawback is that when we train a given AI for next-word prediction, for example, it's based on these massive amounts of data that kind of tells you how people text, how people use language, for English, for example, how people construct a coherent sentence. That data, that training data comes from actual people's activities, people's interactions. That is your baseline, so to speak, when you are modeling how language operates.
Starting point is 00:40:53 But now, as you said, as the worldwide web is filled more and more with synthetic text or synthetic data that comes from generative AI systems themselves, then your AI system has no frame of reference. It tends to forget. So the quality of the output starts to deteriorate. That's so scary. So this is called model collapse. Okay. Does this keep you up at night? Does it? I mean, I know that it's like, don't be afraid, don't be afraid. But it's also like, this is very new territory for humanity, right?
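Model collapse is easier to feel than to define, so here is a toy, one-number version of the feedback loop she is describing: fit a simple statistical model to some data, sample new data from the model, train on that purely synthetic data, and repeat. It is nothing like training a real language model, but it shows the mechanism: once the model only ever sees its own output, the spread of the original, human-made data tends to drift and decay and does not come back. (It is a random process, so exact numbers vary run to run.)

```python
# Toy model collapse: repeatedly train a one-number model on its own output.
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(loc=0.0, scale=1.0, size=50)   # generation 0: "human" data, spread = 1.0

for generation in range(1, 51):
    mu, sigma = data.mean(), data.std()          # "train" on whatever data we currently have
    data = rng.normal(mu, sigma, size=50)        # next generation: purely synthetic data
    if generation % 10 == 0:
        print(f"generation {generation}: estimated spread = {sigma:.3f}")
```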
Starting point is 00:41:32 Yeah, but at the end of the day, I mean, people should be in control. If an AI system starts producing output that is rubbish, that is irrelevant, I don't think it should scare us. It should make people like, okay, that's not helpful to me anymore, so I'm not going to use it. Maybe the more it unravels and crashes out, the less people will rely on it. But of course, that hinges on being able to tell that it's spitting nonsense at you. And in this day and age, the world is so profoundly absurd
Starting point is 00:42:03 that truly anything is believable. And Dr. Birhane says that public education is key and just getting the word out that a lot of what we think about AI's capabilities are just big corporations pumping out hype and PR. But the auditors on the inside, like her and her lab, know that, woo, boy howdy, hot damn, it is a bunch of horse-pucky flimflam and not to believe the hype. The actual performance is nowhere near what the developers claim.
Starting point is 00:42:33 So these are the facts that we really have to communicate. A lot of the AI systems that we are interacting with are actually subpar in terms of performance, in terms of what they are supposed to do, in terms of what people expect them to do. Because these big corporations have really mastered public communication and PR, a lot of the failures or the drawbacks of AI systems are new to people when you actually communicate it. But this should really be like common knowledge. And if people want to use AI, they should know both the
Starting point is 00:43:05 strengths and what they can do with it, but also where the limitations are and what it can't do for them. Does abstention work? Does not going on meta and giving them more fodder? Does not using chat GPT, does any sort of like boycott work? Yes and no. On the one hand, so a lot of these AI systems have really cleverly been integrated into the social infrastructure. Yeah. So for example, I'm not on Facebook. I haven't been on Facebook for over 10 years.
Starting point is 00:43:45 But the apartment complex I live in can only be communicated with via Facebook groups. I still refuse to create a Facebook account, but situations like this really give you very little option to abstain, to not use these platforms. And you can't avoid Google, for example, Google Search and like, you know, Gmail and like Google Docs. It makes it really difficult. If you want to apply for a job, almost all companies now use some kind of AI to filter your CV before it reaches a human. So in some senses, you don't even have the option to opt out. If you are, you know, someone looking for a job, you can't say, oh, I don't want you to use an AI system to sift through my CV. It's just like...
Starting point is 00:44:32 It's going to happen. Yeah, it's going to happen. Dr. Birhane says that it's pretty unavoidable. And I have asked tech lawyers and even they don't read Apple's terms and conditions. They're like, I just checked the box. So instead of using WhatsApp, which is owned by Meta, which really gathers all your text, all your information, we can move to other messaging apps like Signal. So Signal has end-to-end encryption. There is no backdoor.
Starting point is 00:45:00 Nobody can access it, not even governments. This is one of the things Meredith Whittaker, the president of Signal, has been really strong on in standing up to large governments: that nobody should have back-end access that gives them the opportunity to gather data. And yes, Signal is run by a nonprofit foundation, SignalFoundation.org. And Meredith Whittaker is Signal's president, and she had worked at Google for 10 years and she was raising concerns about their AI. And she was also a core organizer of the 2018 Google Walkout in protest of sexual harassment
Starting point is 00:45:33 there and pay inequities. She also advises government agencies on AI safety and privacy laws. So Signal, good. Yay, Signal. And many recently laid off government staff that I know of will only communicate via Signal, which is kind of telling in terms of their own safety concerns. But yes, use Signal. So we can do some things.
Starting point is 00:45:53 We can use less and less of these large corporations' infrastructure, and we can use more open source tools, but also sometimes it's just out of your control. But every little bit helps, and every bit of awareness, you know, kind of accumulates; it will eventually lead to, you know, this massive switch, I hope, at least. That's encouraging. I hope. Yeah. And can I ask you some questions from listeners? Is that okay? Yeah, for sure. But before we do, let's give away some money. And this week, Dr. Birhane selected the cause, the Municipality of Gaza and UNRWA, which
Starting point is 00:46:30 directly supports Palestine refugees and displaced families in Gaza. They say every donation, no matter the amount, helps them reach families with life-saving food assistance, shelter, healthcare, and more. And for more info, you can see donate.unrwa.org, which is linked in the show notes. And for more on the ongoing humanitarian crisis in Gaza, please see our Genocideology episode with global expert in crimes of atrocity, Dr. Dirk Moses,
Starting point is 00:46:58 which we will also link in the show notes. So a quick break now. Okay, we are back. Let's run through some questions from your real squishy brains made of human beings out there. There's some great ones. Job replacement. Karla de Azevo, Alia Myers, Red Tongue, Jennifer Grogan, Ian, Jenna Congdon, Rosa, Rebecca Roem, Other Maya, Sam Nelson, Howard Nunes. All these people wanted to know, in Ian's words,
Starting point is 00:47:26 will all jobs be obsolete soon? Do the people working on AI give any thought to compensating people for the lost income? Jenna Congdon said, when will AI get so good that human writers are basically crowded out of a job? This goes for visual art as well. In a capitalist economy, when you got a hustle to make money, as it is, what is
Starting point is 00:47:46 going to happen job-wise? Do you think? Or do AI experts such as yourself? Some of the worry about job displacement is genuine and grounded in real concern. You hear even the so-called godfathers saying things like, you shouldn't bother learning code, or like the job of software engineering, for example, will become obsolete and so on. So whether you're a software developer or a writer or an artist, Dr. Birhane says: I don't think AI will fully automate, will fully replace, the human workforce. Because at the end of the day, what even the most advanced AI systems do is really kind of aggregate information and kind of output something very mediocre, whether it's image or text.
Starting point is 00:48:45 Some of them are so good though. Some of the art is so good and you're like... But some of the art, it's not just the pure, the raw output. People have tweaked probably like a thousand times. People have tweaked it, people have spent hours perfecting the right prompt and so on. So there is always people in the loop. There is always, whether it's data preparing, you know, data annotation, data curation, to building the AI system itself, to then kind of ensuring the output is something appealing. You really, you need people through and through. So for me, as a former newspaper journalist and I was also a newspaper illustrator, I
Starting point is 00:49:30 am not as optimistic. So, so many writers are copywriters who are making content and articles for websites to raise their profile. And now I'm hearing from those people that articles are just written by AI and they are full of shit. And just doing this aside is making me depressed and my chest hurts, but Dr. Birhane is an expert, so I'm going to try to find some bright spots.
Starting point is 00:49:51 And before she had mentioned that lawsuit with OpenAI and the New York Times, and I was looking for it and I found a recent article. This was published literally yesterday, which had the headline, AI is getting more powerful, but its hallucinations are getting worse. A new wave of reasoning systems from companies like OpenAI is producing incorrect information more often. Even the companies don't know why. That's the headline. And this New York Times article explained
Starting point is 00:50:16 that AI systems do not and cannot decide what is true and what is false. And sometimes they just make stuff up, a phenomenon that some AI researchers call hallucinations. And in one test, the article says, hallucination rates of newer AI systems were as high as 79%. And I also want to note that my spell check tried to get me to change the its in the headline to one with an apostrophe, which is incorrect. So computers, what's going on? But yes, Dr. Birhane says that a lot of journalism
Starting point is 00:50:51 has been replaced by AI, even though we all know that the generative system is unreliable. It hallucinates a lot of the time. It gives you information that sounds coherent, that seems factual, but it's just absolutely made up. It even sometimes gives you citations and so on for things that don't exist. So we always need people to babysit AI, so to speak. So a writer might be, you know, your hours might be reduced and you might be getting paid less and your company might be bringing AI to kind of do the bulk of the job. But still you can't put out the raw outputs because most of the time it's not even legible. So the role of writers and artists and journalists and so on becomes more of kind of a babysitter for AI, verifying the information that's been
Starting point is 00:51:46 put out, kind of ensuring it makes sense and so on. That's right, Kenny. The babysitter is dead. To some extent, the answer is yes and no. Humans will always remain at the heart of AI. The minute human involvement ceases is the minute AI stops operating. Because AI is human through and through, as I said, from the data that's gathered from humans, and so much work goes into data preparation, data annotation, cleaning up the data, detoxifying the data. And unfortunately, a lot of these tasks
Starting point is 00:52:18 are allocated to the developing world. So you have a lot of data workers in Kenya, in Nigeria, in Ethiopia, in India, for example, that really do the dirty work of AI. There are even a bunch of stories where you have Amazon checkout, for example, AI checkout or self checkout, where Amazon was introducing this AI
Starting point is 00:52:40 where you can just collect groceries and your items and just walk out and the AI is supposed to identify what you have picked up and charge you from your credit card for whatever you have used. But then it turns out that it was actually data workers in India that were scanning every item you are picking up. Oh man, what a world. Yeah. Yeah. And I mean, McDonald's also recently partnered with IBM or one of those companies to have
Starting point is 00:53:12 like an AI drive-through where AI systems take orders, and they had to close it within a few weeks because people were getting orders of like, you know, bacon on top of ice cream and things like that. You added bacon to my ice cream. I don't want bacon. What else can I get for you? Why raise the national minimum wage for the first time since 2009 when you can just spend billions of dollars tweaking unpaid machines?
Starting point is 00:53:39 Like, welcome to the future, maybe. So I guess the point I'm trying to make is like, you always need humans for AI to function and operate as it's supposed to, because at the end of the day, these are like really mere machines that don't have, you know, intention, understanding, motivation, and so on, like we humans are. So maybe our jobs will look different,
Starting point is 00:54:02 but there will be jobs. Yes. I know a lot of people, myself included, wanted to know the environmental impact. Lily, a bunch of folks. And first time question askers Eleanor Bundy and Megan M. And we also did a recent episode for Earth Day with this climate activist and humanitarian rights lawyer, Adam Mett, who said that AI could be solving some environmental concerns, which is optimistic. But what does an AI expert take on that?
Starting point is 00:54:27 Megan Walker asked environmentally, how bad is AI when compared to the current computing we do? Yeah, what's going on? Yeah, yeah, yeah. How much energy does it use? Yeah. So again, like we have very little information about training data. The kind of energy consumption used by AI systems is very opaque. There is very little transparency. Okay, but what is the damage generally?
Starting point is 00:54:53 So, generative AI really consumes massive amount of power compared to traditional AI. For example, if you are using Google to put a prompt, say, you know, how many glasses of water should I drink per day? And if you do the exact same prompt and you ask the generative system, such as ChatGPT, people estimate you use about 10 times more energy to process that query and to generate answers.
Starting point is 00:55:20 I wanted to go straight to the source. So I used Google AI and ChatGPT for the first time, asking them both how much energy ChatGPT uses as opposed to Google. Now Google AI said, in what I hope is a snotty tone, that ChatGPT consumes significantly more energy per query, five to ten times more electricity than a standard Google search. Now it cited a 2024 Goldman Sachs report titled, AI is Poised to Drive 160% Increase in Data Center Power Demand.
Starting point is 00:55:51 Then I asked ChatGPT, and it said that its version 4 can use up to 30 times more energy than a basic Google search. And it also noted, I like to think defensively, that Google has had decades to optimize for a lower footprint. Now, Dr. Birhane says that the energy consumption of generative AI systems has become, indeed, a big issue, and that in countries like Ireland,
Starting point is 00:56:16 the data centers are power hogs. The compute resources required to run AI systems equal or exceed the total amount of energy that's required to run Irish households. But in places like Texas, sometimes that energy consumption is taken away from households to run data centers. So it kind of results in reduced energy for households. And this is before we even get into the massive tons of water that you need to cool down data centers. Oh, yeah. I didn't even think about that.
Starting point is 00:56:58 Yeah. And the water also has to be pure because you can't use, say, ocean water or seawater because of the sea salt that might damage the servers and so on. So again, there is competition. It tends to be when you use water that is used for households. As a result, people tend to pay for the consequences of that. So yeah, water consumption is another massive area as well. And do you think that more companies will look toward some sort of nuclear power for their supercomputers or is that still too highly regulated? I think companies like Google are actually talking about using nuclear power.
Starting point is 00:57:39 But yes, that option is being considered. How about health care? Several people, Benjamin, Brenna Schwert, Annalisa DeYoung, Emil, Nikki G. asked, how can AI be ethically applied to health care, like data analytics, treatment options, medical imaging interpretation, second opinions? Is there some hope there for it? Yeah, I think there is some hope.
Starting point is 00:58:02 There is some hope for sure. I think there is some hope. There is some hope for sure. I think there is some hope in numerous domains for AI to be useful. Okay. However, that just remains a theory. It's possible in theory. But the problem, there are a bunch of problems. One of them is that generative systems are fundamentally unreliable. So for example, there is a new audit that came out, I think towards the end of January, where they looked at this new AI tool,
Starting point is 00:58:34 where the system kind of records your conversations between say a healthcare provider and a patient, and it summarizes the conversation and it kind of it's supposed to reduce a lot of the work for nurses and so on. And what they found was that in some cases, eight out of the 10 summaries were hallucinations. Oh, no.
Starting point is 00:59:02 So generative systems tend to be unreliable. And the other thing is because a lot of these tools that are supposed to be used in healthcare tend to be built by businesses with the objective of maximizing profits, they tend to have a different kind of objective than say, you know, what's good for the patient. So another famous case is UnitedHealth, that is in court at the moment, where they were using a suite of about 50 algorithms to look at mental health services. And what they found was that they were using, you know, cost as a proxy rather than the need of the patient as a proxy. And they were cutting a lot of services, a lot of like, you know, therapy services
Starting point is 00:59:50 and meditations and other necessary services, again, because they are, you know, they're looking at the wrong motivation, the wrong proxy. They're looking to save the company money rather than to ground what they do in the needs of the patient. the company money rather than to ground what they do in the needs of the patient. So if we correct, for example, hallucinations and biases in AI systems, and if we kind of, it's impossible to strip down all capitalist motivations, but if capitalist motivations come second to the needs of the patient, then it's possible to kind of develop AI systems in various areas of healthcare that prioritize patients, that prioritize people as opposed
Starting point is 01:00:35 to just inserting technology for the sake of having technology and also for using technology to maximize profit as opposed to ensuring patient safety. Which is, once again, good luck. I mean, our healthcare is above and beyond frustrating, but a lot of people wanted to know, Samwise, Emily Heard, Amalia Magda Casauca, wanted to know, Kira Henderson asked, first time question asked her,
Starting point is 01:01:01 how do we feel about AI and chatbots and using them in high schools? Schoolwork, Samwise, AI use in schools, thoughts? Is there a way to flag it? Are we doing education an injustice? Yeah. So on the one hand, I know some people that find using AI chatbots really helpful. You give it a prompt, it gives you just a bunch of answers. Of course, these are people that know how to craft the perfect prompt, that know where AI can be useful and where it might fail you. So with all that in mind, it can be useful, but you need to be an expert.
Starting point is 01:01:47 Having said that, for young kids, studies are starting to emerge. For example, they did a controlled study, I think with over 3,000 students, where some of the students were given chatbots to help them with, I think, math problems. The others weren't. And they did a test. So what they found was that the kids that had chatbots did better than the kids that didn't. Then they performed another test a few weeks later, and they found that the kids that used
Starting point is 01:02:24 chatbots performed way worse than the kids that didn't have them. So people are realizing that these systems inhibit learning. Of course, education is not just information dissemination, the teacher going into class and just telling the students facts; rather, it's an interaction. It's a two-way street, both for the student and the teacher, developing the skills, especially the critical skills to analyze and to decipher fact from fiction, information from misinformation, and so on. And when you use AI chatbots without knowing their limitations, you tend to kind of trust the output, you tend to treat it as fact, but also it inhibits your learning, it inhibits
Starting point is 01:03:12 your critical skills. And if you don't have the knowledge to begin with to verify the answer, you have no way of knowing whether what you are getting is correct or incorrect. So studies are coming out to show that these chatbots might seem helpful in the immediate term, but in the long term, they might be inhibiting the learning process. Last listener question, and I know my husband has this question too, DVNC, Sherry Rempel, and Chelsea and her doctor Charlie want to know, Chelsea asked, why does AI do better when you threaten it? Is that ethical?
Starting point is 01:03:47 Because it doesn't feel like a good precedent to set in any part of life. Deviancy asked, is it weird that I feel the need to say please and thank you when talking to chatbots? Will the AI overlords be nicer to me when they take over? I assume all of my conversations will be logged for eternity. And Jarrett, my husband, also doesn't use ChatGPT very much, but when it came out, he was trying to teach it to be civil. And I was like, boy,
Starting point is 01:04:11 I don't think that's going to work. Do manners matter? Yeah, yeah, yeah. So for models like ChatGPT, they have what they call a knowledge cutoff date. So your interactions won't really feed into the training data or the learning of the model. The training data set for ChatGPT, I think, ends around 2021 or 2022. So ChatGPT, for example, can't give you a coherent answer for any event that has happened recently. So different models are using data from different timelines. They have to collect it and clean it and process it first. So it's not as real time as I thought it was, or as some people might expect.
Starting point is 01:04:53 As for how you speak to it, was it aggressively? Threateningly? Yeah, threateningly. Why does it do better when you threaten it? It's the first time I'm hearing this, so I should try it out. I should check it out and see if that also happens to me. But yeah, it's the first time I'm hearing it. Okay, let's hit the books for this.
Starting point is 01:05:13 Specifically, a 2024 study about how to interact with large language models, titled Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance. The abstract explains that they did testing on English, Japanese, and Chinese language models and found that as the politeness level descends, the answers generated get shorter. However, on the far side of the rude scale, impolite prompts often result in poor performance, but overly polite language does not guarantee better outcomes. And the best politeness level is different according to the language.
Starting point is 01:05:53 And they say that this suggests that LLMs not only reflect human behavior, but are also influenced by language, particularly in different cultural contexts. So what about the future? The researchers say that it is conjectured that GPT-4, being a superior model, might prioritize the task itself and effectively control its tendency to, quote, argue at a low politeness level. So as it matures, it just won't engage.
Starting point is 01:06:21 It's like this new generation of AI has been to therapy, if it were a person, which it's not. And as to the AI overlords, again, I mean, it's just models. It's just, you know, data sets and algorithms and, you know, networks of connections. Okay. There is no kind of all-knowing, God-like, all-seeing AI. But of course, you know, the people that are running AI companies come close to that, because they have access to the data, because they have
Starting point is 01:06:51 access to the algorithms. So you might worry about those people, you know, using your data. It has an almost invisible but very nuanced downstream impact. So in the US, for example, authorities are forcing companies like Meta to give up data, so that authorities are hunting down women that had abortions, for example, in areas where abortion is prohibited. Law enforcement is working with Amazon, for example, for, was it Amazon Ring? Oh, right. This is that camera-enabled, Amazon-owned Ring doorbell. So law enforcement like ICE uses that kind of data from those, even to deport people. So what we should worry about is not really AI overlords,
Starting point is 01:07:45 but these companies working with powerful entities to really kind of identify people that might be in trouble with the law, or that might be doing something that, you know, violates the law, because that data gives them access, gives them the knowledge about the whereabouts and the interactions and the activities of people. Yeah. And previously, according to a 2020 Newsweek article titled Police Are Monitoring Black Lives Matter Protests
Starting point is 01:08:12 with Ring Doorbell Data and Drones, Activists Say. It's reported that Amazon Ring has video-sharing partnerships with more than 1,300 law enforcement agencies across the U.S. However, in January 2024, Ring said that it would stop letting police departments request and receive users' footage on its app. Now, on the flip side, some Ring doorbell owners are posting on the Ring Neighbors app when ICE raids are going down locally, and they're alerting their community.
Starting point is 01:08:41 Now, Ring, of course, notes that those are user-generated posts. It has nothing to do with them. Whether or not they'll censor those user-generated posts is like anyone's guess. Hey, let's take a welcome departure from reality for a sec, shall we? Do any movies get it right? Like, does The Matrix get it right?
Starting point is 01:08:58 Anything? Does AI, that old Spielberg movie? Does anyone actually get AI right? Or does it drive you absolutely insane to watch TV? So I love science fiction, actually. Like, The Matrix is one of my favorite movies. I knew it, I knew it. It's a good one. But also, that's nowhere near reality;
Starting point is 01:09:15 you have to treat science fiction as science fiction. Some really good science fiction really brings you into a world that you couldn't even envision. So I love that element about science fiction, but a lot of these robot-uprising, Terminator-like movies are really just for entertainment. There is nothing that can be extrapolated to say, oh, this could happen with real AI. But you have kind of very nuanced sci-fi movies that nail it. So you have Continuum.
Starting point is 01:09:49 It's not a movie, it's a series. It was on Netflix a while back, a few years back. So what happens there is that as AI companies become powerful, they take over government and they become the bodies that really govern society. So that kind of sci-fi is much closer to reality than Terminator-like movies, yeah. How about Black Mirror?
Starting point is 01:10:18 Oh, Black Mirror is so good. I mean, Black Mirror, there are some things that are like, eh. Now, when Black Mirror came out, it was just like, wow, this could happen. And now it's like, oh, that has happened. Or it's like, oh yeah, this is what's happening with this and that government.
Starting point is 01:10:34 So yeah. Wow. Yep. And the last two questions that I always ask are always worst and best. I guess your most loathed thing about AI and your favorite thing about it, I guess we've talked a lot about cautionary, but in terms of what you do or in terms of
Starting point is 01:10:51 your job, worst and best thing? Yeah. So the worst thing is really just the hype. As a researcher, I have my own research agenda, but the hype is so destructive. You see something that's not true being disseminated, going viral, and, you know, as an expert, it's really troubling. So you have to stop what you are doing and do some work to kind of correct it, or at least attempt to. So yeah, a lot of the hype is what really gets on my nerves, and it also becomes a problem in terms of getting my own work done.
Starting point is 01:11:26 But what excites me about AI is, I'm still extremely optimistic about AI. But unfortunately, a lot of the AI I get excited about is not something that results in massive profits. So using AI for disaster mapping, using AI for soil health monitoring, and so on. These are things that really excite me, but there is no monetary value in developing AI for these systems. So these are the things that really get me excited, that really make me feel like, wow, this is a powerful tool that we can use to actually do some good in the world.
Starting point is 01:12:04 Yeah, we could make sure that everyone is fed and has healthcare and that resources are allocated in a way that's fair, and we just don't because of money. Because it doesn't make you money. Which is, I think, once again, money is the root of all evil. Thank you so much for doing this. This has been so illuminating, and it's great to talk to someone who knows their shit about this. Thank you so much for having me.
Starting point is 01:12:28 I really enjoyed our conversations. So ask real people real not-smart and important questions, because how else are we supposed to learn anything? So thank you so much to Trinity College's Dr. Birhane for sitting down with me and making the trip to Ireland so eventful. I loved this talk. And you can find links to her and her work in the show notes, as well as to the cause of the week.
Starting point is 01:12:51 We are at Ologies on Meta-owned Instagram and on Bluesky. And I'm giving my data as Alie Ward with just one L on both. And our website has links to all the studies we talked about, and that link is in the show notes. If you're looking to become a patron, you can go to patreon.com slash Ologies and join up there. If you need shorter, kid-friendly versions of Ologies episodes, we have them for free in their own feed. Just look for Smologies. That's also linked in the show notes. Please spread the word on that. And we have Ologies merch at ologiesmerch.com. Thank you to Erin Talbert for adminning the Ologies podcast
Starting point is 01:13:24 Facebook group. Aveline Malek does our professional human-made transcripts. Kelly R. Dwyer does the website. Noel Dilworth is our flesh-and-blood scheduling producer. Human organism Susan Hale managing-directs the whole show. Our editor, Jake Chaffee, helps put it together. And the connective tissue lead editor is Mercedes Maitland of Maitland Audio.
Starting point is 01:13:44 And Nick Thorburn made the theme music using his brain and ears and fingers. And if you stick around to the end of the show, I tell you a secret. And this week, it's two. One is that I think I'm going to be shooting something next week, and I will tell patrons about it first, but also I'll do some posting on social media if and when it happens. I'm really excited. I don't mean to be secretive, but just send good vibes next week. I'll tell you the secret after it happens. And the other secret is, before I went to Ireland, I got a couple of those disposable film cameras, because it's like, ooh, what is this? There's film in this. And I took all the pictures
Starting point is 01:14:16 and I haven't gotten them developed yet. And I kind of feel like the longer you wait to get them developed, the more you'll like them. And so I don't know what the appropriate amount of time is to forget about this disposable camera and then get it developed, whether it should be like a couple more months, or if I should get it developed in a year. And so now I just have this disposable camera in my backpack and I don't know how long I should,
Starting point is 01:14:42 I also don't know where to get it developed, if I'm being honest. But anyway, if anyone has thoughts about that, feel free to advise me. That is a very analog update here for me. All right, fucking please do not use ChatGPT to write papers or illustrate anything important. Hire an illustrator if you can. Illustrators, writers, artists, musicians, please let them live. They are alive. Okay, be good. Bye-bye. Hacodermatology, homology, cryptozoology, lithology, nanotechnology, meteorology,
Starting point is 01:15:17 nephrology, seriology, cytology. It's the technology.
