Factually! with Adam Conover - Two Computer Scientists Debunk A.I. Hype with Arvind Narayanan and Sayash Kapoor

Episode Date: October 2, 2024

The AI hype train has officially left the station, and it's speeding so fast it might just derail. This isn't because of what AI can actually do, it's all because of how it's marketed. This week, Adam sits with Arvind Narayanan and Sayash Kapoor, computer scientists at Princeton and co-authors of "AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference." Together, they break down everything from tech that's labeled as "AI" but really isn’t, to surprising cases where so-called "AI" is actually just low-paid human labor in disguise. Find Arvind and Sayash's book at factuallypod.com/books
SUPPORT THE SHOW ON PATREON: https://www.patreon.com/adamconover
SEE ADAM ON TOUR: https://www.adamconover.net/tourdates/
SUBSCRIBE to and RATE Factually! on:
» Apple Podcasts: https://podcasts.apple.com/us/podcast/factually-with-adam-conover/id1463460577
» Spotify: https://open.spotify.com/show/0fK8WJw4ffMc2NWydBlDyJ
About Headgum: Headgum is an LA & NY-based podcast network creating premium podcasts with the funniest, most engaging voices in comedy to achieve one goal: Making our audience and ourselves laugh. Listen to our shows at https://www.headgum.com.
» SUBSCRIBE to Headgum: https://www.youtube.com/c/HeadGum?sub_confirmation=1
» FOLLOW us on Twitter: http://twitter.com/headgum
» FOLLOW us on Instagram: https://instagram.com/headgum/
» FOLLOW us on TikTok: https://www.tiktok.com/@headgum
» Advertise on Factually! via Gumball.fm
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Transcript
Starting point is 00:00:00 This is a HeadGum Podcast. You know, we're in the middle of the election season, and if you're like me, you have likely been pulling your hair out and lamenting, why does it feel like American democracy is unraveling? Well, the answer is that it is the fault of our electoral system and its winner-take-all ideology. But is this really the best that we can do in 2024? Spoiler, it's not.
Starting point is 00:00:23 For that answer, I highly recommend The Future of Our Former Democracy, the brand new podcast from More Equitable Democracy and LARJ Media. Hosts George Cheung and Colin Cole dive into the fascinating history of Northern Ireland, exploring how they reformed their political system to overcome deep divides
Starting point is 00:00:40 and ensure more equitable representation. Each episode takes a closer look at what the US could learn from Ireland's journey and how a system like theirs might help us break free from the chaos of our own elections. So take it from me, their latest episode is an eye-opening exploration connecting the rich, complex history
Starting point is 00:00:57 of Irish-English conflict with the racial tensions shaping the U.S. today. It is a compelling episode that you won't wanna miss. So don't miss out. Follow The Future of Our Former Democracy on Apple Podcasts, Spotify, or wherever you get your podcasts. With Audible, there's more to imagine when you listen.
Starting point is 00:01:15 Whether you listen to stories, motivation, expert advice, any genre you love, you can be inspired to imagine new worlds, new possibilities, new ways of thinking. And Audible makes it easy to be inspired and entertained as a part of your everyday routine, without needing to set aside extra time. As an Audible member, you choose one title a month to keep from their ever-growing catalog. Explore themes of friendship, loss, and hope with Remarkably Bright Creatures by Shelby Van Pelt.
Starting point is 00:01:43 Find what piques your imagination. Sign up for a free 30-day Audible trial, and your first audiobook is free. Visit audible.ca to sign up. I don't know the truth. I don't know the way. I don't know what to think. I don't know what to say.
Starting point is 00:02:03 Yeah, but that's alright. That's okay. I don't know anything. Hello and welcome to Factually, I'm Adam Conover. Thank you so much for joining me on the show again. You know, when it comes to AI, it can be hard to know what to believe. When AI boosters talk, they claim that AI will rapidly and exponentially improve, that it will replace everyone's job, and that it is so powerful there is a measurable
Starting point is 00:02:32 chance it will destroy the entire world. Yikes. But by now it should be clear that a lot of what the AI evangelizers say is just marketing. You know, it's easy to believe in the coming singularity apocalypse when believing in it might be your path to making billions of dollars. These tall tales and enormous valuations depend on an appeal to authority. AI entrepreneurs say, I know a lot of things you don't, here are some spooky stories and I'm so much smarter than you that you can't question me. And we the public and a lot of the press end up believing them. You know, for decades our culture and many of our reporters have bought into the lie
Starting point is 00:03:14 about the tech CEO as heroic genius. Americans have had a permanent hard-on for hero entrepreneurs in the mold of Steve Jobs or Bill Gates, and so we tend to believe anyone who's able to convincingly play that role. But as we talked about when we had Ed Zitron on the show, some of this mythology has been starting to unravel for AI specifically. The use cases for large language models like ChatGPT
Starting point is 00:03:40 so far haven't actually been that strong. They're great for coding, for cheating on your term paper if you're an undergrad, maybe for identifying an image or two, but that's about it. And we're starting to see that the environmental costs of these technologies are huge and that the notion of rapid and accelerating growth has simply not materialized over the past few years. And that is just one subsection of the world of AI. So as a lay person, it can be very difficult to tell if the claims an AI booster is making are the real deal
Starting point is 00:04:14 or if it's simply snake oil. But that doesn't mean that we, the public, can't get a better handle on this technology and what it can actually do. So today on the show, we have two genuine experts, two Princeton computer scientists who will help us do exactly that and separate AI fact from AI fiction.
Starting point is 00:04:34 But before we get into it, I just wanna remind you as always, that if you wanna support this show and all the amazing conversations we bring you every single week, you can do so on Patreon. Head to patreon.com slash Adam Conover. Five bucks a month gets you every episode of the show ad free. You can join our community discord as well. We would love to have you there. And if you like stand-up comedy, guess what?
Starting point is 00:04:55 I am embarking on a huge new tour over the coming months. Coming soon, I'm headed to Baltimore, Portland, Oregon, Seattle, Denver, Austin, Batavia, Illinois, Chicago, San Francisco, Toronto, Boston, and Providence, Rhode Island. Head to AdamConover.net for all those tickets and tour dates. I would love to see you out there on the road. And now let's get to this week's episode. Arvind Narayanan is a professor of computer science at Princeton and Sayash Kapoor is a PhD candidate there. Their newsletter, AI Snake Oil, is an essential resource for picking apart
Starting point is 00:05:29 the latest developments in AI and separating fact from fiction. And they have got a new book out based on that work called AI Snake Oil, What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference. Please welcome Arvind and Sayash. Arvind and Sayash, thank you so much
Starting point is 00:05:45 for being on the show today. Thank you so much for having us. Oh, it's great to be here. So you have a Substack and a book called AI Snake Oil, which is a wonderful title. It's really evocative, brings us back to the bad old days of the 19th century. Why did you choose that metaphor for AI?
Starting point is 00:06:02 Why Snake Oil? Let me take this one. So it started back in 2019. I started hearing about products for HR departments to use while hiring people. These are AI products whose builders claim that the software will analyze a video of a job candidate talking for 30 seconds, not even about their qualifications for the job, but just talking
Starting point is 00:06:34 about their hobbies and whatnot. And based on the body language, facial expressions, who knows what, they'll figure out the candidate's suitability for the job position. And I was like, what? That's insane. That's an insane claim to make for any product. That's what I thought. And you know, there's no evidence that anything like this can possibly work. And so I was invited to give a talk at MIT, coincidentally, and I went to say, look, this kind of thing is snake oil. And I gave people a more kind of structured, computer science way to think about when we can expect AI to work and when it can't. And it was the weirdest experience. That talk kind of went viral. It was not even the video of the talk; I put up the slides online. I thought twenty of my colleagues would see it. But yeah, based on the response to that,
Starting point is 00:07:25 I realized people really, really wanted to know more about this. And then I realized I had to actually start doing research on it and that's when Sayash joined my team and we've been doing research on this for the last five years or so. And what has your research found about, is your research on the capabilities of AI itself
Starting point is 00:07:45 or on the claims or what exactly are you studying? I think we mostly study what developers claim and we compare it against what we find when we do a more technical deep dive into it. So let me give you an example. This is a field of political science called civil war prediction. This is an entire field that's basically trying to see
Starting point is 00:08:04 when the next civil war will happen and where. And in this field, at least until a few years ago, there were papers that were claiming that we could predict where the next civil war will happen with like a 99% accuracy. So 99 times out of 100, we can actually predict civil wars in advance. And we dug deeper into it. And what we found was that in essentially all of the cases where AI or machine learning was claimed to do so much better than two decades old approaches, it was because of errors. And when we fixed these errors, it
Starting point is 00:08:35 turned out that AI did no better than 20-year-old tools that could just predict broad statistical patterns. So if the GDP of a country goes down, it's more likely to have civil war and so on. But it didn't really do anything much better than that. And is that a pattern across, you know, AI as a field? Do you see other examples of that sort of, you know, large claim that is not really backed up when you dig into the details?
Starting point is 00:09:01 So there are a few different things here. One is researchers kind of fooling themselves, right? I mean, we're computer scientists, so we know what it's like to get caught up in this AI hype. And in fact, one of my papers had this kind of area where we were not skeptical enough and we were too optimistic about what AI could do. And that was in the case of using
Starting point is 00:09:23 AI to figure out which piece of code was authored by hackers. So that kind of problem. So yeah, there's lots of researchers, whether it's computer scientists, political scientists, medical researchers, all getting caught up in the hype and making these overblown claims. And then it goes from there to companies, right? So companies are building products and what they're trying to do is just capitalize on AI being everywhere in the public's mind and whatever it is they're selling, they want to slap the AI label on it, right? And claim that it's going to be able to do magical things. And then you have the media, you know, and a lot of the time claiming that AI can predict
Starting point is 00:10:03 earthquakes, for instance, is going to generate clicks. And we've done some analyses of why it is that the media keeps falling into these traps of hyping things up as well. So it's all of those that reinforce each other. Why is that so tantalizing? Why is it that the media keeps falling into that trap of buying into the hype over and over again? I think one reason is sort of structural. So I think it's very easy to come up with clickbaity headlines, either the good ones or the bad ones, right? So either AI will save us all or AI will kill us all.
Starting point is 00:10:39 And those types of headlines, I think, get a lot of attention. But more than that, I think it's also the fact that, you know, oftentimes it's not the journalists themselves who have full control over these articles. So one issue we found, for instance, is that in most articles about AI, the cover image is that of a robot. That's even the case if the article is describing an Excel spreadsheet or a glorified version of that. Yeah.
Starting point is 00:11:03 And that is completely outside the control of a journalist. Like an individual journalist has no control over what the headline is, or what an image is, and that's decided by the editors. And the editor's primary motivation is to get people through the article. And so I think it's not that any one person in this entire chain is sort of acting out of malice or anything. It's just that the incentives of click-driven media are aligned in such a way that you want attention more than you want factuality.
Starting point is 00:11:30 Yeah, do you think that there's a problem with the term AI itself? Because I mean, for the public, right? The Steven Spielberg movie AI came out 20 years ago at this point, right? The public hears artificial intelligence. They think robot, you know? Is it possible that the entire term
Starting point is 00:11:50 is at this point sort of tainted by the public's imagination of what it can do? Cause I often think if you say AI to people, they bring in every science fiction movie they've ever seen. Like as soon as you say it, oh, I have an AI that can do X, Y, Z. They're imagining something that they saw on television.
Starting point is 00:12:07 They're not imagining what is in real life. Whereas if you use, I don't know, even large language model, right, or I don't know, general purpose transformer, whatever, use a technical term, they don't bring that science fiction version into it. Is part of the problem the terminology that we're using or? And that's exactly right. And I think there are two separate problems here.
Starting point is 00:12:26 One is the science fiction aspect. And all of our intuitions have been just so heavily molded by how we have seen AI behave in movies. And the way that AI is being built today isn't really that sci-fi version, this idea of AI having its own goals and desires and deciding what to do, that sort of stuff, you know, maybe one day it'll become possible to build AI like that. And we should make sure through regulation and other means that we don't build AI like that.
Starting point is 00:12:57 And right now, no one is building AI like that. Right. And so this is really thinking about something that might happen if engineers started to build AI very, very differently. It doesn't describe the reality. So that's the first part of the problem. And the second part of the problem is AI, or whatever you want to call it, even if you were to replace it with a different term,
Starting point is 00:13:20 machine learning or whatever, it's an umbrella term that describes a collection of loosely related technologies. And it's true. ChatGPT, large language models, these are a kind of AI or machine learning. And then there is what Sayash was calling the glorified Excel spreadsheet. And that's often what is used, for instance,
Starting point is 00:13:41 in the criminal justice system to try to predict who is going to commit a crime. Like, first of all, why are we even trying to predict that? Right, that's one of the things we talk about in the book. That's not something we should be predicting. Pre-crime is, you know... Because then we risk, what, arresting people before they've done anything.
Starting point is 00:13:59 The point is to stop crimes, to solve crimes, not to, like, punish people before they've committed them. That's right. That's exactly right. So there's a question of, should we be using AI here at all? But also, there is the fact that that kind of AI is just these very crude statistical patterns. And it's not something that's improving dramatically.
Starting point is 00:14:19 It's not something that has a trillion parameters, as it's called in the ChatGPT case, this mysterious black box that we can't understand. It's just basically a simple formula that we can look at. And when we conflate these two things, we come away with a very misleading picture of how powerful this technology is.
Starting point is 00:14:38 And so that's the other problem with this term. Because I've had this intuition that once AI sort of broke big, basically when ChatGPT came out, around that moment, suddenly a bunch of technologies that previously were just called an algorithm, or any other sort of technology, were suddenly labeled as AI. I think about, you know, Spotify saying we have an AI DJ, where I'm like, well, hold on a second, Spotify used to choose songs for me algorithmically. We would call that the algorithm, we often still do, but now they call it AI. And they've added like a little bit of extra polish on top of it to make it seem like a different feature. But it seems like a lot of that conflation
Starting point is 00:15:18 has been purposeful on the part of the companies. That's been my intuition. You're telling me that it's really happening. Yeah, I think that's absolutely right. In fact, we've even seen instances that go one step further. So you have these companies that are selling AI, but what's really happening under the hood
Starting point is 00:15:33 is that they're contracting out to Mechanical Turk, or they're contracting out to people who are actually doing the work behind the scenes, and claiming that it's AI doing it. That's insane. Like literally they're saying AI is doing it, and there are very low paid people somewhere on the internet just classifying these images one by one.
Starting point is 00:15:50 Exactly. And I mean, I should say that it's not entirely malicious. It does have a very misleading effect, and companies shouldn't be doing this. But the way that companies kind of go down this rabbit hole is they start out by first having people do it so that they can collect the training data and then automate it.
Starting point is 00:16:10 And then they find that it's just cheaper to let people continue doing it instead of paying a machine learning engineer half a million a year, or whatever it is, to then come in and try to automate that. I mean, this is something that we return to so often on the show, that so often it is actually cheaper for these companies to mistreat humans than it is for them to, you know, fight. Like the classic example is self-driving Ubers. Which is more likely: Uber is going to manage a fleet of millions of high tech self-driving cars that they own and have to,
Starting point is 00:16:43 you know, maintain and that they're liable for, or could they continue paying human drivers less than minimum wage to show up with their own cars, buy the gas themselves, insure the cars themselves, and essentially lose money driving? Which one is cheaper for Uber to do? Clearly the one that misuses humans, rather than the one that takes an immense
Starting point is 00:17:02 capital expenditure on their part. That's just like the logic of capitalism, but so often we forget it. And the tech people seem like they're specifically trying to obfuscate and obscure that from us. I think that's right. Yeah. One of the sort of main things we discuss in the book is also that often when tech creates a problem,
Starting point is 00:17:21 tech also tries to solve this problem. And that's actually responsible for many types of AI snake oil that we've seen. So in particular, one thing that comes to mind is ChatGPT making homework cheating a big issue for teachers. So teachers all of a sudden across the country, across the world, in fact, have to sort of rush to change their teaching patterns, have to rush to modify their syllabi. At Princeton, like, at a well-funded institution, we had the time and space to redevelop our curricula around ChatGPT, but that's actually not the case for the majority of teachers, even in the US. And so what we've seen now is this whole slew of AI detection startups that
Starting point is 00:18:01 claim that they can detect when a given student's essay has been generated using AI. The problem, though, is that there is no known technical way to do this reliably. And when people have looked at what gets classified as AI generated, they've found that non-native speakers, for example, have a much higher likelihood of being classified as cheating on their essays or turning in AI generated texts.
Starting point is 00:18:24 And so this is another example of, I think, tech companies externalizing the costs of running their business. Because in this case, making ChatGPT available to the broader public was a business decision that OpenAI made, but they didn't have to incur the costs that this imposed on teachers all over the country, on all sorts of workers all over the country. Right, instead it's another opportunity to make another product and sell something for more money.
Starting point is 00:18:51 ChatGPT doesn't entirely work, but it does cause a problem. And so then they can sell another product that doesn't entirely work to try to fix that problem and make money on both ends. I know these are different companies to a certain extent, but like, you know, the tech industry at large. Folks, our partner for this week's episode is Delete Me. It's a service I have been using for ages
Starting point is 00:19:14 and I am so excited to tell you about it. How much of your personal info do you think is floating around online? Maybe you've got a public social media profile or an email out there, but that's it, right? Wrong. There is actually a mountain of your personal data being bought, sold, and traded by data brokers.
Starting point is 00:19:31 Stuff you never meant to be public, like your home address, phone number, and even the names of your relatives. Anyone with an internet connection and some bad intent can dig up everything they need to make your life miserable. Recently, we've seen an uptick in online harassment, identity theft, and even real life stalking,
Starting point is 00:19:48 all because of this easily accessible information. You know, a couple of years back, I became a target of harassment for people who found my details online. So I signed up for Delete Me, and honestly, it is one of the best choices I've ever made. Their team of experts works tirelessly to hunt down our information, remove it, and keep it gone.
Starting point is 00:20:06 You, your family, and your loved ones deserve to feel safe from this kind of invasion of privacy. So do yourself a favor, check out Delete Me, not just for your security, but for your friends and family too. And guess what? You can get 20% off your Delete Me plan when you go to joindeleteme.com slash Adam
Starting point is 00:20:23 and use promo code Adam at checkout. That's joindeleteme.com slash Adam and use promo code Adam at checkout. That's joindeleteme.com slash Adam, promo code Adam. So in video games, there's this thing called min-maxing. I'll spare you the technical definition, but it's all about putting your effort in the right place to get the optimal results. Well, you know, I found that kind of thinking helpful in real life too. I used to think of shopping for groceries, picking up dog treats, researching products that are gluten-free for my partner, and being mindful of taking care of my health as separate discrete tasks. But when I realized that I could get all of these done
Starting point is 00:20:53 just by shopping at Thrive Market, it felt like I'd found a way to game the system. Now I do all of my grocery essential shopping with Thrive Market. I get my jovial gluten-free pasta for my partner, shameless pet's dog treats for my dog, all the health conscious goodies I like to enjoy for myself, and I have them delivered straight to my doorstep. And as a Thrive Market member, I save money on every single grocery order. On average, I save over 30% every time.
Starting point is 00:21:18 They even have a deals page that changes daily and always has some of my favorite brands. Best of all, when you join Thrive Market, you are also helping a family in need with their one-for-one membership matching program. You join, they give. So, join in on the savings with Thrive Market today
Starting point is 00:21:34 and get 30% off your first order plus a free $60 gift. Go to thrivemarket.com slash factually for 30% off your first order, plus a free $60 gift. That's t-h-r-i-v-e, market.com slash factually. I wonder if part of the problem here with AI technologies, or again, we're talking about a big bundle of technologies here, but it seems like part of the problem with these technologies
Starting point is 00:22:06 is that they always provide an answer whether or not that answer is correct. In a lot of these examples that you've talked about, detecting whether or not the content is AI generated, detecting whether or not, you know, a person is about to commit a crime, this is a system that gives you an answer, yes or no, and if all you want is the existence of an answer
Starting point is 00:22:28 and you don't care about really how good it is, this product will give it to you. That, you know, to me seems to be the core feature of ChatGPT. If I ask it a question, it will provide me an answer whether or not it is good. And that seems like on the face of it, kind of weird and dangerous.
Starting point is 00:22:44 I'm curious if that plays a role for you at all. I think that is such an astute observation. It's music to my ears that you brought that up. We've done so many podcasts and I don't think that's ever come up. So this is something that goes back decades. So today's AI and machine learning, in a sense, came out of statistics, right? And one of the interesting features of statistics
Starting point is 00:23:07 as a field of inquiry is that it has the self-discipline to say, OK, this data is actually not suited to solve this problem. So you can't build a statistical model for this. Or we tried building a model, and then we did some tests to check if it's a good model. Nope, it's not a good model. So we can't answer this question for you.
Starting point is 00:23:25 Go look somewhere else. And in machine learning culture, that just doesn't exist. I remember being back in graduate school, and the way we were always taught how to do machine learning is you throw a model at a problem, and it spits out an answer. It might not be a great answer, but there is always an answer. So it's exactly the thing that you pointed out.
Starting point is 00:23:47 And so it's that cultural thing. It doesn't have to be designed that way. ChatGPT doesn't inherently have to be designed that way. There's a process called fine tuning, and it would be pretty straightforward to fine tune ChatGPT to say, oh, you know, this is a medical question. I'm not confident of answering that question.
Starting point is 00:24:04 Go look up, you know, a medical website. And actually, they were forced to do that for election questions because many chat bots kept giving election misinformation, and many election authorities got very worried about that. And so it's easy to do if they want to do that. But it's this deliberate choice not only to always give an answer unless they're required
Starting point is 00:24:26 by law not to give an answer, but also for those answers to be written, so to speak, by ChatGPT in this very persuasive, authoritative, confident way. And when people read that, it just, you know, it tricks us, right? It's the kind of style that we see in a textbook or an encyclopedia or another source. And then we kind of lose that natural skepticism that we have. Yeah.
Starting point is 00:24:49 It seems like there is a desire on the part of not just the public to get those clear answers, but also the executive class that like the executives sort of want to believe that there is a wonderful algorithm and AI system that will do the job of a person. And all they look for is that it will output an answer. You know, is this potato good or bad or whatever? They don't really care about how many good potatoes are thrown away by accident in the, you know, potato processing plant.
Starting point is 00:25:19 They're like, oh my God, something that will give an answer yes or no, that's good enough for me. And they don't care about the details that much. And so a lot of times it seems like, in addition to the public sort of being fundamentally gullible, as well they should be, the public, you know,
Starting point is 00:25:36 doesn't have the ability or the time to like question every piece of technology that's put in front of their face with a big claim because we have busy lives. But also the people who are making the decisions about whether or not to purchase or deploy this software are not only easily fooled, but all they want is that answer and they really don't give a shit.
Starting point is 00:25:54 Does that seem accurate to you? Absolutely, yeah. So one of the claims we also sort of discussed a little bit is that AI snake oil is appealing to broken institutions. So when you have an institution that is unable to function like it should, when it is, like, overloaded, for example, in the case of the HR departments, you have, like, thousands of jobs often, or thousands of applicants per job. And so for that type of an institution, for an HR person who's put in that position, it's really easy to sort of say, oh, I'll let AI sort out who the top 10 applicants are and just
Starting point is 00:26:30 interview them. It doesn't really matter. And once again, the cost is imposed on the people applying for jobs. And what's even more sort of harmful in this case is that now we have, I don't know, hundreds of companies relying on the same AI tool. And so if you are a job seeker and you apply to these hundreds of companies and the algorithm doesn't like you for some reason, you're rejected by all of them. You're not really in the position where, you know,
Starting point is 00:26:55 you might be rejected by some, you win some, you lose some, that's not really the case anymore. You basically have this homogenous outcome. You get like this market where every single company will treat you exactly the same way. And that can be really disheartening. And I can now, I can imagine that world where hundreds of companies are all using the same AI to process, you know, millions of applications. What's the next thing that will happen?
Starting point is 00:27:18 You'll have people making YouTube videos, how to trick the AI into getting you that interview. You'll have people reverse engineering how the AI works to say, oh, if you put XYZ in the algorithm, the AI really likes it this week. The same way people do with the YouTube algorithm, to great effect, you know. MrBeast built an entire career on YouTube based on gaming that algorithm and like figuring out what it wants this week. And so it feels
Starting point is 00:27:45 like again, we're replacing the system that we currently have with something that is stupider or more discriminatory, but also more manipulable, in a strange way. Is that the case? I mean, in a way I kind of like it when this happens because it shows that the existing system was already bullshit. So my favorite example of this is when someone uses ChatGPT or whatever and they type in a set of bullet points and then they say, oh, you know, turn this into a nicely formatted business document that's three pages long, right?
Starting point is 00:28:16 Because that's what you need to submit in a business context. And then the person at the other end of it, they don't have the time to read that. They put it into ChatGPT and ask for three bullet points. Right. Right. And the reason this is happening is that so many of our rituals in the business world are bullshit, right? They don't have to happen. So maybe, you know, when AI comes in and further messes it up, we can use it as an opportunity to reflect on maybe we can change the system now. Yeah.
Starting point is 00:28:40 I mean, the only situations in which people say, hey, it could be helpful to me to use ChatGPT to generate some text are cases in which the text is not really important, or is not being used, or no communication is happening. No one is going to actually use this. And you know what a good example is? Instagram has crammed a large language model into almost every part of the messaging feature.
Starting point is 00:29:10 And so when I'm typing a message to a friend, this little icon appears that allows me to write a message to my friend using their version of ChatGPT, whatever it's called. And I cannot imagine doing this. I'm DMing a friend about their Instagram story or whatever, or like, what do you want to do tonight, et cetera. I'm gonna ask a large language model
Starting point is 00:29:32 to help me compose a text saying, hey, do you want to go out for a drink, or maybe I'm trying to flirt with them, or whatever I'm doing. This is the most intimate kind of communication. What I say to this person actually matters to me. That's a situation in which I would never, like, let an AI communicate for me,
Starting point is 00:29:49 nor do I need help to do it. The only places we do need an AI's help is the places where we don't give a shit what the text actually says, right? Because we're firing off, oh my, I have to write a cover letter for this job I'm never gonna get, fuck it, I'll let ChatGPT do it. So it's only useful when the output is useless to everyone.
Starting point is 00:30:10 That's interesting. I mean, I do think ChatGPT is, or can be, useful in a constrained set of settings. So one example is when it's easier to verify that something is correct than it is to actually write it out. So for example, if someone wants to create a website from scratch, they can look at the website.
Starting point is 00:30:31 They can verify it's all good. But they don't need to be a programmer anymore to create that website. So there are these situations where I do think ChatGPT and large language models have allowed us to do things we couldn't do before. The issue, though, is that these things are obfuscated by all of this hype that surrounds language models.
Starting point is 00:30:50 And companies are desperately pushing AI into each of their products in basically a bid to find out what works. I think the core issue is that companies have bet billions of dollars on this technology, and they're desperately looking for payoffs right now. I think according to one estimate I read recently,
Starting point is 00:31:10 tech companies in total are planning to spend around a trillion dollars building AI and AI related data centers and so on. And so it better start paying off fast. And I think that's some of what we also see right now: this bid to find a product market fit, this bid to find where people will actually pay for AI. Yeah.
Starting point is 00:31:30 Are there any avenues that you think, you know, could be a successful market? Because you raised the issue of computer programming, or coding of any kind, using ChatGPT. That is an area where I've read enough blogs by programmers to know that yes, it actually can be a labor saving device for a programmer. It can help them write code as an assistant. This is a professional using a computer, and they can use ChatGPT or another large language model to help them use the computer more quickly
Starting point is 00:31:59 and more efficiently. I think another example is generative AI in an app like Photoshop, you know, to help an artist create an image. I can see that as a labor saving device. However, these are not revolutionary uses. This is still the same white collar professional at the same computer doing the same work,
Starting point is 00:32:19 but a bit faster or, you know, having a bit more ability. Are there any other use cases I'm not thinking of on the horizon that you are actually excited by? I do want to give the technology its due here. Yeah, I can tell you some of the things I use generative AI for. And I'm not going to claim that it's revolutionary,
Starting point is 00:32:40 but I think there are ways in which it's kind of genuinely fun and fulfilling. And to me, the best example of that is when I use AI with my kids. They're five and two. And when we're going on nature walks, for instance, they often want to know what's that tree or what's that bird.
Starting point is 00:32:58 And I have no idea. I mean, it could be a pigeon, and I don't know what it is. But what I can do is actually just take a picture of it with ChatGPT and it will tell me, you know, not only what species it is, but also its migratory patterns or whatever. And all I have to say is I'm here with my kids, and it knows to actually, you know, speak out its response in a way that's appropriate for a five year old, in a way that a five-year-old can understand. These are all... I don't know, I think from the perspective of a kid growing up now, it's really wonderful in many ways if there's the right parental guidance around using these
Starting point is 00:33:37 technologies in a fun and educational way. As opposed to that, there's also a lot of addictive potential, but I think there's a lot of agency that we can exercise. I think there are a lot of good uses, but it's definitely easier to find the bad uses than the good ones. I mean, yeah, you know, I'm a bird watcher, and the main bird identification app that I use added a sound ID feature about a year and a half ago that allows you to identify bird calls by sound. And they don't really call it AI. I don't know what the technology is, but it's something in the wheelhouse of what we're talking about, clearly. It sort of makes a spectrograph,
Starting point is 00:34:16 I believe you call it, of the sound, and it analyzes it visually to find the bird call. And it has revolutionized bird watching for me. I did bird watching before, and then afterwards I'm able to identify so many birds, even if I can't see them, 'cause I can use that bird call to get a quick ID and then look for the bird and try to verify. And that is a really great use. And so I don't want to imply that none are there, but it seems that the problem as always
Starting point is 00:34:41 is not really the technology, it's the people and how the technology is being sold and overpromoted in this really absurd way. I mean, I saw a headline the other day that the Kamala Harris team chose to do debate prep against a human Trump impersonator, as has been done for decades, instead of using an AI, which had been promoted to them:
Starting point is 00:35:04 some AI company came to them and said, you should debate AI Trump to practice for the debate. And they were like, ah, no thank you. And I can just imagine the sort of sweaty AI salesman being like, oh, this is even more like Donald Trump than Donald Trump is, oh, it's gonna revolutionize debating. And it's sort of like, you know,
Starting point is 00:35:22 you imagine these guys go, almost like snake oil salesmen, right? Like this dog and pony show going around trying to, no matter what you're trying to do, sell you an AI that can do it for you, no matter how ill-suited it is to that task. Yeah, exactly. And like one of the things we've also been very, very concerned about in this sphere is AI that's used for text-to-image applications. So you put in a piece of text, it outputs an image.
Starting point is 00:35:47 And what this type of AI has been overwhelmingly used for in the last two years is non-consensual deepfakes. So this is non-consensual AI-generated images, primarily of women. Basically, there's an epidemic right now in schools. There's an epidemic of AI-generated deepfakes, again, mostly of girls. And I think this is horrifying.
Starting point is 00:36:09 And the fact remains that, like, this is possible to do now on an iPhone; you can generate these images on extremely low tech devices. And so this is the sort of misuse that is so much easier to do than actually creating genuine artistic expression using AI. And when misuse is so much easier compared to actually using some piece of technology for good, I think this is what is bound to happen.
Starting point is 00:36:32 At least in the early stages of a technology, you know, you find like a million bad uses for every good one. And then it's really hard to wade through it to figure out what actually works. I mean, clearly debating an AI version of Donald Trump is not it. And so I think we really need to up our standards for what it is that we actually want to do with this technology, which is, in some ways, pretty remarkable, but also very, very easy to misuse. Yeah.
Starting point is 00:36:58 And I want to underline that I feel that the technology is remarkable as well. I remember when ChatGPT came out, I had so much fun. Like, this is unlike any piece of software I've ever used. It's really fun to play with. And it sort of, you know, turned on that light for me that I've felt for computers ever since I was a kid, of like, oh, look at this wonderful playground, this new thing I've never been able to do before.
Starting point is 00:37:21 But at the same time, yeah, these companies aren't thinking about what can be done with the software. I mean, The Verge had a great piece over the last couple of weeks about the new Google Pixel phones that allow you to sort of edit using text to image any photo. And the problem is not election misinformation
Starting point is 00:37:41 or any of the normal things we hear about in the press. It's that you could take a photo of a person you know and put like a syringe in their hand to make it look like they're doing drugs. Or you could take a street scene and put a fake car accident in there. Or you could use it for, and you look at this and go, oh, this will be used for all sorts of petty crimes,
Starting point is 00:38:01 right, that we will never even hear about, that will be done with this. And you look at it and go, did they not think through what they released? Like what it lets you do is a little bit alarming. I'm not saying it should be banned before it came out, but it shows you that there's a lack of thought on the part of these companies
Starting point is 00:38:22 of what this software could be used for. Do you... it sounds like you share that concern. Oh, totally. Yeah. I think, you know, AI companies fooled themselves into thinking that AI is so general purpose and so magical that they didn't have to build products with it anymore. They could just put AI, you know, into people's hands. And because you can just talk to it and tell it what you want to do, it's just a product all on its own.
Starting point is 00:38:49 And that's, of course, not the case. But they're also in a bit of a bind. You could imagine Google Pixel putting guardrails into it so that you couldn't put syringes in a person's hand. But that's going to have so many false positives. In addition to syringes, it's also going to block a lot of things that you didn't mean to block. And people are going to get mad. And that's happened many, many times.
Starting point is 00:39:09 There have been so many outcries about censorship. So yeah, it's not clear exactly how companies can find that line here. And if I can just geek out for a second... Please. The way this happened with previous technologies that were really new, where people didn't know exactly what you could do with them, is that some startup comes up with it and a hundred people start playing around with it, right? And they figure out the use cases
Starting point is 00:39:35 and then it gradually grows from there, and it takes 10 years to put it into the average person's hands, and by that time you've figured out how you can make products with it that are useful. What's so different about generative AI compared to previous technologies is that the big tech companies decided to go all in, and as Sayash was saying, they've invested or are on track to invest something like a trillion dollars into it. So they had better get billions of people using generative AI on a daily basis if they're going to be able to justify those investments. And that's really the conundrum they're facing. So how much of this is driven by the investment environment?
Starting point is 00:40:12 I know you guys are researchers, so it's a little bit... I don't want to go too far afield here. But you know, when we had Ed Zitron on the show a month or two back, we talked about how all of these tech companies
Starting point is 00:40:31 The iPhone was this incredible leap forward in technology that changed the world. And we went from zero people owning iPhones to billions in a matter of years. And so that creates this incredible valuation for Apple. Similar things happen to Microsoft and Google, but those were the low-hanging fruit products, Search, Windows, operating system, iPhones.
Starting point is 00:40:53 Now they've sort of run out of those innovations and AI is the new thing that looks like they can get everyone excited about it. And so in order to keep their valuations, to keep that growth, they've gotta pump it out. They don't really have a choice because, you know, that is the financial pressure being put on them is to always deliver the way they did a couple years ago
Starting point is 00:41:12 in the past, when the reality might be, hey, maybe the iPhone was a one-and-done kind of thing. Like, you're not gonna have another iPhone, buddy. You know, that was one moment. And maybe you're not gonna be the biggest company in the world forever, but they are sort of instead forced to keep up the pretense. Is any of that tracking for you? Yeah.
Starting point is 00:41:31 And I think, to be honest, it's not really a pretense. I think many people at these companies genuinely believe that they're on track to build, essentially, AI that can replace workers, period. It can replace labor. It can basically automate anything that a human can do. It's called AGI in these circles, artificial general intelligence, which is the term for,
Starting point is 00:41:56 I mean, it has a number of definitions, but most commonly, I think, anything that can automate every aspect of human labor. I think if you genuinely believe that you are on track to build something like that, then no investment is too big. A trillion-dollar investment in an infinitely labor-saving device is basically seen as a really cheap way to get there. Then you also have the dynamics of the people making the shovels.
Starting point is 00:42:24 NVIDIA, in this case, is, I think, the company that has profited the most from this AI boom. And that's because they create the devices on which all other companies train their models. So I think Nvidia has like trillions of dollars of market cap at this point, primarily because they're supplying the other companies with the tools they want to invest in. Yeah, they're selling the weapons during the war. Yeah.
Starting point is 00:42:46 Yeah. Folks, today's episode is brought to you by Alma. Look, life is full of challenges. Even the best of us can feel bogged down by anxiety, relationship issues, or the weight of major life transitions. And, you know, going it alone
Starting point is 00:43:01 is not really a great strategy. I cannot recommend it as someone who's tried a little bit too much to do so. What I can recommend is finding someone who truly understands what you're going through to help you through your tough times. And if you're thinking about getting some licensed expert help to navigate your own challenges,
Starting point is 00:43:18 I really recommend giving Alma a try. Therapy can be an incredibly effective tool to help you get through your day to day. But you know what? I know from personal experience that it is so much more effective when you find someone who feels like they are truly hearing and understanding you.
Starting point is 00:43:33 Getting help is not just about having any therapist. It's about finding the right therapist for you. And that is exactly what Alma helps you do. You can easily browse their directory and filter to find a caring person who fits your needs with preferences like gender, sexuality, faith, and more. Alma is also designed to help you find a therapist who accepts your insurance.
Starting point is 00:43:55 Over 95% of therapists at Alma take insurance, including Aetna, Cigna, UnitedHealthcare, and others. People who find in-network care through Alma save an average of 77% on the cost of therapy. And getting started is effortless because you can browse the directory without having to create an account or share any payment information.
Starting point is 00:44:14 Plus, you can book a free 15-minute consultation call with any therapist you're interested in. It's a perfect way to see if they're the right fit for you so you can find someone you really click with. Alma can help you find the right therapist for you, not just anyone. So if you wanna get started on your therapy journey, visit helloalma.com slash factually to get started
Starting point is 00:44:36 and schedule a free consultation today. That's helloalma.com slash factually. Folks, our partner for this week's episode is Delete Me. It's a service I have been using for ages and I am so excited to tell you about it. How much of your personal info do you think is floating around online? Maybe you've got a public social media profile
Starting point is 00:44:56 or an email out there, but that's it, right? Wrong. There is actually a mountain of your personal data being bought, sold, and traded by data brokers. Stuff you never meant to be public, like your home address, phone number and even the names of your relatives. Anyone with an internet connection and some bad intent
Starting point is 00:45:13 can dig up everything they need to make your life miserable. Recently, we've seen an uptick in online harassment, identity theft and even real life stalking, all because of this easily accessible information. You know, a couple of years back, I became a target of harassment for people who found my details online. So I signed up for Delete Me,
Starting point is 00:45:31 and honestly, it is one of the best choices I've ever made. Their team of experts works tirelessly to hunt down our information, remove it, and keep it gone. You, your family, and your loved ones deserve to feel safe from this kind of invasion of privacy. So do yourself a favor, check out Delete Me, not just for your security, but for your friends and family too.
Starting point is 00:45:51 And guess what? You can get 20% off your Delete Me plan when you go to joindeleteme.com slash Adam and use promo code Adam at checkout. That's joindeleteme.com slash Adam, promo code Adam. Well, let's talk more about AGI and these giant claims. Because Arvind, you said a few minutes back, and it was something I wanted to return to,
Starting point is 00:46:19 that no company is building this sort of, like, you know, science-fictional version of artificial intelligence, the movie version that can do anything and imagine anything. And yet, if you watch the interviews that Sam Altman does on, I don't know, the Lex Fridman podcast or whatever, all they talk about is the science fiction version. These CEOs who run actual tech companies are going out
Starting point is 00:46:47 saying, well, the product I'm making could destroy the world. And so we need to think hard about that, right? Or whatever. They're going to the Isaac Asimov version right away, as though they're Hari Seldon from the Foundation novels saying that, you know, oh, the chance of humanity being wiped out is, well, I've calculated it and it's three and a half percent or whatever the fuck. And so, like,
Starting point is 00:47:08 let's talk about that gap. You say that no one is building that type of AI currently; they're building the smaller-scale version that we've been talking about. And yet they're going out there telling the public and telling lawmakers and telling the press that that is what they're doing. So what is up with that gap? I mean, even according to OpenAI's own internal communications, there was a document, I forget if it was leaked or they made it public, but they had a five-step ladder, which is kind of a decent way, I think, to think about AGI.
Starting point is 00:47:38 And the first step on the ladder is just kind of generating text. And then it's reasoning. And then there's AI agents, and then something else, and then eventually you get to AGI. So even according to OpenAI's own internal assessment, they're on step one, the first step of that five-step ladder. And it reminds me of self-driving cars, where there were prototypes 20 or 30 years ago.
Starting point is 00:48:02 But then when you want to get to higher levels of automation, when you want to rely more and more on the car itself, as opposed to the human in the car, it takes decades of putting those cars actually out there on the road and collecting data and making sure it's safe enough to deploy. And I think what companies are, to some extent,
Starting point is 00:48:24 fooling themselves about, but are definitely fooling the public on, is how hard it's going to be to climb that ladder. And this kind of fooling oneself goes back all the way to the beginning of AI. So back in the 50s or whatever, when they built the first computers, they genuinely thought that they were just, again, like a couple of years away from AI or AGI or whatever we call it now.
Starting point is 00:48:49 And that's because the way they were thinking about it was, well, AI requires two main things, hardware and software. We've done the hard part, hardware, and now there's just the easy part left, right? Software. And in a way, that attitude still persists. It's kind of like we're climbing a mountain, and from where we are, we can only see the 10% of the mountain closest to us. And we think that once we climb that, we're done,
Starting point is 00:49:14 but then when we get to it, we see the rest of the mountain. And I think AI developers continually underestimate what it's going to take to get to greater and greater levels of usefulness. Yeah, it seems like, you know, we're talking about science fiction. The science fiction of the 50s was also about artificial intelligence. And when you say this, it makes me realize, ah, yes, because it was based on the claims made by the artificial intelligence boosters of the time.
Starting point is 00:49:42 Exactly. But then they were not able to do what they said they could do. There was the AI winter boosters of the time. Exactly. But then they were not able to do what they said they could do. There was the AI winter and all of that. It went into a deep, you know, sort of deep AI recession. But now it's back and once again, we are talking about, oh, it's right around the corner. It's right around the corner.
Starting point is 00:49:56 It's about to happen. It's about to happen. So, I mean, people come at me a little bit for having a skeptical point of view, but when you look at the arc of history, it seems like skepticism is warranted, or is the thing that we've always had
Starting point is 00:50:11 not quite enough of when it comes to these claims. Absolutely, and I think there's a selection bias in terms of who ends up working at AI companies, right? So it's the people who self-select, who genuinely believe that AI or AGI or whatever is around the corner, who end up in these positions where their contribution is to move us towards AGI. I think that's also why we've seen so much exaggeration.
Starting point is 00:50:34 Any single time we make even the most trivial advance in AI, it's seen as one step closer to AGI, potentially in three to five years. I think there's a running joke in the AI community that AGI is just always three to five years away. And that has been true at least over the course of my career, the past 10 years that I've been working on AI. And it's also been true of self-driving cars, by the way, and plenty of other technologies: always a couple of years away. Cold fusion is always a couple of years away. I mean, this is, you know,
Starting point is 00:51:09 any big, massive advancement you want to talk about, it seems like that is always the threshold. But what is odd about AI is that you have people out there making claims that we need to do X, Y, Z because it's about to happen. You have people saying we need to pause AI development because we're about to have AGI, or we need to accelerate AI development because otherwise China is gonna get it first and they're gonna have an AGI that's going to destroy us or whatever.
Starting point is 00:51:35 And they're not making these claims, you know, speciously; they really mean them. They're making them before Congress. They're making them, you know, to the public and the press. So what do we make of these claims? Do these people really believe them? I think they do. And I think it goes back to Sayash's self-selection point. And, you know, for all of our skepticism on AI, the reason I got into computer science 25 years ago or whatever is because, in some sense, I really believe in the potential of AI as well. I mean, I might not believe it's three to five years away, but coming back to self-driving
Starting point is 00:52:13 cars, I do think that certainly in our lifetimes, and I don't want to give a specific timeframe, they're going to be on every road, and it's going to make our lives massively better, and it's going to cut down on the one million fatalities per year in car accidents that we have around the world. And it's those kinds of things that motivate me. And so basically, everyone working on AI is part of a highly self-selected community, just like you might find with any kind of mission,
Starting point is 00:52:44 whether you're interested in stopping climate change or whatever it is, right? So, yeah, it's this very self-selected group of people, and it's very easy to kind of get into cult dynamics here. And I think that happens sometimes, unfortunately. It's a very self-reinforcing echo chamber. But it's important to keep in mind that there are many top AI researchers, perhaps the majority (it's really hard to measure this), who think that these fears are really overblown as well.
Starting point is 00:53:15 And instead of deferring to authority, whether it's the AI boosters or the AI doomers or the skeptical AI researchers, I think we as a public and policymakers specifically should have confidence in our own critical thinking abilities. You don't have to understand the math behind AI to be able to reason about how the technology might impact humanity. And I think there is a much larger group of people than technical AI researchers who are qualified to really think deeply about what can we learn from history, what can we
Starting point is 00:53:51 learn from other economic transformations, other technologies, how to project this forward into the future, and how we can make policies that are robust to the different kinds of futures that we can't really predict all that accurately. And so the main thing I would push back on is not even necessarily the doom narrative; it's the idea that we should give any deference to what AI companies or researchers are saying.
Starting point is 00:54:14 Right. I love that, because these companies love to go before Congress and say, well, here's what my software is going to do, it could destroy the world, so you better do what I say. They claim the mantle of authority, and I think our lawmakers and people in the press
Starting point is 00:54:33 pay way too much deference to them, and they should instead have some skepticism and be like, well, prove it to me. Right, like I'm not just gonna believe you. Sam Altman is just some fucking guy who found his way into running a tech company, you know? It's like, the world is just some fucking guy all the way down. Everyone's just some person, you know?
Starting point is 00:54:52 And the idea that these people are geniuses that we need to defer to has struck me as ridiculous. But I wanna move off of, you know, making fun of them and being skeptical about their claims; we've been doing that for about 45 minutes now. When you think about AGI or the threat of disruption posed by AI in the future,
Starting point is 00:55:14 how do you actually think about it? What is the sort of responsible middle path here when we're trying to conceive of it and how it could actually change things? Yeah, I think that's a great question. We've been on the wrong side of it for the last few waves of technology. For social media, for example, we were way too late to understand the harms
Starting point is 00:55:36 posed by social media platforms, and we were way too late in terms of pushing back on the Facebooks of the world. I think for AI, we have some time, and we have a lot of people who are already familiar with what the technology can do right now. So for all of the ills of OpenAI releasing ChatGPT into the world, one of the positives that came out of it was that everyone has access to this technology. Everyone can play around with it. They can probe it to see what limitations AI has today. They can probe it to see how well it does. One of my favorite examples is that
Starting point is 00:56:14 ChatGPT is an expert on everything that you're not an expert on. But as soon as you ask it questions about things you are an expert on, you start to see the holes. When people actually experience these technologies, I think that's a big plus in terms of understanding the vulnerabilities, understanding what harm it can do. And I think if we stop paying deference to what AI companies are telling Congress and what AI companies are telling the public, it's somewhat easier to rely on past technological transformations.
Starting point is 00:56:45 So we can look at, for instance, social media platforms, how they completely changed how we communicate online, and see what happens if we extrapolate it to AI and to, let's say, your own personal AI that's making decisions on your behalf, small decisions at first, like what to set your thermostat at, or when to order groceries, and so on. What happens if we extrapolate from here to AI systems making decisions for more and more consequential things? I think claims of AGI aside, this is actually quite probable.
Starting point is 00:57:16 And so the sorts of protections we need when it comes to pervasively used AI are maybe pretty similar to what we might think we need for social media platforms. Because you have all of a sudden this type of technology that is essentially acting on our behalf, that is mediating all of our conversations. And so maybe what we want, for instance,
Starting point is 00:57:39 is for these agents to be private. Maybe we don't want all of this data to be publicly available. Maybe we don't want all of these agents to be owned by the same company. I think that's the extrapolation that does not require technical expertise, beyond the fact that you see we're moving in the direction of using AI for these small things at first and extrapolating from there. But it does require us to think about what has gone wrong in the past when we've relied on technology to mediate our conversations and, essentially, commerce, all of these activities. Right.
Starting point is 00:58:12 And I like that answer because there are people at the center of it, you know, because what you're talking about is, do we want these agents to all be run by the same company, which is run by people? And that's a question of corporate governance over our lives. It's not, you know, scary technology first. It's saying, okay, the technology is gonna change things, but how do we want people to be involved with that? It also reminds me of, you know,
Starting point is 00:58:35 the example of the photo editor on the Google Pixel phone. The question there is, okay, let's think about what an unscrupulous teen might do with this. Let's not worry about AGI right now. Let's worry about, you know, what is the potential for harm and abuse by people, which is the same question we have only lately been asking ourselves about social media, for example. People are at the center of it. So I really like that answer. I want to ask you about the pace at which AI is improving. Because I remember that around when ChatGPT came out, this happened a lot during the writers' and actors' strikes.
Starting point is 00:59:17 You had a lot of people saying that, well, it's not that useful right now, but to write, say, a Hollywood script or to, you know, recreate the performance of a Hollywood actor. Oh, but everything is going to get so good so quickly. This is just the beginning, and in a couple years, it'll be ten times better than that, right? And I gotta say, it's been close to two years now since ChatGPT came out. It's not massively better than it was.
Starting point is 00:59:42 It's still outputting kind of the same output that it was two years ago. They've got the new audiovisual version, but that's a new interface to the same large language model, at least based on the demos I've seen. Literally just ChatGPT, the product that people have had the most experience with, has not radically improved in my experience.
Starting point is 01:00:02 Maybe you can correct me on that. But it has raised a question for me about the idea of scaling these systems. Like, can they really get that much radically better, or are we close to the limit right now? And where is the limit, if there is one? Yeah, I really love this question as well. Thank you for bringing that up.
Starting point is 01:00:23 So I don't think we're close to a major limit. But at the same time, I do think in many ways, people overestimated how quickly things are changing. And I think there are a couple of fallacies that led to that. One, we have a chapter in our book on this, and the theme is that ChatGPT is built on a series of innovations that go back 80 years, literally to before physical computers
Starting point is 01:00:48 were even built. That's when the mathematical idea of neural networks was actually invented, back in 1943. And you know, that's been gradually improved to the point where you got ChatGPT, right? So from a computer scientist's point of view, a lot of these improvements have been happening actually very gradually,
Starting point is 01:01:03 but what happened with ChatGPT was that it was literally the first time, or one of the first times, that it became a useful consumer product. Before that you had AI, you know, doing all kinds of things in warehouses or, you know, optimizing shipping routes or whatever, right? The business world was very used to AI, but you never heard about it every day in the news. But as soon as ChatGPT came out, every little thing that's happening in AI is now a news item. So that gives you a very inflated sense of how quickly things are changing. The second thing is that
Starting point is 01:01:36 GPT-3.5 is the version that ChatGPT initially came out with. A couple of months later, GPT-4 came out. That was in fact a significantly improved version. It wasn't making a lot of the kinds of mistakes that it was making before. Here, even technologists were fooled. They thought, oh my God, a qualitatively new and improved model is going to come out every couple of months. But it turned out that OpenAI had been training it for 18 months. They had just coincidentally released these two models within a couple of months of each other, right? And so these generational cycles, it's a little bit like Moore's law. They're a couple of years apart.
Starting point is 01:02:12 So I think, you know, it's been a couple of years now. I wouldn't be shocked if, a few months from now, we saw a model that was significantly better. And I think we're also getting to a point where it's not just about making the models bigger. We've written about this; that's going to run out at some point. It's about how you put those models into more complex AI systems and make those systems more useful. In that sense, I think progress is going to continue. It's not going to be every couple
Starting point is 01:02:42 of months, but it is going to keep going for the foreseeable future. But the final kind of fallacy here is that some of the things where people claim, oh, AI is going to be radically better and it's going to be able to do this, the limitation is not actually the technical capability. The limitation is something about our societies or institutions.
Starting point is 01:03:03 So there is this really nice example: when AI started to play chess better than the human world champion, there were these fears that chess is going to go away now. Why play chess when computers are better? But in fact, what has happened is a big boom in chess. And no one wants to see AIs play against each other. That's not interesting at all.
Starting point is 01:03:26 What we want to see is humans playing against each other, because the whole point there is human creativity. And I think something very similar is going to be true in Hollywood as well. The mere fact that something was created by AI is going to devalue it a lot. And so from that perspective, technical capability maybe won't even matter as much.
Starting point is 01:03:48 That is such a great point. You know, why would people want to watch two AIs play chess against each other? I mean, you can crank up a pitching machine to pitch faster than a major league baseball player; that doesn't mean you want to watch the pitching machine play baseball. The point is you like to see a human do it. So by that same token, why would you want to watch a movie created by an AI?
Starting point is 01:04:13 And I often ask people this, and I think this is a really good tactic when you're talking to someone who is really far down the rabbit hole of doomerism. I'll talk to people in Hollywood who work in my industry, and they say, well, pretty soon we're gonna be out of a job because AI is gonna make all the movies. And I'll ask them, well, okay, would you watch a movie
Starting point is 01:04:34 that's made by AI? Would that be an interesting movie for you to watch? And they go, oh, well, no, not me, but people are stupid, other people will watch it. And I'm like, well, now you've shifted the goalposts, right? You've said that, okay, I'm very smart, but everyone else is dumb. I'm sorry, I don't think that you, the person I'm talking to,
Starting point is 01:04:52 is radically smarter than everyone else on the planet. I think that if you wouldn't watch it, most other people wouldn't as well, right? We have to give other people the benefit of the doubt to that degree. And when you just poke at these ideas a little bit, these fears that are built on hype do start to collapse a little bit.
Starting point is 01:05:12 And I find when you return to the idea of human systems, what do humans want, what are humans interested in, what are humans capable of, it grounds you in these conversations a lot more than when you just start thinking about technology, technology, technology. I don't know if you relate to that at all. Yeah, absolutely. I think some of the biggest questions in AI and some of the biggest concerns that people have are actually not questions about the technology at all.
Starting point is 01:05:38 So one example is we've often seen AI being used on social media platforms to take down posts, take down images. And when something like that happens, when let's say an AI misfires and the wrong photo is taken down, the first reaction people often have is, oh, AI messed up. But actually what we've found is, more often than not, the deepest concerns that people have, whether it's about censorship or about leaving stuff online, are actually about people. These are decisions that are made by people. For instance, there's this very famous photo of the Napalm Girl. This is an iconic war image.
Starting point is 01:06:17 Many of the listeners might be familiar with this. In 2016, I think, Facebook started taking down these photos. And at first, people assumed that this was because of AI systems misfiring. But what was actually the case was that this was an informed policy decision by Facebook to decide what the boundaries of acceptable speech on that platform are. And even when, let's say, Mark Zuckerberg goes in front of Congress and says that AI will solve all of our problems when it comes to Facebook and content, that it'll take down hate speech,
Starting point is 01:06:48 I think the hard part of content moderation, the hard part of deciding what's left up on social media platforms is not about AI at all. The hard part is where to draw the line. I mean, is it frustrating to you guys, you know, being actual computer scientists watching the media frenzy, the public being actual computer scientists, watching the media frenzy, the public frenzy around AI, watching, you know, congressional testimony and knowing what you do, uh, about it. Like, does that ever get to you?
Starting point is 01:07:14 Are people just shaking their heads at the Princeton water cooler going like, Oh my God, what the fuck is happening out there? I mean, it's a weird position to be in to keep pointing out that a lot of the alarmism is overblown. And specifically, the weirdest thing about that is the hate that you get for saying, no, we're probably not all going to die. And just, you know, people are just so committed to some of these fears that they have. And it just becomes a part of tribal identity.
Starting point is 01:07:46 And there are so many kinds of warring camps in AI. We're used to this in our culture wars, but there are versions of the culture wars in AI, with various groups of AI stakeholders. It's really hard to know how to navigate that, frankly. Yeah. I mean, you could look at the comments of this video when it comes out on YouTube and see people say, oh, Adam, you're all wrong, no, AI is going to kill us all, blah, blah, blah. It happens every single time I talk about this. It's amazing how people wed themselves to these incredibly negative ideas and make them part of their identity.
Starting point is 01:08:28 Why do you think that happens? I mean, you said tribalism, you said earlier that there's a cult mentality at times, but I mean, we're just talking about, you know, computer programs here. It's kind of a weird thing to build a cult around. I think once you start to think of these programs as, like, you know, beings, sentient beings that you're birthing into the world, things take on a sort of different flavor.
Starting point is 01:08:53 There's this very nice New York Times article about Anthropic, which is one of these bigger AI companies; it's a competitor of OpenAI's. And essentially the entire company has this culture of really believing that what they're building might kill all of them. So you build this company identity around the idea that your product might kill you. Once you start from that contradiction, in some
Starting point is 01:09:18 sense, you can basically go on to sort of convince yourself of anything. Yeah, because if you believe that, why are you working for that fucking company? If you believe your product could kill you and kill everybody, the most logical thing to do would be to not work for the company at all. I guess that's a good way to put it: once you've swallowed the largest contradiction possible, you will believe anything. I don't know, Arvind, I see you smiling.
Starting point is 01:09:41 Maybe you have a thought on this. I mean, the beauty of it is that it's an internally consistent worldview. Anybody at Anthropic will give you the answer that we have to do it because we're the only ones who can figure out how to do this safely. If we don't, someone else will do it, and that AI will kill you.
Starting point is 01:09:58 And so they have an answer to everything. And yeah, I think when you're in that kind of environment where everyone believes it, it just seems very compelling. I mean, look, it's wonderful to wake up every day and have a mission. You know, I've felt that way myself when I'm working on a TV show. Oh my God, we got an episode due. It's a lot of fun to wake up and do that. But at the same time, are we talking about a company or are we talking about a death cult? Like, in the future are we going to look back and say,
Starting point is 01:10:28 oh my god, these are the people who drank the Kool-Aid, you know? People who believe such a bizarre thing are capable of doing some weird, bad shit. And again, this is not a concern about the technology; this is a concern about the humans and the social structures that we build. I mean, is there a fear that the culture inside these companies is getting so perverse that they might do bad shit just as a result of that? I mean, I think the good thing is that AI companies are getting bigger by the day.
Starting point is 01:10:57 And so Anthropic, I think, has raised billions of dollars. And what that also means is they have to hire outside their regular hiring pool at this point. And so as a result of that, I think there are some feedback loops that sort of take it back to being a more normal company. I think this is also visible in their product strategy, for example. So for the longest time, Anthropic was just a search bar. It was almost as if they didn't want you to use their product. They had no app; they had no presence on Android or iPhone.
Starting point is 01:11:25 And now they've actually started making products that might be useful for day-to-day customers. So I'm hopeful, at least, that we'll solve our way out of this problem by AI companies just becoming bigger and hiring people outside the regular hiring pools. And just becoming normal. They just need to hire a Susie from accounting.
Starting point is 01:11:45 And Susie's like, I don't know what you guys are talking about with this existential threat, extinction-of-humanity thing. I'm just here to do the accounts receivable. And maybe that'll help the company culture a little bit. Exactly. In other words, it's kind of capitalism to the rescue here. You know, when you need to make money, it's actually very counterproductive to believe a lot of these irrational things.
Starting point is 01:12:10 I've talked to a lot of people about, is AI going to kill us all? And I have to say, the most grounded conversation I've ever had was with a hedge fund person who was asking me about this so that they could figure out where to invest. Right. And just the mental clarity that they brought to it, right? Not taking any partisan sides, being open to all points of view. That was an incredibly refreshing conversation, because anytime I talk to anyone in AI, they're always coming into it with some point of view. I'm coming into it with a point of view, right?
Starting point is 01:12:48 And yeah, this hedge fund person just had this total clarity of mind. And that really clued me in to the fact that capitalism can actually be a clarifying force in some of these debates. Yeah, I mean, that often happens with business people, because when you're trying to figure out where to invest your money,
Starting point is 01:13:04 you need the actual facts if you're going to do a good job of it. You can't fall sway to groupthink or cult-like thinking; plenty of people do, but if you do, you're more likely to lose your money. And that's why some of the most reliable news is business news, because people actually need to fucking know. They don't need fluff. They need the real facts.
Starting point is 01:13:24 So if we're trying to get the real facts about AI, let's end it here. For the public who is trying to separate the truth from the cult-like fiction here, what tools can we use? What heuristics can we use mentally to separate, all right, here's what I actually worry about from here's the hype I can dismiss? I think there's something for each of the parties involved. So when the public is reading news about AI, I think it's important to keep in mind that the journalists, but also the editors, might have incentives apart from just reporting the news truthfully and in a neutral
Starting point is 01:14:05 way. There are these incentives at work. When it comes to companies, I think it's very important to keep in mind that all of these companies are rushing to make profits off of AI. I mean, from the point of view of their bottom line, they'd better make profits very, very soon. And so anytime a company claims that it is doing such and such thing for humanity or solving this problem,
Starting point is 01:14:25 this huge, massive problem, I think it's important to take it with a grain of salt. And then finally, even for researchers, I think it's hard to be grounded. I mean, while science is this trusted community, this institution that I would otherwise wholly be behind, I think in the case of AI research, even researchers have these somewhat perverse incentives, some because of the funding, which largely comes from industry for these large AI labs, and some because it's easy to fool oneself, as Arvind mentioned earlier. And so just being a little bit more skeptical when hearing from all of these different stakeholders, and actually trying things out for yourself,
Starting point is 01:15:02 rather than just going by what the news says, can be a huge deal. We are in this, in some ways, incredible moment when most people have free access to the same tools that the best researchers in the field have. That's a downside when it comes to bad users of it. But it also means that if you log on to ChatGPT or Anthropic's
Starting point is 01:15:23 website or whatever, you actually have access to the best that any of us have access to. And so it's worth playing around with these things to realize what the vulnerabilities are, where they might be useful, especially in things you have expertise on. That's a wonderful answer.
Starting point is 01:15:39 Arvind, I wonder if you have any thoughts on the same question. I mean, I want to second the agency that people have. I think people should trust themselves over what they're hearing from supposed authorities or watching in the media. So, let's say you're a lawyer and you're thinking about what impact AI is
Starting point is 01:16:01 going to have for law, I think you're in a much better position to figure that out than an AI researcher is. Because what matters there is the domain expertise, much more than the AI expertise. And when someone tells you, oh, AI is so hard to understand because of all the math behind it or the engineering behind it, that's bullshit. I mean, it's hard to build AI because of how complex
Starting point is 01:16:22 it can be, but it's not hard to understand what effects AI is going to have, and being a technical AI expert actually doesn't help much with that. So I think it's relatively easy to gain the level of familiarity you need with AI to be able to come to your own informed conclusions about the questions that matter to you. That is such a great answer. And it brings me back again to the writers' and actors' strikes, when, you know, I would talk to writers who would be so concerned,
Starting point is 01:16:50 oh my God, we're all gonna be replaced by AI. And I would say, hey, think about your own job. Think about what you do all day, right? You have to talk to the executives on the phone. You have to talk to all the actors and directors. You have to go be a part of a writers' room. The human element is most of your job, right? Beyond just sitting and outputting text,
Starting point is 01:17:08 or at least it's a good deal of your job. Do you really think that could be done by anything that you have played with in ChatGPT? That doesn't mean there isn't a threat; there were still contractual provisions we needed to win, and there are ways that companies could have used it to hurt us. But if you think back to your domain of expertise and think about how this product would actually be used,
Starting point is 01:17:27 you know, in your human world. And that'll lead you to a much more accurate view than just simply swallowing the hype that you're being fed by, you know, the Sam Altmans of the world. So thank you so much for coming on the show. This has been an incredible conversation. The book is called AI Snake Oil.
Starting point is 01:17:44 It comes out September 24th. You can pick up a copy, of course, at our special bookshop, factuallypod.com/books. Where else can people get it, and where else can they follow your work on the internet? We have a Substack where we offer commentary on what's happening in AI this month, so to speak.
Starting point is 01:18:00 And it's kind of complementary to our book, which is about more foundational knowledge in AI. So I hope people check out the Substack, which is also called AI Snake Oil. Awesome. Sayash and Arvind, thank you so much for coming on the show. Thank you so much for having us. Thank you.
Starting point is 01:18:14 This was really fun. Well, thank you once again to Arvind and Sayash for coming on the show. I hope you loved that conversation as much as I did. If you want to support the show on Patreon, head to patreon.com/adamconover. Five bucks a month gets you every episode ad-free. For 15 bucks a month, I will read your name on the podcast
Starting point is 01:18:31 and put it in the credits of every single one of my video monologues. This week, I want to thank Steven Volcano, amazing name, Angeline Montoya, code name Italian, Matthew Reimer, Ethan and Barack Pellead. Head to patreon.com/adamconover if you would like to join them. Once again, coming up soon, I'm headed to Baltimore,
Starting point is 01:18:49 Portland, Oregon, Seattle, Denver, Austin, Batavia, Illinois, Chicago, San Francisco, Toronto, Boston and Providence, Rhode Island, adamconover.net for all of my tickets and tour dates. I'd like to thank my producers, Sam Roudman and Tony Wilson, everybody here at HeadGum for making the show possible. Thank you so much for listening and I will see you next time on Factually.
Starting point is 01:19:08 I don't know anything! That was a Headgum Podcast.
