Factually! with Adam Conover - The ACTUAL Danger of A.I. with Gary Marcus
Episode Date: July 12, 2023
Whether we like it or not, artificial intelligence is increasingly empowered to take control of various aspects of our lives. While some tech companies advocate for self-regulation regarding long-term risks, they conveniently overlook critical current concerns like the rampant spread of misinformation, biases in A.I. algorithms, and even A.I.-driven scams. In this episode, Adam is joined by cognitive scientist and esteemed A.I. expert Gary Marcus to enumerate the short- and long-term risks posed by artificial intelligence.
SUPPORT THE SHOW ON PATREON: https://www.patreon.com/adamconover
SEE ADAM ON TOUR: https://www.adamconover.net/tourdates/
SUBSCRIBE to and RATE Factually! on:
» Apple Podcasts: https://podcasts.apple.com/us/podcast/factually-with-adam-conover/id1463460577
» Spotify: https://open.spotify.com/show/0fK8WJw4ffMc2NWydBlDyJ
About Headgum: Headgum is an LA & NY-based podcast network creating premium podcasts with the funniest, most engaging voices in comedy to achieve one goal: Making our audience and ourselves laugh. Listen to our shows at https://www.headgum.com.
» SUBSCRIBE to Headgum: https://www.youtube.com/c/HeadGum?sub_confirmation=1
» FOLLOW us on Twitter: http://twitter.com/headgum
» FOLLOW us on Instagram: https://instagram.com/headgum/
» FOLLOW us on TikTok: https://www.tiktok.com/@headgum
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Transcript
This is a HeadGum Podcast. ...which is why I am so thrilled that Bokksu, a Japanese snack subscription box, chose to sponsor this episode. What's gotten me so excited about Bokksu is that these aren't just your run-of-the-mill
grocery store finds. Each box comes packed with 20 unique snacks that you can only find
in Japan itself. Plus, they throw in a handy guide filled with info about each snack and
about Japanese culture. And let me tell you something, you are going to need that guide
because this box comes with a lot of snacks. I just got this one today direct from Bokksu.
And look at all of these things.
We got some sort of seaweed snack here.
We've got a buttercream cookie.
We've got a dolce.
I don't know. I'm going to have to read the guide to figure out what this one is.
It looks like some sort of sponge cake.
Oh, my gosh.
This one is I think it's some kind of maybe fried banana chip. Let's try it out
and see. Is that what it is? Nope. It's not banana. Maybe it's a cassava potato chip. I should have
read the guide. Ah, here they are. Iburigako smoky chips. Potato chips made with rice flour,
providing a lighter texture and satisfying crunch. Oh my gosh, this is so much fun. You've got to get one of these for yourself. And get this: for the month of March, Bokksu has a limited edition cherry blossom box, and 12-month subscribers get a free kimono-style robe. And while you're wearing your new duds, learning fascinating things about your tasty snacks, you can also rest assured that you have helped to support small family-run
businesses in Japan
because Bokksu works with 200 plus small makers to get their snacks delivered straight to your door.
So if all of that sounds good, if you want a big box of delicious snacks like this for yourself,
use the code factually for $15 off your first order at Bokksu.com.
That's code factually for $15 off your first order at Bokksu.com. I don't know the truth.
I don't know the way.
I don't know what to think.
I don't know what to say.
Yeah, but that's all right.
That's okay.
I don't know anything.
Hello and welcome to Factually.
I'm Adam Conover.
Thank you so much for joining me once again as I talk to an incredible expert about all the amazing things that they know that I don't know and that you might not know.
Both of our minds are going to get blown together and oh my God, we're going to have so much fun doing it.
And today we're going to continue our discussion of artificial intelligence.
Pretty much everyone agrees that artificial intelligence is dangerous,
but no one can agree on precisely how.
Even the companies that are developing AI
claim, weirdly enough, to be terrified
about the future of their own technology.
And recently, we've been treated
to news story after news story
about the people who run those companies
testifying before legislators
or meeting with world leaders
begging for them to regulate them. But, you know, shouldn't we be suspicious of that?
After all, when the CEO of a capitalist company is begging for the government to regulate them,
it stands to reason that they might be doing so because, you know, they think they can extract
specific laws that will help their bottom line and hurt their competitors. And at the same time, you know who's been excluded from those meetings?
Scholars and scientists who actually study the real harms that AI could have,
not to mention the people who have already suffered those harms.
Forget a super powerful AI taking over the world.
Past guests Emily Bender and Timnit Gebru have raised the alarm about real world harms
like algorithmic bias or the spread of misinformation.
And those harms barely come up when the tech CEOs are hanging out with the presidents and prime ministers.
The point is, when someone who runs a billion dollar company tells you what they think you should be scared of, you should be a little bit skeptical.
And instead, you should look to the real experts, the scholars and scientists, to find out what their concerns are.
And that is what we are going to continue to do on this show.
Now, I want to be clear, because this is a new, fast-moving field, not all of our experts are going to agree with each other or even agree with me about every detail.
And that's why I'm going to continue to speak with a range of experts on this issue so that we can explore different perspectives in hope of getting closer to the truth.
And today, we have a fantastic guest for you. But before I tell you who he is,
I want to remind you that if you want to support this show, you can do so on Patreon. Just head
to patreon.com slash Adam Conover. You can get every episode of this podcast ad free for just
five bucks a month. And if you love standup comedy, come see me on tour. I'm back on the
road, heading to Baltimore, St. Louis, and, brand new, Buffalo and Providence, Rhode Island.
If you live in or near any of those areas, head to adamconover.net for tour dates and tickets.
Now, let's get to this week's guest. Gary Marcus is a cognitive scientist, author, and a leading expert on AI.
He recently testified before the Senate, and he publishes a regular newsletter called The Road to AI We Can Trust.
I know you're going to love this conversation. Please welcome Gary Marcus.
Gary, thank you so much for being on the show.
Thanks for having me.
So what are your biggest concerns about AI? I mean, I've heard from the tech CEOs.
We've heard some other academics. I want to hear from you. What are the biggest
concerns that we should be looking at?
Man, the first thing I would say is that there is no one thing. There's like within the AI field, there are people fighting. They're like, my risk is more important than your risk.
My short-term risks are more important than your long-term risk. No, my long-term risks are more
important than your short-term risk. It's like if somebody worked on car accidents, they'd be like,
you cancer people, that takes like 40 years. I've got car accidents right now. And the cancer
people would be like, yeah, but 10 times more people die of cancer than
car accidents.
Like, why are you?
And it's ridiculous, right?
You want to study both.
And we have short-term risk and long-term risk.
AI is increasingly, I won't say powerful, but empowered to control our lives.
And that has lots of ramifications.
And some of them are short-term.
Like the one I care personally most about is
democracy. Misinformation from AI is going to make the 2024 election a total shit show. You know,
we've got deep fakes, we have fake texts from large language models. You know, the voice cloning,
all this kind of stuff is going to make the 2024 election so that nobody trusts anybody. And that's
not good for democracy. That's my biggest personal fear.
But there's also going to be tons of cybercrime, for example. So people are already using voice cloning to do these rackets where they call you and they say your kid's been kidnapped, send
Bitcoin to this, and people actually send the money. And that's a souped up version of a scam
that already existed. My grandmother, before she passed away, was a victim of a scam like that,
where a scammer called her up, pretended to be me and said he was in a car accident and, you know, was crying over the phone.
So she couldn't tell it wasn't really my voice. And she was in her nineties at the time. She went
to Western Union and wired the guy a couple thousand dollars. And then there was nothing
to be done about it. Now imagine combining that with somebody taking my voice print,
which they could easily do. There's hundreds of hours of me talking on the internet.
Now they only need like 30 seconds.
They can do it to anybody. A lot of these things are a real risk.
A lot of the risks that I'm worried about existed before,
but I think of the analogy of like the difference between a knife and a
submachine gun,
like a submachine gun makes it possible to do wholesale what you had to do retail before: suddenly you can kill a lot of people in five minutes instead of one at a time.
It changes the dynamics of things. Misinformation is thousands of years old, probably,
but the ability to do it essentially for free, not even in your native language with, you know,
make it sound like scientific and stuff like that, changes the game. It's not new, but it's much
worse. So you have all of those things. Then you have the possibility that people will use this not only to manipulate our elections, but also our markets. Somebody made a fake picture
of the Pentagon exploding. And this was a couple of weeks ago. And the whole market moved down
like 5% for four minutes or something like that. In response to a fake image of the Pentagon
exploding? Some people thought it was real. And this is something that's easily fact-checked; like, if you live close to the Pentagon, you can look out the window and see it's not real. But that's something that could have been done. It went viral. Yeah. So the market actually moved for a few minutes, and then people figured it out. Okay. And so, you know, some less-than-shiny human beings are going to take note of that. They're gonna be like, wow, I can short the market by putting out misinformation.
So we're going to see a whole lot of market scams.
But some of the things that you're talking about are things that were already possible.
I mean, someone could have photoshopped an image of the Pentagon exploding without using any kind of generative AI model.
They could have done that five years ago.
And we already saw scams like that.
It's all about the volume.
The volume is going to increase rapidly.
So you have those risks.
And then there are further risks.
So some people talk about existential risk, that machines are going to turn us all into
paperclips.
I don't think that's very plausible.
It's certainly not plausible anytime soon.
But there are serious risks.
People are in love with these systems. They
don't realize how dumb they actually are. And they want to hook them up to everything like
driverless cars. And, you know, people may eventually hook them up to like power grids,
and I don't know what. And you can imagine bad actors, you know, causing lots of mayhem,
you know, we blame the Russians for it, but it's not actually the Russians' fault. And we start yelling
at them and they yell at us. Next thing you know, there's like, you know,
some conflict that could really escalate.
And so there are a lot of ways in which fundamentally,
I think these technologies are destabilizing.
That's not to say there aren't any positives from them.
They're very good, for example,
in helping computer programmers code more efficiently and get more code
written. So there are upsides and there are downsides.
The only intellectually honest thing to say is this is all new and we don't know if the upsides are going to
outweigh the downsides, but we need to figure this out and not rush quite so fast.
But how much of what is new though, is the actual technology versus how we're talking about the
technology? Like large language models are, you know, and the sort of image-based
models that are similar are a very cool advancement. The tech is really interesting. It also
is limited in what it can do, as you've pointed out many times on your sub stack. And yet you have
now people like Sam Altman types and people of that ilk who are talking about, okay, well,
now that we've built large language models that can output text, we're on our way to general intelligence and it's going to turn us all into paperclips.
And also it's going to have all these huge ramifications. The technology is so incredible.
It's so powerful. Uh, and yet, at the same time, you're saying that one of the big risks is that we might hook up bad technology that is actually pretty limited to systems where it can harm us. It seems to me sometimes that really what the problem is here, half of the problem, is the hype itself, that we have people over-hyping technology that can't do what they claim it can. What's your view?
You know, we should be able to tell the machines basic moral values, like don't harm humans and
be honest. And the reality is they're too dumb to do those things. They're particularly too dumb to be honest. They don't really understand the facts in the world. Like, if you compare to classical databases: you tell them something and they don't make anything up, ever; you know, they retrieve what you told them. But large language models, because of the way they work, they're basically, like, putting little bits and pieces together that are statistically associated, but not necessarily causally connected or truthfully connected.
They just make stuff up.
Like one of my favorite examples is a system that said on March 18th of 2018, Tesla CEO Elon Musk was involved in a fatal car collision. Well, it would take any human being
two seconds to realize that that can't be true. Who is in the news every single day since 2018?
Only Elon Musk, right? He's the most popular person, or popular is maybe a complicated word
there. He's the most covered human being. He's on Twitter every day. There are news stories,
many news stories about him every day. Obviously, he's still alive. If these systems were rational in the way that some people
misunderstand them to be, they could just look that up. The reason that they say it anyway is
they don't know how to look it up. They don't know how to do the least bit of fact-checking.
But statistically, a lot of people have died in car accidents. Some of them died in Teslas.
And he owns Tesla.
And they don't understand the relationships between owning a Tesla company and owning a Tesla car.
Those are different.
And, I mean, fundamentally, it's because what it's doing is it's mashing related words together, right?
That's all it's doing.
It's very hard for people to grasp.
But you have got it. That's all they're doing. They're mashing words together. They do it in a very sophisticated
way. They're the best word mashers that we've ever invented, but that is ultimately all they're
doing. And that's why you can't trust them. Now, do you feel that at some level, even the phrase
artificial intelligence is a marketing term that is being used to mislead people about what this
is capable of? Because again, I think large language models are cool. I enjoy playing with
them. They can do things with words that previous algorithms could not do. Are they intelligent?
Like, and you know, is the connection of artificial intelligence as a phrase to
the entire history of that phrase, the way it's used in science fiction, the way it's been used
in, you know in computer science historically.
Is that accurate?
Or is it somehow, to some degree, a concession to marketing that is misleading us about these
technologies?
I'll give you two answers to that.
One is it's absolutely marketing.
It might be possible, but it does make me think of Gandhi on Western Civilization.
Do you remember this?
Somebody says to Gandhi, what do you think of Western civilization? And he says, well, it would be a very good idea. Right. Yeah. You know, like, maybe someday we really will have artificial intelligence in the intuitive sense of, like, systems that are actually intelligent, that can reliably reason about the world. Right now we have things like chess computers, and if you want to call them intelligent, that's a matter of definition, but it's certainly not what we think about when we think about, like, the Star Trek computer, where you would come to it and you'd say, I have this problem: I have, you know, seven dilithium crystals and I need to make it home, what do I do? Right? And it would, like, do the computations for you and come up with a good idea. We don't have AI that can really do that. We have AI that can do very specific
problems. And then we have large language models, which are much more general, but they don't do
anything in a reliable way. And so you can take your pick right now and you can make your
definitions what you want, but you know, we don't have machines that are intelligent in the sense
that you are like, you can hold this conversation. You've had a few others on this general topic,
but you know, you're keeping up, even if it's not your area, you're able to assimilate these
ideas into what you already understand. We don't have systems that really do that.
Yeah. And it seems a little bit like part of the problem is that a panic around the
topic has been created when we have an advance,
but it's not necessarily a sea change. I mean, you mentioned chess computers as an example of
artificial intelligence. They absolutely are by any definition, but, you know, Deep Blue beat Garry Kasparov in, what, the late nineties, early two thousands. Um, and I'm sure there was a bit of a panic at the time as well, you know, and you'd read, you know, Newsweek or whatever about,
Oh, computers are going to replace everybody's jobs. Well, now we have something that can output
text and that's an advancement. But the level of hysteria that we're seeing around the topic
seems to me out of proportion. And it seems like it might itself create harms
and create an opportunity for business actors. There's actually a little bit of an undercurrent, let's say, in the Twitterverse, where people are wondering whether the big
tech CEOs are actually deliberately focusing on the most sensationalist stuff
in order to divert attention away from the most immediate things.
So the real... I mean, that is what I actually believe. That is what it seems like to me.
So what do you think of that idea? I mean, I think different, different people
running the tech companies or divisions of the tech companies are different people. And, and,
and some of them are genuinely worried about the long-term risk and really like can't sleep at
night because they think it's so imminent. Maybe they take current AI more seriously than I do.
And some of them are like, you know, what can we do to get the heat off of us around misinformation,
which we can't do fuck all about? Can I say fuck all on your show?
Yes, you can.
As opposed to the long-term risk, which we can't do fuck all about, but nobody's going to hold us
responsible for that. I mean, the most cynical thing that I saw is somebody that said, yeah,
they're basically positioning themselves so they can say,
yeah, well, the world went to hell. We could have stopped it, but we chose not to. But we
wrote this letter saying, we warned you. And like, great. Thanks a lot for the letter.
Or we warned you about something else that we think might happen 500 years from now.
Exactly. And in the meantime, you lost democracy, but you know,
at least you didn't turn into paperclips yet.
And of course the fact is if we lose democracy and we,
you know,
turn into an authoritarian nation state that seizes all of the
intellectual property and they turn it into a paperclip making machine,
then we're still screwed.
Right.
So like,
it's not a great, you know, step for the long-term outcome to, in the short term,
allow democracy to be destroyed. So, yeah.
Yeah. Now, I imagine, you've spoken before the Senate, I imagine that you have a view on how
we need to regulate AI. I certainly believe we need to have some regulations in place.
I think we need some regulations in place about most things. And that, you know, that is the duty
of lawmakers is to figure out what those regulations should be. And tell that to Silicon Valley.
They've like invented this idea that you can't regulate anything. I'm like, do you go on
commercial airlines? Like we have regulation all the time. Would you really like us to take back
seatbelt laws,
knowing the statistics on how many lives they save? But the techno-libertarian left,
like Marc Andreessen, who blocked me on Twitter, these guys, I don't know what to say. Of course
we need regulation. The real question is what regulation we need. Exactly. And we've been
treated to this strange spectacle of some of these AI CEOs
calling for regulation, but of regulations of their own design, which immediately makes me
suspicious when someone is saying, hey, regulate me. And I have some great ideas for what the
regulation should be. I'm like, oh, hold on a second. Sounds like you want to write some laws.
And I wonder if those might be because they benefit themselves.
Did you see the exposé in Time magazine by Billy Perrigo today? No, I didn't. Please tell me about it. He filed a Freedom of Information Act request and got access to a lobbying document that OpenAI made.
And so here you have Sam Altman appearing next to me at the Senate, or more like I appeared next to
him, but you know, and saying how important, you know, regulation is, but his lobbying team is going to the EU saying, you know, this stuff is too strict.
We're not going to be able to do our large language models. You got to tone this down.
And so this is, you know, very much at least the company speaking from two sides of its mouth.
It's a pretty important exposé that came out earlier today. And that's exactly what we don't
want is regulatory capture. So the worst thing we could have is no regulation. Like that's obviously a disaster. But the second
worst thing that we could have is laws that are made entirely by the companies, by the larger
companies that are affected by them. That would have two negative consequences. One is they would shut
out any kind of innovation from the startups if they wrote the laws to their choosing, like put huge
regulatory requirements that startups can't handle. And also, like lots of things that we care about,
they don't care about. So here's an example, transparency, like Microsoft says they care
about transparency, but they use GPT-4. And we don't know what's inside of GPT-4. We don't know
what data it's trained on. The data it's trained on matters for things like bias. And so, like, they're going to write regulations where you give lip service
to transparency, whereas we need regulation that actually demands it. Yeah. Or copyright. I know
that, you know, while Sam Altman is clamoring for types of regulation he likes, the EU regulation,
I believe, was about copyright, right? About the copyright status of the materials they train the
AI on. And they said, oh, if you put that into place, oh, we'll just have to leave entirely and
it'll sink us. We don't like that type of regulation. And I, as someone who, you know,
makes shit for a living, am a little bit concerned about my shit that I make being used as training
data that could be used to rip me off. You should be very concerned. And I mean,
there's another case where existing law just doesn't really envision the scenario that we have. So we have copyright law, but it doesn't envision a world in which the internet can just
be gobbled up wholesale with basically no compensation to the artist. So, like, you know, it shouldn't be that every podcast that you ever did is now fair game and they could just copy you.
But right now, you know, there's great limits
to what you can actually do.
And I'll say there's this weird argument
that I always get in the comments
when I talk about this on YouTube videos,
where they say, well, when an AI internalizes,
you know, sucks up and trains itself
on every single piece of content that you ever make and then makes an imitation of your voice or your image or your writing style or anything.
Oh, well, that's all that humans do.
Humans do that too.
That's intelligence.
That's how intelligence works.
And no, it fucking isn't.
Like what GPT-3 or 4 does is not actually similar to the process of writing something new.
It's literally using a stochastic model to choose the next word in a sentence that will most resemble its training data.
That's not what it is for a human to be writing.
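A minimal sketch of what "choosing the next word" means, for anyone who wants it concretely. This is a toy Markov-chain illustration, not how GPT-4 is actually built; the vocabulary and the hand-made probability table are invented for the example. The only point is that the loop picks statistically likely continuations and has no notion of whether the resulting sentence is true.

```python
import random

# Toy "language model": for each one-word context, a hand-made table of
# plausible next words and their probabilities. A real LLM learns its
# statistics from billions of words and conditions on long contexts with a
# neural network, but the sampling loop below is the same basic idea.
NEXT_WORD_PROBS = {
    "<start>": {"Elon": 1.0},
    "Elon":    {"Musk": 1.0},
    "Musk":    {"runs": 0.4, "died": 0.3, "tweeted": 0.3},
    "runs":    {"Tesla": 0.6, "SpaceX": 0.4},
    "died":    {"in": 1.0},
    "tweeted": {"today": 1.0},
    "in":      {"a": 1.0},
    "a":       {"car": 0.7, "plane": 0.3},
    "car":     {"crash.": 1.0},
    "plane":   {"crash.": 1.0},
    "Tesla":   {"<end>": 1.0},
    "SpaceX":  {"<end>": 1.0},
    "today":   {"<end>": 1.0},
    "crash.":  {"<end>": 1.0},
}

def sample_next(word: str) -> str:
    """Pick the next word in proportion to its probability given the last word."""
    options = NEXT_WORD_PROBS[word]
    return random.choices(list(options), weights=list(options.values()))[0]

def generate() -> str:
    words, current = [], "<start>"
    while True:
        current = sample_next(current)
        if current == "<end>":
            return " ".join(words)
        words.append(current)

# Every output is fluent, because each word statistically follows the last;
# nothing checks whether the sentence is true ("Elon Musk died in a car crash.").
for _ in range(3):
    print(generate())
```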
You've got it. And I just thought of a different way of making that argument, which is: when you write about love, you've actually experienced love.
When it writes about love, it's just putting together words that other people said about love. And so, I mean, in that
sense, its creative process, so to speak, if you even want to call it that, is completely different from a human's creative process. Yeah. I mean, the example I've used
in my past videos on this is that if you ask it to output a script for Adam Ruins Everything, my old show on truTV, it can do it. But the only reason it can do it is because I came up with the show in the first place, and it cannot come up with that show. It is completely incapable of that. All it can do is
reproduce things it has already seen. Whereas I can come up with things that are new and now
people are going to get into the comments and say, oh, well, they're going to come out with a better one in five years, it'll be able to do that. Fuck you. Let's not talk about hypotheticals. I'm sorry, Gary, I'm getting mad. We can talk about the hypotheticals in five years.
Show me.
Like for 30 years I've been in this field
and I keep getting promissory notes.
Yeah, but when we make a bigger model,
it'll solve it.
Like, in 2016, I said driverless cars are much harder than anybody thinks they are, because you have this problem of outliers: weird things happen that are not in the training set, and the systems don't know how to deal with them. I said there's lots of outliers, and they said, yeah, we're just gonna, like, make more data in Grand Theft Auto, we'll do it in video games, we'll have enough data, we'll make the models bigger, it'll all be hunky-dory. And they spent another hundred billion dollars, and that's not an exaggeration, they literally spent a hundred billion dollars, since I said, you know, maybe this isn't going
to work out the way you think it is with this kind of software.
And so here we are, you know, seven years later and I still hear, yeah, we'll make it
bigger.
Come to me when you have.
And they make these arguments about both AI and self-driving cars that are sort of almost
designed to be just stupid smart enough for people to repeat in the comments to YouTube
videos. So for instance, people will say about self-driving cars, which are currently killing
people on the roads, they'll say, oh, well, they kill people at a lower rate than human drivers
because human drivers are so bad. It just came out that it's actually the opposite. Now we find the opposite: the Teslas are killing more people than humans, they're killing people at a higher rate than human drivers. And so that talking point that they came up with 10 years ago in order to force self-driving cars down our throats, because, okay, well, they're better, so we have to accept them?
Turns out it was bullshit all along.
Yeah.
And so we really need to be really, really skeptical of the AI boosters who are leveling similar arguments.
Which is tragic.
"Oh, this is inevitable." This is tragic, because someday they really will be better than human drivers. And people will be so used to having been bullshitted, if that's the right participle, they'll be so used to being bullshitted that they won't believe it. So we will transition, it might be 30 years from now, to a state where it would actually be better to have the cars drive than the people, but people will be like, I heard that before. And they won't use the cars, because there's been so much lying and manipulating of the data and misrepresentation and putting it in the right light or whatever that people, you know, start to get pretty skeptical about
it. I don't think that, I don't think that we are going to have them. I think we're going to build
a goddamn public transportation system in this country that is going to obviate the need for
self-driving cars. But that's my
vision of the future and I'm boosting it. We can live in hope for that one.
Well, it's as plausible as what these folks are claiming in my view. But so I assume that you do
have a view on what regulations you feel that we actually need around AI. I do. So let's talk about
what a few of those might be. So I have suggestions kind of from the top
level, like macro level all the way down. And, you know, I don't know how much time you want to go
into it, but I'll start with, I think that the US and other countries similarly need a central agency
or a cabinet-level position or something like that, you know, a secretary of AI with supporting
infrastructure whose full time job it is to look at all the ramifications of this because they are
so vast. And because even though we have existing agencies that can help, none of the existing
agencies were really designed in the AI era. And there are all kinds of cases that sort of slip
through, like, what do you do about wholesale misinformation as opposed to retail misinformation?
Like if some foreign actor makes a billion pieces of misinformation a day, like maybe
you have to rethink how we address that.
And so we definitely need a central, you know, someone whose responsibility, somebody who
lives and breathes AI, follows all of this stuff.
We don't want to leave it to, like, the Senate has to make different rules when GPT-5 comes out, different from GPT-4 and from GPT-6. Like, that's not what they're there for.
So we need a regulatory agency similar to the EPA or another agency where when facts on the ground change, that agency can issue new regulatory rules without having to go through Congress,
which is how we regulate.
We've got the FAA, we've got NHTSA for highway safety, etc.
We obviously need this for AI.
I mean, it's obvious to me, it's probably obvious to you.
Not everybody in Washington agrees.
People will tell you it's very hard to stand up a new agency,
which is true.
There are complications.
It's not trivial, but we need it.
So that's one thing I would say.
Similarly, do you have any concern?
Let me just ask you, Gary, about that first.
Because agencies of that type in the past have become captured by those groups.
If you look at the FAA and the Boeing 737 MAX, that really falls at the feet of the FAA's sort of having lax regulation.
You can look at other agencies that have that problem.
And why is that?
It's because you have the revolving door. They got... my understanding, I'm not an expert, but my non-expert understanding, is that they got tricked on that one. They got told this is not really a new vehicle. And it really was; there were fundamental changes. I think that the general answer to that question is you have to have
independence, mostly scientists, outside scientists
who can raise their hand and say, no, they're telling you that this is, you know, just the
same airplane, but they've gutted all of these systems and replaced them. And we need to understand
these new systems. They're, you know, nice on paper, but we need data to see if this is actually
going to work. We need, for example, to understand how the pilots are going to respond to these new systems,
which in principle, you know, sort of mathematically correct.
But if they fool the pilots, then you're going to have all kinds of mayhem.
And we need to look into that.
And so you have to have independence.
What you don't want is regulatory capture, where the companies being regulated (we already talked about this) are the ones who are making the rules.
And so, you know, Boeing shifted things and framed things
in a way that suited their purpose, but didn't suit the public's purpose.
Yeah, that's my concern is that, you know, we stand up this agency and then 10 years from now,
the person running it is like Sam Altman's brother or whatever, because he has the power to get
his buddy appointed to run the thing. And that's been the case with agencies in the past,
you know, especially when an administration changes.
But that's just good government.
That's a problem of good government that exists for any field.
And it's a serious problem, right?
I mean, it's not to be ignored.
But I think we have to face it.
So my second, you know, recommendation I just already talked about, which is scientists have to be involved in this process.
You just cannot leave it to the companies and the governments alone. And the governments have been running around putting out press releases and doing photo ops with the leaders of the companies without having scientists in the room or without prominently displaying the scientists that are there. And that
turns my stomach every time I see that. Like they did that in the UK, they've done that in the US
where, you know, they roll out some top government official and they have like, you know, open AI and
deep mind CEOs or things like that. And you have to have scientists there to send the message that this is not just, you know,
my brother-in-law running the organization kind of thing that you just talked about.
I mean, not only do you need to have scientists there, it would probably be better not to
have the companies that you are seeking to regulate in the halls of power.
You know, if the point is to regulate the use of AI
and regulate these companies,
then you probably shouldn't, you know,
welcome them all to the White House for a big summit
where you do what they say.
They should speak when spoken to, right?
I mean, you actually do need them in the room.
I mean, they have a lot of knowledge
about what's practical and where things are.
I mean, they should have a voice.
And, you know, they're affected
and we don't want to, you know,
regulate our way into losing the AI race with China.
Like, there are lots of reasons to have the companies in the room, but it has to be in moderation, with other voices too.
You just can't trust them for the whole deal.
You mentioned the AI race with China, not to interrupt your regulation list, because I want to get back to it. But is that a real concern? Because that, to me, again, sounds like the sort of story that's told by someone who wants me to do what they say.
Oh, if you don't do what I want you to, we're going to lose the AI race with China. There's
certainly a political risk there, isn't there? There's like a short-term and a long-term
version of that. The short-term version is, if China gets GPT-5 six months before us, it doesn't really matter. What are they going to do? They're going to write more boilerplate text with it. Like, it's just not artificial general intelligence. They're not going to, like, figure out interstellar travel and get ahead of us on minerals because they can go to, you know, other solar systems and we can't. Like, it's not that powerful. And I think a lot of people, you know... but China! And then you're like, what about China? And like, but if they get GPT-5
first, and you're like, well, yeah, so they get it first. So like, I mean, it'd be, you know,
it probably raised their productivity a little bit, you know, it'd be nice to have, I'm not
saying it wouldn't be nice to have, but it's not game changing the way that a lot of people think.
I do think that whoever gets to
real artificial general intelligence, which I don't even think we're particularly on the right
track to, that's going to be a big advantage when we get there. But there's a lot of factors that
go into who makes those discoveries first. Just building a bigger, larger language model,
trained on more data, but basically more of the same. That's not what innovation is.
We need paradigm shifts here
because the fact is large language models make stuff up. They cannot help themselves. They make
up authoritative sounding bullshit. I just learned this great phrase, which is frequently wrong,
never in doubt. That's what large language models are. That is not what we are looking for. We want
something that is always right, does not make shit up, that you can count on.
And whoever gets to that AI, that's a significant advantage.
But how do you get there?
You know, pouring more money into bigger clusters of computers to run a technology that you
know doesn't really work correctly.
That's not the right idea.
The right idea is like, how do you foster a lot of science to look at different angles
that we've overlooked?
That's what we really need to do.
And if China takes a broader perspective on AI, then they could actually get there first.
If we take a broader perspective, maybe we'll get there first.
I want to dive in a little bit more into the differences between AGI, general intelligence, and the sort of technologies we have now.
But we have to take a really quick break.
We'll be right back with more Gary Marcus.
Okay, we're back with Gary Marcus. So Gary, please walk us through your other
proposals for regulating AI. So next thing would be global AI governance. I think we need to
coordinate what we're doing across the globe, which is actually in the interest of the companies themselves. You know, large language models are expensive to train, and you don't want to have 195 countries,
195 sets of rules requiring 195 bits of violence to the environment because each of these is
so expensive and so energetically costly.
So you want coordination for that reason.
The companies don't really want completely different regimes in each place. And ultimately, as things do get more powerful, we want to make sure that all of this stuff is under control. And so I think we need some coordination around that.
The next thing would be something like an FDA-style approval process for AI at large scale. So it's one thing if you want to do research in your own lab, but if you're going to roll something out to a hundred million people, you should make sure that the benefits
actually outweigh the risks. And independent scientists should be part of that. And they
should be able to say, well, you've made this application, but there's this risk and you haven't
done enough to address it. Or, you know, you've said there's this benefit, but we look at your
measures and they're not very solid. Can you go back and do some more? So there should be a little bit of negotiation until things are really solid. Another thing we
should have is auditing after things come out. Make sure, for example, that systems are not being
used in biased ways. So like, are large language models being used to make job decisions? And if
they are, are they discriminating? We need to know that. But now, all of these regulations sound great to me. They sound important: having an FDA-style agency, et cetera. That sounds like a great thing to do when you've got a technology that's
causing problems. The history of that sort of regulation in the United States is that when
you have a new field, that field desperately resists regulation with every fiber of its being.
And it isn't until there are real massive harms, people dying in the streets from tainted food,
that we get food regulation instituted by Teddy Roosevelt.
I told that story on my Netflix show, The G Word.
It generally requires wholesale death and devastation before we start regulating these things. Do you feel that there's any prospect in the near term for the kind of regulations that you're talking about?
Or are we going to have a lot of harms first? It's difficult to say. I mean, when I gave the
Senate testimony, there was actually real strong bipartisan recognition that we do need to move
quickly, that government moved too slowly on social media, didn't really
get to the right place. And so there's some recognition that there's a need to do something
now. Whether that gets us over the hump, I don't know. Part of my thinking is figure it out now
what we need to do. And even if it doesn't pass, we'll have it ready. So if there is a sort of 9-11
moment, some massive, you know, AI induced cybercrime or something like that, we'll be there.
We'll know what to do.
And so I don't think we should waste time right now being defeatist.
I think we should figure out what is in the best interest of the nation and really of the globe and be as prepared as possible, whether it passes now or later.
I agree that we should do as much as possible.
I'm just a little bit concerned about the amount of power wielded by the tech industry,
you know, that this is one of the most profitable industries in America. So it's very easy for those CEOs to go and get a meeting with Joe Biden, whatever they
want.
And it's harder for folks such as yourself or some of the other, you know, academics
we've had on the show to have those conversations.
But I agree that we need to be
having a conversation. I'm in a little bit of a special category, especially after the Senate
testimony. But right now, it's actually very easy for me to get meetings. I met,
well, I guess I shouldn't be too explicit, but I'm able to talk to whoever I need to talk to
in Washington and Europe and so forth right now. So people in power right now are recognizing
that they don't entirely trust the big tech companies,
that they do need some outside voices.
And for whatever reason, I right now am in that position
and they're taking me very seriously.
If I say I'm going to be in Washington,
could you meet next week?
People say yes.
And in fact, I was just in Washington,
met a lot of very high ranking people. And then I got on the airplane and then some other high
ranking people are like, when are you coming back? And so I think just by coincidence, but
people notice the testimony that I gave, want to solve the problem. They're sincere in wanting to
solve it. There's a problem that not everybody agrees about what to do, and everybody's trying to
take credit for having the one true solution.
In some ways, it's an embarrassment of riches.
Everybody's trying to help.
In some ways, there's a coordination problem.
But I would say that more than any time I've ever seen before, the government is reaching
out to at least some of us who are experts in the
field trying to say, you know, what would you do in this circumstance? So I give them some credit
for that. Okay. And it's always nice to feel wanted. I know. I'm sure that's very emotionally
validating for you. Exactly. I feel good. Who cares if the world goes to hell in a handbasket?
It's been a good month for me. No, I'm just kidding. Well, let's talk a little bit more about general intelligence, because I talk to people
every day who they play with chat GPT for a few minutes, and then they are convinced that we are
around the corner from having data from Star Trek wandering around, taking everybody's jobs,
right? Doing not just the work of human, but living human lives and et cetera. And it's been difficult for me to explain
how this is an algorithm that outputs words,
just like Deep Blue was an algorithm that played chess
and we're still a long way away from AGI.
You used the metaphor before
of someone inventing a knife versus a gun.
I'm like, they've invented,
they have invented a knife, right?
They have not invented a machine gun, right?
And they don't even know,
they haven't even invented gunpowder.
They don't even know that there should be
a circular hole in the middle for a bullet to come out.
And they think they can do everything with this spatula.
And you know, you can do a lot,
you know, you can use a spatula as a hammer
if you really have to, it doesn't work that well.
That's sort of where we are.
Here's an example, crossing your two streams a little bit, which is: GPT-4 has been trained on a lot of chess games, and I can infer that, but I can also kind of prove it; I'll tell you in a second. And it's also been exposed to the rules of chess; like, we know it's read Wikipedia, the rules are there. And if you have it play chess, for like the first 15 moves, when it can stick to what it has a lot of data on, it will follow the rules.
And you will think that it understands the Ruy Lopez opening or something like that. But when you get out of the opening book, it will start doing weird stuff, like having bishops jump over rooks.
And it's at that point that you realize,
even with all of this data,
it can't actually infer the rules of chess,
which have not except in tiny little ways,
changed in the last 2,000 years.
Once, I said in a podcast that it hadn't changed at all, and I got a long memo on the five changes that have happened over the last
2,000 years. Oh my god, the chess nerds, back off.
It's okay. The
rule I knew was new, but there were a couple others.
So, by and large,
they've been stable for 2,000 years.
There's lots of data in the database, and the system still can't figure it out.
Like that is, you know, artificial general intelligence should be able to read the rules of chess and figure out for itself how to play.
But instead... it's not a chess engine. It's not, you know, a general deducing engine. It's a text generation engine, and so it's able to generate text.
And it's so hard for people to grasp that
because the text looks so good
because it has so much text to mimic.
The people kind of attribute to it
these magical powers that just aren't there.
Right.
Because all it's doing is chopping up
previous moves that it's seen.
It's actually two spatulas pulling together little bits of text.
Each individual bit has been lifted,
but the spatulas don't actually know what the hell they're talking about.
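As a concrete aside on the chess point: the rules of chess are fully specified, so a conventional rule-based engine gets legality exactly right every time, which is the contrast being drawn with a text generator. Here is a minimal sketch assuming the open-source python-chess library is installed; the specific moves are just an invented illustration.

```python
# pip install python-chess
import chess

board = chess.Board()  # standard starting position

# A rule-based engine never "hallucinates" a move: legality is computed
# from the actual board state, not from statistics over texts of past games.
blocked = chess.Move.from_uci("c1e3")   # bishop trying to jump over its own pawn
print(blocked in board.legal_moves)     # False: the rules forbid it

# Play the first moves of a Ruy Lopez; push_san raises an error on any illegal move.
for san in ["e4", "e5", "Nf3", "Nc6", "Bb5"]:
    board.push_san(san)

print(board.fen())                      # exact, fully determined game state
print(len(list(board.legal_moves)))     # exhaustive list of legal replies for Black
```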
But ChatGPT as a product is brilliant because I believe, based on my use of it,
it's designed to make you think that there is a deeper intelligence behind it by producing text, and to a human, we really use language as a sign of intelligence.
If my dog could suddenly talk to me, I'd be like, oh, my God, my dog is as intelligent as I am.
The fact that my dog can understand a couple of words, I use as a sign of intelligence.
Right.
And in some ways, your dog is actually smarter because your dog.
Actually, I haven't met your dog, but dogs in general are-
She's very smart.
Are able, everybody says that about their dog, but it's the Lake Wobegon effect. Every dog is
above average. But your above average dog, like all above average dogs, can navigate in the world,
right? And can figure stuff out.
It can go into new environments and figure out, you know,
how to swim or what to jump over or whatever.
There's a kind of flexibility that your dog has that these machines don't.
And there's actually interesting research that dogs have a little bit of what
we call theory of mind,
understanding people and their desires and stuff to some degree.
Oh, they absolutely do.
And these machines, they're just mimicking text.
Sometimes they get it right.
Sometimes they get it wrong.
But I would say your dog is actually smarter.
That we have not, in fact, solved dog-level intelligence, let alone like chimpanzee intelligence
or something like that.
So what are some of the qualities that AI,
as we currently have it, is dramatically missing that separate it from any kind of general intelligence? Yeah, let's make this concrete. The first thing I'll say is that I think it's barking entirely up the wrong tree. So the right tree, for an intelligent creature, is to recognize
that there is an external world
and that it is guessing at that external world. So right now I'm looking at you and I have a model
of the world where you're sitting. So there are chairs, there are pillows, there's a microphone.
I could be wrong. The whole thing could be faked. I could be a brain in a vat, but I have a theory
about what's out there. Probably most of the time those theories are right and then i can
make inferences about it so like if watterson suddenly came into the room i could look at the
level and be like you know maybe we should save this podcast for another time i think you know
it's getting a little risky in there right yeah um and so so we can reason about the world we have
models of the world if we watch a movie we make a model of the movie, of the world
in the movie. So what the characters are doing and why they're doing it and so forth. So we're
constantly going through this process of building models of worlds. Same if we play a video game,
we kind of learn there might be a different physics and the rules that control that world.
We actually do even more than that. Because when I play a video game, I think a human somewhere designed this game.
I know about the history of game design.
I know that the human probably wants me to have a fun experience.
And so they probably,
if there's a,
if I go over here,
there's probably going to be a guy giving me a quest there because in a
video game,
the game designer wants me to have fun.
And so they're going to put quests around.
I literally am applying my theory of mind to the game itself.
Like when I was a kid, games didn't have tutorials and now they do.
Right.
And so we've all learned that.
So that's part of our expectation about the whole world of video games.
And, you know, if I put you in front of Pac-Man, you might think to yourself,
there probably wasn't a tutorial for that because it was an older game or whatever.
So you have all of this very rich knowledge of the world which, again, you know, we've kind of been over that, but a text pastiche machine doesn't have. So that's the first step, I think, to genuine intelligence. A second step is to be able to be flexible and deal with circumstances that are different, and to reason about them. So, like, if a tidal wave did come into your,
your studio right now, like, I've never seen that before, but I could still reason about it
and think like, you know, should we renegotiate when we're going to finish shooting? Should I
ask about that? Will I seem like an asshole? Cause I'm more interested in my recording than in your
health. Maybe I should phrase it a different way. And so I can do all of that, you know,
analysis relative to your circumstance, even though I'd never seen a tidal wave in a studio before. Hopefully I won't today.
But I could do that. I could reason about these weird circumstances. That's, I think,
part of intelligence is to be able to do that. And these current systems just don't really do
that. And in fact, they don't reason at all. They kind of retrieve stuff and they
make synonyms around them, but they don't
have an abstract ability to reason. Yes. And that abstract ability and the ability to take in
new information from around the world, have a model of the world in your mind, have a model
of other minds in your mind is a completely different type of thing than what the large
language models do. And this is to me, the crux of my argument with people who are going to come into the
comments of even this video when we put it on YouTube.
They're going to come in and they're going to say, Adam, you don't understand the AI
is going to get much better and they're going to build a thing that does that.
And I'm like, I don't think they are because I think the thing they currently built is
an entirely different type of thing from what general intelligence is.
I look like a large line.
Do you agree with that?
I look forward to the comment section.
Maybe I'll even drop by.
I don't usually.
Please do.
Because, no, you're totally right.
I could put a little nuance around what you say, but I think you're basically right.
So there are pathways to get to AI.
We're on a particular path.
And I don't think that path is really leading to artificial
general intelligence. Sometimes people use the metaphor of like climbing mountains. It's like
we've climbed one mountain, but the reality is the path we're on is no guarantee whatsoever of AGI. You know, look at driverless cars. People have been saying for years and years,
we'll just get a little more data and it'll work. And it hasn't. And I think we can expect the same thing with the hallucination problem.
It's not going to go away. It's inherent in how these systems work. So we need some new ideas.
And the question is, how do we foster those ideas? I think the problem is not just that we're on the
wrong mountain. We're not on a mountain at all. We're climbing a jungle gym. Maybe we're going
spelunking, you know, but now you're not being fair. Okay. I mean, the view from this mountain is, you know,
nicer than any view we've ever had, but also you can look up and you're like, yeah, okay.
It's not really, you know, it's pretty cool that we're up here. You know, people 20 years ago
couldn't get this far. It's cool. It's genuinely cool. Like, I don't mean to say that it's not
cool. It has risks associated too, but it's amazing that it works as well as it does, but it's also not the right thing. But I think one of the biggest myths about
this industry and the tech industry generally is the myth of inevitable progress. Because we have
chat GPT today, that means we are guaranteed to have the science fiction version that I'm
imagining or that I'm selling you. And that I think is the most important link in the chain to break,
not just for the public, but for lawmakers.
Because I think lawmakers are liable to believe it as well.
Yeah, absolutely.
And I think like the tech executives signing this,
like we're all going to turn into paperclips thing
is their way of trying to perpetuate that.
Right.
I mean, not for all of them, but for some of them,
it's a way of perpetuating that myth.
There is absolutely no guarantee that the path that we're on is going to get us to AGI any more than there was a path, a guarantee that the path of bigger data was going to get us to driverless cars.
It didn't work.
Well, let's talk about something very specific to my own heart.
I'm a member of the Writers Guild of America.
I'm also on a negotiating committee.
We're on strike right now.
One of our strike issues is regulating AI in our contract: we have specific terms that we want to put in place to prevent AI output from either being passed off as our work product, or us being forced to adapt the work of AI, basically putting in place measures that we hope will protect us from companies using this nascent technology for scurrilous purposes. I'm curious, in your view,
what is the threat of, you know, we're talking not AGI, but the large language models we have today,
such as ChatGPT, GPT-4, 5, whatever, for undermining workers and causing other problems
in the near term? I don't know. Well, I mean, I guess you could ask that about writing. You
can ask it about many domains. So voiceover actors are in trouble. You can clone anybody's
voice. There is nothing I can do to really make them feel better. I mean, I guess there's some
stuff about emotion that might be a little hard to capture, but
voice actors are clearly in trouble.
Screenwriters, I think, are in less trouble, but they might find their jobs shifting some.
What makes a really good movie is a plot twist that you didn't expect or really believable
dialogue, interesting story.
And these systems are not that good at it. People have written dialogues with me in them, where I'm debating somebody or something like that, and they get my top-line points, but they never get any of the subtlety of the arguments that I make. You know, people say these things imitate Shakespeare or they imitate Agatha Christie or whatever. They get some of the statistics of the words, but they don't really get what makes those authors special.
And I think it's a ways from that.
I do think the screenwriters are right to raise the question of like, how are we going to think about all of this?
I don't think immediately they're in too much jeopardy. Although the thing to worry about, I think, is you could write a half-assed first draft with the machine, and then you make the writer do all the actual work.
But you say, well, it was just a rewrite.
We're not going to pay you that much for that.
That, I think, is a real issue.
That, in fact, is exactly our issue and our concern. Look, here's the way I'll put it.
If I, and I think I've framed it this way on this podcast before,
but look, I, as a comedy writer,
if I write a joke, right?
Or if I'm just trying to tell you a joke right now, Gary,
right, I have to use all my contextual knowledge
about the world that we discussed earlier.
I have to say, who is Gary?
What things does he know about, in, you know, my theory of how his mind works?
But looks aren't everything.
Well, no, it's not just looks.
It's not just looks.
But I have to think, oh, right, you know, what age are you? What are your cultural references? What has recently happened in the world? And what is an observation that I can make that is going to strike him as true, that he hasn't heard anyone make before, that'll connect to what we're talking about at this present moment in a surprising way.
Right. In a certain sense, anytime a joke is written, it needs to be a brand new joke for that person or for that audience, right? For that scenario. If I'm writing a late night joke,
I need to have read the newspaper today and know what most of the public thinks about the news so
that I can write a joke that's going to track with what they think. And when you get into that job, you literally need Data from Star Trek in order to process all that. But our bosses do not know that. Our bosses don't understand how writing works. They don't understand the difference between good writing and bad writing. And they also are stupid enough to
believe the oversold hype from the tech companies who have told them that they can use it to
undercut writers. And so they'll use it as a scheme to say, okay, we're going to have the AI write this. And, oh, the AI is the writer. We just need you to punch it up, go to set, do all the collaborative work with everybody else.
And basically our concern is not the technology per se.
It's the technology being used by the companies to undermine us.
Is that something that you see happening in other industries as well? Is that
a general concern or is it just one that we have as the Writers Guild? I'll give you another example
that I think worked a little bit that way, which is CNET started having AI write its news stories.
They had editors look at them, but because it was sort of polished, the editors just thought it was
fine. It was all grammatical and so forth. And then they put out 70 stories and 40 of them had mistakes. So that's an example where people are not really putting the heart into it. You know, if you're editing jokes, you're going to put the heart into it, but most of the work is really the human who's later in the loop. On my own podcast, Humans vs. Machines,
we had Bob Mankoff on.
He used to be the cartoon editor at the New Yorker.
He invented the caption contest.
And he's been playing with these things quite a bit.
And he says, you know, sometimes they write good jokes now,
but they write a lot of bad jokes too.
And the systems themselves don't have any sense
about what are the good jokes and what are the bad jokes.
So you can use them as a tool as a human, but you wouldn't trust them to, you know, write a set. And we actually went through a routine that he had about evolutionary altruism and longtermism, and it was an okay draft, but you know, we walked through it and a lot of it you wouldn't want to really use.
But here's what I'd argue. Someone might say, well, okay, it writes a couple of good jokes and a lot of bad jokes, so eventually the ratio will get better. What I would say in contrast is no, actually, knowing the difference between a good joke and a bad joke, that is what writing is. So if Bob Mankoff is looking at all these output cartoon jokes and saying, that's the one good joke, I'll make a cartoon out of that, guess what? He's doing writing, you know what I mean? In the same way as if you're rearranging magnetic poetry, right? If I can keep one piece of the equation,
the large language model or Bob, I'm going to choose Bob because he can actually tell me which joke is funny.
Well, look, where should we wrap up here?
How do you see the next few years of artificial intelligence playing out?
What is your best case scenario and what is your worst case scenario for the next five years, say?
Like, what is the choice that we're facing right now in terms of how we approach this?
It's funny you should ask that because in some of the public talks that I've been giving in the
last couple of weeks, I started actually making two slides to illustrate that. And one of them
was kind of utopia. It's like, we get our act together, we come up with good global regulation,
we start using AI, we develop new forms of AI, we're no longer stuck on the large language models, and we start having AI live up to its potential: help with medicine and climate change and scientific discovery, maybe build elder care robots to help with the demographic inversion. The other slide was kind of a dystopia: we don't get the regulation. Nothing is transparent. Privacy is constantly invaded.
Misinformation runs the next election.
And basically, we wind up with anarchy before long.
The point that I make is not that I know which of these.
I think both are still possibilities, but rather that we need to make the right choices
right now.
We don't have a lot of time.
And the choices that we make about how we're going to regulate it, what research we're going to fund and so forth is going to affect probably the next
century. Like we really want to get this right. I think that is incredibly true. But I actually
want you to expand on one thing, because you talked about how we could have all the benefits of AI. And I
want to confess to something. I'm a skeptic on this topic. I think critical thinking is really, really important.
And so that's what I've been focusing on in my own communication.
I think something I have not focused on enough is the potential benefits of some of this
technology if used properly and not in a way that is going to undermine workers or create
misinformation or overhype bullshit.
So paint me a little bit more of that picture: what are the benefits that
could come from this technology when used well? I think the kind of stuff that Peter Diamandis
talks about for abundance is still possible. I don't think it's the likely outcome, but where
we just make everything faster, cheaper, better because we use AI to advance material science and advance medical
science and so forth.
Like, I think all of that is still possible.
It has to, I think, come with policies around tax credits and things like that to make sure
that the wealth doesn't just flow to the top.
There are lots of things we'd have to get right.
It sounds like the Industrial Revolution all over again, which was a little bit of a mixed blessing, I think we could all agree. You know, we are deep in, you know,
potential mixed blessing land. I think we have an opportunity to shape a positive, thriving world with AI, but it's not the default. Like we're going to have to put our asses into it in order
to get, you know, to a good place. Well, Gary, thank you so much for coming on the show to walk us through it.
I hope you'll come back at some point in the future,
maybe after we've gone a little bit down
one of those two futures that you described.
Your podcast is called Humans Versus Machines.
You can get that wherever you get your podcasts.
GaryMarcus.substack.com for your excellent newsletter.
It's one of my favorite sources on AI.
Thank you so much for being here.
Is there anything else you'd like to plug
before we let you go?
@GaryMarcus on Twitter. And this was a really wonderful conversation. Thanks for having me.
Thank you so much for being here, Gary.
Well, thank you so much once again to Gary Marcus for coming on the show for that fascinating conversation. If you enjoyed it as much as I did, head to patreon.com slash adamconover to support the show. Just five bucks a month gets you every episode of our podcast ad-free. You can even join our community. We do a live community book club over Zoom.
It's so much fun.
And if you support us at $15 a month, I will read your name out on this very podcast.
This week, I want to thank Hugo Villasmith, Shannon J. Lane, Matt Clausen,
Eki, Eki, Eki, Eki Patang, and Joseph Ginsberg.
Joseph, work on your screen name.
Eki, Eki, Eki, Eki Patang kind of outdid you there.
Once again, I want to remind you that I'm going on tour. If you live near Baltimore,
St. Louis, Buffalo, or Providence, Rhode Island, head to adamconover.net for tickets.
Thank you to Sam Rodman and Tony Wilson for producing this show with me. Everybody here
at HeadGum for making it possible. And I will see you next week on Factually. Thank you so much for
listening.