The Daily Zeitgeist - Resist The Urge To Be Impressed By A.I. 02.27.24
Episode Date: February 27, 2024
In episode 1631, Jack and Miles are joined by hosts of Mystery AI Hype Theater 3000, Dr. Emily M. Bender & Dr. Alex Hanna, to discuss… Limited And General Artificial Intelligence, The Distinction Between 'Hallucinating' And Failing At Its Goal, How Widespread The BS Is and more!
LISTEN: A Dream Goes On Forever by Vegyn
See omnystudio.com/listener for privacy information.
Transcript
Discussion (0)
I'm Jess Casavetto, executive producer of the hit Netflix documentary series Dancing for the Devil, the 7M TikTok cult.
And I'm Clea Gray, former member of 7M Films and Shekinah Church.
And we're the host of the new podcast, Forgive Me for I Have Followed.
Together, we'll be diving even deeper into the unbelievable stories behind 7M Films and Shekinah Church.
Listen to Forgive Me for I Have Followed on the iHeartRadio app, Apple Podcasts,
or wherever you get your podcasts.
I'm Keri Champion, and this is season four of Naked Sports.
Up first, I explore the making of a rivalry,
Kaitlyn Clark versus Angel Reese.
People are talking about women's basketball
just because of one single game.
Clark and Reese have changed the way
we consume women's basketball.
And on this new season, we'll cover all things sports and culture.
Listen to Naked Sports on the Black Effect Podcast Network,
iHeartRadio apps, or wherever you get your podcasts.
The Black Effect Podcast Network is sponsored by Diet Coke.
I'm Keri Champion, and this is season four of Naked Sports.
Up first, I explore
the making of a rivalry.
Kaitlyn Clark versus Angel Reese.
Every great player needs a foil. I know I'll go down
in history. People are talking about women's basketball
just because of one single game. Clark
and Reese have changed the way we consume
women's sports. Listen to the making
of a rivalry. Kaitlyn Clark versus Angel
Reese on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts. Presented by Capital One, founding partner of iHeart Women's Sports.
Hello the Internet, and welcome to Season 326, Episode 2 of Der Daily Zeitgeist, a production of iHeartRadio. This is a podcast where we take a deep dive into America's shared consciousness. And it is Tuesday, February 27th, 2024.
Oh, yeah.
I mean, what is that?
That means it's Tuesday.
We said 27th?
Is that where we are?
That's right.
2-27-24.
It's Anosmia Awareness Day.
It's National Retro Day.
So I guess honor your see-through telephones and vinyl players.
National Polar Bear Day.
National Strawberry Day.
National Kahlua Day.
For all my Kahlua lovers out there, the day is yours.
The day is yours.
The white Russian.
Is Kahlua white Russian or no?
That's Bailey's, right?
I think so.
I don't know.
It's just mudslide.
Either way, it's just way too much sugar. And you will have a bad time if that's all you drink.
So, yeah.
That's right.
Well, my name is Jack O'Brien, a.k.a.
Don't you wish your snack food were sweet like this?
Don't you want more vague dissatisfying bliss?
Don't you?
Oh, that is courtesy of Clio Universe, in reference to the fact that some of the greatest science of the late 20th century was spent focusing on how to get the perfect balance of mouthfeel and, like, enjoyment and dissatisfaction in nacho cheese Doritos. And yeah.
Just got to keep on popping them.
Got to make them keep popping.
That's who we are as a civilization, unfortunately, at times.
And I'm thrilled to be joined, as always, by my co-host, Mr. Miles Gray.
Miles Gray, a.k.a.
Pastor Warren on the rock hand side.
Pastor Warren on the rock hand side.
And that is the reference to Senator Elizabeth Warren saying her dream blunt rotation was just to smoke with the rock.
And I, my body curled into itself and I became a snail and drifted off into the sea.
Yes, thank you to La Caroni for that one,
for letting us know that's where you're going to pass the dutchie,
on the rock hand side.
On the rock hand side.
I wonder what the rock would be like high.
I feel like...
Oh, dude.
Probably pretty fun, right?
Because you know he's like simmering conservative underneath the surface.
So not good?
It would just be like getting high with Joe Rogan.
I feel like, you know, he'd say something weird, and then he'd probably, like, make an observation about your physique. He's like, yeah, dude, you're right-handed, huh? Yeah, I could tell, man, based on how your shoulders are built. Then you're like, okay, what is going on? Please explain it all.
Yeah, have some workout tips for you, unsolicited.
Don't need that.
Miles, we are thrilled to be joined by the hosts of the Mystery AI Hype Theater 3000 podcast.
It's Dr. Alex Hanna and Professor Emily M. Bender!
Hello!
Hello!
Hello.
Welcome.
Welcome to both of you.
Yeah.
In addition to being podcast hosts, the highest honor one can attain in our world.
You both have some pretty impressive credits.
So I just want to go through those as well.
You are a linguist and professor at the University of Washington, where you are director of the Computational Linguistics Laboratory.
Alex, you are Director of Research
at the Distributed AI Research Institute.
You're both widely published.
You've both received a number of academic awards.
You both have PhDs.
What are you doing on our podcast?
Yeah, how?
How do you agree?
Like, our booker is really good.
Shout out to super producer Victor.
But this is, I don't know what's going on here.
Yeah, we're out of our depth here.
We're excited to be here.
And part of what's going on is that, you know, we're talking to the world about how AI, so-called AI, isn't all that.
Right.
And so a chance to have this conversation is really helpful and important.
We've been talking about that for a while, but listening to your podcast really drove some things home.
So I'm very excited about this conversation.
Yeah. Every time, like, we've had Dr. Kerry McInerney on before, I think she's been on your show as well. Every time we hang out in New York, we were just always talking about, like, our own evolution with how we felt
about AI. Because before I was like, it's going to take all the writing
jobs. And then people were like, it's not. I'm like, how do you know? And they're like,
here's some more information. I'm like, it's vaporware to make money. So it's always nice
to like, yeah, it's always nice to kind of keep, you know, pushing my own understanding along
because I also encounter, you know, a lot of friends and family who also kind of have the
same thing. They're still at the like, this stuff looks like it's going to change the world forever in a way
we'll never know. And yeah, so it's always great to have the input of actual learned experts on
the topic. Learned expertise. So we're going to get into all of that. But before we do,
we like to get to know our guests a little bit better
by asking you, what is something from your search history that is revealing about who you guys are?
I have an exciting one. I can start. Hey, this is Alex. Yeah. So I'm building a chicken coop
because my second job, or rather one of my third jobs is kind of being a little suburban farmer.
And so I'm getting some chicks delivered from a hatchery. And they said, you need to put this
directly into the brooder. And I'm like, what's a brooder? And I had looked up what a brooder is,
and then came up and found that it can be anything from a Rubbermaid tub, where you put the chickens while they're very small, to, like, a repurposed rabbit hutch.
So then I started thinking all morning about how to build the chicken brooder.
That's my strategy.
And wait, so it's just, like, a pen, like a mini pen for them, to just kind of...
Yeah, it's like a little pen for the chicks.
And you put in a heat lamp
because they need to be warm.
They have only
got that chicken fuzz. They don't have the chicken feathers.
Yeah, yeah.
To me, this
sounds like a job for a shoebox.
But that's probably...
Too small.
Unfortunately, that's what they're getting
if I'm building a chicken coop. That's using what I got.
If you're in a rush, you can use your bathtub, if you have one of those, and put some puppy pads.
Right. I learned a lot about this today, as I was just waiting for another meeting to start, and just read everything I could.
Nice. And of course, that chicken coop is going to come in handy,
free eggs during the robot apocalypse, right?
Which Sam Altman has told me is coming.
So I have to assume that's also what you're thinking, right?
Right.
I mean, she's prepping.
She's prepping for the eventual demise.
Yeah, I think during COVID,
I think there was a run on chicken eggs.
So I'd go to Costco and you couldn't get the gross of chicken eggs.
And then I'd go, hey, and I went to my backyard.
Sold them to your neighbors at a markup. Got them for eight bucks an egg if you want them.
Yeah. They are still really expensive. Emily, how about you? What's something from your search
history? So I took a look and it's a bunch of boring stuff. Like I can't remember, I can't
be bothered to remember the website of a certain journal that I was
interested in.
So I'm like searching the name of the journal, but then, like, down below that... We've been watching For All Mankind, which is this, like, alternate timeline thing. What happens if the Soviets landed on the moon first?
Right.
So it diverges in the 1960s, but it keeps referencing actual history.
So my search history is full of like, OK, so when did the Vietnam War actually end?
Right. And like, you know, which Apollo mission did what?
So it's a bunch of queries like that sort of comparing what's in the show to actual history.
How does that line up based on your sort of cursory research as you watch the show?
So interestingly, there's a point where Ted Kennedy is, like, in the background, being talked about, not going to Chappaquiddick.
Oh.
And then an episode later, he's president.
Wow.
And I suspect there's way more of that kind of stuff that I'm not catching.
Right, right, right. Just subtle things.
Right. The media is like, and Ted Kennedy missed a barbecue this weekend in Chappaquiddick.
That's super interesting. Yeah, this is, you know, the third or fourth person who's mentioned For All Mankind to us. I think this is pushing it over the threshold to where I have to watch this damn thing.
It's enjoyable, but it's tense. Yeah. You know, anything where it's like
people in outer space really creeps me out
because like that degree of like loneliness
and sort of lack of fail-safes.
Yeah.
You know, like, when all the fail-safes there are the ones that you built, and beyond that, you know, you're SOL,
that's creepy.
Seems uncomfortable, outer space.
Yeah.
What is something,
Alex, that you think is underrated?
I was trying to think of something, and
the only thing I could come up with this morning
was the Nintendo
Virtual Boy.
The red goggles
on a tripod. The red goggles
on a tripod, and I know
in whatever, 20 years
later, 30 years later, I think, at this point, you know. But then I was reading the Wikipedia page, and it kind of said, you know, it was giving people headaches and it was overpriced. But I'm going to stand by my claim that it is underrated. It was ahead of its time, and, you know,
it was one of those products that completely, you know, I don't know.
I think, you know, if you could go back, maybe it was just maybe I don't know.
I don't know what they could have done differently.
But yeah, that's that's all I got.
It was cool as hell.
Have you seen?
OK, so I was the same way.
Like I was subscribed to Nintendo Power, all that shit.
Totally. I was such a nerdy game kid.
And when like those ads for it,
it was like,
this is the future.
It blew my mind.
My parents obviously were like,
that's a hell no for a month.
Because of how silly you would look.
We don't want to deal with that, or be in public with that.
Yeah.
It's also, like, it wasn't a couple of grand, it was just ridiculously... It was something like more than what a PlayStation cost, which was sort of, like, the height of it, or something. Anyway, I remember a kid in my school had one, brought it to school.
No!
And we lined up to play it, and it was the most underwhelming experience, that it, like, broke me. Because, like, there's nothing VR about this. It's, like, lightly 3D everything,
I feel like in this like monochromatic,
like red scale kind of graphic thing.
It just was really,
it was underwhelming,
but I,
I completely follow the same path you had Alex and being like,
this is,
I need this.
This is the future.
Yeah.
Yeah.
And yeah,
that's so funny.
And everyone thought about the Vision Pro as being, like, the spiritual sequel.
Exactly.
It's the direct descendant.
Yeah.
You know,
the Nintendo Virtual Boy crawled so the Apple Vision could run.
Right.
How about you,
Emily?
What is something you think is underrated?
So after listening to a couple of your episodes and like all the nineties nostalgia, I have to say Gen X. Gen X is underrated.
Yeah. The people I thought were the coolest. We've got some monsters among us.
You know, but you know, Gen X, we're small, we're scrappy. I saw someone, a millennial that I know
online posting something about how weird it is to talk to Gen X folks about their internet experiences. And I wrote back, don't cite the old
magic to me. I was there when it was created. Although I feel like it's a bit meta, Emily,
because I feel like Gen Xers are always going to say that they're underrated, like they're underappreciated.
And it's kind of a class feature.
It's our whole identity.
Exactly.
It is a reference.
Yeah, it's a class feature of Gen Xers to say that they are unappreciated.
And, you know, you children have to struggle with AOL dial-up.
How about this: prior, no internet, you know? I mean, unless you had that.
Do you even know how to read a map?
No.
I do.
As an elder millennial, I do.
Yeah.
Yeah. I grew up in LA.
I know how to use a Thompson guide or a Thomas guide.
That's, that was, that was our Google maps before anything.
But yeah.
Map quest, right?
Yeah, exactly.
But yeah, like, I think about it too, like, as an older millennial, Gen X were all the people I thought were the coolest people growing up. Like, oh man, I want to be a hacker, I want to be like them. Where's my...
Oh yeah. Well, hey, I don't know. What are we going to say about ourselves as millennials, I wonder?
I think we're just, like, dead inside on some level, that we're like, yeah, whatever. Cool.
We're dead inside and we killed everything, right?
The stock headline, you know, millennials are killing
the napkin industry. But the avocado toast is going to be
so good. It's going to be amazing.
We killed housing somehow because we spent too much on avocado toast.
Yes, and, you know, turmeric lattes or whatever.
Those are good, though. I like them. Maybe even underrated.
Oh, okay. There will be two people proposing to each other with rings made of turmeric and avocado instead of diamonds, because you guys killed the diamond industry.
Our new currency, yes.
I mean, good riddance, though.
Yeah, but we always talk about that framing.
It's just basically millennials can't afford X thing.
Right.
We can't afford.
We're not killing the diamond industry.
We don't have money for the diamond industry.
We don't have money for these other things.
Millennials have this weird trend of living five to an apartment
because they love it.
Yeah.
Alex, what's something you think is overrated?
Well, we're on the AI stuff.
And so Emily and I are going to talk about parts of it.
But image generators, for sure.
Text-to-image generators.
I mean, people think they're very flashy.
But, I mean, it does a lot of copying.
You know, I was giving a talk at a San Francisco public library with Karla Ortiz, who's a concept artist, and she had all these examples in which, you know, you give it a prompt and it would literally just copy kind of a game art, and then put it on the new thing and have some kind of squiggles around it, right? So the stuff is just, I mean... And it's also, I mean, the kind of images, I think, it's just this distinctive style that, aesthetically, apart from all the problems with this stuff, the non-consensual use and the data theft and all the awful kinds of non-consensual deepfake porn and the kind of far-right imagery, the stuff is just ugly. Like, it's got this, like, every person in it just looks sunken. They look like they've just seen some shit, you know? They've got some shadows behind, like, it's a thousand-mile stare there.
Yeah. Just glassy. And you're just like, what is the appeal, you know? And so I just, yeah, I just, I don't like it. Yeah, it's not aesthetically
pleasing.
Especially so many things when people try and do, like, real life. Because I look at, like, the Midjourney subreddit on Reddit, to see, like, what people are making, and that was sort of, like, my intro. Like, wow, these are, like... Like, when people get the prompts right, it's interesting. They're, like, Simpsons-but-real-life characters, but also from a Korean drama. Like, okay, so we're getting the K-drama IRL Simpsons, but the aesthetic all looks like it's sort of like David LaChapelle's photography. It's weird. There's, like, this hyper-stylized look to it that feels very specific. And I think the thing that interests me is, like, as someone who's terrible at visual art, it's, like, this way for someone with absolutely no talent in that area to be like, I summon this thing I'm thinking of.
And you're like, fine, it's a bunch of copies.
But like, I think that's the thing that most people are like, oh, cool.
Right.
It made the thing.
It made the thing.
It made mediocre, like, art.
I had the thought,
AI is like a doesn't matter generator.
Like, Star Trek has the matter generator,
and AI just, like, shits out a bunch of stuff
that doesn't matter, that's, like, uninspired.
But it, like, matters to you for a split second.
You know, it's like, oh, wow, that's weird that, like, that just came from a text prompt. But it actually doesn't matter in any long-term sense.
Right. Yeah, I love that framing. I mean, what's that famous, like, image of the diner?
Like Nighthawks? Is that what it's called?
Yeah, Nighthawks, yeah.
Someone had this great- Terrible thread.
Well, no, it was a troll thread, though.
Do you know what I'm talking about, Emily?
There was like the Night Hawks, the image,
and then someone was like,
I thought it, like, look at this composition.
It's so boring.
What if they were happier?
And it was kind of like a troll thread.
I think it was like-
Was it a troll thread?
I thought it was genuine. I thought they were really saying, I'm making it better with AI.
I think it was a troll thread.
It's like, why are these people so sad?
What if they were happy?
What if it was daytime?
Yeah, it was sort of like that.
Like, you know,
and I thought it was a troll thread.
Maybe I'm granting posters, like, a little too much grace here.
But I think it was like the idea of like, I made it interesting.
Look at this.
Why is it so dark?
Yeah.
Right.
What if there was confetti?
Yeah, I know.
What if, like, a swagged-out Pope was also sitting at that table? With a lightsaber?
Yeah, right.
Oh, so much. The swagged-out Pope can be applied to any image.
The swagged-out Pope did fool me, though.
I thought it was real.
And then I was like, this is such a cool
But again, it's not like ChatGPT was like, you know, it'd be funny. It was, somebody was like, you know, it would be funny, and gave that prompt to ChatGPT, and then, you know, it ended up being.
But like, that's that's the thing that you guys talk about a lot is just the erasure of like people are like, well, and here's chat GPT doing this thing.
And it's like that somebody told it to do and then programmed it to do.
Right.
It's doing a specific thing. It is not a person who is coming up with ideas.
Right.
It's like talking about Picasso's paintbrush, but just as the paintbrush, like, look what the paintbrush did. Oh my God, dude. Which is kind of cool and zen. There's a little Buddhism there
that I appreciate, but I feel like that's not where it's headed, unfortunately. Emily, do you
have something that you think is overrated?
Yeah.
I mean, the short answer is large language models are overrated.
But I think we're going to get into that.
We are.
Probably.
So I'm going to shift to my secondary answer, which is actually to take a page from Alex's work to say that scale is overrated.
That if the goal of taking something and scaling it to millions of people is like the thing, the only folks really benefiting from that are the capitalists behind it.
The product is worse.
The impact on the societies or the communities that lost access to whatever their local solution was is worse.
So scale is the thing that so many people are chasing, especially in the Bay Area, but also up here in Seattle.
And it's way overrated.
I can tell you that. All over, in every place, you hear that all the time.
How does scale
make lines go up?
I worked on a
website that was having
really fast growth and
it was just based on publishing
these three articles a day and really
focusing on making them as good as possible. And unfortunately, once it got a lot of attention, then the executives came in, and their first question was, how do we scale this? How do we scale this, though? Like, how do we get a hundred articles a day? And that was, like, back in 2010. It's just been how capitalism thinks about this from the beginning.
Yeah.
And now they're doing it with movies.
They're like, who needs movies now?
I could make myself the star of Oppenheimer.
It's like, why would you want to do that?
Seems like you missed the point of that movie.
Okay.
I want to be destroyer of worlds.
That'd make me feel strong.
Yeah.
Yeah.
All right.
Well, let's take a quick break and we'll come back and get into it.
We'll be right back.
I'm Jess Casavetto, executive producer of the hit Netflix documentary series Dancing for the Devil, the 7M TikTok cult.
And I'm Clea Gray, former member of 7M Films and Shekinah Church.
And we're the host of the new podcast, Forgive Me For I Have Followed.
Together, we'll be diving even deeper into the unbelievable stories behind 7M Films and L.A.-based Shekinah Church, an alleged cult that has impacted members for over two decades.
Jessica and I will delve into the hidden truths behind high-control groups and interview dancers,
church members, and others whose lives and careers have been impacted, just like mine. Through powerful, in-depth interviews with former members and new, chilling firsthand accounts,
the series will illuminate untold and extremely necessary perspectives.
Forgive Me For I Have Followed
will be more than an exploration. It's a vital revelation aimed at ensuring these types of abuses
never happen again. Listen to Forgive Me For I Have Followed on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.
Hey, I'm Gianna Pradente. And I'm Jemay Jackson-Gadsden. We're the hosts of Let's Talk Offline,
a new podcast from LinkedIn News and iHeart Podcasts.
When you're just starting out in your career,
you have a lot of questions,
like how do I speak up when I'm feeling overwhelmed?
Or can I negotiate a higher salary
if this is my first real job?
Girl, yes.
Each week, we answer your unfiltered work questions.
Think of us as your work besties you can turn to for advice.
And if we don't know the answer, we bring in experts who do,
like resume specialist Morgan Saner.
The only difference between the person who doesn't get the job
and the person who gets the job is usually who applies.
Yeah, I think a lot about that quote.
What is it, like you miss 100% of the shots you never take?
Yeah, rejection is scary, but it's better than you rejecting yourself. Together, we'll share what it really
takes to thrive in the early years of your career without sacrificing your sanity or sleep. Listen
to Let's Talk Offline on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
I'm Keri Champion, and this is season four of Naked Sports, where we live at the intersection
of sports and culture.
Up first, I explore the making of a rivalry, Kaitlyn Clark versus Angel Reese.
I know I'll go down in history.
People are talking about women's basketball just because of one single game.
Every great player needs a foil.
I ain't really near them boys.
I just come here to play basketball every single day, and that's what I focus on.
From college to the pros,
Clark and Reese have changed the way we consume women's sports.
Angel Reese is a joy to watch.
She is unapologetically black.
I love her.
What exactly ignited this fire?
Why has it been so good for the game?
And can the fanfare surrounding these two supernovas be sustained?
This game is only going to get better because the talent is getting better.
This new season will cover all things sports and culture.
Listen to Naked Sports on the Black Effect Podcast Network,
iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
The Black Effect Podcast Network is sponsored by Diet Coke.
And we're back.
We're back.
And yeah, so as we mentioned, you know, we've done some episodes on AI, how there's a lot of hype around it.
Your podcast really helped me understand just how much of the current AI craze is really
hype.
There's this one anecdote I just wanted to mention, where a physicist who works for a car company was brought into a meeting
to talk about hydraulic brakes. And one of the executives of the car company asked them if putting some AI in the brakes wouldn't help them operate more smoothly, which they were kind of confounded by, and were like, what? What do you mean? But, like, I guess that gets to the big question I have a lot of the time with AI hype in the media, which is, what do these people think AI is?
It's magic fairy dust. But you actually made it slightly more plausible by making it brakes.
The story was pistons.
Pistons.
Okay.
And the engineer was like, it's metal.
There's no circuits here.
Right.
Yeah.
But yeah, I mean, there are three kind of big ideas that I got a new perspective on
from your show that I wanted to start with.
The first one is kind of this distinction between limited and general intelligence, or, like, general AI, which is something that I've heard mentioned a lot in the past couple of years in articles about ChatGPT.
They're basically trying to use this as a distinction to say, look, we've known that computers can beat a chess master for a while now,
but the distinction that we are trying to sell to you
is that this one is different because it's a general intelligence.
It can kind of reason its way into doing lots of things.
And one of the details I heard you point out on your show is that it's still pretty limited.
Like it's still basically doing a single thing pretty well.
Like ChatGPT is.
Can you talk about that? What is that distinction, and how kind of imaginary is it?
Yeah. So before Alex goes off on chess as a thing, and we'll get there, Alex, you're right that ChatGPT does one thing well. But the one thing that it is doing well is coming up with plausible sequences of words.
Right.
The problem is that those plausible sequences of words can be on any topic. So it looks like it is doing well at playing trivia games and being a search engine and writing legal documents and
giving medical diagnoses. And the less you know about the stuff you're asking it, the more plausible
and authoritative it sounds. But in fact, underlyingly, it's doing just one thing,
predicting a plausible sequence of words.
Right. Yeah. And the thing about chess is that it wasn't too long ago that general intelligence meant chess playing, you know. And so you had these, you know, during the Cold War, they would have these machines that could play chess, and then that was supposed to be a substitute for general AI. They thought that was general intelligence, and it was a big deal when, you know, IBM Watson beat Garry Kasparov.
IBM Watson? Was that Deep Blue?
Deep Blue, sorry, yes, yes. Watson won Jeopardy. It beat Ken Jennings.
Yeah, thank you. It was another one of those IBM ones. And so then that was the bar, and then they're like, well, chess is now too easy, we have to do Go, you know. And then they got into some real-time strategy games, like StarCraft and Dota 2. And so there's always
been this kind of thing that's been a stand-in for intelligence, and this has been what the
advertisement for general intelligence is. And so in addition to what Emily's saying,
it's really good at doing this next word thing. I mean, it seems to achieve some acceptable baseline for all these different tasks.
And then these tasks become like stand-ins for general intelligence. And it really gives away
the game when people like Mark Zuckerberg come out and they say, well, Meta is going to work
on AGI now. I don't really know what that is, but, you know, we're going to do it. But you guys seem to like it as investors.
And so therefore, that is what we're going to call our thing now. Yeah, yeah. It's literally
turning back towards the machine and, like, dialing up the AGI knob and seeing if the crowd, you know, seems to cheer at it.
Yeah. So I think, Emily, you were specifically saying that one of the training methods would take an existing sentence, cover up a word, the computer guesses what that word is, and then compares the guess to what the actual word is. And, like, it got better and better at doing that, until it seems like the computer is talking to you using the language a human would use in conversation. Like, just hearing it put that way, for some reason, was like, oh, so it's, like, just better. It's doing the same thing that a chess computer does, but it's just more broadly impressive to people who don't know how to play chess.
So it's not actually doing the same thing that a chess computer does.
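The "cover up a word, guess it, score the guess" training signal just described can be sketched concretely. This is a hypothetical toy, not any real model: simple bigram counts stand in for a neural network, and the corpus, sentences, and function names are all invented for illustration. But the loop has the same shape: mask a word, predict it from context, compare against the truth.

```python
from collections import Counter, defaultdict

# Toy corpus; a real language model trains on billions of words.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# "Model": count which word tends to follow each word.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def guess_masked(words, i):
    """Guess the (masked) word at position i from the word before it."""
    candidates = follows[words[i - 1]]
    return candidates.most_common(1)[0][0] if candidates else None

# Training-style evaluation: mask each word, guess, compare to the truth.
sentence = "the dog sat on the mat".split()
correct = 0
for i in range(1, len(sentence)):
    if guess_masked(sentence, i) == sentence[i]:
        correct += 1
print(correct, "of", len(sentence) - 1, "masked words guessed correctly")
```

A real model adjusts millions of learned weights every time a guess is wrong, instead of just counting, but the objective, predicting a plausible word given the surrounding words, is the same.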
So what Deep Blue did was they just threw enough computation at it to calculate it out. What are the outcomes of these moves? What opens up? And you put enough calculation in for chess, that's doable with computers of the era of Deep Blue. Go, it turns out, the search space is much bigger.
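The "calculate out the outcomes of these moves" strategy is exhaustive game-tree search, which can be sketched on a deliberately tiny game. The game here is a hypothetical stand-in (take 1 to 3 stones; whoever takes the last stone wins), chosen because its full tree is small; chess needed Deep Blue-scale hardware precisely because its tree is astronomically larger.

```python
def best_move(stones):
    """Exhaustively search the game tree.

    Returns (score, move) for the player to act, where score 1 means
    a forced win and -1 a forced loss with best play on both sides.
    """
    best = (-2, None)                # worse than any reachable score
    for take in (1, 2, 3):
        if take > stones:
            break
        if take == stones:           # taking the last stone wins outright
            return (1, take)
        # Opponent now faces stones - take; their best result is our worst.
        opp_score, _ = best_move(stones - take)
        score = -opp_score
        if score > best[0]:
            best = (score, take)
    return best

score, move = best_move(10)
print("win" if score > 0 else "lose", "- take", move)  # prints: win - take 2
```

No pattern matching or learning is involved: the program literally enumerates every continuation, which is exactly why this approach stops working once the search space explodes, as it does for Go.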
So with AlphaGo, what they did was actually train it more like a language model, on the sequences of moves in games from really highly rated Go players. Someone beat it recently by playing terribly. And then it was like, I don't know what to do; the games were completely outside of its training.
They're just so crap at it.
That's why I usually lose at chess,
because I'm so much better than everybody.
It's just their low level confuses me.
So there was a big deal about how GPT-4
could play chess super well.
And one of the things about most of the models
that are out there right now
is we know nothing about their training data,
or very little.
The companies are closed about the training data, even OpenAI, right?
But people figured out that there's actually an enormous corpus of games from highly rated chess players, in the specific chess notation, in the training data for GPT-4. So when it's playing chess, it is doing predict-a-likely-next-token based on that part of its training data. But that's not what Deep Blue is up to.
Deep Blue is a different strategy. And in all of these cases, there's this idea that like, well, chess is something that smart people do.
So if a computer can play chess, that means we've built something really smart, which is ridiculous.
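The contrast can be made concrete. Deep Blue's strategy was search: enumerate moves and evaluate what they lead to. Here is a minimal sketch of that idea on a toy game (Nim rather than chess, and the real system used far more sophisticated evaluation and pruning; this only illustrates the search-not-imitation distinction).

```python
from functools import lru_cache

# Deep Blue-style play in miniature: search the game tree and evaluate
# outcomes, rather than predicting likely moves from past games.
# Toy game (Nim): players alternately take 1-3 sticks; whoever takes
# the last stick wins.

@lru_cache(maxsize=None)
def can_win(sticks):
    """True if the player to move can force a win with perfect play."""
    return any(not can_win(sticks - take)
               for take in (1, 2, 3) if take <= sticks)

def best_move(sticks):
    """Choose a move by calculating outcomes, not by imitation."""
    for take in (1, 2, 3):
        if take <= sticks and not can_win(sticks - take):
            return take
    return 1  # every move loses against perfect play; just take one
```

A model trained on transcripts of past Nim games would instead emit whatever move usually follows the current position in its data, which is the language-model approach described above.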
And I just want to be very clear that Alex and I aren't saying there's some better way to build AGI.
We're saying there's better ways to seek out effective automation in our lives.
I'm kind of speaking for you there, Alex, but I think we're in agreement.
Yeah, or rather, I mean, even if we want automation in particular
parts of our lives, what is the function of making chess
robots, right? And insofar as building
chess robots in the Cold War, I mean, a lot of it had to do with, you know, Cold War scientific acumen, the US versus the USSR, and seeing if we could do better. And, you know, chess was one front; getting to the moon was another. Speaking of the show, which has already completely evaded me, that we were talking about earlier. For All Mankind. For All Mankind. Thank you. Please cut me out. Make me sound smart. It's a tough title.
It doesn't stick in the brain.
And they should have used AI to name that show.
Yeah, exactly.
Yeah.
Yeah.
That was the historical sexism that makes it stick.
Or maybe makes it a little bit slippery.
Right.
Yes.
Yeah.
So it's the same thing.
You know, getting to the moon first.
So, I mean, this becomes a stand-in, and a lot of it's, frankly, kind of showing off: you know, mine can do it better than yours. And so this association with chess, as Emily said, is with this thing that smart people can seem to do, not with questions like: is this automation useful in these people's lives? Will it make these kinds of rote tasks easier for people? Instead, we sort of jumped over that completely to go, let's make this do art, and, oh, it's going to put a bunch of concept artists and writers out of work. Even if it doesn't work well, we're going to scare the wits out of these people who do this for a living. What's, like, is there, I feel like, you know,
looking at CES and just what happened at CES, so many companies are like, it's got AI. This thing, this refrigerator, is using AI. And I feel like for most people, who are just consumers or just kind of passively getting this information, there's a discrepancy between how it's being marketed and what it is actually doing. Right. Yeah. And so part of me is like, why are all these C-suite maniacs suddenly being like, dude, this is the future. Like, you've got to get in on it.
And I know that a lot of it is hype. But can you sort of like help me understand like the
hype line or pipeline to how a company says, oh, this is what we do. And then how that creeps up to the capitalists who
are then they're like, yeah, yeah, yeah. What, what, what? And then begin to say, like, make
these proclamations about what large language models are doing or how it's going to revolutionize
things. What's sort of like, cause from your perspective, can you just sort of be like,
just give us the, the unfiltered, like, this is how this conversation is moving from these laboratories into places like Goldman Sachs?
Yeah, I mean, I think a lot of hype seems to operate via FOMO.
Honestly, if you want it put in a kind of reductive way: you need to get in on this at the ground level.
If you don't, you're going to be missing out on huge opportunities,
huge returns on investment, right?
And one of the ways I would say that AI hype is kind of at least a bit
qualitatively different than, let's say, I don't know, crypto or NFTs or, you know, DeFi, is that crypto has kind of shown itself to be such a house of cards. It takes only a few things, like your Sam Bankman-Fried going to jail, or these huge kinds of scandals happening with finance and Coinbase, and the wild fluctuation of Bitcoin and other types of crypto. Whereas you have at least some kind of proof of concept with ChatGPT, where it seems like this thing can do a lot.
And so there's enough kind of stability in that where folks say, yeah, let's go ahead and get in on this.
And if we don't get it on the ground floor,
we're missing out on millions or billions of dollars in ROI.
And so the hype is really, at its base, this kind of FOMO element. And I think it's just the thing where, you know, it obscures so much of what's going on under the hood.
You see gen AI everywhere. People say, we're putting gen AI in this, gen AI in that, as if you can just rub it on anything.
And calling things AI itself: technologies that have been called AI since the inception of the term in the mid-1950s range from a logistic regression, which is a very basic statistical method that gets taught in Stats 101 or 201, to these large language models that we see OpenAI hawking. And so you could say AI isn't any one thing.
I've got AI.
I've programmed by hand.
I did the math by hand.
Ooh.
Right, right.
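The joke is basically accurate: logistic regression, one of the basic statistical methods that gets labeled AI, really is arithmetic you could do by hand. A minimal sketch, with made-up weights for illustration (in practice the weights are fit to data, e.g. by maximum likelihood):

```python
import math

# Logistic regression "by hand": the Stats 101 method that also gets
# marketed as AI. The weights below are invented for illustration.

def predict_proba(features, weights, bias):
    """P(y = 1 | x) = sigmoid(w . x + b), computed with plain arithmetic."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# e.g., a made-up model of pass probability from hours studied:
p = predict_proba([6.0], weights=[1.5], bias=-6.0)  # about 0.95
```

The same mathematical object can be sold as "AI-powered prediction" or taught in a second-year stats course; only the marketing differs.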
So I think part of what's going on here
is that because of the way ChatGPT was set up as a demo
and then all these people did free PR for OpenAI,
sort of sharing their screen caps,
everybody has this sense that there's a machine
that can actually reason right now.
And then you call anything else AI, and the, you know, gee-whiz factor from ChatGPT sort of makes the whole thing sparkle.
There's a funny story from this conference called NeurIPS,
which is Neural Information Processing Systems.
Not real neurons.
These are the fake ones
that these things are built out of.
It's like a mathematical function that's sort of an approximation of a 1940s notion of what a neuron
is. And a few years back, I want to say like- You sound impressed, by the way.
Around 2016, 2017, I think, some folks at NeurIPS did a parody where they put together this pitch
deck of a company that was going to build AGI. And it's ridiculous. Like, the pitch deck is basically step one, dot, dot, dot, step three: profit. Like, there's nothing in
there. And this was just on the cusp of when AI went from being sort of a joke, like you wouldn't
call your work AI if you were serious to where we are now. And what these people didn't realize
when they were doing their pitch deck was that there were some folks in the audience who had already stepped over into that, like, AI true believer, we're-gonna-make-a-lot-of-money-on-this step. People came up and offered them money for their fake company. That's great. Yeah. So it's like the dot-com boom. There's just, like, what's the company called? Something-dot-com. Yeah, yeah. They're ready for the next thing, right?
They've kind of stalled out post-iPhone, and they're ready for the next big thing,
whether it's here or not.
And so, yeah, the gee-whizification, or the gee-whiz results, from OpenAI and ChatGPT,
I feel like they were like, good enough.
Let's get this thing going. Let's get
this machine cranking out. The idea of describing like the mistakes that ChatGPT makes as
hallucinations, like the marketing that's being done across the board here is pretty impressive.
The fact that ChatGPT even uses the word "I," right, that's a programming choice that could have been otherwise, and a different choice would have made things much, much clearer.
Right.
Yeah.
ChatGPT is trained on the entire Internet.
Not true.
Yeah, that was something that I had accepted when it first started.
I'm like, well, this thing hallucinates and its brain is the entire Internet.
What could go wrong?
So the thing about the phrase "the entire internet" is that the internet is not a thing where you can go somewhere and download it.
Right.
It is distributed.
And so if you are going to collect information from the internet, you have to decide which URLs you're going to visit.
And they aren't all listed in one place.
Right.
There's no way to say the entire internet.
There's some choices happening already.
Yeah.
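The point about choices is easy to see in code. Any crawl starts from chosen seed URLs and only ever sees what's reachable from them; this toy sketch uses a made-up link graph standing in for the web (all the names are hypothetical).

```python
from collections import deque

# "The entire internet" is not a download: a crawl starts from chosen
# seeds and follows links, so the data is shaped by those choices.
links = {
    "news.example": ["blog.example", "forum.example"],
    "blog.example": ["news.example"],
    "forum.example": [],
    "shop.example": ["reviews.example"],  # never linked from our seeds
    "reviews.example": [],
}

def crawl(seeds, max_pages=100):
    """Breadth-first crawl: only pages reachable from the seeds appear."""
    seen, frontier = set(), deque(seeds)
    while frontier and len(seen) < max_pages:
        url = frontier.popleft()
        if url in seen:
            continue
        seen.add(url)
        frontier.extend(links.get(url, []))
    return seen
```

Seed it with only news.example and shop.example never shows up; a different seed list yields a different "internet," which is the choice being glossed over.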
And what they mean when they say the entire internet, most of the time, is most of the English-available internet. And although they've got some other languages, much of the work that has been translated has actually been machine translated from English into another language. So, for instance, a bunch of the URLs that they have are from the Google Japanese patents in Common Crawl (the Dodge paper gets into this): a lot of it's been machine translated from English into Japanese, and it's just available through Google's Japanese patent site. And so a bunch of this stuff is just loaded in. And so, I mentioned this data set, Common Crawl, which, in the GPT-2 paper, I think they cite it, or is it the GPT-3 paper? I think it's the GPT-3 paper. They said Common Crawl is this data set that they use, which is this weird nonprofit that was collecting a bunch of data that could be useful for some limited scientific inquiry.
But then it became, now they've completely rebranded themselves.
They're saying, we have the data that, you know,
fuels large language models.
And the Mozilla Foundation, the folks whose corporation makes the web browser Firefox, did this report on Common Crawl and kind of what's behind it too, and found the weird idiosyncrasies of it.
But yeah, when they say the whole internet,
I mean, yeah, it's these curated data sets.
You can't get this whole internet.
That's just a falsity.
But for our listeners, you can get it from Jack and I.
We do have that file.
Yeah, it's just one large zip file.
Yeah.
It's just going to.
On your BitTorrent.
Yeah, exactly.
Yeah.
Yeah.
It's on Pirate Bay.
Check it out.
Yeah.
Yeah.
And we read it every morning to prepare for this show.
Just the whole thing.
Just download it.
Yeah.
Let's take a quick break and we'll be right back.
And we're back.
So, yeah, just in terms of juxtaposing, like what these companies are saying and the mainstream media is buying versus
what is actually happening: I was reminded of Sam Altman telling The New Yorker that he keeps, like, a bag with cyanide capsules ready to go in case of the AI apocalypse. So, you know, he is an expert who gets billions of dollars richer if people think his technology is so powerful that he's, like, freaked out by it. But that's something that, I don't know, I've seen reprinted in, like, long articles about the danger of AI. And then there's this other, more real-world trend, where you talk about a Google engineer who admitted they're not going to use any large language model as part of their family's healthcare journey. Oh, that was a Google senior vice president. Very high up, actually.
Not a senior vice president.
A Google VP, Greg Corrado,
who's one of the heads of Google Health itself.
Oh, okay.
Yeah, and there's also this story from... your show has a Fresh AI Hell segment at the end, where you talk about these rapid-fire examples, headlines that are just...
Just Fresh AI Hell.
Yeah, it's just Fresh AI Hell.
It's new versions of hell. The fresh AI hell.
Yeah.
The one about Duolingo from a recent episode where they're getting rid of human translators,
firing them, cutting the workforce, replacing them with translation AI, even though the
technology isn't there yet.
But the point that you were making on the show
is that they're willing to go forward with that
because the user base won't notice the difference
until they're in Spain and need to get to a hospital
and asking for the biblioteca.
You know, they are specifically an audience who is not going to know how bad the product they're getting is, because they're just not in a position to know that, by the nature of the product. And so just this distinction: between AI being hyped to the mainstream media, in these long-read New Yorker and Atlantic articles, as a future that we should be scared of, because it's going to become self-aware and Sam Altman is freaked out; and then what it's actually doing, which is just making everything shittier around us. That's a big chunk of what I took away from your show, that was just like, oh, yeah, that makes way more sense. That feels much more likely to be
how this thing progresses. Yeah. So the AI doomerism, which is when Altman says,
I've got my bug-out bag and my cyanide capsules in case the robot apocalypse comes; or when Geoff Hinton, who has a Turing Award for his work on the specific kind of computer program that's used to gather the statistics behind these large language models, says he's now concerned that it's on track to becoming smarter than us. These piles of linear algebra are not going to combust into consciousness. And anytime someone is pointing to that boogeyman, what they're doing is basically hiding the actions of the people in the picture, who are using it to do things like make a shitty language-learning product, because it's cheaper to do it that way than to pay the people with the actual expertise to do the translations.
Right.
Yeah.
Yeah.
And it's just leading to, I mean, I like this kind of thought on this, this kind of process. Cory Doctorow has got this kind of sister concept to AI hype, which is enshittification, which I think the Linguistic Society of America, it was their overall... Yeah, the American Dialect Society. Dialect Society, yeah. Picked it as the overall word of the year for 2023. But enshittification is something very specific, right? It's not just that we're now swimming in AI-extruded junk, right? AI in quotes, as always.
It's something more. The
companies create a platform that
you sort of get lured in because initially
it's really useful.
You think about how Amazon was great for finding products.
It sucked for your local brick-and-mortar businesses,
but as a consumer it was super convenient because you could find things.
But then the companies basically turn around
and they extract all of the value that the customer is getting out of it,
and then they turn around to the other parties there,
the people trying to sell things, and they extract value out of them.
So you start off with this thing that's initially quite useful and usable, and then it gets enshittified in the name of making profits for the platform. That's the specific thing about enshittification: the idea that you rush to a certain kind of market so that you have a monopoly on it. And I guess the thing about large language models, that we get on and talk a lot about, is that large language models are kind of born shitty. So it's not like the platform started out useful and that platform monopoly led to this process of enshittification; it's more like, you decided to
make a tool that is a really good word prediction machine, and you use it for a substitute for
places in which people are meant to be speaking kind of with their own tone, with their own voice,
with their own forms and their own expertise. And so it's kind of born shitty. And this framing, I think, is really helpful. It makes me think of a thing I saw a few times on Twitter, where people are like, well, if you're an expert in any of these fields and you read content by large language models, if you actually know anything about the subject, you're going to know that it's absolute bullshit, right? You know, if you ask it to write you a short treatise on, I don't know, sociological methodology, something specific that I know a little bit about, it's going to be absolute bullshit, right? But good enough for computer engineers and, you know, higher-ups at these companies. Yeah.
So here's an example.
The other day I came across an article that supposedly quoted me out of this publication
from India called Bihar Prabha.
And I'm like, I never said those words.
I could see how someone might think I would.
And I searched my emails like, no, I never corresponded with any journalists at this
outfit.
So I wrote to them.
I said, this is a fabricated quote; please take it down and print a retraction. Which they did.
And they wrote back and said, oh, yeah, actually, we prompted some large language model to create that article for us and posted it.
Yes, because it seems like you might have.
And the large language models, they don't like that.
Isn't that what hallucinations are, a lot of the time? Just the large language model making up stuff that seems like what the person wants it to say.
Exactly.
But here's the whole thing.
Every single thing output from a large language model
has that property.
It's just that sometimes it seems to make sense.
The whole thing is trying to do a trick of like,
predict, I figured out what you wanted me to say.
Ha ha.
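That's worth making concrete. The model below is a toy bigram counter, nothing like a real LLM's scale, but it shows the property: it asserts a capital for Freedonia, a fictional country, by exactly the same process it uses for France. There is no separate "hallucination mode."

```python
import random
from collections import Counter, defaultdict

# Toy next-word model (illustrative only; real LLMs learn billions of
# parameters, but the output step is the same: pick a likely next word).
corpus = ("the capital of france is paris . "
          "the capital of freedonia is paris .").split()

transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def continue_from(word, n, rng):
    """Extend a prompt by sampling n likely next words."""
    out = [word]
    for _ in range(n):
        options = transitions[out[-1]]
        words, weights = zip(*options.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)
```

Asked about France, it happens to emit something true; asked about Freedonia, it emits a confident fabrication, and nothing in the mechanism distinguishes the two cases.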
But what I wanted you to say is not always... that's not how I want my questions answered. That's actually a wildly flawed way of answering people. It's definitely something that I do in my day-to-day life, because I'm scared of conflict and a people pleaser, but I'm not a good scientific instrument for that reason. You've just got an avoidant attachment style, which, as another avoidant... they've just made me into a scientific model. That's terrible.
I can't believe this. I was like, I don't know if you saw that headline where Tyler Perry was like, I was going to open an $800 million studio, but I stopped the second I saw what Sora, this video generative AI, could do, and I realized we're in trouble. He said, quote, I had no idea until I saw recently the demonstrations of what it's able to do. It's shocking to me.
And he's basically saying, you could make a pilot and save millions of dollars; this is going to have all kinds of ramifications. That feels half like just ignorance, because this person's like, oh my God, total wow factor, but also maybe hype.
But I'm also curious from your perspective, what, what are the actual dangers that we're facing?
Because right now, I think everything is just all about
these are the jobs it's going to take.
I think in the LLM episode where the LLM predicted
what the potential of LLMs were
and the jobs that it could take in a very...
The "GPTs are GPTs" paper was so ridiculous.
Yeah, where it's like, huh?
You know, like just sort of the unethical nature of how even these companies are doing research and creating data to support this.
Can we just stop, Miles?
Can you just stop and like explain exactly what the methodology of that paper was?
No, please.
I will allow the experts to do it, because it's absolutely bonkers to hear. Any person who's ever tried to look at a study and read the methodology section... "methodology" is a very kind term for what was in that paper.
Truly, truly.
So, Alex, did you want to summarize real quick what was in there? There were two different things we were looking at.
There was something that came out of OpenAI and something from Goldman Sachs and the Goldman Sachs one was silly, but not as silly as the OpenAI one.
Yeah.
I mean, getting into the deep end of it, I went through this and I puzzled over this paper.
I like poked my friend who's a labor sociologist.
I'm like, what the hell is going on here?
And, you know, okay.
So there's this kind of metric that you can use to judge how hard a task is that the government collects.
And there's this kind of job classification.
They rate them from, you know, one to seven effectively.
And so what Goldman Sachs said was, well, basically anything from one to four, probably a machine can do.
And you're like, okay, that's kind of silly.
That's huge assumptions there.
I understand, though, as a researcher, you have to make some assumptions when you don't have great data.
But what OpenAI did is that
they asked two entities
what could be automated.
They asked one, other OpenAI employees,
hey, what jobs do you think could be automated?
Already hilarious, because they're pretty inclined; they're not doing those jobs, right, and they're pretty primed to think that their technology is great. And then they asked GPT-4 itself. They prompted it and said, hey, can we automate these jobs? And you'll never guess what the answer was. You'll never guess. They took the output of that as if it were data.
Yeah, and then these ridiculous graphs, and blah, blah. The whole thing is fantasy, right?
So is the danger there just sort of this reliance? I guess, if we're classifying the sort of threats, to our sense of how information is distributed, or what is real and what is hype, or whether they're actually taking jobs: what is something that people actually need to be aware of, or to prepare themselves for, in how this is going to disrupt things in a way that isn't necessarily the end of the world, but is definitely changing things for the worse?
There's a whole bunch of things. And the one that I'm sort
of most going on about these days is threats to the information ecosystem. So I want to talk about that. But there's also
things like automated decision systems being used by governments to decide who gets social benefits
and who gets them yanked, right? Things like, Dr. Joy Buolamwini worked with a group of people
in, I want to say, New York City who were pushing back against having a face recognition system
installed as their like entry system. So they were going to be continuously surveilled
because the company who owned the house that they lived in, or maybe it was,
I'm not sure if it was government housing or not.
Apartment complex. Yeah.
Yeah. Wanted to basically use biometrics as a way to have them gain entry to their own homes, right, where they
lived. So there's dangers of lots and lots of different kinds. The one that's maybe closest
to what we've been talking about, though, is these dangers to the information ecosystem,
where you now have the synthetic media like that news article I was talking about before being
posted as if it were real, without any watermarks, without anything that either a person can look at
and say, I know that's fake. Like I knew because it was my name and I knew I didn't say that thing.
Right.
But somebody else wouldn't have.
And there's also no machine-readable watermarking in there, so you can't just filter it out.
And this stuff goes and goes.
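For contrast, here is one minimal notion of what machine-readable would mean. This is a toy sketch, not any vendor's actual scheme (real proposals work by statistically biasing the model's word choices so the mark survives light editing); it only illustrates a marker that software, unlike a human reader, can check for and filter on.

```python
# Toy machine-readable watermark: tag synthetic text with an invisible
# marker. Trivially strippable, so purely illustrative of the concept.
MARK = "\u200b\u200c\u200b"  # zero-width characters; invisible when rendered

def watermark(text):
    """Tag synthetic text so downstream software can identify it."""
    return text + MARK

def is_synthetic(text):
    """Anything carrying the marker can be flagged or filtered out."""
    return text.endswith(MARK)
```

A feed or search index could then drop or label anything where `is_synthetic` is true, which is exactly the kind of filtering that an un-watermarked article, or a faint swirling logo, cannot support.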
So there was, oh, a few months ago, someone posted a fake mushroom foraging guide
that they had created using a large language model to Amazon as a self-published book. So then it's just up there as a book. And coming back around to Sora,
those videos, they look impressive at first glance, but then just like Alex was saying
about the art having this sort of facile style to it, there's similar things going on in the videos.
But still, it should be watermarked. It should be obvious that you're looking at
something fake. And what OpenAI has done is they've put this tiny little faint thing down
the lower right-hand corner that looks like some sort of readout from a sensor going up and down.
And then it swirls into the OpenAI logo and it's faint. And it's in the same spot that Twitter
puts the button for turning the sound on and off. So it's hidden by that if you're seeing these
things on Twitter.
And it's completely opaque, right?
If you are not in the know, if you don't know what OpenAI is,
if you don't know that fake videos might exist,
that doesn't tell you anything.
Right.
So these are the things I'm worried about.
And the things I'm really worried about are these systems doing a pretty terrible job at producing written content and videos and images. It's not that they could replace a human person, but it just takes a boss or a VP to think that it's good enough, and then this replaces whole classes of occupation. So again, talking to Karla Ortiz,
who's a concept artist,
she's done work for Marvel and DC
and huge studios and Magic the Gathering.
And she's basically saying, after Midjourney and Stability AI started producing this incredibly crappy content, jobs for concept artists have really dried up.
They've gone and it's really hard. And I mean, jobs for concept artists have really dried up. They've gone and it's really hard.
And I mean, especially for folks who are just breaking into the industry, who might just
be trying to get their work out there for entry-level jobs, they can't find anything
right now.
And so imagine what that's going to replace, what that's going to encroach on, right?
I mean, that's kind of unique thing about kind of creative fields and coding fields
and whatnot.
And then I would say this, yeah, this automated decision making kind of in government and
hiring, I mean, those are, you know, definitely, you know, terrifying, right?
And then, I mean, this is already being deployed.
I mean, at the US-Mexico border, there's kind of massive... The Markup actually just put out this interview with Dave Maass, who's at the Electronic Frontier Foundation. No, either the Electronic Frontier Foundation, or I think it's the ACLU; I have to look this up. But it's basically a survey of surveillance technology that's at the southern border. And Dave Maass is at EFF, and The Markup just published something with him; it was like a virtual reality tour of certain technology, or something wild like that. I was
going to say, there's also things like ShotSpotter, which purports to be able to detect when a gun has been fired. And this has been deployed by police departments all over the country, and there's no transparency into how it was evaluated, or how it even works, or why you should believe it works. And so what we have is a system that tells the cops that they are running into an active shooter situation, which is definitely a recipe for reducing police violence.
Right?
Yeah.
Right.
Powered by Microsoft.
Yeah.
And there is a recent investigation that showed that it's almost always used in neighborhoods.
Yeah, communities of color.
Yeah, the Wired piece on that, basically.
Yeah, I think they said about, what, 70% of the census tracts?
Something ridiculous like that.
Yeah, it's absurd.
And then there's things like Epic, which is an electronic health records company that is partnering with Microsoft to push this synthetic text into so many different systems, so that you're going to get reports in your patient records that were automatically generated and supposedly checked by a doctor who doesn't have time to check them thoroughly. Right. And they're going to be doing things like randomly putting in information about BMI when it's not even relevant, or misgendering people all over the place. This kind of stuff is going to hurt. Yeah. And I wanted to add
to the issue of entry-level jobs drying up for people who do, for example, illustration: Dr. Joy Buolamwini points out that we're getting what's called an apprentice gap. And I don't think this is just in creative fields, but it's those positions where you're doing the easy stuff, and you're learning how to master it, and you are working alongside someone who's doing the harder stuff. If that gets replaced by automation, then it becomes harder to train the people to do the stuff that requires expertise. And I feel like that's especially stark in creative fields, because there's such an inability for executives to... they don't know it; they've never known anything about what quality creative work is. So
they don't know it they've never known anything about like what what is quality creative work so
i feel like it's much easier for them to just be like yeah get rid of that and yeah like the way
that we'll find out about that that isn't working is the quality of the creative output will be far
worse right i mean what i mean all those folks really tend to care about
are content and engagement metrics.
You can't actually have something that's kind of known
for quality or creativity, right?
It does remind me a little bit about kind of, you know,
the first rebels against automation, the Luddites,
and the kind of way in which they did have this apprentice guild system, in which they trained for a decade or so before they could perhaps go ahead and open their own shop. Those folks were replaced by these water frames, so called because they were powered by water,
but then,
you know,
effectively operated by unskilled people, and by unskilled,
usually children.
Yeah.
Doing incredibly dangerous work.
But yeah,
folks that had been training for this,
the apprentices, were incredibly steaming mad.
Yeah. And then we made their name a synonym for like dummy.
Being a hater.
Hater.
Yeah.
Tech hater.
Not canary in the coal mine.
But there have been some nice efforts to reclaim them by Brian Merchant and Gavin Mueller and some folks.
So just in the comparison to crypto, it feels like the adoption here, the hype cycle here, is more widespread than crypto. Like with crypto,
there was that moment where we saw the person behind the curtain, you know,
the fall of crypto. With AI, I don't feel like there is any incentive for any of the
stakeholders, I guess, involved to just come out and be like, yeah, it was bullshit.
You know, there's just so much buy-in across the board. I guess we've already talked about
where you see this going, but do you think there's any hope for this getting kind of found out, the truth catching on?
Or do you think it's just going to have to be 100 years from now, when somebody changes their mind about AI haters and is like, actually, they were onto something, the way we are about Luddites? We're still
trying. That's what we're up to with the podcast, right?
And a lot of our other public scholarship.
There was an interesting moment last
week when ChatGPT
sort of glitched and was outputting
nonsense. Oh, speaking Spanglish? Yes. I mean, was it
actually Spanglish or were people just calling it that
because it had some Spanish in it? It had some.
It was doing some Spanglish
stuff, but the output was even more inscrutable than usual.
It was just complete nonsense.
Yeah.
Yeah.
And the sort of OpenAI statement
about what went wrong
basically described it for what it is.
They had to like say something
other than what they usually say.
And then there was a whole thing
with Google's image generator,
which,
you know, so the baseline thing is super duper biased and, like, makes everybody look like white people. And there's this wonderful post on Medium,
and here I'm doing that thing where I can't think of the person's name, but it's a wonderful post on Medium where someone
goes through this thread of pictures, which I think were initially on Reddit, where someone asked one
of these generators to create warriors from different cultures taking a selfie, and they're all doing the
American selfie smile. And so there's this really uncanny valley thing going on, where these
people from all these different cultures are posing the way Americans pose. So: huge bias in
these things, underlyingly. Google has some kind of a patch that basically, whatever prompt you put in,
they add some additional prompting to make the people look diverse.
And then last week or so, someone figured out that if you asked it to show you a painting
of Nazi soldiers, they're not all white, because of this patch. Right, right, right. So, make it diverse.
Yeah, so Google's backpedaling on that, I think, was ridiculous. There was some statement in there about how they certainly don't intend for their system to put out historically inaccurate images.
I'm like, what the hell?
It's a fake image.
It's not like there's no accuracy there no matter what.
But these mishaps maybe sort of pulled back the curtain for a broader group of people.
There's some true believers out there who are not going to be reached.
But I think it may have helped for some slice of society.
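The "patch" being described here can be pictured as a thin prompt-rewriting layer that sits in front of the image model and silently appends diversity instructions to every request. This is a purely hypothetical sketch to illustrate the mechanism; the function name and modifier text are made up, not Google's actual implementation:

```python
# Hypothetical sketch of a context-blind prompt-rewriting "patch".
# The user's prompt is silently augmented with extra instructions
# before it ever reaches the image model.

def augment_prompt(user_prompt: str) -> str:
    """Append diversity modifiers to every prompt, regardless of context."""
    modifiers = "depicting people of diverse ethnicities and genders"
    return f"{user_prompt}, {modifiers}"

# A historical prompt gets the same blanket treatment as any other prompt,
# which is how context-blind rewriting produces anachronistic images.
print(augment_prompt("a painting of 1943 German soldiers"))
```

The point of the sketch is that the rewrite happens unconditionally: because the layer never looks at what the prompt is about, a request with a specific historical setting gets the same appended instructions as a generic one.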
Yeah.
I know some people who are like,
how do we know that Reinhard Heydrich didn't have dreadlocks?
I think it's pretty clear.
But okay, sure.
Go off.
Yeah.
And I mean, I think this is such an interesting question, right?
It's like, when is the AI bubble going to pop, right?
And I mean, in some ways, it feels like,
as much as we can do, we're kind of prodding it, right?
And one thing that I offhandedly said one time
and Emily loves is ridicule as praxis.
So one thing, and in turn, one of Emily's great quotes
is, oh gosh, now I'm going to mess it up right now. It's the refuse to be impressed.
Oh yeah, resist the urge to be impressed.
Resist the urge to be impressed. It's much better when she says it. And it's kind of the idea of like,
some of it is a bit of the sheen, right?
But it feels like at some point,
in the kind of operations of these things,
there'll be enough buy-in,
especially with automation,
that it's hard to reverse a lot of it without just a huge fight.
And so, I mean, I think something that helps are, you know,
worker-led efforts. And one of the most awesome things that we've seen is something
a scholar, Blair Attard-Frost, calls AI counter-governance.
One example is the WGA strike, and how folks struck for 148 days. And after the strike, they not only got a bunch of new kinds of guarantees for minimum pay when it
comes to streaming and the residuals they get for it, they also have to basically be informed if any
AI is being used in the writers' room.
They can't be forced
to edit or
re-edit or rewrite AI-generated
content.
Everything has to be basically above board.
I mean, it didn't go as far as
it could have and ban it outright.
But if there's any use of it,
it has to be disclosed and you can't bring in these tools
to whole cloth replace a writer's room.
Yeah.
Well, Alex and Emily,
it has been such a pleasure
having you on the show.
I feel like we could keep talking about this
for hours, for days.
But thank you so much for coming on.
And yeah, I would love to have you back.
Where can people find you, follow you, hear your podcast, all that good stuff?
Yeah. So the podcast is called Mystery AI Hype Theater 3000, and you can find it
anywhere fine podcasts are streamed. And we're also on PeerTube, if you prefer that kind of
content as video, since we start off as a Twitch stream and then it shows up that way. I'm on all the socials; I tend to be Emily M. Bender. So Twitter, Bluesky, and on Mastodon I'm Emily M. Bender at dair-community.social. And I'm very findable on the web, very Googleable, at the University of Washington. How about you, Alex? If you want to catch the Twitch stream, it's twitch.tv slash dair underscore institute, usually
on Mondays, usually this time slot. So next week we'll be here. Uh, wait, are we? I forget. We'll be
here eventually. Yeah, today's Tuesday, though, in TDZ land. Oh, that's right, that's right.
Yeah, so it's actually not Mondays. Check us out. You can catch me on Twitter and Bluesky, Alex Hanna, no H at the end.
And then, yeah, our Mastodon server, dair-community.social. What else? I think those are
all the socials we have. Yeah. Amazing. Is there a work of media that you've been enjoying?
So I am super excited about Dr. Joy Buolamwini's book, Unmasking AI.
It came out last Halloween.
It is a beautiful sort of combination memoir slash introduction into the world of how so-called AI works, how policymaking is done around it, how activism is done around it.
And I actually got to interview her here in Seattle at the University of Washington last week, which was amazing. Nice. How about you, Alex?
Yeah, I'd recommend the podcast Worlds Beyond Number. And it is a Dungeons and Dragons
actual-play podcast that is put together by Brennan Lee Mulligan, Aabria Iyengar,
Lou Wilson,
and Erika Ishii, talking about writing
that is not generated by AI,
but generated by
some very talented improv comedians
and storytellers themselves.
But how does it scale?
How do you scale?
Oh my God.
It is actually surprising.
I mean, that podcast is, I think,
the single most funded thing on Patreon, breaking a bunch of records,
with about 40,000 subscribers.
People want good content.
Yeah.
Not just high art,
you know,
people want this vibrant storytelling,
amazing creation,
you know,
and so the more we can, we can plug that stuff better. By the way, People want this vibrant storytelling, amazing creation, you know?
And so the more we can plug that stuff, the better.
By the way, you can also find Mystery AI Hype Theater 3000, like where podcasts are.
So in case you don't do Twitch.
Yeah.
Like I was listening on Spotify or whatever.
You weren't on Twitch?
I wasn't on Twitch this weekend.
I was taking a break,
you know,
it was just like too much,
too much Twitch.
Couldn't get a hype train on?
That's right.
Okay.
All right.
Miles,
where can people find you?
What's working media you've been enjoying?
Uh,
the at-based platforms, at Miles of Gray.
Also, if you like the NBA,
you can hear Jack and I just go on and on about that on our other podcast.
Miles and Jack got mad.
And if you like trash reality TV like me, I comfort myself with it.
Check me out on 420 Day Fiance where we talk about 90 Day Fiance.
A tweet I like.
This is just kind of a historical day, I would have to say, for the Internet.
We missed an anniversary on Sunday, apparently. The very famous clip of Pete Weber, the bowler,
who won and came out with the iconic phrase,
Who do you think you are? I am.
That had its 12th anniversary.
And I'll play that audio because it's one of our favorites.
Here we go.
That is why I said it.
That's number five.
Are you kidding me?
That's right.
Who do you think you are?
I am.
All right.
Who do you think you are?
Point to himself with a stump.
Who do you think you are?
Oh, man.
The whole thing is good also.
The run up to that is great.
That's right.
I did it.
I did it.
God damn it.
What?
Sure, man.
I love it.
Yeah, I play an amateur sport, and just like the exhilaration you get, and you just say anything.
Yeah, exactly.
I know.
That's what it is.
I love that clip.
I respect it so much.
I think you are.
It's like, yeah, you've glory fried your brain circuits, and now that's what you're saying.
And this other one from AtTheDarkProwler.
It's a screenshot of an Uber notification on the phone.
It says, young stroker, the body snatcher, will be joining you along your route.
You're still on schedule to arrive by 7.54 or earlier.
Okay.
By the way, can I get those chicken nuggets glory fried?
Thanks.
That'd be great.
Oh, yeah.
Oh, a tweet I've been enjoying.
Caitlin at Kate Holes tweeted,
Overnight oats sounds like the name of a racehorse who sucks.
That was an old one.
That was apparently September 25th, 2022.
Good one.
Banger.
You can find me on Twitter at Jack underscore O'Brien.
You can find us on Twitter at Daily Zeitgeist.
We're at The Daily Zeitgeist on Instagram.
We have a Facebook fan page and a website, DailyZeitgeist.com, where we post our episodes and our footnotes.
We link off to the information that we talked about in today's episode,
as well as a song that we think you might enjoy.
Miles, is there a song that you think people might enjoy?
This is a nice little instrumental clip, not clip, a full-on track,
from a London-based producer who goes by Vegyn, V-E-G-Y-N,
and the track is called A Dream Goes On Forever.
And it's a super dreamy track.
It's just kind of an interesting production.
So maybe,
you know,
maybe,
you know,
just check out some Midjourney,
you know,
images,
some Sora AI videos,
and just blast this in your headphones.
Just kind of go on for a dream goes on forever.
Or just go for the music and leave the synthetic media out of it.
Just fully embrace.
Listen to the music,
close your eyes.
And actually, the Midjourney images that your brain will produce. Whoa. I've found that, like, at night,
when I sleep, sometimes my brain produces Sora images. You sound like,
you sound like a youth pastor talking to kids about God. It's like, you know where the AI
really is? It's actually up in here, kids. Yeah. Down here.
You really want to get down with.
Yeah.
Yeah.
Yeah.
The real training data we had is the word of God.
Exactly.
Exactly. The only one I need.
Well, the Daily Zeitgeist is a production of iHeartRadio.
For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever
you listen to your favorite shows.
That is going to do it for us this morning.
We are back this afternoon to tell you what is trending.
We'll talk to you all then. Bye-bye.