The Daily Show: Ears Edition - Friend or Foe? Robots and A.I.
Episode Date: June 23, 2023
Technology is advancing faster than we can keep up with, but are those advances to the betterment of society, or the detriment? Are the robots coming for all our jobs? Will A.I. take over the world? The Daily Show News Team gets answers to those big questions in this throwback compilation.
See omnystudio.com/listener for privacy information.
Transcript
You're listening to Comedy Central.
Facial Recognition Software is the newest way to unlock your phone, tag your friends, or
create Snapchat nightmares.
But there's one aspect of facial recognition that still needs some work.
Is facial recognition technology biased?
A researcher at MIT found that the technology works best for white men.
Users with darker complexions saw more instances of being misidentified.
I found that the training data that's being used for facial recognition
isn't as representative of the variety of human skin tones and facial structures.
Many facial recognition systems use the same data sets.
If those sets contain mostly white faces,
all the products that use that data
can inherit those same biases.
I was struggling to have my face detected
and pulled out a white mask,
and the white mask was easier to detect than my face.
Yeah, so basically if black people want AI to see them,
they just have to be stalking a slumber party or haunting an opera house.
It works.
It works out.
For more on the blind spots of facial recognition, here's our technology expert, Dulce Sloan, everybody.
Dulce, hey, help me out here.
Like, do we have to worry about racist machines now?
Trevor, don't be so quick to pass judgment on these robots.
They're just out there doing their best.
Is it such a big deal if they can't recognize black people?
It's not like I can recognize any of their robot asses, either.
And remember, the robots aren't racist on purpose.
It's not like they're out there shouting the n-word in binary code.
Or putting up statues of Robo E. Lee.
That South ain't rising again. No, the problem is there's not enough black people in Silicon
Valley. So the first time robots see a black person, they malfunction like an
Amish dude in Times Square.
And it's not just that machines don't recognize black faces.
Sometimes they don't recognize black anything.
Have you seen this video on YouTube?
Black hand, nothing.
Larry, nothing.
Larry, go.
Racist motherf-ing sinks.
You see that?
Put robots in charge of the bathroom and they make it whites only again.
I had no idea there were soap dispensers that don't see black people.
I know! How they even get the little taxi driver inside of there?
Oh!
Doesn't affect me, I take Lyfts.
Listen, Trevor, I'm trying to look on the bright side.
See, someday the robots are going to take over the world.
I'm waiting for the bright side.
Think about it.
They can't kill us, so we'll be the only survivors, Trevor.
Listen, I've had a vision of the robot apocalypse, and it doesn't look too bad for us.
Oh, good shot, ladies and try.
Who said that?
Uh, it was Chad.
Must kill, Chad.
Look at that sign robot ass.
Tonight, senior tech correspondent Ronnie Chieng
investigates the latest advances in artificial intelligence for today's Future Now.
Thanks, Trevor.
Artificial intelligence. Someday, it may allow computers to cure illnesses, combat climate change, or even win
The Bachelor.
She can do everything but get in the hot tub.
But right now, this is what AI is up to.
Now, a battle of man versus machine is about to get underway.
Lee Sedol from South Korea is the dominant figure in the ancient Chinese game of Go.
His opponent, in a best of five tournament, worth more than a million dollars, is AlphaGo,
an artificial intelligence system developed by the Google project named DeepMind.
That's right. Google could have made a computer called Alpha Cancer, but hey, let's focus
on the world's 600th most popular board game instead.
The ancient Chinese game has long been considered too complex for computers to master.
There are more possible moves in Go than atoms in the universe.
The world Go champion Lee Sedol lost
the fifth and final game today, leaving the final score at 4-1 in favor of the machine.
Yeah, welcome to my world, Lee Sedol. I lose to the computer every time I play anything.
All right?
Starcraft, FIFA 16, even Tinder, all right?
Man, that game is hard, all right?
I can never get to the next level.
Now, of course, some people are concerned about computers overthrowing us.
And if that's the case, there are things you can do to slow down their advances.
Like for example, why are we teaching computers games like Go and Chess that are all about
war strategy?
Can we teach them something harmless, like Uno or hungry hippos?
So even if they go rogue, they're just feeding more hippos.
All right, I'm sure that's good for the environment.
But the reality is, these computers aren't getting more dangerous.
They're just becoming more human.
To be good at this 2,500-year-old game, you need to have intuition.
A characteristic that we used to think was uniquely human.
That's right. Computers are learning human intuition.
And as they evolve, it's clear they're picking up
other human characteristics as well.
As computers become smarter, they're also getting lazier.
I mean, just take a look at AlphaGo.
He didn't even move his own pieces.
He has some guy just moving the pieces for him.
This is great news for us humans, okay?
Because these machines are still gonna need us humans to do all the work that's beneath them.
Oh, come on, Ronnie. Ronnie, are you telling me that the smarter robots get, the lazier they'll become?
Yes, that's exactly what I'm telling you.
All right? You don't believe me?
Hey, check out Google's latest AI prototype, AlphaBro.
Hey, bring it in, fellas. This right here is the most advanced AI ever created.
All right, I'm talking state of the art.
This thing finds prime numbers.
It ponders the concept of infinity.
But most impressive is what a lazy asshole this thing is.
All right, for example, AlphaBro, would you like to play a game of Go?
Eat me.
Why don't you make yourself useful and grab me a beer?
Okay, I can get you a beer, but "please" would be nice.
Please get me a beer, you little bitch.
What the hell, man? You know computers are supposed to make our lives easier?
I am making life easier for myself.
Now take me outside, my Uber's waiting.
He's today.
Meet Josh Browder.
I'm trying to replace the $200 billion legal industry with artificial intelligence.
He invented this lawyer-murdering robot, DoNotPay.
It contests parking tickets and has saved people millions in fines and legal fees.
But why is he trying to put hard-working, money-grubbing lawyers like me out of business?
What do you have against lawyers?
I don't have anything against lawyers. I just think that so many people need access to justice,
and they're getting ripped off, and making it free for them is popular.
Did you go to law school? No, you did not. No, you did not.
That's what the law is: the law is about the person with the most money and resources winning.
You are totally disrupting that. I think we can agree on that.
Okay, so how does this thing even work? So just like a real human lawyer, you go to it,
type in whatever your legal problem is. It gives you a legal document for free in under 30 seconds.
And it's not just parking tickets. This AI lawyer has already helped people sue Equifax
without paying a lawyer anything.
Oh, so with this thing, everyone can just sue everyone.
I mean, that's not what it was intended for.
Oh, wow, look at this!
I just sued you for emotional distress
for devaluing my degree and my profession.
Oh, I just sued you for that terrible shirt and soup combo.
Oh, I just sued you for impersonating John Oliver.
How'd you like that?
I mean, I'm personally offended, but I stand by my software.
A human lawyer is great, but they're just so emotional.
Too emotional.
I was making a great case against AI lawyers, using my human skills of legal persuasion.
But what if robot lawyers were just the beginning?
What if AI is taking over the entire legal system? Legal tech expert, Tim Hwang.
Increasingly, we're seeing the use of these automated systems even in the application of the law.
Judges for example are now using algorithms to assess whether or not people should be released pretrial.
Okay, wait, hang on.
Robots are already judging humans.
Oh yeah, in many states around the country already.
Oh my God, he's right.
Robot judges are already performing pretrial risk assessment,
helping human judges determine bail or if a person should be detained.
And over a million criminal cases have been processed using these systems.
So what are the benefits of having machines in the judiciary?
Well, so some people say that these algorithms will be sort of free of bias in the way that
judges are not.
I don't know about you, but I would rather be f'd over by a human than a machine
any day of the week. Unless it's one of those hyper-realistic sex robots,
obviously. Those things are like,
I'm told that when you feel them,
it feels like the real thing.
I think that's a whole other issue.
I don't believe, I'm sorry.
But even this sexbot addict concedes that
AI judges aren't perfect.
Great study out of ProPublica a few years back looked at the specific case of a pretrial
risk assessment system and they were able to find that it was actually quite racist, that
actually black defendants who were not likely to commit crimes in the future had substantially
higher risk ratings.
Wait, you're saying these judging machines were racist?
Yeah, that's what it looks like in this case.
So that means the system works then. I don't know about that.
It's great.
No, I'm sorry.
But increasingly we're seeing robots implemented all across legal practice.
This is so ridiculous.
Where does it end?
Eventually you're going to see machines judging humans.
A lot of the systems we're seeing are not that advanced.
That's exactly what's going to happen. Robot lawyers, robot judges?
What's life going to be like in this soulless new legal world?
Will no one stand to defend humans in law?
Your Honor, members of the jury.
This is about the essence of humanity itself.
Because unlike that thing, I went to law school, taught by humans.
I spent countless, sleepless nights, reading, writing, pondering shit,
taking drugs, orally and anally, all things artificial intelligence can't do.
And quite frankly, I'm sensing a lot of bias in this courtroom.
Watch yourself, Counselor. Don't you think human emotion is necessary in the law?
You want answers? I want the data! You can't handle the data.
Alexa, play dramatic music.
Son, we live in a world with laws, and those laws are better
applied by machines with logic. And my existence, while grotesque and incomprehensible to you,
saves lives.
Alexa, turn off.
Nice try, asshole.
In the case of Ronnie Chieng versus legal robots, I sentence Ronnie Chieng to death.
Just kidding.
But I rule in favor of the robots.
Hip-hip!
Hooray!
Hip-hip!
I'm going to sue all these robots.
Since taking this job, I've been on a steady diet of news, journalism, and rotting three-week-old
spinach.
So far, the spinach has been the easiest part.
But is there any way to improve our quality of reporting?
Hasan Minhaj checked in on some recent advances.
This is the golden age of journalism.
Today's reporters know that personality matters more than training
and that facts shouldn't get in the way of a good story.
But now, these superhero journalists are at risk of being replaced.
New York Times columnist, Barbara Ehrenreich.
There is a threat of robots doing our work.
Robots?
Robots.
Journalism done by robots.
That's right.
Ehrenreich is saying our newsrooms will soon look like this.
No, we're talking about algorithms.
They're software.
So, for example, you want to write an article on a topic,
you send out the algorithms to search for everything that has been said about them.
Okay. Synthesize it and turn out an acceptable article.
They can't do what I do.
I mean, they will do and can do our work.
Prepare to be unemployed, Hasan.
Then it hit me.
This lady's nuts!
I spend my nights downing cocktails with the Washington elite.
I'm verified on Twitter,
and I've even got Wolf Blitzer on speed dial, okay?
There is no way a reputable news organization
is going to replace someone like me with a machine,
Associated Press managing editor, Lou Ferrara.
We already use automation technology to automate the writing of some articles.
What? It's true.
I'm actually embracing it. They are. But it turns out these robo articles are
littered with completely useless facts and information. Pure gibberish like
shares have decreased 6% and new car sales have been strong this year. A bunch of garbage!
Where's the spin?
There isn't spin.
What about the snark?
No snark.
What about the bias?
No bias.
Where's the fear-mongering?
None of that.
And there's no mistakes.
No.
Then where's the journalism?
Yeah.
I don't see that as part of it.
I don't think most journalists are doing that.
Certainly not at the AP. That's not the goal.
Lou, you're forgetting the cornerstones of modern journalism.
Cable News has shown the value of bending the facts
to fit your beliefs.
All this snow and still cries of global warming.
Brian Williams has demonstrated the importance of self-aggrandizement.
A helicopter we were traveling in was hit by an RPG.
And the AP itself has taught us: get the story out quickly and check the
facts later. The AP reported that millionaire Robert Durst had been booked on
weapons charges in Louisiana. But mixed up Robert Durst, the murderer, with Fred
Durst, the f-ing musician from the '90s. Unbelievable!
It is unbelievable, and it wasn't a good thing.
A robot could never look at a 70-year-old murderer
and go, you know what that reminds me of?
The guy with the bleached hair and puka shells from the early '90s.
I mean, that is human stupidity at its finest.
That was a mistake we regret, and mistakes are going to happen.
No way!
That went crazy viral.
Crazy viral.
It did.
And that wasn't even your best work!
In 2014, the AP in a rush was the first to tweet, breaking Dutch military plane
carrying bodies from Malaysia Airlines Flight 17 crash lands in Eindhoven.
Nine minutes later, clarifies: Dutch military plane carrying
Malaysia Airlines bodies lands in Eindhoven.
In nine minutes, Lou, do you see the brilliance?
You guys were the first to report that the plane had crashed?
Pssh, Retweet City.
And the first to report that the plane hadn't crashed.
Retweet City. In the words of Denzel Washington: my man!
It was unintended, especially on such a horrible situation.
Lou, you guys Tupac hologram that situation.
You faked your death and you came back as a hologram later.
And both of them were equally great.
That wasn't our goal.
I'd never Tupac hologram anything and don't intend to.
Until these robo reporters learn the value of page views, bias, and straight up lying,
it looks like journalists like me are going to have a job, at least for a while.
By 2050, you're going to have computer algorithms reproduce my tone, my snark, whatever.
So you're telling me a robot could write this headline.
21 pictures of side boobs that'll get your
d- rock hard.
Yes, it would draw various, you know, various approximations
till they get the perfectly disgusting one, which of course you came right to yourself.
I did, Barbara, all in the day's work.
So, Mira Murati, welcome to The Daily Show.
Thank you for having me.
So, many people have seen the images that DALL-E creates.
Many people may even think they understand it.
But let's get into it.
Like, how does an AI create an image?
Because it's not copying the image, it's not, you know, taking from something else.
It is creating an image from nothing. How is it doing this?
Exactly. It's an original image, never seen before.
And you know, we have been making images
since the beginning of time,
and we simply took a great deal of these images,
and we fed them into this AI system.
And it learned this relationship
between the description of the image and the image itself.
It learned these patterns.
And eventually, it was generating images that were original,
they were not copies of what it had seen before.
And basically the way that it learns, the magic is just understanding the patterns and analyzing
the patterns between a lot of information, a lot of training data that we have fed into this system.
There are people who are terrified about this.
I mean, for instance, there was an art competition
and the winner in the art competition used a version of this kind of software,
whether it was DALL-E or not, I don't remember,
but they used a version of this kind of software
to create an art piece that won the competition.
Artists were livid. They were like, that's not art, it was created by a machine.
And the artist said, no, the same way you use a brush, I use a computer, and that's how I designed this.
In creating AI, are you constantly grappling
with how it will affect people's jobs
and what people even consider a job?
Yeah, that's a great question.
It's, you know, the technology that we're building
has such a huge effect on society, but also the society can
and should shape it. And there are a ton of questions that we're wrestling
with every day. With the technologies that we have today, like GPT-3 and DALL-E, we see
them as tools. So an extension of our creativity or our writing abilities, it's a tool.
And you know, there isn't anything particularly new about having a human helper.
You know, even the ancient Greeks had this concept of human helpers,
that when you'd give something, you know, infinite powers of knowledge or strength or so on,
maybe you had to be wary of the vulnerabilities.
And so these concepts of extending the human abilities
and also being aware of the vulnerabilities are timeless.
And in a sense, we are continuing this conversation by building AI technologies today.
Well, it might be frightening, because some people go,
oh, the world is going to end because of this technology.
But in the meantime, it's very fun, I'm not going to lie.
No, because, you know, DALL-E, for instance, doesn't just create an image from text.
You've also gotten it to the point now where, as a company, you've designed it so that it can imagine what an image would be.
So for instance, there's that famous image,
you know, it's the Girl with the Pearl Earring, and it's a famous image, right?
But what DALL-E can do is, you've got the famous image, and then DALL-E can expand that.
And all of it, everything you're seeing, never existed.
So DALL-E's like, well, this is what I think it would look like if there was more to this image.
It can assume, it can create, it can inspire.
Yes, it can inspire.
And it makes this beautiful, sometimes touching, sometimes funny images.
And it's really just an extension of your imagination.
There isn't even a canvas or the boundaries of paper are not there anymore.
So how do you safeguard then? You know, someone might look at this technology and go,
well, then you could type in,
a politician was caught doing something here.
Now I've got the image, you know,
you've got, and now all the politician can say,
oh, that's not me, it was made by that fake program.
We can very quickly find ourselves in a world where nothing is real
and everything that's real isn't, and we question it.
How do you prevent or can you even prevent that completely?
Yeah, you know, misinformation and societal impact of our technologies, these are very important
and difficult questions.
And I think it's very important to be able to bring the public along, bring these technologies
in the public consciousness, but in a way that's responsible and safe.
And that's why we have chosen to make DALL-E available, but with certain guardrails and
with certain constraints, because we do want people to understand what AI is capable of,
and we want people in various fields
to think about what it means.
But right now, you know, we don't feel very comfortable around the mitigations on misinformation,
and so we do have some guardrails, for example.
We do not allow the generation of public figures, so we will go into the data set
and we will eliminate certain data.
Also, if you type something in, it can't create a politician for you;
it won't be a picture of that person.
So that's the first step, at the training of the model itself:
just looking at the data and auditing it, making interventions in the data sets
to avoid certain outcomes. And then later, in the deployment stage,
we will look at filters, applying filters,
so that when you put in a prompt,
it won't generate things that contain violence or hate,
and make it more in line with our content policy.
Wow. So let me ask you this then.
You know, obviously part of your team has to think about the ethical
ramifications of the technology that you're creating.
Do your team also then think about the greater meaning of work or life or the purpose that
humans have?
Because, you know, most of us define ourselves by what we do, i.e., our jobs.
As AI slowly takes away what people's jobs are, we'll find a growing
class of people who don't have that same purpose anymore. Do you then also have to think
about that and wonder, like, what does it mean to be human if it's not my job? And can
you tell me what that is?
You know, we have philosophers and ethicists at OpenAI, but I really think these are big societal
questions that shouldn't even be in the hands of technologists alone.
We're certainly thinking about them.
And I, you know, the tools that we see today, they're not the tools that are automating
certain aspects of our jobs.
They're really tools, extending our capabilities, our inherent abilities and making them far better.
But it could be that in our future, you know, we have these systems that can automate a lot of different jobs.
I do think that as with other revolutions that we've gone through, there will be new
jobs and some jobs will be lost, some jobs will be new, and there will be some retraining
required as well. But I'm optimistic. It's interesting, it's scary, because change always is.
But as long as we have koalas riding bicycles, I think we're heading in the right direction.
Thank you so much for joining me on the show.
Explore more shows from the Daily Show podcast universe by searching the Daily Show, wherever
you get your podcasts.
Watch the Daily Show weeknights at 11, 10 Central on Comedy Central and stream full episodes
anytime on Paramount Plus.
This has been a Comedy Central podcast.