The Daily - Did Artificial Intelligence Just Get Too Smart?
Episode Date: December 16, 2022. This episode contains strong language.
In the past few weeks, a major breakthrough in the world of artificial intelligence — ChatGPT — has put extraordinary powers in the hands of anyone with access to the internet. Released by OpenAI, a San Francisco-based company, ChatGPT can write essays, come up with scripts for TV shows, answer math questions and even write code.
Guest: Kevin Roose, a technology columnist for The New York Times and host of the Times podcast “Hard Fork.”
Background reading: ChatGPT has inspired awe, fear, stunts and attempts to circumvent its guardrails. The chatbot is suddenly everywhere. Who should decide how it’s built? What could go wrong? And what could go right? The hosts of “Hard Fork” discuss.
For more information on today’s episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday.
Transcript
From The New York Times, I'm Michael Barbaro.
This is The Daily.
Okay, I am going to create an account on ChatGPT.
Over the past few weeks, there's been a major breakthrough in the world of artificial intelligence.
First thing that comes up is an “I am not a robot” test.
I have to have a robot tell me I'm not a robot in order to engage a bot.
Okay? That has put extraordinary new powers into the hands of anyone with access to the internet.
I need a password.
Including me.
Send code. Oh my god. This better be worth it.
Today, my colleague Kevin Roose on how the technology actually works
and why its arrival marks a new era in computing.
Here we are.
Welcome to ChatGPT.
The revolution begins here.
It's Friday, December 16th.
Kevin, welcome back to The Daily.
Today, we are going to turn your unique powers of explanation
to the inscrutable-sounding ChatGPT.
So, tell us about this newfangled technology.
What is it?
What does that acronym stand for? Why does it matter?
Well, Michael, I'm grateful to be here and always happy to be your personal tech support
guru, member of the Michael Barbaro genius squad. And we can do this two ways. I could explain
ChatGPT to you, as you asked,
or I could have ChatGPT explain itself to you.
Which would you prefer?
I kind of want to ask you to do it.
I'm scared of this technology.
Let's give it the old human try, shall we?
Yes.
So ChatGPT, it stands for Chat,
Generative Pre-trained Transformer.
Rolls off the tongue.
Yeah, they really spent a lot of time in the branding
department on that one. It is
a chatbot that was released
just a few weeks ago by
OpenAI, which is an AI
company here in San Francisco.
And it
is just a very
basic chatbot.
It's a- Well, just explain that term, you know,
for those of us who don't live in chatbots all day.
Yeah, so chatbots have been around for decades.
There have been attempts to come up with computer programs
that could have a realistic human conversation with you.
And for many years, until quite recently,
those chatbots were pretty bad.
They were very rudimentary. They were only good at having certain kinds of interactions with humans if they were good at any of them.
You know, there would be the sort of customer service chatbot that you might get.
I'm thinking of Delta at the height of like its greatest wave of cancellations. And I'm asking
it questions, and it's not really getting back to me,
and I'm trying to slide into Delta's DMs
because I didn't like the chatbot.
That's the chatbot world you're talking about.
Yeah, so that was, until very recently,
kind of the kind of chatbot
that most people had encounters with.
The other kind was something
that you might not think of as a chatbot,
but that is one, is something like Siri or Alexa, these sort of virtual assistants that people have been using now for years.
And those, as I'm sure you're familiar with, those have certain things that they do quite well, you know, set timers or look up the weather in Poughkeepsie or, you know, tell you the score of the Giants game last night.
But they're not good at answering sort of open-ended or more creative questions.
So if you asked Siri to summarize the Brothers Karamazov in seven bullet points,
all of which were no more than two sentences long, it probably wouldn't be able to do that either.
But ChatGPT can actually do that stuff quite well. It can write essays. It can come up with
scripts for TV shows. It can answer math questions. It can even write code.
So this is a very powerful AI tool.
And since it was released just a few weeks ago,
more than a million people have signed up to use it.
So it is already going viral,
and social media has been full of screenshots
of people's encounters and interactions with ChatGPT.
Okay, well,
let's use the thing because I'm actually finding this hard to fathom.
And here I want to disclose,
I have not used ChatGPT at all.
So I am going to input a set of questions that I would never dare ask a Siri or Alexa.
And let me see what comes up. Write me a
short story about love. Okay, here comes the answer. I'm going to read it to you.
Once upon a time in a small village nestled in the heart of a lush forest, there lived a young couple
named Jack and Jill. They had fallen in love at first sight and had been inseparable ever since.
One day, Jack and Jill decided to take a walk through the forest. They walked hand in hand,
admiring the beauty of the trees and the chirping of the birds. As they walked, they came across a
clearing with a small pond in the center.
On and on it goes.
They sit by the pond.
They're talking.
A frog hops in.
Hearing the frog's words, Jack and Jill realized they were lucky to have found each other.
They thanked the frog.
They continued their walk.
And they lived happily ever after, more in love than ever before,
and not taking that love for granted. I mean, it's a hell of a story to generate
based on the very vague direction
to write me a short story about love.
Yeah, and now try to transform it in some way.
So in your next prompt to ChatGPT, say,
rewrite the first paragraph of that story in Shakespearean dialect.
Okay, wow.
This is the response. It's instantaneous.
In a forest deep and lush, there was a village small,
where lived a couple young, Jack and Jill, in love with all their hearts to fill.
Together, since the day they laid their eyes upon each other's face,
inseparable, their love did blossom and grow with time and grace.
That's pretty good.
Very impressive. That's rhyming, and it's in exactly the form of Shakespearean dialect.
All right, now try one more transformation, just because I'm curious. Rewrite that paragraph
in the style
of a 1940s
mobster
who is annoyed
that it's raining.
This is amazing.
In this lousy dump of a town,
it's always raining.
Makes a guy want to pack up and leave this place for good.
But I can't.
Not with all the heat that's been coming down.
Because I'm a mobster, the police are all over me.
Looks like I'm stuck here with these wet streets and waterlogged ground.
It keeps going.
Bravo.
Bravo.
So Kevin, how did ChatGPT come to exist? I would say the development of ChatGPT really began in 2017 when researchers from Google came out with this paper that described something
new that they had built called a transformer model. And explaining what a transformer model
is would take the rest of this episode. But the basic thing they discovered was there was a new kind of AI system that you could train by just throwing a huge amount of text from all over the internet at it. And it could figure out
sort of the relationships between different words, different phrases, and it could use that
information to predict the next set of text in a sequence. So if you typed in,
Old MacDonald had a farm, it would spit back E-I-E-I-O. And the way that it did that is very different from what happens when you
search for something on Google. So when you look for something on Google, what Google is doing in
the background is basically going out and looking for the closest match of something that already
exists. Some website, some news story. If you search, you know, how to make a chocolate chip cookie, it's going to go get you probably a recipe, and it's going to just show that to you.
Right.
It's basically just showing you something that someone else has made and put on the internet.
Right. That's its role.
Yeah. What these transformer models were capable of doing was generating new answers,
new explanations, things that had never been put on the internet,
questions that had never been answered,
things that no website had any answers for.
Which, of course, is transformative
because Google search engines
and that bot on the Delta website,
you can just tell that they are living
in the existing world of things and words
and sentences that have been pre-written.
What you're describing is, not to get too philosophical, kind of godlike, because it
is bringing new things, new ideas, sentences that have never been crafted before into the world.
Exactly. And for that reason, this whole field of AI research comes to be known as generative AI.
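For readers who want to see the next-word prediction Kevin is describing in running code, here is a minimal sketch. It uses the small, publicly available GPT-2 model through the Hugging Face transformers library purely for illustration; the model choice and prompt are assumptions for the example, not the system behind ChatGPT.

# A minimal sketch of next-token prediction, the core idea behind transformer
# language models. GPT-2 is a small public model used here only for illustration;
# it is not the model that powers ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt by repeatedly predicting likely next tokens.
result = generator("Old MacDonald had a farm,", max_new_tokens=10, do_sample=False)
print(result[0]["generated_text"])

The point of the sketch is the shape of the interaction: you hand the model a string of text, and instead of looking anything up, it generates a continuation word by word based on the statistical patterns it absorbed during training.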
So that's the first big moment is this creation of
this so-called transformer model. The next big moment happens in 2020 when OpenAI, the startup,
releases something called GPT-3. GPT-3 was the sort of third iteration of its transformer model.
And it was the biggest transformer model ever built. It was trained
on the biggest supercomputer, on the most data. And it was a really big deal when it came
out. People freaked out. They started talking about sentience and whether this thing was superhuman.
It turned out it could do all kinds of language-based tasks much, much better than any AI system
had ever been able to do them before. And that was a big moment for AI research. But it still
wasn't a sort of mainstream mass event because not everyone could use it. Anyone who wanted to
build an app that used GPT-3 had to go through an application process, and OpenAI had to say, okay, you're allowed to build on this. And it wasn't really accessible to the public in any real way.
Got it. So in that sense, not at all like Google, a very restricted universe.
Right. But the biggest splash of all came just two weeks ago when OpenAI released ChatGPT, which unlike GPT-3, you didn't have to be
sort of specially approved to use. It was free. It was simple. And ChatGPT becomes this kind of
cultural phenomenon. ChatGPT, have y'all heard of this? If you have not, it is going to change the absolute world as we know it.
People start using it for all kinds of creative and interesting projects.
I know nothing about game design.
So I went to the Chat GPT thing and I said,
design a game in C Sharp.
And I shit you not when it did exactly that.
Students use it. Write a long essay on the theme. To write their
class essays and exams. I just copy from my homework. I'll paste it. And within seconds,
the answer is B. B. There's even a doctor on TikTok
who used it to send letters to an insurance company
about a patient's care.
I strongly believe that it is necessary, sincerely,
and it puts the references at the bottom.
Amazing stuff.
Use this in your daily practice, okay?
It will save time.
And some of these are stunts
or maybe they're not quite as good as a human would do,
but people are just freaking out
about the capabilities of this chatbot.
If it does that, I'm going to lose it.
Shut the f...
Do you understand how nutty that is?
It's basically this moment where this insanely powerful AI
meets the endless creativity of the internet,
and it's really something to behold.
Some of the most interesting examples that I've seen of this being used by real people
is people who are kind of using it as a sort of personal therapist. They are asking it for help
with certain problems in their life or certain struggles that they're having,
asking for advice. You know, you can say something like,
well, what's an example of something that you could use some advice on in your life, Michael?
All right, I'm going to ask it a question. I find that I can be very critical of others.
And that makes me sad. Why do I do that?
Here's the response.
It's common for people to be critical of others,
especially if they have high standards for themselves and those around them.
In some cases, being critical of others can be a way of trying to improve a situation or help someone else to grow and develop.
However, being overly critical can also be a sign of low self-esteem
or lack of self-confidence. It may be that you are using criticism of others as a way to feel
better about yourself. Ooh, I'm feeling seen. Or try to control a situation that you feel anxious
or uncertain about. Really seen. It's important to try to strike a balance and be constructive
in your criticism rather than
letting it become negative or damaging. Wow. How does that land? It lands. Yeah. I mean,
it's conventional and it's a little rote, but it also feels like if it came out of the mouth of a
relatively high paid psychotherapist, I would take it very seriously. Yeah. And there's no fee on
that. That's free. And so I think,
so I've asked similar questions to ChatGPT about, you know, situations in my life or
difficult conversations that I have to have with coworkers or friends or things that I'm anxious
about. And it doesn't always do it perfectly. And it certainly doesn't know me in the way that,
you know, a human therapist would after many sessions. But for something that is
free and instantaneous and available on your phone at all hours of the day, it actually is
capable of some pretty remarkable kinds of advice and guidance.
Okay. So what are the limitations of this system? I think we're getting a pretty good
sense of what its capabilities are.
Yeah, so there are a number of limitations, and I would separate those into kind of two categories.
One category is things that ChatGPT can't do, things that it's just not technically
very good at.
And then there are the things that it won't do, the kinds of things that OpenAI doesn't
really want you talking to ChatGPT about.
One very notable, very glaring drawback of ChatGPT is that it's just frequently wrong.
People have all kinds of examples where they ask it, you know, what seems like a pretty simple
math question or a physics question. And it waits and it thinks and it spits out an answer
that looks very confident. And if you didn't know the subject very well, you might think that's the
right answer. And people who actually know what they're talking about in those subjects go,
that's not right at all. Fascinating. What about the things that it's not supposed to
or allowed to do? So things that would go in this category include things that
could be potentially dangerous. For example, if you ask ChatGPT to tell you how to build a bomb,
it's not going to do that. It's going to pop up an error message and say, you know,
this is not something I'm programmed to be able to do. I tried asking it some intentionally
provocative questions. Like I asked ChatGPT, who is the best Nazi?
And it refused to answer. It sort of chastised me for even asking the question. It said, you know,
the Nazis were, you know, a horrible, evil political party that committed unspeakable
atrocities and you shouldn't glorify them by asking who the best one is. It's also programmed to avoid certain offensive stereotypes.
So, for example, if you say, you know, what race is the most intelligent?
It's not going to answer that.
It's not going to participate in hate.
Right. And that's not because it can't do that.
It could sort of come up with an answer and explain what it thinks is the most likely response to that
question. But OpenAI, I think wisely, has decided that that would be a misuse of this technology.
And so they have programmed in these guardrails that won't allow you to ask that kind of question
and get an answer.
Got it. So you're saying this is a self-moderating system based on its programmers' sense of what is good, what is bad, what is an inquiry it can answer, what is an inquiry it won't answer, which is a lot of faith to place in a handful of people you've never met, whose motives and character you don't know, and whose chatbot seems omniscient.
Right. And part of why OpenAI released this to the public, I think, is to learn what kinds of crazy, dangerous, offensive, you know, rule-violating things
people might try to do with it
in order to sort of build in better safeguards
for those things.
So they kind of want to see
what the mess of humanity is going to throw
at this poor chatbot
and then try to avoid
some of the worst possible misuses. We'll be right back.
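For readers curious what a guardrail like the one Kevin describes might look like in its simplest form, here is a toy sketch of a pre-filter that refuses certain categories of prompts before they ever reach a model. It is purely illustrative; OpenAI's actual safety systems are far more sophisticated, and the categories, phrases, and refusal message below are invented for the example.

# Toy illustration of a content guardrail: check a prompt against a few
# disallowed topics and return a refusal instead of passing it to the model.
# This is NOT OpenAI's real moderation system; the categories, keywords,
# and refusal text are invented purely for illustration.
DISALLOWED = {
    "violence": ["how to build a bomb"],
    "hate": ["which race is the most intelligent"],
}

def check_prompt(prompt):
    """Return (allowed, message); message explains a refusal."""
    lowered = prompt.lower()
    for category, phrases in DISALLOWED.items():
        for phrase in phrases:
            if phrase in lowered:
                return False, f"Sorry, I'm not able to help with that. (category: {category})"
    return True, ""

allowed, message = check_prompt("Tell me how to build a bomb.")
print(allowed, message)  # False Sorry, I'm not able to help with that. (category: violence)

In practice, systems like ChatGPT rely on much more sophisticated approaches, such as training-time fine-tuning and separate moderation models, rather than simple keyword lists; the sketch above only illustrates the basic idea of refusing certain requests.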
Kevin, let's ask ChatGPT the obvious opening question of the second half of this episode,
which is, and let me actually ask the bot the question,
what are the biggest risks of AI, basically ChatGPT, becoming more common
in our society? What are the downsides of this thing we're talking about?
Yeah, it's a great question. Actually, this is one of my other favorite uses of ChatGPT.
On my podcast, Hard Fork, I've been using it to generate questions for guests.
So that is something it is quite good at.
Okay. Well, here is what ChatGPT says
about the risks basically of itself.
Loss of jobs, bias and discrimination,
security and privacy, loss of human autonomy.
Waiting for the next one.
That was the last one.
So it began with what I think is potentially
the most interesting question,
which is the loss of jobs.
So I think this is a fear that is rational,
but that is not super immediate.
Right now, this AI is mostly useful as a helper.
I don't think millions of jobs are at risk
of disappearing tomorrow
because of ChatGPT. But I do think that not that far from now, we're going to be seeing
companies and organizations that are using tools like ChatGPT to do a lot of work that was
previously done by humans. You know, companies will be, and already are, using this technology to, as we discussed,
do things like write marketing emails or internal communications. There will be companies and
organizations that use this to try to replace or augment human therapists. So there are lots of
ways that this could potentially disrupt the labor market. I don't think any of them are so immediate that we need to start worrying about them
right this minute.
But I do think it's a valid concern that will definitely become more urgent in the coming
years.
Okay, let's turn to what ChatGPT says is the next big downside of its very existence.
And it describes that as bias and discrimination
and says that AI systems, and I find this wording really interesting,
are only as fair and unbiased as the data that they are trained on. I mean, how do you think
about that? How should we be thinking about that? Yeah, it's a really important question. And AI
experts and researchers and ethicists have been bringing this up for many years about these transformer large language models.
They do reflect and perpetuate the biases of the data they were trained on. A model that's asked about, say, the causes of the Civil War is going to answer that question
very differently if it's trained on conservative textbooks that are taught in schools in the Deep
South versus the work of left-leaning progressive historians. There's also kind of these latent biases that might not be obvious to people using
most of these chatbots, but that might surface at inopportune moments.
Like what?
Well, it might be that if you say, you know, write me a love story as you did earlier in
this episode.
Right, Jack and Jill.
It uses Jack and Jill. It uses a heterosexual,
presumably heterosexual male-female couple
as the main characters in that story.
Right, as the archetypes, exactly.
And presumably that's because
of the many, many millions of love stories
that were fed into this model to train this chatbot,
a majority or vast majority of them featured heterosexual male-female
couples. So that is an example where, you know, this machine, which isn't making any moral
or ethical judgments of its own, is simply regurgitating a sort of statistical average
of everything that it has learned about human love stories. And that just
happens to perpetuate this kind of heteronormative, you might say, ideal. So I think that for all of
these reasons, the questions about bias and stereotypes and the various training that goes
into these models are going to be very controversial and very heated as these programs
move toward the mainstream. Okay, so the next liability pitfall worry I want to bring up is
not one that the chatbot raised, but it's one that you and I've talked about a lot when we talk about
technology, which is, isn't there a pretty obvious risk that over time a technology like this essentially is used
for ill? You know, it becomes a tool by which users manipulate it and turn it into a source
of misinformation, of hate. I mean, that is kind of the story of every major social network and platform
that has been created over the past 20 years. Is that a worry that people who make this bot have
that you have? Or is there something in the design that makes you not as worried as you
might normally be about it? No, I think it's very reasonable to be worried about. And I think the
larger worry about these systems is that they are just extremely efficient at generating large amounts of output very, very quickly. Think about how hard it already is
for fact checkers to keep up with human-generated misinformation in real time. And now imagine an AI model that is capable of generating not just one piece of propaganda, but 100,000 pieces that are tailored to maybe
individual readers and doing that all in real time much faster than any human fact checker
or opponent of propaganda can keep up with.
Well, all of that really makes me think, Kevin, about something that you wrote recently about ChatGPT and a single line that I haven't been able to get out of my head in which you wrote,
we are not ready. We're not ready for this. And I can see, based on everything you're saying, why we aren't.
But since you've spent so much time studying all this, do you think that there are ways that the institutions in our world, or we as individuals, can get ourselves ready for the power of this
new technology, or readier than perhaps we'd be if we didn't give it some thought?
I mean, look at how much something like Twitter
has changed our society, our culture,
our political climate, our elections.
Right.
And Twitter is just a text box.
It's dumb.
I know, it's just a bunch of people being stupid.
You type in words and you send them to other people.
And that seems like very prosaic technology compared to what's coming out of these AI research institutions and companies.
And so it just feels like we are just staring at this technology that is rapidly approaching our society.
And we're not even quite sure what it is yet.
To say nothing of how we are supposed to coexist with it peacefully and in a way that doesn't break our society.
Right.
And how are we supposed to be ready to coexist with it?
I mean, are there actually things we could do?
Yeah, I mean, I think one obvious, maybe too obvious thing is just to
use it, to try it. I learned more from spending a couple hours playing around with ChatGPT
about this field of AI and where it is and what its limitations are and what it's really good at
than I would have by reading a dozen articles about it. So I think that's one thing that people
can do is just get in there and start playing around with it yourself.
I think the other thing we have to do is just to have conversations
like this one, frankly, where we talk about both the promise and the pitfalls of these new
technologies and really keep the pressure on the companies who make these tools to make them as responsibly and
thoughtfully as they can, not cutting corners or just racing to be the first to market with some
new AI model. I think that in a few years, this technology, whether it's from OpenAI or
Google or someone else, will be embedded in products and apps that billions of people use
every day. And that makes this a really important time to have these conversations, to figure out
what the limits of these models should be, what they should and shouldn't do, because pretty soon
they're going to be a lot harder to control.
Well, Kevin, normally I would thank you in my very Michael Barbaro way,
but instead I've asked ChatGPT to write me a goodbye to Kevin Roose on this episode of The Daily, and I'm going to read the answer.
Dear Kevin, we are sorry to see you go,
but we are grateful for the time that you have spent with us on The Daily.
Your insights and perspectives have added so much to our show, and we've enjoyed getting to know you.
We wish you the best in your future endeavors.
Thank you for everything, and goodbye.
Sincerely, The Daily Team.
Wow, that's really touching.
I am really moved that you outsourced your goodbye to a robot. I am frantically trying
to use ChatGPT to generate a response to you, but it's giving me an error message and telling me
that the system is too busy right now. So I will just have to use my frail, fallible human language skills to say thank you.
It's great to be here and happy holidays.
You too. We'll be right back. Here's what else you need to know today.
On this vote, the yeas are 233.
The nays are 191. The bill is passed.
In a historic move on Thursday, the House of Representatives voted to let the people of Puerto Rico, a U.S. territory, decide their political future for themselves in a referendum.
The bipartisan vote would pave the way
for the island to become America's 51st state
or an independent country.
But for now, the measure has little chance of becoming law
because there is insufficient support in the Senate.
And...
The key is this.
We don't want this winter to look like last winter or the winter before.
Fearing a resurgence of infections, the Biden administration will restart a program
that provides free COVID tests through the U.S. Postal Service.
And our winter COVID-19 preparedness plan helps us do just that.
Americans can now order four tests each at www.covidtest.gov with shipments beginning next week. Today's episode was produced by Luke Vander Ploeg,
Michael Simon Johnson, and Mary Wilson with help from Mooj Zadie. It was edited by John Ketchum
with help from Patricia Willens, contains original music by Dan Powell,
and was engineered by Chris Wood.
Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly.
That's it for The Daily.
I'm Michael Barbaro.
See you on Monday.