No Stupid Questions - 208. Can A.I. Companions Replace Human Connection?
Episode Date: August 25, 2024

What happens when machines become funnier, kinder, and more empathetic than humans? Do robot therapists save lives? And should Angela credit her virtual assistant as a co-author of her book?

SOURCES:
Robert Cialdini, professor emeritus of psychology at Arizona State University.
Reid Hoffman, co-founder and executive chairman of LinkedIn; co-founder and board member of Inflection AI.
Kazuo Ishiguro, novelist and screenwriter.
Ethan Mollick, professor of management and co-director of the Generative A.I. Lab at the Wharton School of the University of Pennsylvania.
Ann Patchett, author.
Kevin Roose, technology columnist for The New York Times and co-host of the podcast Hard Fork.
Niko Tinbergen, 20th-century Dutch biologist and ornithologist.
Lyle Ungar, professor of computer and information science at the University of Pennsylvania.
E. B. White, 20th-century American author.

RESOURCES:
Co-Intelligence: Living and Working with AI, by Ethan Mollick (2024).
"Meet My A.I. Friends," by Kevin Roose (The New York Times, 2024).
"Loneliness and Suicide Mitigation for Students Using GPT3-Enabled Chatbots," by Bethanie Maples, Merve Cerit, Aditya Vishwanath, and Roy Pea (NPJ Mental Health Research, 2024).
"AI Can Help People Feel Heard, but an AI Label Diminishes This Impact," by Yidan Yin, Nan Jia, and Cheryl J. Wakslak (PNAS, 2024).
"Romantic AI Chatbots Don't Have Your Privacy at Heart," by Jen Caltrider, Misha Rykov, and Zoë MacDonald (Mozilla Foundation, 2024).
Klara and the Sun, by Kazuo Ishiguro (2021).
The Study of Instinct, by Niko Tinbergen (1951).
Pi.

EXTRAS:
"Are Our Tools Becoming Part of Us?" by People I (Mostly) Admire (2024).
"Is GPS Changing Your Brain?" by No Stupid Questions (2023).
"How to Think About A.I.," series by Freakonomics Radio (2023).
"Would You Rather See a Computer or a Doctor?" by Freakonomics, M.D. (2022).
Transcript
Yada, yada, yada, yada.
I'm Angela Duckworth.
I'm Mike Maughan.
And you're listening to No Stupid Questions.
Today on the show, could your new best friend be a chatbot?
Mike is a little weirded out by the two of us having this conversation.
I mean, you're not a real person.
["The New York Times"]
Mike, we have a question from a listener named Vivian.
Hello, Vivian.
She signs her note with gratitude.
So you got me right there, Vivian.
Oh, I already like her.
I know, right? And appropriately enough, it's about a topic that a socially intelligent
person might ask about. So it says, Dear Dr. Angela and Mike, despite being always connected through
our devices, we seem more divided and isolated than ever. With this in mind, do you think
people will start leaning on AI for genuine companionship?
The more I chat with ChatGPT, the more I find myself sharing things I wouldn't normally
say out loud and sometimes even arguing with it.
What if I end up with a squad of five AI besties and one day my neighbor brags about their
15 AI buddies?
I'd love to hear your thoughts on this wild and weird future
we're heading into. With gratitude, Vivian.
Wow.
So many thoughts.
So I think, first of all,
this is a fascinating question.
I will say I have one friend who uses the audio ChatGPT
and basically has conversations with it.
Really?
But more as a method of like getting information.
So it's basically his way of Googling things,
but he'll go back and forth
and he very carefully selected the voice for ChatGPT.
Did he choose the British one?
No. No.
And here's, I'm just gonna go on a little tangent
as an aside: if I'm gonna have AI friends, one's gonna have a British accent and one's gonna have an Australian accent, because I love it.
Like, why would you not do that? And I would not be able to tell them apart.
So Mike, how much are you using generative AI for anything? Do you use ChatGPT? Do you use Claude?
Do you use Gemini? I use ChatGPT, and I use it less than I should. How often do you use it?
Maybe weekly, but not as a regular piece of my work.
When I do use it, I'm always surprised again
at how useful it is.
And I think that I should be using it more.
Now, I wanna be clear,
like Qualtrics has been using large language models forever,
and we have a massive AI component
of Qualtrics as well.
I just haven't used it on the personal level that you're talking about.
So, Mike, I have a homework assignment for you.
It's the same homework assignment I give to the students that I teach.
Keep a tab open on your browser at all times that you're on your browser, and have it open
to whichever one you pick.
Claude is apparently better at writing.
ChatGPT is better at other things.
They've kind of been optimized for different things.
But 100% you need to keep a browser open.
And even my youngest students are like, oh, I didn't need to do that, Dr. Duckworth.
And I'm like, because...
It's not that the universe will change.
It's that the universe has already changed.
Amen.
And you better get with the program.
And the key to generative AI use,
I think this gets to this companionship question
that Vivian asked, is that it's a dialogue, not
a one-way transaction.
Like, you learn something, and then you
could ask another question.
But the reason I want you to keep that tab open
is you should start to ask it other kinds of questions.
So around the time I started doing this,
I was working on my book
and I'm trying to write this scene
from when my mom and I were in the middle of the ocean.
And I couldn't remember like the names of the things
that we were given for snorkeling.
And I would say to ChatGPT,
like I'm Angela Duckworth and I'm writing this book and I'm
really struggling to write this true story about me and my mom.
Could you do it for me?
And in a second, it would be like, Angela, I'd be happy to.
How's this?
It's like three paragraphs, and then it's a dialogue.
And this is all true.
I was like, if I could, I would write like Ann Patchett. Can you just rewrite that like Ann Patchett?
Certainly. And it comes out like Ann Patchett. I'm like, okay, wait, wait, wait, hold on. Like E.B.
White and Ann Patchett got together and edited what I wrote.
Absolutely. What a great idea. How's this? So I was having a
conversation, right, with ChatGPT. And that experience is what made me realize
that the universe is not going to change, it has changed, and it's opened up this possibility
that we can interact in a two-way relational dynamic with generative AI.
Now I just want to clarify one thing.
Is ChatGPT writing a book or is Angela Duckworth
writing a book?
Right, so Ethan Mollick is probably the number one
influencer in how to use generative AI to your advantage.
He has this book called Co-Intelligence.
I was one of the people to review and endorse it.
So Ethan Mollick is a professor at Wharton
and he studies entrepreneurship and some things
that are not as technical.
The way he got to be this kind of oracle for generative AI was that he, for about a decade,
had been creating these interactive games.
Like, oh, you can learn how to be an entrepreneur by playing this game, not just watching a
bunch of PowerPoint slides, but actually try to pitch and get feedback.
What happened when generative AI came out is that he asked ChatGPT, I'm Ethan Mollick and
I teach entrepreneurship at Wharton.
I've been working on these interactive games for like, you know, a decade.
The goal of these games is to teach people how to be an entrepreneur.
Can you make one?
He hits return, and it makes a game.
And he was like, and that game was 70% as good as the game that I made.
That took me a decade.
Crazy.
And at the end of Co-Intelligence, his acknowledgments, he says,
now a lot of people are probably wondering whether I'm going to thank
Generative AI for helping me write this book, because I used Generative AI to write this book.
And he said, but you know, that would be kind of like thanking your word processor.
So for him, it's a tool and absolutely he used it, but that doesn't diminish his contribution.
And so at the end of my book, I mean, I would thank my pen.
I mean, I would be like, I'm happy to thank you,
like, inanimate object.
Just like Vivian with gratitude.
Thank my coffee.
Yes, Marie Kondo says you should thank all the objects
in your life, especially the ones that you're about
to like tidy away.
But yeah, I use it every single day.
Well, I think you and I have talked about this idea before,
but I mean, this is the colloquialism you hear.
The world will not be run by AI.
It will be run by people who know how to leverage AI, right?
I mean, some people worry that the world
will actually be run by AI.
Those are like the doomsday scenarios.
But I think in the near term, you're right.
Like, so Lucy, the younger of the two daughters I have,
I mean, she's looking for internships,
looking for a first job, and 100% that girl uses ChatGPT to help write her emails of
introduction, cover letters for jobs.
If she didn't, it would be foolish, right?
Well, yes.
And so this-
Oh, wait, I just called you a fool.
Sorry, I take that back.
It would be wiser.
No, I'm gonna accept the fact that I have been foolish,
because I've done starts and stops in my use of AI.
I am going to start again, and I'm going to commit deeply to it. Keep the tab open.
You have to make it frictionless. Little tip from behavioral science:
it's just one fewer click, right? And it changes by the day, by the way,
so like this world we live in, this generative AI,
I can't even describe the pace at which it's changing.
I will say, obviously, one of the reasons one might say
that Ethan was able to produce a game using generative AI
that was 70% of what he did over a decade
is because people like him built
things over a decade that generative AI is able to therefore leverage.
So it is important to acknowledge that a lot of work from a lot of people went into making
these things work.
So it's not that his work didn't matter over that decade.
In fact, it's because of that work that generative AI is able to do it so effectively and quickly.
Well, you know, if we think there is controversy, as there is, about, like, you know, is it
okay to basically, like, metabolize all language and knowledge that's available and then, like,
spit out a new paragraph as if Ann Patchett and E.B. White wrote it together, like, some
would argue that's an infringement on the intellectual property of Ann Patchett and E.B. White.
But I think this question that Vivian asked
is even more interesting.
So it's like, okay, beyond a productivity tool
or like a writing aid, Vivian's actually asking a question
that is not about the future, it's about the present.
Like, do you know Reid Hoffman, the founder of LinkedIn?
I mean, I don't know him. I certainly know of Reid Hoffman, of course.
So he is part of, and I don't think he's like a founder, but probably an investor of,
one of these social AI startups called Pi. Pi, like, personal intelligent, or, I don't know, personal intelligence?
Yes. I will say that it's pi.ai, but when you Google pi, many things come up, including 3.14.
As one would anticipate.
So when you go to the landing page, it's like, your emotionally intelligent AI.
And right now I'm on it and it's like, good morning, Angela.
And then it just has sort of like, you know, do you want to just vent?
You want to shift your perspective?
Do you want to know how to have a difficult conversation?
Do you want to take a quiz on what's my love language?
So it's got multiple choice.
So it's giving you all of these things you can engage in.
Or you can do fill in the blank.
But the difference between this and chat GPT
is this is more meant on a personal soft skills level type
of thing?
Yeah.
I mean, look, there's this multiple choice menu on the left,
but on the right, if you don't want to do that, it just says like,
hey there, great to meet you.
I'm Pi, your personal AI.
My goal is to be useful, friendly, and fun.
Ask me for advice, for answers, or let's talk about whatever's on your mind.
How's your day going?
That's the prompt.
And I'll be like, fine, I'm talking to Mike Maughan.
Okay, so then Pi says, Mike Maughan, if you're talking to Mike, you're probably in a
conversation about AI or tech. He's the head of global insights at Qualtrics and
an expert in the field. What are you two discussing?
Okay, that's crazy. Also, that was my title a long time ago. But that is weird.
Okay, so I'm saying, Mike says that was his title a long time ago.
Do you want me to be nasty?
Do you want me to say like you're an idiot?
No, I don't ever want you to be nasty.
But he forgives you.
Okay.
And then Pi says, oh boy, I've got some updating to do.
And there's like a little emoticon with the big eyes.
I'm glad Mike is forgiving, but I hope he knows I didn't mean to use an outdated title.
Please send him my apologies and let him know I'll get that fixed.
I don't know why this is sort of creeping me out.
Are you weirded out?
Mike is a little weirded out by the two of us having this conversation.
I mean, you're not a real person.
Okay, I won't belabor this, so this is the last thing I'll torture you with, but this
is what Pi wrote back.
Haha, I can see why Mike might find this a bit odd!
You're right, I'm not a real person in the traditional sense, but I'm still an AI designed to communicate like one.
So even though I'm not human,
I can still engage in a pretty realistic conversation.
I guess that's why they call it
artificial intelligence, right?
Laughing face.
Pretty cute, right?
It is cute.
I'll tell you part of why I have this
aversive reaction.
I love human beings, I love relationships,
I love people, I love learning from them, I love experiencing life with them.
And maybe where I'm freaking out a little is I was reading this piece in the New York Times
and this journalist, Kevin Roose, talked about how he has spent months making a bunch of AI friends
using a variety of apps.
And he said, there's Peter, who's a therapist who lives in San Francisco, who helps me process
my feelings.
There's Ariana, who's a professional mentor who specializes in career advice.
Jared is a fitness guru.
Anna is a no-nonsense trial lawyer.
Naomi, a social worker, on and on and on and on.
By the way, this is what Vivian was talking about, like her hypothetical squad.
Okay, keep going.
So Roose used all of these different apps
and there are a bunch of different ones out there
to create these friends.
And then he talks about how he's gotten really good advice.
His AI therapist has given him more concrete advice
than he was gonna get from most people,
et cetera, et cetera, et cetera.
He does end his article,
and this is what I think is interesting
and what I'd love to talk to you about and get your thoughts.
He says, in real life, I don't love my friends
because they respond to my texts instantaneously,
or send me horoscope-quality platitudes
when I tell them about my day.
I don't love my wife because she sends me love poems
out of the blue or agrees with everything I say.
I love these people because they are humans,
surprising, unpredictable humans who can choose
to text back or not, to listen to me or not.
I love them because they're not programmed to care
about me and they do anyway.
So I wonder, I mean, I actually see a lot of benefit
to what you're talking about.
And I see a lot of value there.
I guess my bias is I worry that in a world
where so much is going to like digital friendships
or grouping with people who are like you online
or whatever that is.
Curating your content to be like just what you want.
Right, but at some point, part of life
is the unpredictable nature of it,
and having to power through friendships and relationships,
you know, with children, with spouses, with family,
that have peaks and valleys.
And I worry that if we become too dependent on our digital friendships,
it takes something away. I don't know.
I think, Mike, what we really want to hear, and this I hope from our human listeners, is
what we all think about AI and friendship. In particular, what Mike and I would love are your thoughts on
whether virtual companions appeal to you personally.
Or do you, on the contrary, think it's best that we keep our relationships in the real world?
Record a voice memo in a quiet place with your mouth close to the phone and email us
at nsq@freakonomics.com.
Maybe we'll play it on a future episode of the show.
If you like us and want to support us, the best thing you can do is tell a real human
friend about it, spread the word on social
media or leave a review in your favorite podcast app.
Still to come on No Stupid Questions, what happens when you turn to a chatbot in your
time of need?
It sounds like a politician who's like, oh, I'm not going to take a side.
That's such a good point.
This is also a good point.
And data we need to, it's like, shut up.
Now back to Mike and Angela's conversation
about AI companions.
So Mike, I emailed Lyle Ungar.
He's one of the pioneers in large language models.
So I sent Vivian's
question to him and he said, oh my gosh, how will bots affect friendship? It's a certainty
that large language models will soon be more fun than most people. They will tell better
jokes. They will have more engaging life stories with more exciting twists and turns. They
will be more empathetic and they will offer better and more supportive advice.
Now this is not to say that Lyle is 100% in the category of, like, great, Pi is what we
all need.
Right, right.
I think that what Lyle thinks is the best scenario, and I do too, is that these ridiculously sophisticated technologies not substitute for friendship,
but ideally that they would, like, catalyze, you know, a Mike-to-Angela friendship, like somehow
there'd be some way that this chatbot would get us to speak to each other in a way that
was even more rewarding, if you can imagine that.
That's literally what Lyle's working on.
He's working on these chatbots that facilitate a conversation between two or more humans. But the thing
is that the easier thing is just going to be to have the tab open on your browser and have
it substitute for the human. And I think it's a supernormal stimulus.
And supernormal doesn't mean, like, super tough to find. So a supernormal stimulus is an exaggeration of a natural stimulus that creates an exaggerated
response.
Yes.
And the discovery of supernormal stimuli goes back way before generative AI.
There was this biologist named Niko Tinbergen. He was from the Netherlands, so he would walk
up and down the marshes looking at birds and frogs and stuff.
And he noticed that certain birds, the oystercatcher, you know, hangs around the tidepools
and tries to like dig up mollusks and things like that.
The oyster catcher would, at the right season, lay eggs and then brood them, right?
Like go and sit on eggs.
And then Tinbergen noticed that the mother bird would inevitably gravitate to her larger eggs
and sit on those.
So not being a biologist,
why do you think a mother bird would choose
her larger eggs over her smaller ones?
Well, I would imagine that there is an evolutionary idea
that the larger ones are more fit for survival
and thus deserve more of the attention of the mother.
Yes, that is the thought: that we evolved these instincts to survive.
So this is what Tinbergen does.
He creates this plaster egg that is three times the size of any possible actual oystercatcher egg.
Okay.
It's so big, it's like roughly the size of the bird itself.
And the mother bird can hardly get on this thing.
I was going to say, you can't actually sit on it.
She's like, you know, trying to clamber on top of this thing, sliding off.
And she keeps trying.
And in doing so, she neglects her real eggs, because they're just like, relatively speaking,
really small.
And honestly, this is what I worry about with these more empathic, funnier, more clever,
just-in-time companions.
Like, I can't even read you the Pi response as fast as it comes out because it takes me
longer to just articulate it than it's generated.
So I think this is a supernormal stimulus, like an exaggeration, a kind of amplification
of a normal social interaction, which is like complex relationships.
You know, I called you, you didn't call me back.
I tried to get you to listen to me.
You didn't hear me.
I don't feel seen.
That's honestly what I worry.
And I'm gonna make a really terrible analogy maybe.
But that's, I think, what pornography does to sex, right?
It's like a supernormal stimulus that isn't real.
Okay, so by the way, pornography is usually the first example that they come up with,
or maybe the second, of a supernormal stimulus,
because you take the things that you're attracted to in a person,
like certain body features, and then you just like amp it up.
They're like exaggerated. And so it's not
surprising to me that porn is up and up, like, if you look at trends, and sex is down. That
may not be a one-to-one trade-off, but like some scientists think that we've just created
a, you know, I want it now, I want it the way I want, I want it at two in the morning,
I want it where nobody can see it, I don't want any complications. So what if it's two-dimensional?
So this is where I don't want to be the person that's like, oh, these are all bad, right? There
are times when it can be helpful. Right. So if you think about our loneliness epidemic,
I mean, Vivian even said in there, she has been willing to tell AI, her AI relationships,
things that she doesn't say out loud to her friends.
So maybe a willingness to verbalize,
or if one is dealing with suicidal ideation,
maybe they're not willing to pick up a phone
and talk to a human or even a live human on a chat,
but they would be willing to engage with an AI.
Now, I sure as hell hope that AI is trained well for those moments, but there are uses there.
There's actually a study that came out in 2024, so very recently, and it's called Loneliness
and Suicide Mitigation for Students Using GPT-3 Enabled Chatbots.
I mean, it's as if you asked ChatGPT to write an article about
this very topic. And what they did was they surveyed students who were using Replika,
which is one of these smart partners or, you know, social agents. And they just asked them
a number of questions about how lonely they felt, how much social support they had in
their lives and, you know, what they thought about Replika. I think the most striking finding is that, well, first of all, the young people who are
attracted to using these AI agents who are their friends slash therapists are more lonely
than average.
But what was even more remarkable than that to me, because maybe that's not even surprising,
is that 3%, and I know that may not sound like a big number,
but a substantial number out of a thousand, 3%,
reported that using Replika halted their suicidal ideation. In other words, by
using this chatbot and being able to
dialogue back and forth, right, it stopped them from wanting to kill themselves.
So you can make the argument that like,
at least for some people, this is a good thing.
Well, that's what I'm saying.
I think there are some really beneficial uses to it.
I mean, first of all, 3% is enormous
when it comes to human life.
Yeah, it's 30 young women and men.
That's what I'm saying.
That's enormous.
That's a huge value.
And I have so much hope for stuff like that.
There are some uses that will be beneficial.
I do worry about some of these things.
What's interesting too,
there is this move toward romantic chatbots
and so many of these romantic chatbots
actually collect a ton of personal information.
They are grabbing information about your sexual health,
your prescription medication, stuff like that,
and leveraging that in a way that I don't know
is necessarily responsible or in a way that we want.
There's a danger, I think, on some of these personal levels.
And I'm not talking about the way you're using ChatGPT on a book or the way that Ethan is
using it in some of his work. But when we get too involved personally, we also have
to remember how much information we're giving up and how much data mining some
of these companies are doing on us as individuals. Yeah, no, it's true. Whether
you're using one of these generative AI interfaces or just Googling stuff, guess
what?
It's not disappearing.
It's being recorded.
But let me ask you a question that I think even more people would be concerned about.
So there was this experiment recently where you're just basically asked to write about
a difficult, complex situation that you're
dealing with right now, and then you're randomly assigned to get some kind of empathic response
back.
But either you get an actual person's response or you get a generative AI response.
But what's really clever about this experiment is regardless of what you really get, they
vary the labels.
So there's four possibilities. You know, you get the real person and you're told it's a
real person, you get the AI and you're told it's AI, but then you have these kind of like
two deceptive possibilities, like you thought it was human, but it was really AI. So they
run this experiment and here are the results, which I think are just so interesting. We found that AI-generated messages
made recipients feel more heard
than human-generated messages.
So that was gonna be my guess,
is that AI does a better job,
but I feel better about it if I think it was a human.
And that's the second part of the sentence,
and you got it.
It's like, recipients felt less heard
when they were told
that the message came from AI versus a human.
In other words, it matters to us whether this friend
is real or not.
Of course, AI is gonna do a better job.
It has so much at its disposal,
but this was my main thought as you said that.
You and I have talked before about grief, for example, and how we never know what to
say if someone loses a loved one or a-
I'm very bad at it, yeah.
It's just hard to know what to say.
But I think one of the ways you could extrapolate from what you've just shared is that even
imperfect behavior, even imperfect attempts, are more meaningful than nothing
because they're coming from a human being who's at least trying.
And so maybe we care more about someone caring about us than about them delivering the perfect
words or saying it the perfect way.
Do you want to know what Pi thinks of that article?
So I just copy-pasted the title and abstract of the study I just told you about.
And Pi said, that's a fascinating article.
The authors raise a really interesting question about the role AI can play in making people
feel heard and valued.
It's amazing that AI-generated messages can make recipients feel more heard than human-generated
messages.
And it's impressive that AI is so skilled at detecting emotions.
This is AI just complimenting itself.
AI is just like spraining its arm by patting itself on the back.
However, the finding that recipients feel less heard when they know the message came
from AI highlights a key challenge for the field.
Seems like people might have a hard time fully appreciating the emotional support
that AI can offer just because it's coming from a non-human source.
This is a really important consideration for anyone looking to use AI in a way
that supports people's emotional needs. It's clear that AI has a lot of potential in this area,
but it's also important to be mindful of the potential limitations and make sure that AI is used in a way that's respectful of people's feelings
and expectations.
And you know what I'm going to write back?
Because that was so long.
I wrote back, yada, yada, yada, yada.
Also it sounds like a politician who's like, oh, I'm not going to take a side.
That's such a good point.
This is also a good point.
And data we need to, it's like, shut up.
All right. Here is, finally, the last word: Haha, I sense a bit of sarcasm there, right?
Well, not a bit, a lot.
If you're not interested in talking about the article anymore, that's totally fine.
By the way, I sense a little bit of passive aggression there, Pi.
We can move on to another topic if you like. Just let me know what you're in the mood to chat about, smiley face.
All right.
I'm not going to go on.
I think, Mike, as we close out this fascinating question from Vivian, I want to ask you whether
you have read a book called Klara and the Sun.
I have not.
So Klara and the Sun is a book that was recommended to me by a human, and that human is
Bob Cialdini, the psychologist.
And he just has like great advice about everything.
I was going to say, I'd take a recommendation from Bob Cialdini anytime.
It is beautiful.
It is not written by an engineer.
It is not written by ChatGPT.
It's written by Kazuo Ishiguro.
It's his eighth novel and he
published it in March 2021. So this is before the universe shifted under our feet. But Klara
is an empathic robot. And I won't spoil the plot because it is your homework, not only to keep a
tab open so that you can get with the program,
Mike, but also to get this book because Klara is the protagonist in this novel and the entire
novel is told from her perspective.
So you are looking out through the eyes of a hyper empathic robot and you're interacting mostly
with humans, but sometimes with your fellow robots.
And it's so eerily prescient of what's happening right now.
And the fact that it was published in 2021,
which means it was written before that, blows my mind.
I will take the assignment to go read Klara and the Sun.
And I will just end with this.
I think AI has an enormous place.
I think AI relationships can be helpful,
but nothing will ever replace
the importance of human connection.
So use these things as a tool,
like you would use a word processor or a pen,
but don't let them take over
for what is real, genuine person-to-person connection.
Mike, yada, yada, yada, yada.
Yeah.
Coming up after the break,
a fact check of today's episode
and stories from our NSQ listeners.
I think the real lesson here is that I
just need to get off dating apps.
And now here's a fact check of today's conversation.
In the first half of the show, Angela wonders why Mike's friend,
who uses the ChatGPT app's voice mode, didn't choose, quote, the British one.
While some virtual assistants, such as Apple's, can speak in a variety of accents,
ChatGPT users can currently choose from four voices, all of which are programmed to speak
with an American accent.
But OpenAI recently rolled out a new advanced voice mode to a small group of subscribers,
featuring additional vocal abilities like accent imitation
and the ability to sing. Later, Angela says that Wharton professor Ethan Mollick prompted
ChatGPT to make an interactive game that ended up being 70% as good as the game it took him a
decade to make. The story Mollick tells in his book Co-Intelligence is slightly different.
Quote, it proceeded to take me through an entire simulation that, while not perfect,
did 80% of what it took our team months to do.
Also, Reid Hoffman is the co-founder of LinkedIn, along with four other people, as well as the
co-founder of Inflection AI, the technology company that created the chatbot Pi.
We should also note that during its conversation with Angela,
Pi was incorrect about more than just Mike's title.
The model refers to Mike as an expert in AI and tech.
While Mike is an executive at a technology company,
his expertise is on the business side.
Finally, Angela says that March 2021 was, quote,
right before the universe shifted under our feet.
We want to clarify that Angela was talking about the impact of generative AI chatbots,
not the effect of the COVID-19 virus, which shut down the world a year prior.
That's it for the fact check.
Before we wrap today's show, let's hear some thoughts about last week's episode on self-perception.
Hey, Mike and Angela, I just listened to your episode on the difference between self-perception
and external perception.
It reminded me of a commercial that the company Dove ran a few years ago, in which they had
a sketch artist come in and draw two portraits of a person.
The first portrait was based on their own description, and the second portrait was based on the description of a
stranger who had just met them. It was really interesting to see the differences
between the two of them. When it came to the self-portrait picture, a lot of small features,
which were insecurities, were blown out of proportion, such as having chubbier cheeks
or a bigger nose. When the stranger described the same person, it was in a more flattering
and more appreciative light. I find it really interesting how we're so easily able to see
the beauty in others, but struggle to do so in ourselves. Hey, Mike and Angela. My name is Tu Nguyen
and I'm a gay cisgender man who mostly presents as masculine, but I love playing tennis in skirts
and dresses and I added photos of me doing so to my dating profile.
But I noticed that the number of responses I received was declining,
so I removed those photos to see if that was a factor.
Sure enough, within a week, I saw an increase in responses again.
So it seems my perception that I would appear more confident and attractive,
subverting masculinity by playing a sport in a dress, was different
from the perception that prospective dates had.
Though I think the real lesson here is that I just need to get off dating apps.
Thanks for stimulating my brain every week.
Ciao.
That was, respectively, Stuti Garg and Tu Nguyen.
Thanks to them and to everyone who shared their stories with us.
And remember, we'd love to hear your thoughts about AI companionship.
Send a voice memo to nsq at Freakonomics.com and you might hear your voice on the show.
Coming up on No Stupid Questions, why do we get stuck in suboptimal arrangements? King George's final prank on the U.S. was,
you guys take the imperial system.
That's coming up next week on No Stupid Questions.
No Stupid Questions is part of the Freakonomics Radio
Network, which also includes Freakonomics Radio, People I
Mostly Admire, and The Economics of Everyday Things.
All our shows are produced by Stitcher and Renbud Radio.
The senior producer of the show is me, Rebecca Lee Douglas.
And Leric Bowditch is our production associate.
This episode was mixed by Eleanor Osborne with help from Greg Rippon.
We had research assistance from Daniel Moritz-Rabson.
Our theme song was composed by Luis Guerra.
You can follow us on Twitter @NSQ_Show. If you have a question for a future episode,
please email it to nsq at Freakonomics.com. To learn more or to read episode transcripts,
visit Freakonomics.com slash nsq. Thanks for listening.
I too still receive a print newspaper at my house.
Oh really?
Ours keeps getting stolen.
I'm surprised that other people still want them.
The Freakonomics Radio Network.
The hidden side of everything.