The Agenda with Steve Paikin (Audio) - Can Canada Keep Up with AI?
Episode Date: April 30, 2024. In late 2022, generative AI, like ChatGPT, shook the world and called into question what these advancements meant for our collective future. What impact would it have on the way we work? Was it being used ethically? Should schools be banning its use? Many of these questions remain. Here to give us an update on whether Canada is ready to tackle its future with AI is Canadian futurist Sinead Bovell, founder of tech education company WAYE.
Transcript
From epic camping trips to scenic local hikes,
spending time outdoors is a great way to create lasting memories to share with friends and family.
This summer, TVO is celebrating the natural wonders that inspire unforgettable adventures
with great documentaries, articles, and learning resources about beloved parks in Ontario and beyond.
Visit tvo.me slash Ontario summer stories for all this and more. And be sure to
tell us your stories for a chance to win great prizes. Help TVO create a better world through
the power of learning. Visit tvo.org and make a tax-deductible donation today.
In late 2022, generative AI, such as ChatGPT, shook the world and called into question what these advancements could mean for our collective future.
We've also heard calls for better management of social media and smartphones for youth in school just this past weekend from Ontario's Education Minister.
Are our lives online getting too complicated?
Here to give us an update on whether Canada is ready to tackle its future with AI,
we welcome Canadian futurist Sinead Bovell.
She's the founder of the tech education company WAYE, W-A-Y-E,
which stands for Weekly Advice for Young Entrepreneurs.
She was also appointed to the United Nations Generation Connect Visionaries Board.
And it's great to have you here at TVO.
Great to be here. Thanks for having me.
I do have to ask you right off the top, what does a futurist do?
Well, for me, in my line of work, the proper term is strategic foresight.
So I analyze quantitative and qualitative pieces of data, and I use that to build forecasts and scenarios about where the future may be headed.
So I cannot predict the future.
I think my life would look a lot different if I could,
but I can make scenarios as to where things may likely end up.
So you're making a distinction between forecasting and predicting.
Absolutely a distinction.
Okay.
So I can predict that the Leafs will almost surely end this series in ignominy,
but I couldn't...
There's a distinction between that and forecasting.
Right, yeah.
I think I got it now.
Okay, generative AI, we talked about it off the top.
You bullish or bearish on it?
I am cautiously optimistic.
How come? I think the potential of this technology to transform key areas that are important to society,
whether that's climate, whether that's healthcare, whether that's education,
whether that's productivity, is really high.
I think the caution is, can we steer the evolution of this technology right?
Can we make the right decisions about it now to set it up for the best chance of success
that we can? That's where the caution comes in. But I think if we can get this technology right,
it will be pretty remarkable. What are the threats that concern you?
Well, there's quite a few. I think misinformation and disinformation are probably among the biggest ones. And it's not just
generative AI on its own. This technology is entering into an already fractured information
ecosystem, largely driven by social media and the impact algorithmic ranking has had on our
society. So generative AI at the intersection of social media, we're looking at personalized misinformation and disinformation and taking these this rabbit hole of kind of fractured society that we have.
And it's going to become a little bit more individual, which is challenging.
I want to, Sheldon, can I pull an audible here?
Because we were going to get to mis- and disinformation later, but you've brought it up.
So here we are. OK, fake robocalls from President Biden.
This is, I guess, as good an example as any.
This was somebody did this in New Hampshire before the primaries telling people, don't bother voting.
You know, it's all but done. And you had a little something to say about this.
Mis- and disinformation.
Roll it if you would, Sheldon.
There's a deepfake audio of President Biden's voice
going around calling New Hampshire Democrats
and telling them not to vote on Tuesday.
Do you know the value of voting Democratic
when our votes count?
It's important that you save your vote
for the November election.
We'll need your help in electing Democrats
up and down the ticket.
Voting this Tuesday only enables the Republicans in their quest to reelect Donald Trump again.
The automated call begins with a term that apparently President Biden likes to use a lot.
I don't know about you, but I wouldn't be able to tell that that voice wasn't actually President Biden.
Right. That's the point, right? It wasn't him?
It was not him. It was confirmed that that was a deepfake audio of his voice.
But it's pretty damn convincing.
It is convincing, and that's only going to grow with time.
And you can couple that with video.
You can couple that with imagery.
And we're looking at entire interactive deepfake compositions.
And I think what becomes more challenging is, we were able to catch that because so many callers received the same call.
But you can imagine a world where each person gets a different, personalized call.
And it might even be more challenging if it's not a famous voice that people might talk about.
You might get a call from a local rep talking about something that you're passionate about,
something that has to do with politics or is politics-adjacent, but you're not as thrown off, as in, the president just called me, that seems a little bit funny.
You know, those of us in my generation throw our hands up in the air and think there is nothing we can do about this.
This is beyond our ability to control.
Now, you're younger than me, so maybe you're more optimistic about this.
What do you see?
I do think the next couple
years are going to be quite challenging when it comes to AI generated disinformation. So the bad
guys are going to be in the lead? Not necessarily in the lead. We have to be very, very diligent
about the information we come across in any sort of digital ecosystem, audio ecosystem. But I think
if we want to fix this problem, not only do we need to have more action in terms of accountability,
but we have to start thinking about bigger structural changes to the very infrastructure,
the digital infrastructure that we operate on. The internet we use today wasn't designed for a world with AI-generated fake content.
So we have to think more critically about authentication infrastructure that would allow us to trace the history and origin of content, to give us a leg up in this new information age.
I think that's what it's going to take to get us over this hump, a new digital infrastructure that we can coexist on,
because the one we have today, it's insufficient for the age of AI.
If we go to the kind of authentication that you are referring to,
does that get into privacy concerns that people will no doubt raise?
It can, but it doesn't have to.
So authentication could mean
we look more critically at moving towards
a blockchain infrastructure,
where people are still relatively anonymous,
but you can trace where content started.
Or even working with social media companies
to ask them to embed metadata
when a piece of content gets uploaded,
that gets traced throughout the internet
so we can see if it does cause harm.
The original post was by R2D2121,
with five followers.
That may lead us to conclude that this is probably AI,
a bot, or somebody or something trying to cause harm,
and not something that we should actually take seriously.
So it's going to take a consolidated effort
between multiple big players on the internet.
But we're going to have to think more critically about
these structural changes to our information ecosystem, if we do want to make it through the information
age with AI. It feels to me, though, admittedly sitting in this chair, that there is a fire hose
of this crap coming at us 24-7. And even if we did put the proper regulations in place, and even
if we did get the kind of authentication you're looking for, you'd never be able to keep up with the mischief makers. Fair?
It will be a bit of a cat-and-mouse game, but I don't think that's anything new. I think it's new with
AI as the tool of the cat-and-mouse game, but we've had these challenges with cybersecurity. I mean,
you can kind of pick the technology and pick the challenge, or any kind of dual-use technology,
and it has been a bit of this cat-and-mouse game.
So I think we can figure it out with AI,
but it will probably be a little bit messy
over these next two years,
which isn't ideal when we're looking at
one of the most significant election years
in global history.
Yeah, I mean, a big election in the United States,
an anticipated election in Canada
a year and a half from now,
and lots of mischief-makers out there who are determined to make sure that we are deceived along the way.
Does that suggest that we're going to have, well, I'd say it means we've got to be pretty bloody vigilant, but, you know, people are leading normal lives and they don't often have time or bandwidth for vigilance.
So what do we do?
We do have to be very vigilant. And I think education might be one of our most underutilized resources in combating misinformation,
disinformation, and building societies of digital resilience. So more akin to Finland or Estonia
that face persistent threats from some of their neighbors, persistent digital threats. We need to
build into our education systems the habit of thinking critically about why an
algorithm puts something in front of you versus somebody else. What is the narrative and the
perspective that account may have when they created that piece of information or that piece
of content? I think it's going to take an entire do-over as to how we approach information online,
but that's what's required. I don't want to keep playing this generation card, but I find it's
the case.
You're a digital native.
You grew up with all this.
You get all this stuff.
People in my generation are digital immigrants, I guess.
You know, we're coming late to it.
We're trying to figure it out.
In many cases, we're not equipped
to kind of understand these deep fakes
and this mis- and disinformation.
So, okay, I have hope for your generation.
What do you want to do about mine?
I mean, I think older generations have had challenges with the radio, challenges with TV,
and so I think there's some learnings we can take from history, but we're also going to have to
figure out ways to educate society more broadly. If something as serious as democracy is on the
line, which in many ways it is, because if we don't have shared truths, it becomes nearly impossible for democracy to function.
We have to meet society, every aspect and every member of society, where they are in terms of education
and building this next chapter of digital resilience in this AI age.
In which case, there was a, some would argue, significant step taken in the province of Ontario in that regard this past weekend, where the Minister of Education said he wanted smartphones
banned from classrooms. What do we think? I 100% agree that that is an important place to start.
The data show, and I think we all know and feel, that we are distracted by these devices in our pockets that have been engineered
with tons and tons of psychology research
to keep us on platforms,
whether it's apps on these devices
or checking these phones constantly.
So I think at the very least,
they need to be banned in classrooms.
But when you actually look at the data
as to smartphone bans,
the stricter the ban is,
the more effective the outcome is for children. So there was a study done in Norway recently that looked at 400 schools that had
different levels of smartphone bans. And they found that schools where, if you did bring a smartphone,
it had to be off the whole day or handed in at the beginning
of the school day had transformative outcomes
in terms of mental health for girls,
academic performance, and lower bullying
by 46% and 43% for girls and boys, respectively.
And you can draw a straight line between those two things.
You can draw a straight line.
But what becomes more interesting,
the classrooms where you just had to turn it off,
but you could have your phone,
it actually became counterproductive
because the kids couldn't wait to get to recess
to check all those notifications
or to quickly take a million bathroom breaks.
So I think the stricter the phone ban, the better.
And I think the more the data comes in,
it's going to show we're better off
giving kids a phone-free eight-hour, seven-hour day.
I think that that's probably going to turn out to
be the direction we go and the best case scenario for outcomes for kids. You know, in my experience,
though, it's not the kids who are the problems here. It is the parents. Because when the kids
come to the parents and say, I need a smartphone and I need an iPhone, whatever, are we up to 16 now
or something? Anyway, the parents should say no, but they say yes.
So how do you get this message through to parents that actually you just need to do some parenting here and not say yes to everything your kids want?
I mean, I think it's a tricky one, right?
It's the definition of a collective action problem where if just one or two families or one or two strict parents are the ones that say no,
you are what seems to be disadvantaging your child in the short term,
because that's where all their friends are.
I think that's why we need a bit more education and a bit more support from government
to help us understand the dangers of smartphones and the dangers of social media for young minds.
We've been here before. We've seen similar kind of fights with big tobacco,
with seat belts.
So I actually don't think it should
be on an individual family or parent
to recognize these harms and challenges on their own.
I think we need more societal education on it and government
to kind of step in and to help.
Because it is impacting mental health.
It is impacting educational outcomes.
Even among adults, we can feel we're distracted.
Imagine trying to sit in class all day
where you have something that's been engineered
to keep your attention away from whatever it is
that you should be doing.
So a 12-year-old who asks for a state-of-the-art iPhone,
a parent should simply say,
I am worried about your safety,
and for that, I will give you a flip phone, and you can deal with it for the next four years.
Is that the right response?
I don't know about a specific response.
I would say a flip phone, sure, I'm very much in favor of those.
For some kids, if you're walking to and from school, maybe a smart watch if you're able to get something like that.
But I think it is going to require a broader shift
for more parents and more students
to move away from smartphones.
It's hard if it's just one individual family.
And so I do sympathize with parents there.
But I think we're at the beginning
of that kind of big tobacco era fight
where eventually we come out on the other side
and we realize less smartphone use
for younger ages is much better.
So I do see us making it over the hump.
But for now, if you can delay it as long as possible, the better.
A flip phone, a smartwatch or none at all, I think.
Listen, in the interest of full disclosure, when my 12-year-old daughter came up to me and said, I need an iPhone, rose gold, please, you know what our response was? Yes. We're as bad as everybody else. Should
have said no, but, you know, all her friends had it. And so we thought, well, I guess we got to do
this too. Anyway, I'm not trying to be, I'm certainly no better than anybody else on this,
but you've given us some good advice here. I want to ask you about this lawsuit. You mentioned Big
Tobacco, and of course there were huge lawsuits against Big Tobacco,
and they paid off in the end and are still doing so.
Four school boards in the province of Ontario are suing Meta and the creators of Snapchat
and TikTok, alleging harm to students.
What do we think about this?
I think the suit is important in making it a public priority, the impact social
media has on mental health, on learning, on attention. I think I agreed with most of the
claims, I would say, in the suit that kids are distracted. They feel somewhat addicted to these
platforms. And we can trace that back to why, based on the technological design.
So this is no longer just kind of anecdotal or intuitive.
Social media platforms were designed to create habit-forming behavior.
You can actually trace it back to a particular class that happened at Stanford
that at the time was called
persuasive technology.
And so the developers of these platforms
learned how to make these technologies more addictive,
using variable reward systems, preying on our insecurities,
our need for social validation, all of these things
that somewhat hijack the dopamine reward system
and keep us coming back for more and more.
So we can see
where this is coming from. There's quite a causal line. So I think if teachers are the ones that are
having to deal with it and are noticing it, I think it's important that they raise this to public
attention. I'm not sure what the outcome will be in terms of the impact on tech companies themselves.
I think Meta's cash flow was $3 billion a week last year. But I
do think it helps that it isn't just happening in Canada. We're seeing these suits as well in the
U.S. So I do think the tide is turning. We can expect to see, again, pushback on the research,
trying to invalidate the science by these tech companies. But the data is there. And the more
time goes on, the more data we're going to see that social media
was not designed for the brains of kids. A system where you present yourself to a coliseum of the
public to be rated and evaluated in real time, where your brain is up against an AI-powered
supercomputer, is not a race kids' brains were designed to run in. So I think I'm glad that
we're taking this seriously now. Gotcha. Let's, okay, full circle here. We're going to go back to AI and other changes that you
not predict, but forecast. Vis-a-vis, let's talk about relationships. How will they be
influenced and or affected by artificial intelligence? Artificial intelligence and
relationships. This is an interesting one and it's an important one.
So, there's two lanes that are going to kind of transpire here.
On the one hand, more of us will start to have
AI powered digital assistants and
digital chatbots that help us in our daily life.
So, the relationships we have with our smartphones,
where they're kind of a portal to the digital world,
it will soon be like that with
digital chatbots and digital assistants. However, I do think it's important for regulators to really
pay attention as to the closeness and the emotional attachments people may start to form
with AI systems. These systems were designed to seem quite human, to make it seem like they are empathetic and to evoke certain emotion.
And the more emotionally involved and attached you are to something, the more vulnerable you are to it.
What was that movie?
Remember the movie with Joaquin Phoenix?
Her.
Was that what it was called?
Yes.
And we've seen the real-life version with Google last year when the Google engineer lost his job because he thought he was protecting a sentient AI. So you could imagine this at scale across a society,
people building these emotional dependencies on AI systems
that could be socially engineered
to impact human psychology at scale.
And in the hands of the wrong actor
or a developer that has bad intentions,
this could end up like another social media level disaster
in terms of psychological
manipulation. So I think if there's one place I'd look, it's there. We also have a bit of a
loneliness epidemic, right? So we don't want people fulfilling their loneliness challenges
with an AI. It seems like perfect timing. So I think regulators should zoom in there. So on
the one hand, they'll be really effective and supportive and helpful, these AI systems,
but it kind of treads a fine line,
especially when it comes to these emotional relationships.
And in many ways, I mean, this is the first time in history
where a non-human entity is going to be given the keys
to human language in a way that's indistinguishable
from humans themselves.
Just play it forward for me, for argument's sake.
What is the harm in a very lonely person
being emotionally dependent on an AI friend?
So I'd say emotionally supported by an AI friend
I think could be okay.
And I'll leave this to the Department of Psychology,
but there has been some research that does show
in times of severe loneliness, it can be helpful just to have something to talk to. It's the dependencies
and starting to prefer interacting with an AI instead of a human because it becomes,
it's much more frictionless. It's a very one-sided relationship.
It's easier.
That's going to make dealing with humans much more challenging. And so you could imagine that at scale, people starting to prefer AI over humans. I don't think that's a path that we want
to venture down. We forget how to relate to each other. We forget how to relate or we work against
our own best interests when it comes to dealing with humans. Right. Okay. Speaking of the word
work, how do you think AI will affect the kind of work we do in the future?
AI, so what's interesting is AI is a general-purpose technology. So right now, when we think of AI, we tend to think of ChatGPT and
chatbots. But this is really going to be a technology that will be as disruptive as the
printing press or electricity or the internet where it impacts every single aspect of work and
of society. So on the one hand, we'll see the
invention of a lot of new jobs that just don't exist today, and those are kind of hard to see.
So that will be interesting and exciting. I do think the trend lines of us starting to work a
little bit less will continue, and that's nothing new. That's happened since the 19th century as a
result of labor unions and
regulations, but also improvements in productivity.
So I do think it's possible that in a future where we're a little bit more productive, or perhaps
a lot more productive, as a result of AI and even just generative AI,
something like a four-day work week will actually be in the cards.
And this is why I think it's something that's important for workers to be aware of
and for voters to be aware of.
If AI can make us a lot more productive,
so I can do what I used to do in five days in four,
should the companies and capital interests
bank that productivity gain, or should I?
Should I get that day back, but I don't lose economically?
So that's one way to think about it.
And another way is to ensure that there is still a balance between supply and demand of work available and demand for those jobs.
So I think we need to start thinking a bit more critically about what a four-day work week could look like.
And we'll also start to move towards more flexible, gig-based work.
And that's because AI is going
to continually kind of disrupt aspects of work. So this idea that you learn, you work, you retire,
that's probably going to go away. And it will be a lot of learning and working and kind of
changing where you work, what you do at work, because AI will continue to encourage us or
force us to bring new skills to the table.
So it will be a bit more volatile and
the fabric of the workforce will be
one that's a bit more flexible.
But I think if we can take
the hindsight of what happened in
the Industrial Revolution where people weren't
protected and make sure we don't do that again,
and we make sure everybody has access to the skills to
reach for the newer jobs of
the future that haven't been invented yet,
and they're protected in times of transition, we should be able to make it through.
And we have enough time to make sure that that happens.
We just have to make the right decisions about it.
Can I ask a very weird last question?
Yes.
How'd you get so smart about all this stuff?
Like, this is not, I don't think they teach courses in this in university, do they?
So where'd you get into, like, explain.
I read a lot.
I read a lot, but this is also my field of work.
And in fact, I think we all are futurists at heart in some ways.
I think most of us are rooting for versions of the future that we want to see.
And I happen to do it for a living, but I think most people are in some ways practicing
futurists. And I do think the more of us that lean into the future and that try to shape it,
the more likely that future is going to work for more of us. And I think that that's really
important. Hope you're right. Sinead Bovell, it's great to meet you. Thanks for coming into TVO
tonight. Thanks for having me.
The Agenda with Steve Paikin is made possible through generous philanthropic contributions from viewers like you.
Thank you for supporting TVO's journalism.