What Now? with Trevor Noah - Why AI Won’t Destroy Us with Microsoft’s Brad Smith [VIDEO]
Episode Date: May 23, 2024. Trevor puts on a suit (no, he's not returning to The Daily Show) and heads to Microsoft. This week Brad Smith, Vice Chairman and President of Microsoft, talks AI, explains why he doesn't believe it will be the end of humankind, and what we all have to do to keep it that way. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
If we think about that scaling continuously and growing and growing and growing, you can
get to a place where even if you're optimistic, AI is fundamentally doing everything for us.
Thinking of it in the short term and thinking of it in the long term, how do you see it
philosophically beyond just a product?
Philosophically, I have always wanted it to be a part of something that advances technology
and uses it to make people better.
Now, 100 years, 500 years from now,
people may look back and say,
wow, this guy, Brad Smith,
he was like a real backwards thinker.
He didn't have the vision to see
that we were creating a new species
that would replace humanity.
I don't want to replace humanity.
This is What Now with Trevor Noah.
Brad Smith, welcome to the podcast.
It's great as always to be with you, Trevor.
This is really a fascinating and fun experience for me because I didn't plan it, which like
most of my favorite things in life is how I like things to be.
I planned on being here with you, you know, at the CEO Summit here at Microsoft in Seattle.
That's why I'm wearing a suit.
This is not my usual attire. But when I knew
I was going to be spending the week here, I thought to myself, man, if there's one person
I'd love to have on the podcast, it's somebody I often have the most fascinating conversations
with, and that's Brad Smith. Not just the vice chairman and president of Microsoft, but a thinker. I really
appreciate you as a deep thinker and you and I have been,
I mean, I've known you for how many years has it been now?
Since October 2016.
Yeah, look at that 2016. Wow, what a different time that was.
Yeah, but we've known each other for a long time and you've been gracious enough,
you know, to take me around the world of Microsoft. And I've loved it, not even in a commercial way,
by the way, just as like a lover of tech, you love tech, I love tech,
you love the world, I love the world
and trying to think about it.
And I thought I would love to have you on
to speak about everything that we're looking at today,
you know, AI, geopolitics, elections,
misinformation, disinformation,
really everything that affects everybody
in the world right now,
which is unique to have one person who has to deal with all of that. And I think that's what you do
in your role at Microsoft. So thank you for taking the time. Thank you for being here.
My gosh. You and I have had so many fascinating opportunities now. You helped invent and literally
patent new products, among many other things. It's been a while. I need a new patent. I haven't
patented something in a while.
Well, the one thing I can tell you, unfortunately,
is you can't get a new patent
unless you have a new invention.
Yeah, that's the hardest part.
That's exactly the hardest part.
So, you know, to situate you first of all,
I was trying to explain this to a friend.
I said, I'm gonna be having a conversation with Brad Smith.
And he's like, Brad Pitt?
I said, no, Brad Smith.
I said, no, Brad Smith, Brad Smith.
It happens so many times.
I said, no, Brad Smith from Microsoft.
I said, the vice chair and the president of Microsoft.
And he said, well, I thought Satya Nadella is the president.
No, Satya is the CEO.
This man is the president.
I love that you have the title president of Microsoft.
Cause I believe in many ways you have the job that many presidents in the
world have and that is you travel around.
You're on the road how many days a year?
Oh, probably 120, I would guess.
120, like that's even to me as a touring comedian, that's impressive.
I know, it's like it's a lot.
120 days a year, you're on the road.
You have the role that many world leaders have, and that role is trying to grow the
organization or the team that you represent, whilst also trying to add value to other people.
As a leader, how are you stitching together something that could so easily fall apart
because it so often does? What do you think it is you understand about communicating and deal making
that maybe some world leaders don't understand right now?
I think you really hit the nail on the head, Trevor. The key to permission to invest is to ensure
that we're doing it in a way that genuinely benefits
somebody else, and not just superficially,
or apparently does.
And when I just think about, call it this AI moment,
I'm just so struck by two things.
One is what never happened for electricity.
Electricity is like AI.
You can't have electricity without a massive investment
in power plants and a grid.
And I just think it's a really sobering thing
that here we are, it's literally 142 years
after the first power plant lit up the first
building in Manhattan, and this morning there are still 700 million people in the world
that don't have electricity.
43% of the people who live in Africa.
And I often look at that and go, how can this be?
What happened? And the answer is it was enormously expensive.
Capital didn't flow around the world.
Ironically, when some of the colonial powers
became colonial powers, they built railway lines
to extract mineral wealth from these places,
but never built the power plants that would
genuinely create the basis for prosperity for everybody.
So to have a company or really a group of companies that is prepared to spend the money
to do what no one ever did, that's a fantastic thing.
Now the flip side, people don't necessarily want to rely on a foreigner.
Last year, literally in France, I had a meeting.
We were talking with the government minister, and he said, I am worried that I'll be too
dependent on you.
And I said, well, just remember, we're going to build these massive buildings.
We're not building them on wheels.
So once they're built, they're in France and they will be subject to French law and French
regulation.
We have to win your trust and we have to show that we are going to follow French law and
it's a new equation.
In a world where so many people think business and geopolitics and politics in
general is a zero-sum game, you seem to be an outlier in that field. You seem to be forging
relationships. You seem to be building bridges. You seem to be creating diplomatic ties where they
may not necessarily exist. It feels like Europe is on the edge. It feels like many parts of Africa are on the edge. You know, you see
how many coups or, you know, civil wars we're facing. The Middle East is on the
edge. You're in a position where you are oftentimes speaking to world leaders who
themselves do not necessarily have a good relationship with the country that
you've just come from, and you have to make a deal with them, and you have to
agree with them, and you have to find consensus. What do you think some of our leaders might be missing right now in
the way they conduct their politics? Like we don't seem to be very good at sitting down at
tables anymore. You know it seems like we're very quick to get to the conflicts and maybe I have
confirmation bias in my own memory but I remember as a child watching the news there were always
these conferences and there were always these summits and there were these peace talks and negotiations.
People always at tables.
What is it that seems to be lost in the world of diplomacy?
There are some real serious differences in the world and I won't, I'll never go as far
as to say everybody is equally good,
they just have misunderstandings. And at the same time, I think that if people don't
sit down and spend more time talking, they may miss the opportunities to find common ground.
One of the interesting things that I have found in the last year is if there is a unifying idea, it's around artificial intelligence. No country or people or set
of leaders anywhere that I have found wants to see humans subjugated by machines. No military
leader I've met wants to see a machine start a war. And the thing that I think that should remind us of
is on a daily basis, it's so easy to focus on what makes
you different from someone else.
And if you look at the history of the United States,
enormous discord, disagreement in, say, 1940, 1941,
and then all of a sudden the country came together
when it was attacked at Pearl
Harbor and it had a common foe.
I don't think we should think about AI as a common foe.
But the fact that it's different is creating an opportunity for us to remember, you know
what, we are all human beings.
We all have these things in common.
Maybe we should spend a little bit more time remembering that, whether
we live in a single community or a country, or hey, let's remember, it's a pretty small
planet when you get down to it.
But you do need to bring people together and you need to show them something that's different
from themselves sometimes for them to see that.
It's funny though, when you say the thing about AI, I also think it's interesting how human beings
are a lot more clear eyed on what they perceive good or bad to be when it comes from an external actor, you know?
So if you say to people, should we start wars?
They go, well, you know, the thing with war and the thing, but if you say, should
a machine be allowed to start a war?
People go, no.
Yeah.
Immediately.
It's amazing how clear-eyed people will be when you take it away from the human, you know,
and you say, okay, can a robot decide who gets money and who doesn't?
Then you're like, no, no, that's crazy.
Then you're like, but then why should a person decide that?
Well, they're like, well, that's different.
And I don't know, I find it interesting because AI, for me personally,
is illuminating the human experience. You know, on the podcast, for instance, we spoke to Sam
Altman, and this was right after he got like fired and then rehired, and it was the whole
debacle at OpenAI that affected Microsoft and everyone really. But you know, Sam speaks about
AI from like a pioneer's perspective.
Sam thinks about the long-term future, this idea of what it can be,
what it can do, and the utopia.
And I think, you know, that's necessary for somebody like that who's building it.
You know, in your book, Tools and Weapons, that you co-authored with Carol Ann Browne,
you talk about the fact that it's not black and white. And I really
appreciate that point of view, because many people will say AI good, AI bad. But what you argue is,
like dynamite that was used to clear paths for cars and roads to be built and then also used to blow up, you know, people's homes or to wage wars.
All technology is a weapon and a tool. Why is it important for us to think about AI as a weapon
and a tool or any tech really? Well, I think it is because any tech,
one of the examples we use in our book is a simple one. Like you can use a broom to sweep the floor.
You can use a broom to hit somebody over the head.
My mom did both.
And it's so sobering when you see that humans are just
as ingenious and creative in using technology to
do bad things as good things.
Yeah.
And this does go, I think, to the role that I believe the tech sector is playing better
than it did five years ago and needs to continue to get better, substantially better, I would say,
five years from now. Hey, we need to worry about both of these. Let's be excited about all the good
things. And I am fundamentally an optimist about all the good things that can be done.
But oh my gosh, if you don't anticipate, if you don't build in guardrails,
if you don't use technology to defend against the abuses of technology, it's weaponization.
That is what happened with social media, to be honest, in my view.
Facebook did not weaponize social media,
but the Russians did.
Right, okay.
And because people on the West Coast of the United States
were so idealistic that they didn't perceive that happening,
we were not prepared as an industry.
What I find fascinating is I think the healthiest
and most successful societies are fundamentally
a three-legged stool.
There is the government or the public sector,
there is business or the private sector,
and there is the nonprofit or NGO or civil society.
There's three legs.
And this is where governments,
they absolutely should be pushing and regulating.
That's their job.
It's where NGOs need to be keeping us all honest by criticizing us and then offering
suggestions.
But the more powerful the technology, the more formidable the weapon.
And we got to think about both.
People in business say government's too powerful.
People in government or the nonprofit community say no, business is too powerful. And unfortunately, I think
that the most politically astute social scientists in the world sometimes work for the Russian
government and they spend an enormous amount of energy trying to sow dissension between us.
Let's talk a little bit about the Russia thing. Right before the war in Ukraine started,
I remember Microsoft was one of the first to issue a warning to the world to say,
hey, we think Russia is about to invade Ukraine. And you based this on a wide range of information.
Help me understand this.
You know, you are seeing how Russia is trying to either sow dissent in the world or create
and disseminate misinformation in the world.
And from a US perspective, it seems pretty clear.
It's like this is what Russia is doing.
But then in many parts of Africa and in parts of South America, there's a very different opinion on Russia. They go, well,
Russia is trying to help us grow crops or Russia is trying to help us with our power
plant technology or Russia is trying to help us with our science and the rest of the Western
world isn't helping with that. Is that split real or are we just perceiving the same thing differently?
I think most of Europe is united because they do run what we call a cyber influence
operation, a disinformation effort on a global scale.
The key to misleading the public is to tell a story that just might be true.
If it's too fanciful, it will be rejected because people will listen to it and go, no, that's crazy.
You need to understand the people that you're
trying to impact.
And you then need to be creative.
You need to weave a tale, if you will, that is just
plausible enough.
And then you use technology to get it going.
Sometimes now with deep fakes, but more often not.
And then you use social media to fan the flames.
Um, you know, one of the more sobering things, if you
will, um, that I remember someone sharing in recent
months was somebody who had talked with Navalny
when he was in Germany for a time.
And he said, you have to remember they're not trying
to persuade people
that Vladimir Putin is trustworthy,
they're trying to persuade people
that no one is trustworthy.
There's one thing I wanna go back to, I guess,
in the social media space,
TikTok is on everybody's lips right now.
I speak to a lot of young people who say,
why is TikTok being banned?
And I know there's a large community of young people
who say like TikTok seems so positive.
It seems like a space where there's varying opinions.
There's niche as well,
which is novel for a social media platform at that scale.
And then there are people who work in government
who say, no, this technology is the enemy of the United States.
The Chinese government needs to divest from it
in any way whatsoever.
You've been in a unique position in that,
like I remember it was reported that, you know,
Microsoft was one of the companies that was sort of asked
to look at buying TikTok
when it was in the Trump administration.
So how do you look at this situation?
Because it seems a lot more complicated
than we would like on the face of it.
And then also, is there a world where this thing
can continue to exist in a more positive way,
or is it even existing in a negative way,
or is that just how it's been spun?
Well, first of all, I think it would be a shame
if TikTok were to go away.
Anything that 100 or 200 million Americans decide to use, you want to respect their basic
ability to keep doing something that they've chosen to do.
I think it would be a shame if it were to go away because there would be less competition
in the marketplace.
What I would say is in a way that sort of makes it a little simpler, to be honest, the
fundamental issue today is that the Chinese government is not comfortable with American
technology services for the Chinese consuming public.
That's why you don't see Facebook in China, you don't see Instagram,
you don't see X or Twitter, you barely even see LinkedIn. The US government has
now taken the same position. It is not comfortable with Chinese technology
providing a consumer service to such a large part of the population. The thing
that is different is whereas the Chinese, in my opinion, at the governmental
level, basically took action as soon as they saw American services start to grow,
the US government was slower, and so TikTok became extremely popular.
And even under the law that Congress passed and the president signed,
banning it could happen, but really what they're trying to do is require the sale of it.
Now, then you get to the second question, which is, should governments care about these things?
What I've found and have always found most striking about TikTok is that the debate started around privacy.
And I think it is relatively feasible to protect the privacy of people's data, even when the
service is controlled by a foreign company.
But the risk of use for, call it, disinformation, cyber influence operations,
that's where I think that the US government
has now decided that it's just not comfortable
having a tool that can reach so many people so quickly,
and just use an algorithm to determine what people see next,
become a potential engine of disinformation.
So what is the answer? We'll let the courts figure out whether everything was done properly,
etc. But I think ultimately if TikTok needs to be sold, you'd want it to be sold to probably companies and brands that the public trusts.
That would enable the service to continue
in all the ways that people currently value it.
And it could address the concern that emerged
in the United States at the governmental level
at the same time.
We're gonna continue this conversation
right after this short break.
You're in a unique position as this giant company that is tasked with observing technology,
observing information, looking at what's happening in a space that sort of doesn't exist and yet
is ubiquitous.
You are also based everywhere and nowhere.
And one of the complexities that has now arisen with many companies, specifically American
companies is where does the loyalty of the company lie?
Because I think it applies to Microsoft.
But I think honestly, it's something that every company and maybe even many countries
are going to need to start thinking about and that is, how do we find the balance between
what we think is right and wrong and what somebody else thinks is right or wrong when
we are in their domain?
I think we need to be a principled company.
It really, in my view, especially on these issues of sort of war and peace, and that's
fundamentally what we're talking about with these cyber attacks and the like.
Yeah, there are a couple of principles.
I mean, one is we work to protect countries defensively.
We don't engage in offensive operations.
You know, there are other companies that do that.
I respect that, but that's not us.
I don't think it works to do what we do and be engaged in
offensive activities. Second, maybe most importantly, we believe in the protection
of civilians. I mean, I think that's a global principle. It's a universal
principle. It was one of the most important ideas to emerge from World War
II. The whole world came together in 1949 in Geneva, Switzerland, in what was called the fourth version of the Geneva Convention.
And I feel very comfortable saying that we stand up to protect civilians,
whether they're French or American or South African or Kenyan or in other places.
And I think that principle and other similar principles
are ways that we can sort of synthesize
or unify the role we play everywhere.
When you talk about protecting people,
America's getting ready for another election.
In fact, this year, the world is getting ready
for more elections than it has in a very long time.
And some of the biggest elections from India to South Africa to, you know, the United States,
etc. etc. The world is, we're in a moment of tectonic shifts, and we don't know which way
the plates will move and how those plates will affect everybody on the globe. But there's a
consensus. People agree that social media is a tool that is powerful enough to shift or shape how people think,
almost against their will or without them willingly knowing
that their view is being shifted.
When you talk to social media companies about this,
they'll say, it's not a big impact,
we don't have that much of an impact,
and it's also that the Russians aren't really doing much
and there isn't much there in person.
But when you talk to Microsoft, you say, no, no, it is.
Why do you think there's such a difference in how you're seeing the problem between,
let's say, yourself and social media companies, tech companies?
Yeah, everybody, I'll just say most people get up in the morning and they go to work
and they feel good about what they're doing.
So when there's a suggestion that what they're doing is not so good, it's hard to get your
mind around it.
I think that's an easy problem for any of us to have.
I think that the concerns around social media have grown over time.
In a way, it's almost startling because there was a time when people thought it was going to be
the savior of democracy.
It was going to bring information to everyone.
Everyone would be a publisher.
It would be the great equalizer.
Yeah.
We were almost euphoric and it's worth remembering
that as we think about AI, you can start out being
so euphoric that you miss the dangers that
technology may be creating.
Yeah, as time has gone by, especially the last two years,
there's been this interesting and maybe even odd development
because I see a lot of people, I meet a lot of people in government
and they say, we are not going to make the same mistake we made with social media,
we are going to regulate AI.
And I'm like, I get that. But if we made mistakes with social media,
are we going to go fix them or are we just going to go to the next thing?
Because I think these issues are still very much with us. I do think that the
real solution to a lot of the concerns that people have about say social media
require bringing tech companies
and governments and nonprofits together in multi-stakeholder action.
We need to be willing to work with each other.
And yeah, some of the debates have been bruising in recent years.
It seems to be a little bit harder to get people into the same room.
And I think that's something we have to keep working to overcome.
Getting people in the same room seems to be an art form and an idea that is forgotten or maybe ignored.
And ironically, we want to get people in the same room in a world where technology is keeping people in their own rooms.
How do you look at that on a philosophical level?
Well, I think that is a really fascinating and important aspect of all of this.
There's two ways one can look at social media and have some concern.
One is that it fans the flames of discord or unhealthy comparisons based on what people
see. But the second is it just leads to more time spent
doing things other than interacting with other people,
including in the same room.
And I think both of these things have come together in a way that makes some of
our challenges societally more pronounced.
But the other thing that's interesting,
this has
been the story of technology for a hundred years.
The automobile connected people that were far apart.
You could drive 15 miles and be with people that
frankly before the automobile, you
couldn't really go see.
Right.
But the moment people could leave a small town,
the ties within the town started to weaken.
People didn't spend as much time with each other.
And, you know, with each successive generation
of technology, the telephone did the same thing.
When I was a kid in the middle of the United
States, uh, you know, in the 1970s, my parents would complain that, you know, my sister or
brother and I were spending too much time in the
evening talking with friends.
We had to argue over who, you know, we only had one
phone line and it was fixed.
You know, but, you know, that was time that
separated the family.
Now, to me, the iconic image of, call it, life in most of the world the last 10 years is three family members sitting on a couch,
each person absorbed with their own screen and their own phone.
So we have to pull each other away from the technology to get each other in the room and counter the force of technology.
But then there's a second aspect.
Social media has made it so easy
to find people you agree with
that I think it makes it a little bit harder
to get people comfortable spending time
with others they disagree with.
Yeah.
And yet if you can't sit down with people that
you disagree with, I think fundamentally your worldview
gets smaller, not larger.
You don't build the bridges that are needed to go solve
big problems and act with great ambition.
So for all my love of technology, having spent 31 years at Microsoft, I equally feel
the limits of it.
And it's why I've always been so committed to just getting people to like listen to each
other and talk with each other.
And it's okay that people may say something that's critical of you. Just have a little thick skin and you're gonna learn something.
Don't go anywhere, because we've got more What Now after this.
So let's talk about AI. It's interesting to me how, like, every space I
go into, everyone is looking at this giant
orb from a different side, but the orb is there.
It is floating ominously above the earth and it is AI.
So let's start with the weapon side of AI.
Let's start with the scary side, the side that keeps you up at night.
What are some of the biggest things you think we need to be looking out for as we embark on what could be, you know, by various accounts, the
biggest jump in human technology in, you know, hundreds of years, if not ever?
I think the two problems that I worry about the most, in a sort of real-world, this-decade kind of sense,
are, number one, people who are doing horrible things today will use AI to do them even more
horribly in the future.
And then the second is it could, if not deployed well, end up exacerbating social divides
that already exist.
You think about a bully in a middle school,
12 year old.
You think about people trying to defraud
senior citizens of their money.
You think about people trying to impact elections
and undermine democracy from within.
They will and even are using AI to do all of those things.
Unfortunately, what you have to be willing and able to do is if you want to fight criminal activity, you got to think like a criminal so that you can anticipate it.
And then you put in place the technology, both to make it harder to use legitimate
tools, but fundamentally to detect it, respond, and as much as possible defeat it.
And, you know, even though the conversation in 2024 is about deep fakes and elections,
you know, it's only a matter of time before you read a story somewhere that there's a 75-year-old grandparent who wired money because they got a call and it sounded like their granddaughter was in trouble.
I mean we're already seeing some of those.
I think there was one I read in Miami.
You're already seeing those stories.
And that's just so unfortunate, but that's this dark side of human nature.
So we need to combat all of those kinds of things.
But is there a way to combat it?
Like, you know, for someone listening,
they go, if it is a deep fake,
if it's a video that looks like somebody,
it's a voice that sounds like somebody,
how do you combat that?
Isn't the genie out of the bottle?
I will always argue that there is a way to combat it.
Not with 100% success, not as a
panacea.
That's just not the way the world works.
But if you are prepared to invest in protecting people and defending communities and countries
from these kinds of abuses, you can get a lot done.
And so already, you have to broaden the strategy,
which is why we use AI to detect AI.
And that's key.
And you do a lot of public education.
If you get an email from someone that you don't know
telling you that if you send them your bank account details
they'll wire you a million dollars,
most people are like, yeah, I've heard of this before.
I'm not doing that.
And then fundamentally just continue to remember
just because it's on the internet doesn't mean it's true.
But then the other side of things is the,
say the divides that it can widen.
And the biggest divide it can widen is just the division between
technology haves and have nots. We still live in a world where there are roughly three billion
people that don't have access to the internet. People even say in a single country like the
United States, where if you're underprivileged in the middle of a city or underprivileged
in a rural community, you may not have access to the internet, you may not have access to
a computer.
If you don't have access to the internet or a computer, it's going to be hard to use AI.
But it's like everything.
If you worry about a problem, you're actually likely to do at least something useful to
solve it.
It's when you don't think about the problem that by definition, you'll do nothing to help
address it.
When you look at the upside, it seems like, and maybe it's because I'm an optimist, it
seems like the potential is scarily infinite, you know, in healthcare, in education, in equitable access to information.
It seems like AI is deflationary and it seems like a tool
that can be diffused in a way that few technologies
ever have been.
Is this how you see it as well?
And what have you seen that makes you most excited
when you look at AI?
Well, you take something like healthcare
and then you have to keep in mind
that there's so many people today
that don't have access to a doctor.
Yeah.
This is just so important in bringing
many healthcare-related advances.
It will accelerate drug discovery.
So I think that is just one of the kinds of examples
where we should all be so enthusiastic.
I think there's a second thing which is so interesting.
The barrier to entry for people in doing hard things
is actually quite high and it's true for almost everybody
in at least some space.
I mean, I may be good at reading or writing,
but not at math or I can't code.
Um, well with AI, you can ask for help.
You can actually get to a point in the very near future where if you can conceive of it, you can ask for help to actually go do it without having to know how to do it all yourself.
And I think that's going to be a huge, uh, you know, game changer in just making it possible for people to do more things.
And at the end of the day, you, even more than me, have had the opportunity to meet so many interesting, successful people in so many walks of life in so many parts of the world. And I sometimes think to myself, what is the trait that you see most often in people who
have become hugely successful?
And I think it's curiosity.
Yeah, I agree.
And so I think AI is the best thing invented for curious people, and hopefully it will
help other people
become more curious.
When you look at it philosophically,
beyond an actual tool,
AI has the opportunity to fundamentally shift,
like any industrial revolution,
what and how humans perceive their value,
their purpose, and what they consider work.
You know, and it's interesting, you know, when you're talking about doctors and medicine, et cetera,
it stands to reason that AI can get to the point where it will be doing all the thinking parts of medicine,
but we'll still need people to be doing the physical parts until robotics maybe gets to the point
where it can also do that with more accuracy and no fatigue etc.
So then the question becomes on a philosophical level as somebody who thinks quite a lot,
where does that leave us?
Well, yeah, I've for two decades been the person who is in the senior ranks at Microsoft
who sits every week with engineers but didn't get a degree in engineering or
computer science myself.
I came from the liberal arts side.
And so philosophically, I have always wanted to be a part of something that advances technology
and uses it to make people better, to make humanity more successful.
Our whole mission as a company, it's not about technology for its own sake.
It's create technology that empowers other people or organizations so they can do something
that they couldn't do before.
That's the philosophy.
I don't want to create a future where people cannot dream new ideas that a machine's already
thought of.
And I'm not fundamentally worried about it because I just think that people who think
that a machine can do everything that a person can, I think they're underestimating people.
And that spark of creativity that people can have.
So that's sort of philosophically
where I will probably always be, that's
the company I want to be part of that's creating technology that
makes people better, doesn't replace people,
doesn't leave people with nothing to do
except go to the beach.
Cause I appreciate that it's nice to go to the beach,
but when you're on day 700 of going to the beach,
I just can't believe it's as much fun as on day three.
I think that's true for everything.
Well, Brad, I could speak to you for hours.
Oftentimes I do, but I know whenever I'm taking your time, there's a president somewhere in the world
who's wondering.
No, I doubt that.
Where you are.
No, this is true.
This is true.
So to whichever president I've just kept you from, I apologize.
But thank you for spending the time with us.
I appreciate the way you think.
I highly recommend people read the book because I do think it's an even-handed, optimistic
but still cautious look at how we see the world and technology.
So thank you so much again, my friend.
Okay, thank you.
What Now with Trevor Noah is produced by Spotify Studios in partnership with Day Zero Productions
and Fullwell 73.
The show is executive produced by Trevor Noah, Ben Winston, Sanaz Yamin and Jodie Avigan.
Our senior producer is Jess Hackl. Marina Henke is our producer.
Music, mixing and mastering by Hannes Braun.
Thank you so much for listening. Join me next Thursday for another episode of What Now?