Young and Profiting with Hala Taha - Peter Norvig: Simple Ways to Grow Your Business with AI | E307
Episode Date: September 9, 2024

Peter Norvig was an AI hipster before it was cool. Curious about getting computers to understand English, he went to his teachers, but they admitted that it was beyond their abilities. Undeterred, Peter dove headlong into the complex but exciting world of AI on his own. Today, he is recognized as a key figure in the advancement of modern AI technologies. In this episode, Peter unpacks the evolution of AI and how it’s shaping our world. He also offers practical advice for entrepreneurs looking to leverage AI in their businesses.

Peter Norvig is a leading AI expert, Stanford Fellow, and former Director of Research at Google, where he oversaw the development of transformative AI technologies. His contributions to AI and technology have earned him numerous accolades, including the NASA Exceptional Achievement Medal and the Berkeley Engineering Innovation Award.

In this episode, Hala and Peter will discuss:
- Peter’s transition from academia to the corporate world
- How AI is changing the way we live and work
- Practical ways entrepreneurs can leverage AI right now
- How AI is making learning more personalized
- Tips to stay competitive in an AI-driven market
- How AI can bridge skill gaps in the workforce
- Why we must maintain human control over AI
- The impact of automation on income inequality
- Why AI will generate more solopreneurs
- Ethical considerations of AI in society
- And other topics…

Peter Norvig is a computer scientist and a leading expert in artificial intelligence. He is a Fellow at Stanford's Human-Centered AI Institute and a researcher at Google Inc. As Google's Director of Research, Peter oversaw the evolution of search algorithms and built teams focused on groundbreaking advancements in machine translation, speech recognition, and computer vision. Earlier in his career, he led a team at NASA Ames that created autonomous software, which was a precursor to the Mars rovers. Also an influential educator, Peter co-authored the widely used textbook, Artificial Intelligence: A Modern Approach, which is taught in over 1,500 universities worldwide. His contributions to AI and technology have earned him numerous accolades, including the NASA Exceptional Achievement Medal and the Berkeley Engineering Innovation Award.

Connect With Peter:
Peter’s Profile: https://hai.stanford.edu/people/peter-norvig
Peter’s LinkedIn: https://www.linkedin.com/in/pnorvig/
Peter’s Facebook: https://www.facebook.com/peter.norvig

Resources Mentioned:
Peter’s Book, Artificial Intelligence: A Modern Approach: https://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd/dp/0136042597
Google AI Principles: https://ai.google/responsibility/principles/

LinkedIn Secrets Masterclass, Have Job Security For Life: Use code ‘podcast’ for 30% off at yapmedia.io/course.

Sponsored By:
Shopify - Sign up for a one-dollar-per-month trial period at youngandprofiting.co/shopify
Indeed - Get a $75 job credit at indeed.com/profiting
Found - Try Found for FREE at found.com/YAP
Rakuten - Start all your shopping at rakuten.com or get the Rakuten app to start saving today, your Cash Back really adds up!
Mint Mobile - To get a new 3-month premium wireless plan for just 15 bucks a month, go to mintmobile.com/profiting.
Connecteam - Enjoy a 14-day free trial with no credit card needed. Open an account today at Connecteam.com
Working Genius - Get 20% off the $25 Working Genius assessment at WorkingGenius.com with code PROFITING at checkout

Top Deals of the Week: https://youngandprofiting.com/deals/

More About Young and Profiting
Download Transcripts - youngandprofiting.com
Get Sponsorship Deals - youngandprofiting.com/sponsorships
Leave a Review - ratethispodcast.com/yap
Watch Videos - youtube.com/c/YoungandProfiting

Follow Hala Taha
LinkedIn - linkedin.com/in/htaha/
Instagram - instagram.com/yapwithhala/
TikTok - tiktok.com/@yapwithhala
Twitter - twitter.com/yapwithhala

Learn more about YAP Media's Services - yapmedia.io/
Transcript
Today's episode is sponsored in part by Mint Mobile, Working Genius, Rakuten, Connect Team,
Found, Shopify, and Indeed.
Save big on wireless with Mint Mobile.
Get your new 3-month premium wireless plan for just $15 a month at mintmobile.com slash
profiting.
Unlock your team's potential and boost productivity with Working Genius.
Get 20% off the $25 Working Genius assessment at workinggenius.com
with code profiting at checkout. Get cash back on every purchase with Rakuten, the smarter way to
shop and save. Start all your shopping trips at rakuten.com or get the Rakuten app to start saving
today. Connect Team is a mobile phone employee management app that helps you manage non-desk employees.
Open up an account at connectteam.com
and enjoy 14 days free, no credit card needed.
Found gives you banking, invoicing,
and bookkeeping all in one place
and was created for busy entrepreneurs.
Try Found for free at found.com slash profiting.
Shopify is the global commerce platform that helps you grow your business. As always, you can find all of our incredible deals in the show notes.
I don't want technology that makes me disappear.
I want technology that respects me.
I'd rather have it be two-dimensional and let me choose how much the machine is going
to be doing and how much I'm going to keep control.
How do you feel AI stacks up right now against the human brain as a tool?
We don't want a tool that replaces a human.
We want a tool that fills in the missing pieces.
We've seen some encouraging research that says,
AI right now does alleviate inequality.
So do you feel like AI is going to
generate a lot more entrepreneurs and
solopreneurs in the future?
Absolutely.
["Souls of the Future"]
Yeah, fam, welcome back to the show.
This year I've probably released about half a dozen AI episodes, and today I'm releasing
another AI episode, and this time it's going to be about human-centered AI.
Now, human-centered AI is sort of a new conversation that's happening in the AI world.
In the past, it was all about what is AI,
how are these algorithms created,
how are we enhancing these algorithms and tools.
And now, AI is working really well.
And the conversation has now shifted to human-centered AI.
How do we ensure that the AI has the utility
that we want as humans?
How do we make sure that it's safe?
How do we make sure that AI is inclusive?
How do we make sure that AI is not gonna take people's jobs?
So now it's really focusing towards
how do we use AI to optimize humanity,
not how we're optimizing algorithms.
I love this topic.
I think it's super important.
AI is a scary thing.
A lot of us feel nervous and anxious about
it. A lot of us feel excited about it. And I can't wait to speak with Peter Norvig about this topic
today. He is a fellow at Stanford's Human-Centered AI Institute. He also formerly worked at Google
and NASA. And he's literally written a textbook on AI, and he's been writing about the ethics
of data science and AI since the 90s.
So without further ado,
here's my human-centered AI conversation with Peter Norvig.
Peter, welcome to Young and Profiting Podcast.
Great to be here.
Thanks for having me.
I'm really looking forward to this conversation.
I love talking about AI,
and I can't wait to pick your brain on that topic. But first, I want to talk a little bit about your career journey.
I learned that you worked at some awesome companies like NASA,
you actually worked at Google,
but it turns out you started in academia.
So I'm curious to understand,
why did you decide to transition from academia to the corporate world?
So I've been in a lot of places.
I'm an AI hipster.
I was doing it before it was cool.
Got interested in it as a subject in the 1980s.
And at that time, really the only way to pursue it
was through academics.
So got my PhD.
And it was the assumption back then that you get a PhD,
you're gonna go be a professor.
There was much less back and forth between academics and industry than there is today.
So that's the path I took.
But then I started to realize we didn't quite have the word big data back then.
But I saw that that's the way things were going.
And I saw as a young assistant professor, I couldn't get the resources I needed.
You could write a grant proposal,
get a little bit of money,
get a couple of computers and a couple of grad students,
but I really couldn't get the resources
to do the kind of big projects I wanted to do.
And industry was the only way to do that.
So I set out on that path.
I love that.
It's so funny that you say you were doing AI before people knew
it was a thing. For me, it was surprising because I feel like we hear about AI so much, but it turns
out that AI has been a thing for decades. Can you talk to us about when you first discovered AI and
how long ago that was? So it's definitely been here right from the start. Alan Turing, one of the founders of the field, writing about it in
1950, foreseeing the chatbots that we have today. But of course, we didn't know how to build them
back then. But it was definitely part of the vision of where we might go. So I guess I got
interested. I was lucky that I had a high school that at that time had a computer class and also
had a class in linguistics.
And I took those two classes and talked to the teachers in the classes and said, hey,
seems like there's some overlap between those two.
Can we get computers to understand English?
And they said, yeah, that's a great subject, but we can't really teach you that.
That's beyond what we know how to do.
So you're on your own pursuing that goal.
And that's more or less what I've been doing since with some side trips along the way.
I always say that skills are never lost.
They're really just transferred.
So I'm curious to understand what skills do you feel like were an advantage for you in
the corporate world that you took from academia?
I certainly agree with that idea of transfer.
I guess the idea of being able to tackle a complex problem, being able to move into an
area that hadn't been done before.
Academia is all about invention of the new.
And for industry, it's a mix of you want to make successful
products but sometimes in order to do that you've got to invent something new
and that's harder to do. You don't know what the demand for it is going to be
there's nothing to compare to and yet you have to design a path to say we're
going to go ahead and build this and we're going to put it out and customers
are going to have to get used to it because it's not going to be familiar to them.
And speaking of building something new, you were responsible for Google search.
And that was a while back when Google really was just starting off; there were only 200
employees when you joined them in 2001.
So what was it like working for Google back then? So it was an awesome time.
The company was three years old,
200 people all in one building.
I came in and I got the honor
of getting to lead the search team for a while,
for about five years.
So it's not like I invented it.
Google search was already there,
but they were three years old
and it was really the
time when they're trying to ramp up the advertising business.
So a lot of the key people who had built the search team had moved over to help build the
advertising platform.
And so there was an opening and I had just come on board.
And so I got the opportunity to be a leader of the search team and bring
that forward over the next five years.
So that was super exciting to be right in the middle of a transformative time in our
industry.
Yeah, and I think a lot of my listeners, they don't realize that the internet was actually
much different before Google.
Google really changed the way that we use the internet.
Can you help people understand what it was like
before Google search?
I guess there was a couple of things.
First of all, there was directories and lists of sites.
I remember from the very early days, 1993 or so,
and there was a site that was internet site of the day.
And so it was just, you go there and it says, hey, look,
here's a new website that you might not have heard of before.
And it was like, wow, today, 10 new websites joined the web
and they picked out a good one.
And you could keep up that way.
But then a year or two later, that no longer
worked because there were thousands of new sites
every day,
not just a couple.
Yahoo was one of the first to try to deal with that.
And they took this, you know,
it's not gonna be just one person saying,
here's my favorite site today.
It's gonna be a company organizing the sites
into directory structure.
And that worked okay when the web was a little bit bigger.
But as it continued to grow, that no longer worked. And then we really needed search rather
than manually curated lists of directories and so on. But in the early
days, the search systems just weren't that good. We had some experience as a
field of doing, it used to be called information retrieval rather than search.
And it was sort of, it worked. The techniques we had at the time worked for things like libraries.
But the problem there was in a library, everything that was published is a real book or a real
journal article that's already been vetted. And so the quality is all at a pretty high level.
On the web, that just wasn't true.
And so we needed new systems that not only said
what's relevant to your query,
but also what's the quality of this content.
And other companies really hadn't done that.
And Google said, we're gonna take this really seriously
and we're gonna work as hard as we can
to solve that problem.
And I think others didn't really see that as an opportunity.
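For readers who want to see the idea in miniature, here is a small, purely illustrative Python sketch of the shift Peter describes: ranking by relevance alone versus combining relevance with an independent quality signal. The documents, scores, and formula are invented for this example; this is not how Google's ranking actually worked.

```python
# Purely illustrative: relevance-only ranking vs. relevance weighted by quality.
# Every document, score, and formula here is made up for demonstration.

docs = [
    {"title": "Homemade horoscope generator", "text": "daily horoscope fun stars",
     "quality": 0.2},   # e.g., few reputable pages point here
    {"title": "Intro to astronomy", "text": "stars planets telescopes orbits",
     "quality": 0.9},   # e.g., widely linked, well-vetted source
]

def relevance(query, doc):
    """Fraction of query words that appear in the document text."""
    q_words = query.lower().split()
    d_words = set(doc["text"].lower().split())
    return sum(w in d_words for w in q_words) / len(q_words)

def rank(query, docs, use_quality=True):
    """Sort documents by relevance, optionally weighted by a quality prior."""
    def score(doc):
        r = relevance(query, doc)
        return r * doc["quality"] if use_quality else r
    return sorted(docs, key=score, reverse=True)

for use_quality in (False, True):
    top = rank("stars", docs, use_quality)[0]["title"]
    print(("with" if use_quality else "without"), "quality signal ->", top)
```

With relevance alone, both pages tie and the low-quality one can surface first; adding the quality term is the one-line change that captures the "relevance plus quality" idea.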
So there's a story of, in the very early days,
people were saying, here's Google, it's rising.
Yahoo was far bigger and far better known.
Maybe Yahoo should buy Google.
And that never happened, in part because the Google founders
thought they had something more important.
Whereas Yahoo said, oh, yeah, search,
that's kind of important.
We've got a home page, and it's got all this stuff on it.
And you've got to have search on the home page.
But you also need daily comics and horoscope.
So why would search be more important than horoscope?
That's sort of how they felt about it.
And Google felt, no, we think search is really, really
important, and we're going to do an excellent job of it.
So that was something new that other people hadn't
thought about.
Totally, and people who are my age and all these listeners
who are tuning in, Google is a verb for us.
Google is how we use the internet,
but something is changing now with AI.
Now, a lot of us, instead of going to Google,
we're going to ChatGPT. And instead
of putting in a search query and then digging around for information ourselves, we're just
asking a question and getting ChatGPT to spit out the information. So how do you think
AI is going to change search and the way that we use the internet?
I think there's always been changes, and that's always been true.
So Google's had a dominant position.
But there's always lots of places that people go to.
If you wanted breaking news, you went to Twitter.
If you wanted a short explanation of something,
you might go to TikTok or YouTube to see a video.
So there's always going to be lots of ways to access this.
And we'll see how that changes as AI gets better.
Right now, sometimes it works and sometimes it doesn't.
So it's a little bit of a frustrating experience.
But there certainly seems to be a path to say we can have something that's a much better
guide to what's out there, both in terms of answering a question immediately is one aspect,
rather than saying I'm going
to be pointed to a site that has an answer, I can get the answer right away.
And then also guiding you through and maybe summarizing or giving you a whole learning
path.
So right now you sort of have to make up that path yourself.
But I think AI can do a good job of saying, where are you now?
What do you know?
What do you want to know?
And we're going to lead you through that.
Yeah. And AI also is just using
the information that was inputted into the system, right?
So it might not have all the information
available that you could potentially find on the Internet. Is that right?
That's certainly true, right?
Depends on what it's trained on.
And we're at a point right now where
the training of
these big AI models is very expensive. So it's harder to keep them up to date. With the internet
search, if something new happens, some new news is there, it's pretty fast of getting that indexed
and making it available. But with the large AI models, it's just too expensive to update them
instantaneously,
and so you miss out on the newest stuff.
But that will change over time and we come up with
new ways of getting things out faster and faster.
When I first started at Google,
we said, we're like a library where you can go to look things up.
So it's okay that the library catalog only gets updated once a month.
And now that would seem crazy to say,
you're only getting information that's a month old.
But in the earliest days of Google, that was the case.
And then we went to daily and then hourly and then even hourly wasn't fast enough.
You had to get faster and faster.
Yeah. It's so interesting how fast technology changes.
I know that you wrote a book about AI with Stuart Russell in 1995.
You wrote a textbook, the first edition of Artificial Intelligence.
How has AI changed since you wrote that textbook?
We did the first edition in 95 and we're up to the fourth edition,
which we did a year or two ago.
There definitely are changes. First of all, I think we did the book because we saw changes even back
in 1995, where in the earlier days in the 80s and the start of the 90s, the dominant
form of AI was called expert system. And what that meant was you build a system by going out and interviewing an expert, say an expert doctor, and ask them, in this situation with this patient, what
would you do? And then you try to build a system that would duplicate what the doctor
said. And it was all built by hand, programmers sitting down, trying to understand what the
doctor said and trying to encode that into rules that they would write into the system.
It worked to some extent,
but it was very brittle and it just
often failed to handle problems that were
just slightly outside of what it had anticipated.
In the 1990s, there was a big switch away from
this expert system hand-coded approach
towards machine learning approaches,
where we said rather than telling the system how to do it,
you just show it lots of examples and let it learn by itself.
We felt like the existing books had missed that change.
We wanted to write a book about it, so we did that.
But of course, things continue to change.
So I guess, what can I say about what's changed over the four editions?
I guess one was at the start,
we felt like, well, AI,
this is part of computer science,
and computer science is about algorithms.
So we're going to show you a bunch of
cool algorithms and we did that.
And then in the second edition,
I think we felt more like, okay,
you still got to know all the cool algorithms.
But if you had a choice, you're probably better off getting better data rather than getting better algorithms.
So we're going to focus a lot more on what the data is.
And that continued to be more true in the third edition.
And now I feel like we've got plenty of data, we've got plenty of algorithms, you still
have to know about them.
But really the key to future progress is neither of those.
The key is deciding what is it that you want?
What is it that you're trying to build?
So we have a great system that says,
if you give me a bunch of data, I've
got an algorithm that can optimize some objective
that you're shooting for.
But you've got to tell me what the objective is. What is it that you're trying to do? And for some tasks that's easy, you know,
if I'm playing chess it's better to win than to lose. But in other tasks that's
the whole problem. And so we look at things like we have these systems that
help judges make decisions for parole. Who gets out on parole and who doesn't.
And you want to parole somebody if they're going to behave well,
and you want to not parole them if you think they're going to recommit a crime.
But, of course, these systems aren't going to be perfect.
They're going to make mistakes.
So the question you have to answer is,
what's the trade-off between those mistakes?
How many innocent people should we jail to prevent one guilty person from getting away?
So there's this trade-off. You're going to make false positives and false negatives,
and what's one worth against another? Even before there was AI or any kind of automation,
we've had these kinds of discussions in our societies, going back to the jurist Blackstone in England more
than a century ago, who said, it's better that 10 guilty men go free than that one innocent
man be jailed.
Now, I don't think he meant it that literally, 10's the boundary and 9's okay and 11 would
be bad.
But with today's AI systems, you have to specify that, right?
So you have to build the system and there's got to be
an exact number in there of saying,
what is the trade-off point?
We're not very good at understanding how to do that.
We built a software industry and we have 50 years of experience
in building debugging tools and so on.
So we're pretty good at making reliable software.
Every week you'll see some kind of bug or something, but we're getting pretty good at that.
But we don't have a history of tools for saying, how do we specify the right
objective? What are the trade-offs? How important is it to avoid this mistake
versus that mistake? And so we're kind of going by the seat of our pants and
trying to figure that out. And so I think that's where a lot of the focus is now is how do you decide what you really want?
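To make "there's got to be an exact number in there" concrete, here is a minimal, hypothetical Python sketch of what specifying that trade-off looks like in code: once you decide one kind of mistake costs, say, ten times the other, that ratio becomes a literal parameter that fixes the decision threshold. The data, costs, and task are invented for illustration and are not from any real parole system.

```python
# Hypothetical sketch: turning a stated trade-off between mistakes into a threshold.
# All numbers and examples are invented for illustration.

def expected_cost(threshold, examples, fp_cost=10.0, fn_cost=1.0):
    """Average cost of decisions at a given probability threshold.

    examples: list of (predicted_probability_of_bad_outcome, outcome_was_bad)
    fp_cost:  cost of wrongly treating a case as bad (false positive)
    fn_cost:  cost of wrongly treating a case as fine (false negative)
    """
    total = 0.0
    for p_bad, was_bad in examples:
        flagged = p_bad >= threshold
        if flagged and not was_bad:
            total += fp_cost
        elif not flagged and was_bad:
            total += fn_cost
    return total / len(examples)

# Toy validation set: (model's probability of a bad outcome, what actually happened)
examples = [(0.9, True), (0.8, True), (0.7, False), (0.6, True),
            (0.4, False), (0.3, False), (0.2, True), (0.1, False)]

# Sweep thresholds; the "exact number" you end up with depends directly on the cost ratio.
cost, best_threshold = min((expected_cost(t / 100, examples), t / 100) for t in range(1, 100))
print(f"chosen threshold: {best_threshold:.2f} (expected cost per case: {cost:.2f})")
```

Change fp_cost from 10 to 2 and the chosen threshold moves, which is exactly the point: someone has to decide what that number should be.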
Let's hold that thought and take a quick break with our sponsors.
Young and Profiters, I love a good habit.
Now, I've got some bad habits. I'm not going to lie.
I shop a lot and I spend a lot of money,
but I've sort of neutralized this bad habit
with a good habit by using Rakuten,
which is the smartest way to save while shopping.
They let you earn cash back at over 3,500 stores,
so many stores, Nike, Ulta, Sephora, Nordstroms,
Bloomingdale's, you name it, they're on Rakuten and they're adding stores all the time.
Whether you're shopping for sneakers or a suit,
beauty products, travel, concert tickets, even groceries,
they've got you covered.
You can get 5%, 10%, 15% cash back on the stuff
that you would have already bought.
I love it.
It's such a good habit to save money on the stuff
that you would buy anyway. Why not? In fact, I've got a check in the mail today. I
checked my mail today like a good girl and I'm gonna open up and see what
Rakuten has for me. All right. $236. Cash back in my pocket.
This is money that would have gone down the drain, but instead because of my new good
habit, it's in my pocket.
If you want to start the good habit of Rakuten, membership is free and signing up is easy.
Get the Rakuten app now and join the 17 million members who are already saving.
Cash back rates change daily.
See Rakuten.com for details.
That's R-A-K-U-T-E-N.
Your cash back really adds up.
At Young & Profiting, we are driven by the search for better.
We're obsessed with excellence.
But we're also growing so quickly.
And when it comes to hiring, we found that the best way to search for a candidate isn't
to search at all.
That's right, don't search, just match with Indeed.
If you need to hire, you need Indeed.
Indeed is your matching and hiring platform with over 350 million global monthly visitors
and a matching engine that helps you find quality candidates fast.
Ditch the busy work.
Use Indeed for scheduling, screening,
and messaging so you can connect with candidates faster. And Indeed doesn't just help you
hire faster. A recent survey found that 93% of employers agree Indeed delivers the highest
quality matches compared to other job sites.
One thing that I love about Indeed is that it makes hiring all in one place so easy because
I don't have to waste time sifting through candidates who aren't good fits for my company.
Indeed's engine learns every day from over 140 million qualifications and preferences,
and the more you use it, the better it gets.
Join over 3 million businesses worldwide that use Indeed to hire A players fast. And listeners of this show will get a $75 sponsor job credit to get your jobs more visibility
at Indeed.com slash profiting.
Just go to Indeed.com slash profiting right now and support our show by saying you heard
about Indeed on this podcast.
Indeed.com slash profiting.
Terms and conditions apply.
Need to hire?
You need Indeed.
Young and Profiters, I spent years slaving away in so many different jobs trying to prove
myself, trying to figure out what gave me joy at work, and trying to build productive
teams. Eventually, I figured it all out. But what if you could learn that stuff about yourself
and your team in a fraction of the time that I did?
The Working Genius model will transform your work, your team, and your life by leveraging
your natural gifts.
We each possess a unique set of skills.
And let's face it, you're going to be more fulfilled and successful when you lean into,
rather than away from, your natural true talents.
Working Genius can help you discover how to increase joy and energy at work
by understanding what your working geniuses really are. The working genius assessment only
takes 10 minutes and the results can be applied immediately. I took the assessment and my two
primary working geniuses are inventing and galvanizing. I just love creating new things
and then rallying people together to bring them to life.
That's why I've been starting businesses and growing teams for years.
Your own working genius may be completely different.
The working genius assessment is not just a personality test.
It's a productivity tool.
It can help you identify your own individual talents and provide a great roadmap for creating
productive and satisfied teams.
You and your team will get
more done in less time with more joy and energy. To get 20% off the $25 Working Genius Assessment,
go to WorkingGenius.com and enter the promo code profiting at checkout. That's right,
you can get 20% off the $25 Working Genius Assessment at WorkingGenius.com using promo
code profiting.
I wanna dig into this a bit
because I think it ties in with this idea
or the fact that AI is not yet in all instances
at human level intelligence.
And that's not always the goal.
I read some of your work where you said
human level intelligence is really not always the goal
when it comes to AI.
So I wanna read you a quote from Dr. Fei-Fei Li,
who came on the podcast, episode 285.
She's the co-director of the Human-Centered AI Institute,
where you're also a fellow.
And it was an awesome conversation,
and she said, the most advanced computer AI algorithm
will still play a good chess move when the room is on fire.
So she's trying to explain that AI doesn't have
human level common sense.
It's still gonna play a chess move
even when the room is on fire.
So let's start here.
How do you feel AI stacks up right now
against the human brain as a tool?
So that's great.
Fei-Fei is awesome.
I've heard many of her talks where she makes great points like that.
I guess I would try to avoid trying to make metrics that are one-dimensional, how does
AI compare to humans, for a couple of reasons.
One is I don't want to say the purpose of AI is to replace humans.
We already know how to make human intelligences.
My wife and I did it twice the old-fashioned way.
It's awesome. It worked out great.
Instead of saying, can we make an AI that replaces a human,
we should say, what kind of tools can we make so
that humans and machines together will be more powerful?
What's the right tool?
So we don't want a tool that replaces a human.
We want a tool that fills in the missing pieces.
And we've always had that.
And there's always been a mix of subhuman
and superhuman performance.
So my calculator is much better than me
at dividing 10 digit integers.
So I rely on it rather than try to work it out myself.
And I think we'll see more of that,
of saying what are the right tools for people to use.
Now, in terms of this general AI versus narrow AI,
I think that's really important.
So there's multiple dimensions we want to measure.
So we want to focus on both generality and performance.
So how good are these machines and how general are they?
So yes, we have fantastic chess playing programs that are better than the best human chess
players and recently it's also true in Go and we see sort of every week it's true at
something else.
But we haven't done quite as well at making them good at being general.
So we have these large language models, ChatGPT and Gemini and so on, and they're
good at being general, but they're not completely competent yet at doing that.
So they'll surprise you in both ways.
They'll give you an amazingly good answer one time, and then the next time they'll give
you an amazingly bad answer. So they're not reliable yet at being general.
We can have incredible tools that are narrow, and so we're looking at this frontier of how can we make things both perform better and be more general.
So I think we'll get to the point where we'll say, here's an AI and it can make a chess move
and it can also operate in the world.
But right now we separate those two things out.
And we say, we're gonna have the chess program
that only plays chess,
and then we're gonna have the large language models
and it won't be as good at chess,
but it will be good at some aspects
of figuring out what to do in unusual situations.
Could you give us some concrete examples of AI
where we might want superhuman-level intelligence
versus AI where we wouldn't want it
to have human-level intelligence?
It's always better for it to be better,
but sometimes we need that and sometimes we don't.
Sometimes we wanna make our own decisions.
And I guess part of that is I see too much of people saying AI is going to be one-dimensional
and automation is going to be one-dimensional and the more the better.
And I think that's the mistake that I'm worried about.
And there's a great diagram from the Society of Automotive Engineers of levels of self-driving cars.
And they defined that as five levels of self-driving, and they did a great job of that, and that's
really useful.
And now you can say, where is Waymo or Tesla?
Are they at level two or level three, or what level are they at?
And that was useful, but the diagram they used to accompany those levels was worrying to me because they've got this diagram.
And at level one, they have this icon
of a person behind the car holding onto the steering wheel.
And then when you get up to level five,
that person has disappeared
and they've just become a dot-like outline.
And so it's like, I don't want technology
that makes me disappear.
I want technology that respects me.
And I don't want this trade off to be one dimensional of,
if I get more automation, then I disappear more.
I'd rather have it be two dimensional and let me choose.
So sometimes I might want to say, I've got a self driving car and I trust it.
I just want to go to sleep.
It should take over completely.
But sometimes I might want to say it can do all the hard parts, but I still want to be
in control.
I want to be able to say, oh, let's turn down that street or go faster or go slower or let's
make an unscheduled stop.
So I don't want to say just because I have automation that I've given up control. I want me to come first and let me make the choice
of how much the machine is gonna be doing
and how much I'm gonna keep control.
That makes a lot of sense.
So like Dr. Li, you are an advocate for human-centered AI.
Can you help us understand what that is?
I'm essentially a software engineer or programmer at heart.
And so I look at what are the definitions
of these various things.
And software engineering is building systems
that do the right thing.
But artificial intelligence is also building systems
that do the right thing.
So what's the difference?
And I think the difference is that the enemy in software
engineering is complexity.
We have these programs with millions of lines.
We have to get it right.
And the enemy in AI is uncertainty.
We don't know what the right answer is.
And then in human-centered AI, the goal
is to build systems that do the right thing for everyone
and do that fairly.
So that changes how you build these systems.
And part of it is saying, you want to consider everybody involved.
So you want to consider the users of your system, but you also want to consider the stakeholders and the effect on society as a whole.
So going back to what I was talking about, this aid for judges in deciding who gets parole.
If you took a normal software engineering approach,
you'd say, well, who's the user?
Okay, it's this judge.
So I want to make this program be great for them.
A pretty display with graphs and charts and so on
and numbers and figures and diagrams
so that they can understand everything about the case
and make a good decision.
And yes, you want that in human centered AI.
But human centered AI says, we also got to consider the other stakeholders.
So what's the effect on the defendant and their family?
What's the effect on past victims and potential future victims and their family?
What's the effect on society as a whole of mass incarceration or discrimination
of various kinds. So you're not just serving one user, you're serving all these different
constituents. I mentioned this idea of varying autonomy and control, so not having to give up
control if you have more automation. And I think there's the aspect that it's multidisciplinary
and multicultural. And I think too
often you see companies say, okay, I want to build a system
so the engineers will build it and get it working. And then
afterwards, we'll tack on this extra stuff to make it look
better or make it more fair or less biased and so on. And I
think when you do that, you don't end up with good results.
You've got to really bring in all these people right from the start, both in terms of being
aware of what it means to build a system like this, and then also that, as we were saying
before, a lot of these problems is deciding what is it that we want, what is it that we're
trying to optimize.
And different people have different opinions on that.
And so if you get a homogeneous group of engineers,
they might all think the same thing.
And they say, great, we're agreed,
we must have the right answer.
But then you go a little bit broader to other people
from other parts of society,
and they might say, no,
you forgot about this other aspect.
You're trying to optimize this one thing, but that doesn't work for us.
So you've got to bring those people in right from the start to understand who all your
potential users are and what's fair for all of them.
So one of the things that worries me is that we live in a capitalistic world.
So while it's nice to think that people are going to have a human-centered approach with
AI,
I do feel like at the end of the day, companies are going to do whatever is going to impact their
bottom line most positively. So what are the ways that you think that there'll be some guardrails
against not using AI in a human-centered way? So that's certainly an issue with capitalism,
not specifically for AI at all, right?
That's across the board.
So what do we have to combat that?
Part of it is regulations of various kinds, so governments can step in and set rules.
Part of that is pressure from the customers saying, here's the kind of company we want, here's the kind of products we want,
and part of that would be competition of saying you build a system that doesn't respect
something that users want, somebody else will build one that's better.
And I think we're in this kind of Wild West period now where we don't quite know what the bounds are going to be. And so
there's so many of these sets of AI principles now. All the big companies
have their own sets. I helped put together the Google one. Various countries
have legislation or sets of principles. The White House put out their set of AI
principles a couple months ago. The professional societies like the Association of Computing Machinery has theirs.
I actually joined an AI principles board with Underwriters Laboratory.
I thought that was interesting because the last time, more than 100 years ago, there
was a technology and people were worried that it was going to kill everyone and it was electricity.
And so Underwriters Laboratory stepped in and said, okay, you all are worried about
getting electrocuted, but we're going to stick this little UL sticker on your toaster and
that means you're probably not going to die.
And consumers trusted that mark and therefore the companies voluntarily submitted themselves
to certification.
And I kind of feel like this third-party, nonprofit certification can be more agile
than a government making laws.
And so I think that's part of the solution.
But I don't think any one part of it can do it all by itself.
I think we need all those parts.
We'll be right back after a quick break from our sponsors.
Young and Profiters, you know me, I love a great deal just as much as the next person,
but I'm not gonna cut coupons or collect loyalty cards
just to save a few bucks.
It has to be easy, no hoops, no BS.
So when Mint Mobile told me it was easy to get wireless for $15 a month with the purchase
of a 3 month plan, I didn't believe them.
But it turns out it really is that easy to get wireless for $15 a month.
And Mint Mobile has made it simple for me to switch.
Everything was online.
It was easy to purchase, easy to activate, and easy to save money.
The longest part of the process was the time I spent on hold on the phone waiting to break
up with my old provider.
Want to get started?
Just go to MintMobile.com slash profiting.
There you'll see that right now all three-month plans are just $15 a month, including the
unlimited plan.
All plans come with high-speed data
and unlimited talk and text delivered on the nation's largest 5G network. And don't worry,
with any Mint Mobile plan, you can use your own phone and keep your current phone number.
How cool is that? Find out how easy it is to switch to Mint Mobile and get 3 months of premium
wireless service for $15 a month. To get this new customer offer and your new 3 month premium wireless plan for just $15 a month,
go to MintMobile.com slash Profiting.
That's MintMobile.com slash Profiting.
Cut your wireless bill to $15 a month at MintMobile.com slash Profiting.
$45 upfront payment required, equivalent to $15 per month.
New customers on first three-month plan only.
Speeds slower above 40GB on unlimited plan.
Additional taxes, fees, and restrictions apply.
See Mint Mobile for details.
Yeah, fam, I'm not a finance person.
I'm a make money person.
I love to innovate, create, and sell.
I don't like to do the boring finance stuff.
I hate thinking about bookkeeping, expensing, invoicing,
tax planning and organization, blah, I hate it.
So I've offloaded all those responsibilities
to my business partner, who's basically our COO
and our CFO, Jason.
And Jason is doing an awesome job.
However, I basically handed a mess over to him
and he's been having to toggle from app to app to app
to get it all done.
And he was looking for a streamlined solution
to handle everything in one place.
And we found that with Found.
Yes, it's called Found, which is very fitting.
Found is a banking and bookkeeping app
that is especially made for solopreneurs and entrepreneurs. It's made
for us. You can do everything from invoicing to bookkeeping to
tax planning. One of my favorite features on found is that it
will actually automatically estimate the taxes that you owe
and then set aside money for that. Similarly, you can create
virtual cards for different things like travel or
marketing and then set spending limits on them.
Found is super cost effective.
First of all, it's everything in one app and there's no hidden fees or minimum balances.
There is no paperwork to sign.
There's no credit checks.
It's a breeze to sign up.
If you want to try found for free, you can go to found.com slash profiting.
That's F-O-U-N-D dot com slash profiting.
Again, if you want to try Found for free, go to found.com slash profiting.
Found is a financial technology company, not a bank.
Banking services are provided by Piermont Bank, a member FDIC.
Found's core features are free.
They also offer an optional paid product, Found Plus.
Yeah fam, I have a very large team and we've got sub teams
and all these teams have different needs.
My engagement specialist team works differently
than everybody else at Yap.
They're responsible to respond to comments
and direct messages from my social clients.
So I was having some productivity issues with this team
and I was really looking for a
solution where I could track their work on their phone and I found that with Connect Team and it's
the first mobile first employee management app designed to help businesses who have non-desk
teams. So it's perfect for my engagement team and it's perfect if you have any sort of company where
people are on their feet. You've got a real estate agency or a restaurant
or a cleaning service, and you need people
to be able to track their time,
track their work on their phone.
Connect Team is your solution.
Connect Team allows you to do a lot
of the repetitive tasks you do as a manager
so you can automate payroll,
you can automate schedule creation,
you can create checklists.
Your employees can clock in and clock out easily from their phone.
They don't need to be tech savvy.
And one of my favorite parts about connect team is that you can create forms.
So for example, if your employee is responsible to ask certain questions when
they're speaking to leads or something like that, you could track that.
This is perfect for mobile first teams.
It's super affordable.
You can save time and money
by managing your team with Connect Team.
Enjoy a 14 day free trial with no credit card required.
Open up an account today at connectteam.com.
Yeah, very cool, very interesting. I agree a third-party solution sounds like it could work pretty well.
So we had Sal Khan on the show, and he has Khan Academy,
he talked a lot about how AI could help education.
Do you have any ideas of how AI could support education and students?
Yeah, I think that's awesome.
I think the work Sal is doing has been great right from the start,
and recently over the last year or so with the Khanmigo large language model.
So back in 2011,
Sebastian Thrun and I said,
we want to take advantage of this capability for online education.
We put together an online course about AI.
We signed up 100,000 students, far more than we ever expected to sign up.
And we ran that course.
But of course, at that time, the leading technology was YouTube.
We would show students a video, and then we'd have them answer a question.
And we could do a little bit.
If they got this wrong answer, we could show them one thing and if they got another
wrong answer, we could show them something else.
But basically, it was very limited in the flow you could do.
And now, with these large language models, you have a much better chance to customize
the results for the student, both in terms of the learning experience and then I think also in terms
of the motivation for the student. So that was the one thing we learned in doing the class is that
we came in saying, well, our job is really information. If we can explain things clearly,
then we're done and we're a success. And we soon realized that that's only part of the job. And
really the motivation is more important than the information.
Because if a student drops out, it doesn't matter how good our explanations are; if they're not watching them anymore,
it doesn't do any good. And so I think AI has this capability to motivate much better,
to allow students to do what they're interested in rather than
what the teacher says they should be interested in.
But we've got a ways to go yet.
We don't quite know how to do that, right?
So you can't just plug in a language model and hope that it's going to work.
So yes, it would be useful, but you have to train it to be a teacher as well as to understand what it's talking
about.
And we haven't quite done that yet.
We're on the way to doing that.
There's a dozen different problems to be solved and we have candidate solutions, but we haven't
done it all.
So right now, the language models can be badgered too easily.
You say, here's a problem, and the student says, tell me the answer. And at first, the language model would
say, no, you wouldn't learn anything if I told you the
answer. But then you say, tell me the answer, please. And it
says, oh, okay. And so we have to teach these things. When is
it the right thing to give the student the answer? When is it
the right thing to be tough and refuse to do that?
When should you say, oh, you're right, that's a hard problem. Here's a simpler problem. Why
don't you try the simpler problem first? Or to say, looks like you're getting frustrated. Why
don't we take a break? Or why don't we go back and do something else that would be more fun for you?
And so there's all these moves that teachers can take. And so doing education
well is this combination of really knowing the subject matter and then really knowing
the student and the pedagogical moves you can make. And we haven't quite yet built a
system that's an expert on both of those. But Khan and others are working on it. And
so I think it's a great and exciting opportunity.
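For readers curious what "teaching the model when to hold back the answer" can look like in practice, here is a small, hypothetical Python sketch of a tutoring prompt. The wording and the generic chat-style message format are assumptions made for illustration; this is not Khanmigo's actual configuration or anything Peter describes building.

```python
# Hypothetical sketch of a tutoring prompt; not any real product's configuration.
# The messages list follows the common chat-completion shape (role + content)
# and could be passed to whichever chat model API you happen to use.

TUTOR_SYSTEM_PROMPT = """You are a patient math tutor.
- Never state the final answer outright, even if the student asks repeatedly or says please.
- If the student is stuck, offer a hint or a simpler warm-up problem first.
- If the student seems frustrated, acknowledge it and suggest a short break or an easier task.
- Ask the student to explain their reasoning before confirming whether a step is correct."""

def tutoring_messages(problem, student_message):
    """Build a chat-style message list encoding the pedagogical 'moves' above."""
    return [
        {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
        {"role": "user", "content": f"Problem: {problem}\nStudent: {student_message}"},
    ]

messages = tutoring_messages("Solve 3x + 5 = 20", "Just tell me the answer, please.")
for m in messages:
    print(m["role"].upper(), ":", m["content"][:80], "...")
```

A prompt like this is only a starting point; as Peter notes, making the refusals, hints, and encouragement reliable is the unsolved part.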
Do you feel like some of this learning and training
could be applied to the workplace?
Yeah, absolutely.
And some of it I think is easier and better done
for workplace training.
And I think that's gonna be really important.
We built this bizarre system now where we say,
you should go to a college for four years and then we're
going to hand you a piece of paper that says,
you never have to learn anything again.
That shouldn't be the way we do things.
There's a value to college,
maybe it doesn't have to be for everybody.
Maybe more people could be learning more on the job or
learning just in time when they need a new skill.
I think there's a great opportunity for that.
I think that the systems we have right now are better at shorter subjects anyways.
So it's hard to put together a class that says, let's do all of biology one or something.
But it's easier to say, why don't you get trained on this specific workplace thing, how to operate this machine or how to operate this software and so on.
So in some sense, we're better at that kind of training than we are at the traditional
schooling.
So yeah, there's definitely a big opportunity there.
The thing that mitigates against it is we could spend a lot of investment on making
the perfect biology one class,
because there's going to be millions of students that take it.
But for some of this on the job training,
I'm in a small company and we do things a specific way,
and there might be only five people that need to be trained on it.
So right now, it's not really cost effective to say,
can I build a system that will do that training?
But that's one of the goals, to say, can we make it easier for somebody who's not an
expert programmer, not an AI expert, to say, here's some topic I want to teach
and I should be able to go ahead and teach that. And I think that's something
that's oddly missing from our standard playbook.
So you look at, we have these office suites,
and what do they give you?
They give you word processing and
spreadsheets and PowerPoint presentations.
Sure, that's great. Those are three things that I want.
But I would think a lot of people want this.
I want to be able to train somebody on
a specific topic
more than they want spreadsheets,
but we don't have that yet.
But maybe someday we will.
Maybe that'll be a standard tool
that would be available to everyone.
Sounds really cool.
So this conversation made me realize
that there really is no better time to be an entrepreneur
because as we were talking about,
a lot of jobs might get replaced by AI.
And when you're an entrepreneur, when you own the business,
you're sort of in control of all those decisions.
And you're the one who might end up benefiting from the cost savings
of replacing a human with AI.
So do you feel like AI is going to generate a lot more entrepreneurs
and solopreneurs in the future?
Absolutely.
It's a combination, so I think AI is a big part of it.
I think the internet and access to data was part of it.
The cloud computing was a big part of it.
So it used to be, if you were a software engineer, the hardest part was raising money because
you had to buy a lot of computers and just to get
started. Now all you need is a laptop and a Starbucks card and you can sit there
and start going and then rent out the cloud computing resources as you need
them and pay as you go. And so I think AI will have a similar type of effect. You
can now start doing things much more quickly.
You can prototype something, go to a release product much faster,
and it'll also make it more widely available.
So I live in Silicon Valley, so I see all these notices going around
of saying, looking for a technical co-founder.
So there's lots of people that say, well, I have an idea,
but I'm not enough of a programmer to do it,
so I need somebody else to help me do it.
I think in the future, a lot of those people
will be able to do it themselves.
So I had a great example of a friend who's a biologist.
And he said, I'm not a programmer.
I can pull some data out of a spreadsheet and make a chart, but I can't do
much more than that. But I study bird migrations and I always wanted to have this interactive map
of where the birds are going and play with that. And he said, and I knew a real programmer could
do it, but it was way beyond me. But then I heard about this copilot and I started playing around
with it and I built the app by myself. So I think we'll see a lot more of that,
of people that are non-technical or semi-technical
who previously thought,
here's something that's way beyond what I could ever do,
I need to find somebody else to do it,
now I can do it myself.
Yeah, I totally agree.
And we're seeing it first with the arts,
for example, now you can use DALL-E
and be a graphic designer, you can use
ChatGPT and be a writer.
So many of the marketing tasks are already being outsourced to AI.
It's only a matter of time where some of these more difficult things like creating an app,
like you were saying, is going to be able to be done with AI.
Absolutely.
Cool.
So what are the ways that you advise that entrepreneurs use AI in the workplace right
now? It can help you build prototype systems like that. You can do research. You can ask,
give me a summary of this topic. What are the important things? What do I need to know?
As you said, creating artwork and so on, if that's not a skill you have,
they can definitely help you do that. Looking for things that you don't know is useful.
And so I think just being aware of what the possibilities are
and having that as one of the things that you can call upon,
it's not gonna solve everything for you,
but it just makes everything go a little bit faster.
Do you think that AI is gonna help accelerate
income inequality?
I think it's kind of mixed.
Any kind of software or any kind of goods with zero marginal cost tends to concentrate
wealth in the hands of a few.
And so that's definitely something to be worried about.
With AI, we also have this aspect that the very largest models are big and expensive. They require big capital
investments. And if you'd asked me two years ago, I would have said, oh, all the AI is
going to migrate to the big cloud providers because they're going to be the only ones
that can build these large state-of-the-art models. But I think we're already going past
that, right? So we're now seeing these much smaller open source models
that are almost as good and that don't impose a barrier
of huge upfront costs.
So I think there's an opportunity,
yes, the big companies are gonna get bigger because of this,
but I think there's also this opportunity
for the small opportunistic entrepreneur to say,
here's an opening and I can move much faster than I could before and I can build something and get
it done and then have that available. So that's part of it. Then the other part is, well, what
about people who aren't entrepreneurs? And we've seen some encouraging research that says AI right now does alleviate
inequality. And so there have been studies looking at, well, you bring AI assistance into
a call center, and it helps the less skilled people more than the more skilled people,
which makes sense, right? The people who are more skilled, they already know all the answers.
And the people that were less skilled, it brings them up almost to the same level.
So I think that's encouraging, because that means there's going to be a lot of people
who are able to upskill what they do and they'll get higher paying jobs.
They're not going to found their own company, but they're going to do better
because they're going to have better skills.
Makes a lot of sense.
Okay, so as we close out this interview, let's talk about the future a bit.
What scares you the most about AI right now?
I'm not worried about these Terminator scenarios of an AI waking up and saying, I think I'll
kill all humans today.
So what am I worried about?
I guess I'm more worried about a human waking up and saying,
I want to do something bad today.
So what could that be?
Well, misinformation, we've seen a lot of that.
And I think it's mixed of how big an effect AI will have on that.
I mean, it's already pretty easy to go out and hire somebody to create
fake news and promulgate it.
And the hard part really is getting it to be popular, not to create it in the first
place.
So in some sense, maybe AI doesn't make that much difference.
It's still just as hard to get it out.
And maybe I can fight against that misinformation.
So I think the jury is still out on that.
But if you did get to the
point where an AI knew enough about an individual user to say, I'm going to create the fake
news that's going to be effective specifically for you, that would be really worrying. And
we're not there yet, but that's something to worry about. I worry about the future of
warfare. So you're seeing these things today. We just saw a tiny little personal
size drone shoot down a Russian helicopter. So we've had half a century or so of mostly
a stalemate of saying the big countries have the power to impose themselves on the others,
but none of them are really
going to unilaterally do it in a large way, and we have smaller regional conflicts.
Now we may be transitioning into a world where we say the power is not just in the big countries,
it's in lots of smaller groups.
And that becomes a more volatile situation.
And so there could be more of these smaller regional conflicts
and more worries for civilians that get caught up in it.
So I'm worried about that as well.
And then, like you said, the income inequality,
I think, is a big issue.
Well, let's end on a positive note.
I guess what excites you the most about AI?
So a big part of it is this opportunity for education.
That's where I spent some of my time and I'm really
interested in that now.
So I think that can make things better for everyone.
Just making everyone more powerful,
more able to do their job,
able to get a better job.
So that's exciting.
I think applications in healthcare are a great opportunity.
And I got involved a little bit in trying to have better digital health records.
And that really didn't go so far, mostly because of bureaucracy and so on.
But I think we have the opportunity now to do a much better job, to invent new treatments
and new drugs.
You've seen things like AlphaFold figures out,
here's how every protein works.
And it used to be, you could get a PhD
for figuring out how one protein worked.
And AlphaFold said, I did them all.
So I think this will lead to drug discovery,
lead to healthier lives, longevity, and so on.
So that's a really exciting application.
It's so interesting to me that AI can do so much good
and then there's also such a risk of it doing so much bad,
but I feel like any good technology
brings that risk along with it.
I think that's always true, right?
If it's a powerful technology, it can do good or bad,
especially if there are good and bad people
trying to harness it that way.
And some of it is intentional bad uses and some of it is unintentional.
So internal combustion engines did amazing things in terms of distributing food worldwide
and making that be available, making transportation be available.
But there are also these unintended side effects of pollution and global warming
and some bad effects on the structure of cities and so on.
And we would be a lot better off if when cars were first starting to roll out in 1900, if
somebody said, let's think about these long-term effects.
So I guess I'm optimistic that there are people now thinking about these effects for AI as
we're just starting to roll
it out. So maybe we'll have a better outcome. Yeah, I hope so. Well, Peter, thank you so
much for joining the show. I end my show with two questions that I ask all of my guests.
What is one actionable thing our young and profitors can do today to become more profitable
tomorrow? Keep your eye on what it is that people want. So I said the problem in AI is figuring out
what we want. I've worked some with the people at Y Combinator and I still have this t-shirt that
says on the back, make something people want. And very simple advice to entrepreneurs but sometimes
missed. And so I think that's true generally and I think AI can help us do that.
Yeah, it's so true.
The number one reason why entrepreneurs and startups fail
is because there's no market demand.
So make something that people want.
And what is your secret to profiting in life?
And this can go beyond today's episode topic.
Keep around the people you like and be kind to everybody.
Love that. Where can everybody learn more about you
and everything that you do? You can look for me at norvig.com or on LinkedIn or thanks to Google,
I'm easy to find. Awesome. I'll stick all your links in the show notes. Peter, thank you so much
for joining us. Great to join you, Hala.
Well young and profiters, I hope you learned something from my conversation with Peter.
It's so fascinating to hear about the technology and its implications from someone who has
been on the front lines of some huge transformations in how we live and work.
Peter was, like he said, doing AI long before it was cool,
and therefore has some interesting observations
about its capabilities and where it's headed.
He says that rather than thinking about AI
as something that replaces humans,
we should be thinking about it as a tool
that fills in the gaps and helps humans
achieve greater things.
This could even include providing assistance with school
or on-the-job training,
with AI serving as the ultimate personalized tutor
or job trainer.
AI is also gonna continue what the internet,
cloud computing and other advances have already started.
It's gonna make it easier and easier
to launch your own business and become an entrepreneur.
The number of entrepreneurs are gonna keep growing
and growing and being able to incorporate AI
into your business, combining the human and the superhuman
will help set you apart.
But it's also important to remember that AI is a tool.
And like any tool, it could be used for good or bad.
It's up to us to shape the future of AI
in a way that benefits everyone. Thanks for listening to this episode of Young and Profiting.
Every time you listen to and enjoy an episode of this podcast, share it with your friends or family.
Perhaps someday an AI bot can do this for you, but until then, we depend on you.
And if you did enjoy this show and you learned something, then please take a couple minutes
to drop us a 5-star review on Apple Podcasts, Spotify, or wherever you listen to your podcast.
Nothing helps us reach more people than a good review from our loyal listeners.
If you prefer to watch your podcast as videos, you can find us on YouTube, just look up Young
and Profiting and you'll find all of our episodes on there.
If you're looking for me, you can find me on Instagram at Yap with Hala or LinkedIn
by searching my name.
It's Hala Taha.
Before we wrap up, I want to give a big shout out to my incredible Yap production team.
Thank you so much for your hard work.
This is your host, Hala Taha, aka the Podcast Princess, signing off.