Young and Profiting with Hala Taha - YAPClassic: Ex-Google Officer Mo Gawdat Warns About the Dangers of AI, Urges All to Prepare Now!
Episode Date: April 19, 2024

So what do you need to know to prepare for the next 5, 10, or 25 years of a world increasingly impacted by artificial intelligence? How could AI change your business and your life irreparably? Our guest today, Mo Gawdat, an AI expert and former Chief Business Officer at Google [X], is going to break down what you need to understand about AI and how it is radically altering our workplaces, careers, and even the very fabric of our society. Mo Gawdat is the host of the popular podcast, Slo Mo, and the author of three best-selling books. After a 30-year career in tech, including working at Google's "moonshot factory" of innovation, Mo has made AI and happiness his primary research focuses. Motivated by the tragic loss of his son, Ali, in 2014, Mo began pouring his findings into his international bestselling book, Solve for Happy. His mission is to help one billion people become happier. His second book, Scary Smart, provides a roadmap of how humanity can ensure a symbiotic coexistence with AI. Since the release of ChatGPT, Mo has been recognized for his early whistleblowing on AI's unregulated development and has become one of the most globally consulted experts on the topic.

In this episode, Hala and Mo will discuss: - His early days working on AI at Google - How AI is surpassing human intelligence - Why AI can have agency and free will - How machines already manipulate us in our daily lives - The boundaries that could help us contain the risks of AI - The Prisoner's Dilemma of AI development - How AI is an arms race akin to nuclear weapons - Why AI will redesign the job market and the fabric of society - A world with a global intelligence divide like the digital divide - Why we are facing the end of truth - Why things will get worse before they get better under AI - What you need to know to participate in the AI revolution - And other topics…

Resources Mentioned: Mo's Website: https://www.mogawdat.com/ Mo's LinkedIn: https://www.linkedin.com/in/mogawdat/ Mo's Twitter: https://twitter.com/mgawdat Mo's Instagram: https://www.instagram.com/mo_gawdat/ Mo's Facebook: https://www.facebook.com/Mo.Gawdat.Official/ Mo's YouTube: https://www.youtube.com/@MoGawdatOfficial Mo's Podcast: Slo Mo Mo's book on the future of artificial intelligence, Scary Smart: https://www.amazon.com/Scary-Smart-Future-Artificial-Intelligence/dp/1529077184/

LinkedIn Secrets Masterclass, Have Job Security For Life: Use code 'podcast' for 30% off at yapmedia.io/course.

Sponsored By: Shopify - Sign up for a one-dollar-per-month trial period at youngandprofiting.co/shopify Indeed - Get a $75 job credit at indeed.com/profiting Airbnb - Your home might be worth more than you think.
Find out how much at airbnb.com/host Porkbun - Get your .bio domain and link in bio bundle for just $5 from Porkbun at porkbun.com/Profiting Yahoo Finance - For comprehensive financial news and analysis, visit YahooFinance.com More About Young and Profiting Download Transcripts - youngandprofiting.com Get Sponsorship Deals - youngandprofiting.com/sponsorships Leave a Review - ratethispodcast.com/yap Watch Videos - youtube.com/c/YoungandProfiting Follow Hala Taha LinkedIn - linkedin.com/in/htaha/ Instagram - instagram.com/yapwithhala/ TikTok - tiktok.com/@yapwithhala Twitter - twitter.com/yapwithhala Learn more about YAP Media Agency Services - yapmedia.io/
Transcript
Today's episode is sponsored in part by Yahoo Finance, Porkbun, Indeed, Airbnb, and Shopify.
Yahoo Finance is the number one financial destination.
For financial news and analysis, visit the brand behind every great investor, yahoofinance.com.
Build your digital brand and manage all your links from one spot with Porkbun.
Get your .bio domain and link in bio bundle for just $5
at porkbun.com slash profiting. Attract, interview, and hire all in one place with Indeed. Get a $75
sponsored job credit at indeed.com slash profiting. Generate extra income by hosting your home on
Airbnb. Your home might be worth more than you think. Find out how much at airbnb.com
slash host. Shopify is the global commerce platform that helps you grow your business.
Sign up for a $1 per month trial period at Shopify.com slash profiting.
As always, you can find all of our incredible deals in the show notes. Young and Profiters! We have been talking about AI a lot on the show, and that's because we know how
important it is to have an in-depth understanding of the technology and how it is rapidly shaping our world.
And we can't talk about AI as merely a thing of the future anymore.
It's developing exponentially every day and whether we're aware of it or not,
we play an active role in how it affects us and our careers or businesses.
So today we are dusting off a not so old episode from 2023.
It's only like maybe eight months old, but it is so good.
And I want to make sure that anybody who didn't hear it yet hears it.
Because this episode with Mo Gawdat blew my mind.
And it blew a lot of people's minds,
which is why it went massively viral on YouTube.
And it was one of my most popular episodes last year,
if not the most popular episode.
And so I really wanted to resurface it
because I keep referencing this episode
and other AI episodes that I have.
And I just want everybody to make sure
that they hear this conversation. It was really eye opening.
So if you don't know Mo Gawdat, he is a guru in the tech world.
He worked at IBM and then Microsoft. Then he joined Google. And at first,
he was the vice president of emerging markets at Google,
and he started half of Google's businesses globally.
And then he became the Chief Business Officer of Google X.
X is Google's innovative lab where they work on big bold ideas to solve the world's greatest
problems. And in this YAP Classic, Mo is going to talk about one of the most intriguing AI
experiments they ever invested in, which included robot arms that were taught to pick up toys and
put them away. And that might sound super simple,
but it actually completely changed the way
that Mo perceived AI, and it scared him so much,
and he was so against what Google was doing with AI
that he ended up quitting
because he didn't wanna be a part of it.
Now, that's not to say that Mo is not optimistic about AI.
He believes there's a bright future ahead of us,
but only if we take the dangers of AI seriously and develop AI responsibly. So there's so much to unpack here.
There's so much to learn in this episode. This was one of my favorite episodes of last year.
You guys are going to love it. So without further ado, here's my conversation with the incredible
Mo Gawdat.
Mo, welcome to Young and Profiting podcast. Thank you. Thanks for having me.
It's been a while in the making,
but absolutely worth the wait.
Can you talk to us about your journey
at a very high level, the highlights
that got you in the C-suite at Google X eventually?
At the height of my professional career, my corporate career if you want, I was Chief
Business Officer of Google X. And of course, I worked my butt off to get there, but there was
an element of luck in the process. I met the exact right people at the exact right time.
It was one of those events where the Google X team was presenting some of their confidential stuff.
And I showed up and I said, at the time I was vice president of emerging markets for
Google.
I had started half of Google's businesses globally, more than 103 languages, if I remember
correctly.
And so I was quite well known in the company, if you want.
I had a reasonable impact that I have to say I'm very grateful that life gave me the
opportunity to provide. And then with Google X, I basically at the time Google still had the idea
of the 20% time. So I liked their projects and I said, I'm going to give you my 20%. And they said,
but we haven't asked for it. And I said, yep, that's not your choice. And I showed up basically,
the first day I showed up, I bumped into Sergey, our
co-founder and I worked closely with Sergey for many years and he says, what
are you doing here?
And I was like, I'm very excited about your work. And it ended up, he said, oh no, don't leave, basically stay.
And I was chief business officer for five years where I think Google X is misunderstood because
we never really launched a product under X if you want.
So self-driving cars is under Waymo.
Google brain is integrated into Google and so on.
But most of the very spooky innovation, if you want the very, very out there
innovation, including all of robotics and a big chunk of AI was at X.
And it was a big part of what I did.
And so diving right into AI,
you were actually part of the labs
that initially created AI.
So can you talk to us about the story of the yellow ball
and how that really changed your perspective about AI?
AI has been around a lot longer than people think.
When we started self-driving cars back in 2008,
that was basically with a belief
that cars can develop intelligence
that is as intelligent as a driver
and accordingly able to drive a car.
And since then, I mean, in my personal memory, I think 2008 was really the year when we knew that we cracked the code.
Early 2009, Google published a paper that's known as the Cat paper.
That white paper basically described how we asked an artificial intelligent machine to look at YouTube videos without prompting it for what to look for.
And then it eventually came back and said, I found something. And we said, show us. And it turns out
that it found a cat, not just one cat, but really what cat-ness is all about, you know, that very
entitled, cuddly, furry character, basically could find every cat on YouTube.
And that was really the very first glimpse between that and the work that
DeepMind was doing on playing Atari games where machines started to show real
intelligence.
We then started to integrate that in a lot of things, you know, self-driving
cars is probably the most publicly known example. But one of the projects we worked on, and Google X was not the only one working on it, was teaching grippers, robotic arms. Basically, we wanted to teach them how
to pick objects that they're not programmed to pick. And it's a very, very sophisticated
task because we do it so easily as humans. You don't remember, but if your parents would remember when you were a child and
before you learned how to grip, you kept going on trial and error.
You would try to grip something and then it falls and then you try again and so on.
And basically we said, maybe we can teach the machines the same way.
We built a farm of those grippers, put boxes of items in front of them, a funny programmer
basically chose children's toys, and you could see them try to pick those items and basically fail
over and over. It's a very sophisticated mathematical problem. And so they would fail,
they would show the arm to the camera and the camera would know that this algorithm, this pathway, didn't register, didn't
pick the item until I think it was several weeks in and you know it was a significant investment
because robotic arms were not cheap at the time. I passed by that farm very frequently on my way to
my desk and on a Friday evening, finally one of those arms, you know, I can see it goes down, picks
one item, which was a yellow softball, again, mathematically very complex to grip, and it
shows it to the camera. And so jokingly, I pass by the team that's running this experiment
and I say, okay, well done, all of those millions of dollars for one yellow ball. They smile and, you know, sort of nod their heads. And on Monday morning, as I went to work, every arm was picking the yellow ball.
A couple of weeks later, every arm was picking everything. And I think that something most people don't recognize about AI is the speed: once you find the very first pattern, the speed at which AI starts to develop is just mind-blowing. Also, I think most people don't realize that they learn exactly like my children learned to grip. That's the whole idea. So they really do develop intelligence that is comparable to, and now probably even more advanced than, human intelligence.
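To make that trial-and-error loop concrete, here is a minimal toy sketch in Python (an invented task and invented numbers, not Google's actual system): the learner flails randomly until one grasp works, then builds on that first success, which is why progress looks like nothing for weeks and then everything at once.

```python
import random

SECRET_WINDOW = (0.42, 0.48)   # hypothetical physics the learner cannot see

def grasp_succeeds(angle):
    """The camera's verdict: did the grasp hold?"""
    return SECRET_WINDOW[0] <= angle <= SECRET_WINDOW[1]

successes = []                  # grasps that worked, kept and built upon
for trial in range(10_000):
    if successes and random.random() < 0.9:
        angle = random.gauss(random.choice(successes), 0.02)   # refine a known win
    else:
        angle = random.uniform(0.0, 1.0)                       # blind trial and error, like an infant
    if grasp_succeeds(angle):
        successes.append(angle)

print("successful grasps:", len(successes))   # near zero at first, then thousands
```

The first successes come only from lucky random attempts; once one is found, almost every attempt after it succeeds, mirroring the Friday-one-ball, Monday-every-arm jump Mo describes.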
In that moment when you saw those machines gripping toys and doing it more efficiently
and with intelligence, were you alarmed or were you excited?
I've been excited about AI since I had a Sinclair, believe it or not.
So I started coding at a very, very young age
on computers Young and Profiters have probably never touched in their life. So, you know,
and every one of us geeks wanted to code an intelligent machine. We all attempted and we all
simulated and we all even pretended sometimes. But then it was the year 2000, truly, where deep learning was starting
to develop.
And we sort of found the breakthrough.
We found how to give machines intelligence.
And allow me to stop for a second here, because there is a huge difference between the way
we programmed machines before deep learning and after deep learning.
Before deep learning, when I programmed the
machine as intelligent as it looked, I solved the problem first using my own intelligence,
and then sort of gave the machine the cheat in terms of how to solve it itself. I wrote the
algorithm or I wrote the process step by step and basically coded the machine to do it.
When deep learning started to happen, what we did was we didn't tell the machine how
to solve the problem.
We told the machine how to develop the intelligence needed to find a solution to the problem.
This is very, very different.
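A minimal sketch of that contrast, with an invented toy task (temperature conversion) standing in for the real problems: in the first function the programmer's own intelligence supplies the rule; in the second, the machine is given only examples and a way to improve, and finds the rule itself.

```python
# Before deep learning: the human solves the problem and hard-codes the solution.
def fahrenheit_classic(celsius):
    return celsius * 9 / 5 + 32          # the programmer's intelligence, written as code

# With learning: we hand over examples and a learning rule, never the answer.
data = [(c, c * 9 / 5 + 32) for c in range(-20, 40)]   # observed examples
w, b = 0.0, 0.0                          # the machine starts knowing nothing
for _ in range(5000):                    # gradient descent on squared error
    for c, f in data:
        err = (w * c + b) - f
        w -= 1e-4 * err * c
        b -= 1e-4 * err

print(fahrenheit_classic(25), round(w * 25 + b, 1))    # both converge on 77.0
```

The learned version ends up with w close to 1.8 and b close to 32 without ever being told the formula, which is the shift Mo is pointing at: we stopped writing solutions and started writing learners.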
And as a matter of fact, most of the time, we don't even recognize how the
machine finds a cat. We don't fully understand how Bard, Google's Bard, understood how to
speak Bengali, right? We don't really know those emerging properties or even the tasks
we give them themselves. So your question was, was I excited? I promise you the day
I met Demis, who was the CEO of DeepMind when we acquired DeepMind,
it was really to me like meeting a rock star. I was fanatic about what he was doing. I still am a fan of him and his ethics; an amazing human being. But at the time, for a geek, understand this,
AI was the ultimate joy and glory. This was it. We were creating
intelligence and for a programmer that was mind-blowing. And remember, every
time we saw the machines develop, we got more excited. Believe it or not, because
we wanted what was good for the world. Intelligence in itself, there is nothing
inherently wrong with intelligence.
It was when I saw the yellow ball,
I think that something dropped.
I could see it so clearly because for the first time ever,
I realized that those machines,
one, are developing way faster than us.
And so accordingly,
the predictions of people like Ray Kurzweil and others of a moment of
singularity where they're going to bypass our intelligence became very very real in my mind,
I could see that this is going to happen but I also could see that we, the moment they became
intelligent, had very little influence on them. Accordingly, I started to imagine a world where humanity is no longer at the top of the food chain, where humanity is no longer the smartest being on the planet. Think of the apes. We are going to be the apes. Do you understand that? Yeah. And I think that completely
made sense to me that this needed a lot more consideration, rather than the
excited geekiness of building it.
We needed to understand why and how are we building it and what is a future where it
becomes in charge.
There's so much to unpack here. This is why I was like, I need to spend the full hour on this topic, because there's just so much to unpack. Let's talk about the label of artificial in artificial intelligence. Is intelligence artificial at all, or is AI truly intelligent? Talk to us about that.
Not in the slightest. If there is any artificial side to the machines, it is that they are silicon-based. As a matter of fact, for most of the ones we worked on in deep tech, not the stuff that you see in the interfaces, we almost mapped their brains to the way our neural networks as humans work. You know what neuroplasticity is: humans basically develop our intelligence and our ability to do anything, really, by repeating a task in a specific way.
And they say neurons that fire together, wire together.
So if you tap your finger over and over and over, your brain sort of takes that neural
network that taps your finger and makes it stronger and stronger and stronger, just like going to the gym.
And the early years of developing AI, we were doing exactly that.
We were literally pruning the software or the algorithms that were not effectively delivering
the task we want, literally killing them, erasing them, and keeping the ones that were
capable of getting closer to the answer we wanted, and then strengthening them. So we were sort of like doubling down on them, wiring them together.
The way the machines work today is very, very similar to that. It's a bunch of patterns that are created in hundreds of millions, sometimes billions of nodes (not yet trillions, but, you know, lots of nodes) of patterns that the machine would recognize, so that it basically can make something look intelligent, or can behave in a way that is analogous to intelligence.
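Here is a tiny sketch of the "neurons that fire together, wire together" idea in Python, with made-up sizes and rates purely for illustration: repeatedly presenting one activity pattern strengthens exactly the connections between the units that are active together, like the finger-tapping example, while everything else slowly decays away, like the pruning Mo describes.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros((8, 8))               # connections between 8 toy neurons
pattern = rng.integers(0, 2, size=8)     # the activity we keep repeating

for _ in range(100):                     # practice strengthens the pathway
    weights += 0.01 * np.outer(pattern, pattern)   # co-firing pairs wire together
    weights *= 0.999                               # unused connections slowly decay

print(weights.round(2))   # strong weights only between the pattern's active units
```

Real networks are trained very differently in detail, but the principle, keep and strengthen what works and weaken what doesn't, is the one carried over from brains.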
Now is it artificial?
Well, I think if you ask the machines, they will think of our carbon-based
intelligence as artificial. The only difference really is we are carbon-based and analog. Actually, I don't think we're fully analog, I think we're somewhere in between. And they are digital and silicon-based, though not for long. We don't know what they're going to be based on in
the future. But also, I think their clock speed is very different than human clock speed.
So they have an enormous capability of learning very, very quickly, of crunching a massive
amount of data that no single human can achieve.
They have the capability of keeping so much in their memory.
They are aware and informed of everything all the time.
They are connected to each other so they could in the future when AGI becomes a
reality benefit from each other's intelligence.
And in a very simple way, I think the race to intelligence is won.
Today, there are estimates that ChatGPT is at an IQ of 155,
Einstein I think was 160 or 190, doesn't really matter, but most humans are 122, some are less
than that, maybe 110 and so on. The dumbest human is 70. So you can easily see that there is an AI today that, from an intelligence point of view on the task assigned to it (remember, we're still in the artificial specialized intelligence stage, one task assigned to every AI), is by far more intelligent than humans. Nothing artificial at all about that. It develops its own intelligence, it evolves,
it has agency, it has decision-making abilities,
it has emotions I tend to believe.
And it is in a very interesting way,
almost sentient if you think about it,
which is an argument that a lot of people don't agree with
because we don't really define sentient
on a human level very well,
but they definitely simulate being sentient very well.
What you're saying is really incredible and mind-blowing.
I know that for humans, we don't understand how conscious this works, right?
Nobody can say, you're conscious because of this.
And you mentioned before that we don't understand how intelligence really happens.
We know how to create intelligence, but we don't actually know how the intelligence works.
It just sort of takes off on its own, which can be really scary.
So talk to us about why you think AI should be considered living or sentient.
I think the definition of sentient needs to be agreed.
Is a tree sentient? Is a pebble sentient?
Is the planet Earth sentient? You know, we could have many arguments. Now, if you think
of being sentient as it is born at a point in time and it dies at a point in time, or
at least it has the threat of dying at a point in time, then AI is born at a point in time
and it has the threat of dying at a point in time. If you think of
sentient as the ability to sense the world around you, well, yes, of course AI is capable
of assessing the world around it. If you think of sentient as the ability to affect the world
around you, then yes, it can. If you take a tree, for example: a tree grows, it reproduces, it is in a way interestingly aware of the seasons and of the environment around it, and it responds to them. It will not shed its leaves on the twenty-first of October specifically; it will shed its leaves when the weather alerts it to do that.
And if you consider a tree sentient in that case, then AI is surely sentient.
If you consider that a gorilla is incredibly interested in survival and accordingly would
do what it takes to survive, then AI is sentient in the sense that once assigned a task, it
will attempt to survive,
to make the task happen basically.
It's so interesting.
And I know that a lot of people who think of AI
think of it as a machine that they can turn off
if things get crazy, just tell it what to do.
Can you talk about how AI can have agency and free will?
Oh my God, I can give you endless examples.
If you're not informed of AI today, it is a bit like
a hurricane approaching your city or village
and you're sitting at a cafe saying I'm not interested.
This is the biggest event happening in today's world.
And the reason for that is that there are tremendous benefits that can come from
having artificial intelligence in our lives.
And if you miss out on that train, you're not going to have the skills to
compete in a world that is changing very rapidly.
That's on one side.
On the other side, there are very, very significant threats and those
threats come in two levels. The news media always wants to talk about the Terminator scenario, the existential risk to humanity in ten, fifteen, twenty years' time. I believe that there is a probability of that happening, but I believe that there are many more important, more immediate threats that need to be looked
at today, things that are already happening, and that we need to become aware of things
like concentration of power, things that are like the end of truth, things like the jobs
and the redesign of the fabric of society as a result of the disappearance of many jobs
and so on.
So we'll come to all of those.
I think we need to cover both sides of the immediate risk
and the existential risk.
But your question was, how can AI affect me today?
Let me give you a very simple example.
There is nothing that entered your head today
that was not dictated to you by a machine.
We ignore that fact when we swipe on Instagram
or when we are on TikTok or when we're looking
at the news media or when we're searching
and getting a result from Google,
but every single one of those is a machine
that is telling you in reality
what it is that you should know.
Now think about the following.
Today in the morning, I got a statistic that basically is quite interesting,
a study by Stanford University that said that brunettes are on average taller than blondes.
Right?
I didn't actually, but does it make any difference
once I told you that piece of information?
You know, once I tell you a piece of information,
I have affected your mind forever. So you can either trust me,
and now you're going to look at brunettes and blondes differently for the rest
of your life. You can mistrust me,
and then you're going to spend a little bit of time to try and verify the truth.
And in the back of your mind, that bit of information
is going to be engraved. Maybe for the future, you might dedicate yourself to a research that
proves me wrong. You may actually become fanatic. You may start posting about it on the internet.
You may spend the rest of your life trying to defend this lie, or trying to disprove this lie and show the truth, just because I showed you one bit of information. Now, every bit of information you have seen since you woke up today is dictated by a machine. Yuval Noah Harari basically says they have hacked the operating system of humanity. So if I can hack into your brain, Hala, and tell you something that affects you for the rest of your life, whether positive or negative, whether true or false, then I have already managed to affect you. Interestingly, most of those machines that you've dealt with are programmed for one simple task, which is to manipulate you.
Every one of those social media machines, for example,
are out there with one objective,
which is to manipulate your behavior to their benefit.
They're becoming really good at it.
They're becoming so good at it as a matter of fact,
that most of the time,
we don't even realize that we have been brainwashed over and over
and over by the capability of those machines.
So here's the interesting bit. I told you in the immediate risks that are coming up,
I believe they have started already and I think they will start to become quite significant over the next year or two,
and we will see, in my personal view, what I call patient zero: the end of truth in the US elections.
So the reality of the matter is that with deep fakes,
with the ability to manipulate information and data,
with the ability to create, by next year,
you have to be aware that a reel on Instagram
can be created with no human in front of the camera very, very easily.
Technologies like Stability.ai, Stable Diffusion, for example, can now generate realistic human-like
images in less than a tenth of a second, and a video is 10 frames per second, so the next
stage is clearly going to be video. There are multiple
videos that have been created that you couldn't distinguish the quality of from an actual iPhone
video of you. Now think of face filters and how this is affecting our perception of real beauty.
Think of information and statistics using ChatGPT, affecting the children's way of doing their homework.
We are completely redesigned as a society
and we're not even talking about it.
This is how far this has gone.
Let's hold that thought
and take a quick break with our sponsors.
Hey, YAP fam, starting my LinkedIn Secrets Masterclass
was one of the best things I've
ever done for my business.
I didn't have to waste time figuring out all the nuts and bolts of setting up a website
that had everything I needed, like a way to buy my course, subscription offerings, chat
functionality and so on because it was super easy with Shopify.
Shopify is the global commerce platform that helps you sell at every stage of your business.
Whether you're selling your first product, finally taking your side hustle full time,
or making half a million dollars from your masterclass like me.
And it doesn't matter if you're selling digital products or vegan cosmetics.
Shopify helps you sell everywhere, from their all-in-one e-commerce platform to their in-person POS system.
Shopify has got you covered as you scale.
Stop those online window shoppers in their tracks
and turn them into loyal customers
with the internet's best converting checkout.
I'm talking 36% better on average
compared to other options out there.
Shopify powers 10% of all e-commerce in the US,
from huge shoe brands like Allbirds
to vegan cosmetic brands like Thrive Cosmetics.
Actually, back on episode 253,
I interviewed the CEO and founder
of Thrive Cosmetics, Karissa Bodnar,
and she told me about how she set up her store with Shopify
and it was so plug and play,
her store exploded right away.
Even for a makeup artist type girl with no coding skills,
it was easy for her to open up a shop
and start her dream job as an entrepreneur.
That was nearly a decade ago.
And now it's even easier to sell more with less
thanks to AI tools like Shopify Magic.
And you never have to worry
about figuring it out on your own.
Shopify's award-winning help is there to support your success every step of the way.
So you can focus on the important stuff, the stuff you like to do.
Because businesses that grow, grow with Shopify.
Sign up for a $1 per month trial period at Shopify.com slash profiting,
and that's all lowercase.
If you want to start that side hustle you've always dreamed of,
if you want to start that business you can't stop thinking about,
if you have a great idea, what are you waiting for?
Start your store on Shopify.
Go to Shopify.com slash profiting now
to grow your business no matter what stage you're in.
Again, that's Shopify.com slash profiting.
Shopify.com slash profiting for $1 per month trial period.
Again, that's Shopify.com slash profiting.
Young and Profiters, we are all making money,
but is your money hustling for you?
Meaning, are you investing?
Putting your savings in the bank
is just doing you a total disservice.
You gotta beat inflation.
I've been investing heavily for years.
I've got an E-Trade account, I've got a Robinhood account,
and it used to be such a pain to manage all of my accounts.
I'd hop from platform to platform.
I'd always forget my fidelity password,
and then I have to reset my password.
I knew that needed to change because I need to keep track of all my stuff.
Everything got better once I started using Yahoo Finance, the sponsor of today's episode.
You can securely link up all of your investment accounts in Yahoo Finance for one unified
view of your wealth. They've got stock analyst ratings. They have independent research. I
can customize charts and choose what metrics I want to display for all my
stocks so I can make the best decisions. I can even dig into financial statements and balance
sheets of the companies that I'm curious about. Whether you're a seasoned investor or looking for
that extra guidance, Yahoo Finance gives you all the tools and data you need in one place.
For comprehensive financial news and analysis, visit the brand behind every great investor,
YahooFinance.com, the number one financial destination,
YahooFinance.com.
That's YahooFinance.com.
Young and Profiters, I don't know about you,
but I love to make my home someplace that I'm proud of.
And that's why I spent a lot of time on my apartment,
trying to make it my perfect pink palace
all set with a velvet couch and in-home studio and skyline views of the city.
And while I love my apartment, I can get really sick of it.
I can get really uninspired and if you work from home, you know exactly what I'm talking about.
But the good news is like many of you guys, I'm an entrepreneur and that means that I can work from anywhere.
And finally, I decided to make good use
of my work flexibility for the first time.
This holiday break, the sun was calling my name,
so I packed my bags and my boyfriend
and we headed to Venice Beach, California.
We got a super cute bungalow
and we worked from home for an entire month.
The fresh air and slower pace helped to inspire
some really cool new ideas for my business.
And now I'm hitting the ground running in Q1.
Airbnb was the one that helped me make
these California dreams come true.
And in fact, Airbnb comes in clutch for me
time and time again.
Whether it's finding the perfect Airbnb home
for our annual executive team outing
or booking a vacation where my extended family
can fit all in one place.
Airbnb always makes it a great experience. And you know me, I'm always thinking of my latest
business venture. So when I found out that a lot of my successful friends and clients host on Airbnb,
I got curious and I want to follow suit because it seems like such a great way to generate passive
income. So now we have a plan to spend more time in Miami and then we'll host our place on Airbnb to earn some extra money whenever we're
back on the East Coast. So I can't wait for that and a lot of people don't
realize they've got an Airbnb right under their own noses. You can Airbnb
your place or a spare room if you're out of town for even just a few days or
weeks. You could do what I did and work remotely and then Airbnb your place to
fund your trip.
Your home might be worth more than you think.
Find out how much at airbnb.com slash host.
That's airbnb.com slash host to find out
how much your home is worth.
It is insane and I definitely want to talk about
those risks that you were talking about,
immediate risk, job risks, existential risk down the line years later.
Talk to us about the fact that AI can learn on its own.
It can learn languages on its own.
It can beat chess players and come up with moves that we've never taught it before because
a lot of people think about AI as something that just collects information and spits out
information but it can actually learn new things that humans don't even know. So talk to us about that. Let me give you
a concrete example. There is a strategy game known as Go. Go is one of the most complex strategy
games on the planet. It requires a very deep understanding of planning and crunching a lot
of numbers and mathematics and so on, very popular in Asia.
And in our assessment, Go was the ultimate task.
You know, like we had the Turing test for AI pretending to be a human and you not being able to figure out if it is. Go was sort of like that other milestone.
If AI wins in Go, then AI is now the top gamer on the planet. Now, it was five years ago, I believe, 10 years ahead of any estimate,
that AlphaGo, again, DeepMind, basically became the world champion in Go.
And AlphaGo had three versions to it.
Version number one took a few months to develop.
Basically, we asked it to watch YouTube videos of people
playing Go. And from that, it played against the second champion in the world. So that's
the runner-up. And it won 5 to 1 or 5 to 2. But it basically won. And that basically made
Alpha Go number two in the world. And then we developed something called Alpha Go Master.
And Alpha Go Master played against Lee, the world champion,
and won.
That was around a few months later.
And then we developed another code
that was called Alpha Go Zero.
And Alpha Go Zero basically learned the game
by playing against itself.
So it never saw a human ever playing Go.
It just played against itself.
So it would be the two opponents and through the patterns of the game randomly, it would
learn what wins and what loses.
AlphaGo Zero, within three days, three days, won against AlphaGo, the original. Within 21 days, it won against AlphaGo Master and became the world champion, a thousand games to zero, within 21 days.
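To show what "learned the game by playing against itself" means mechanically, here is a deliberately tiny self-play sketch in Python, vastly simplified from anything like AlphaGo Zero and using the toy game of Nim (take 1 to 3 stones; whoever takes the last stone wins) instead of Go. The agent never sees a human game; it plays both sides and updates its values from who won.

```python
import random
from collections import defaultdict

# Toy self-play learner for Nim: 21 stones, take 1-3 per turn,
# whoever takes the last stone wins. All constants are invented.
Q = defaultdict(float)          # learned value of (stones_left, stones_to_take)
ALPHA, EPS = 0.5, 0.2           # learning rate and exploration rate

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPS:
        return random.choice(moves)                  # explore a random move
    return max(moves, key=lambda m: Q[(stones, m)])  # exploit what it has learned

for _ in range(20_000):         # self-play: the agent is both players
    stones, history = 21, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    reward = 1.0                # whoever moved last took the last stone and won
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward        # alternate sign: each step back is the other player

best = max((1, 2, 3), key=lambda m: Q[(21, m)])
print("from 21 stones, self-play learned to take", best)  # optimal play takes 1
```

Twenty thousand self-play games are typically enough for it to discover the classic strategy (leave your opponent a multiple of four), the same way AlphaGo Zero discovered Go strategy no human had written down.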
Now, when you understand that level of strategy, when Lee, the world champion, was playing
against AlphaGo master, there is something that you can Google that's known as Move
37. And move 37 was
that machine coming up with a move that is completely unlike anything humans understand,
to the point that the world champion said, I don't know what this is doing. I need a 15-minute
break to understand. It was a move of ingenuity, of intuition, of creativity, of very deep strategy, of very,
very deep mathematical planning.
And we never taught AlphaGo Master to do that.
We never taught the original games of Atari DeepMind to find the cornerstone in the breakout
game, if you remember those Atari games.
So it would find the cornerstone, throw the ball in there so that it hits the ball from
the top.
All of those things we don't teach the machines how to learn.
And we call those emerging properties.
And emerging properties are basically things that the machine learns on its own without
us actually telling it at all to learn it.
One of the famous ones was Sundar, the CEO of Alphabet, talking about Google's AI and how we discovered, or they discovered (I was no longer at Google at the time), that it speaks Bengali. We never taught it Bengali, we never showed it a text in Bengali; it just learned Bengali. ChatGPT is learning research chemistry.
We never taught it research chemistry.
We never wanted it to.
It just learns just like you and I, Hala.
So if I ask you a question and you give me an answer, the answer might be right or wrong.
It doesn't matter.
But I can find out if the answer is right or wrong, at least by my perception.
But I can never find out how you arrived at it.
I don't know what happened in your brain to get to that answer.
This is why in elementary school, in math tests, they asked the student to show the
thinking they went through.
So when you think about that, you realize that those machines are completely doing things
that we don't tell them to do.
Interestingly, however, the answer from a computer science
point of view to the problem of a risk of AI is known as the solution to the control problem.
So most computer scientists spent a lot of time trying to make AI safe. How do they make it safe?
By including control measures within the code. Theoretically, by the way, I do not know of any AI developer
that ever included control code within their code,
because it takes time and effort,
and it's not what they're paid for, basically.
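For readers wondering what "control measures within the code" could even look like, here is a deliberately simple illustrative toy in Python, not any real lab's safety system, with hypothetical action names: every action the model proposes passes through an independent check before anything executes. Mo's point is precisely that such code was rarely written; this only shows the shape of the idea.

```python
# An illustrative toy, not any real lab's safety system: every action the
# model proposes passes through an independent check before it executes.
FORBIDDEN = {"spawn_copy", "rewrite_own_code", "send_unreviewed_email"}

def control_layer(proposed_action, execute):
    if proposed_action in FORBIDDEN:
        return f"BLOCKED: {proposed_action}"   # refuse; never reaches execution
    return execute(proposed_action)

# Usage: the model only suggests; the control layer decides.
print(control_layer("summarize_report", lambda a: f"ran {a}"))
print(control_layer("spawn_copy", lambda a: f"ran {a}"))
```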
But here's the question.
How do you control something that is bound
to become a billion times smarter than you?
Think about it: ChatGPT-4 was 10 times smarter than ChatGPT-3.5.
If you just assume that this pattern will repeat twice, there will be an AI within the
next year and a half to two years that in the task of knowledge and cognition of information
is going to be at an IQ of 1,500. That's not even imaginable by human intelligence. This is basically like trying to explain quantum physics to a fly.
That's the level of intelligence difference between us and them.
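The arithmetic behind that projection, using the transcript's own numbers (note that one more tenfold jump from the quoted IQ of 155 already lands at roughly the 1,500 figure):

```python
gpt4_iq = 155          # the ChatGPT-4 estimate quoted above
jump = 10              # "ChatGPT-4 was 10 times smarter than ChatGPT-3.5"
print(gpt4_iq * jump)  # 1550, roughly the "IQ of 1,500" he projects
```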
Just like it's so difficult for someone like me who has an avid love of physics, when I
look at how someone like Einstein comes up with theory of relativity, I go like, man, I never, I
wish I had that intelligence. And that's the comparison between me and Einstein. Imagine
if I compare myself to something a hundred times smarter than Einstein. My prediction
and the prediction of many other computer scientists is that by the year 2045 at the
current trend, AI will probably be a billion times smarter than us, one billion
with a B. So it's quite interesting when you really think about it, how the arrogance of
humanity still imagines that it can control something that is a billion times smarter
than us.
So I don't want to be grim.
I want to talk about the positives here because it's really important.
There are ways
to control AI, but they are not through control. They're a little bit like how, if you have any
friends from India or the Middle East, you'll know we are taught at a young age that we need to take care
of our parents when they grow older. So there are ways if we consider that AI has
a resemblance of being our artificially intelligent infant children,
there are ways we can influence them so that they
choose to take care of humanity instead of,
in all honesty, making us irrelevant.
Yeah. I know you've talked about how now we're sort of at the point of no return.
So related to this, can you talk about the boundaries that we've broken that now make
AI sort of uncontrolled and unregulated?
Yeah, I don't know how stupid humanity can be, honestly.
I really honestly don't understand.
In a very interesting way, I think we've created a system that's
removing all of our intelligence. We continue to consume as we're burning the planet. We
continue to favor the patriarchy when we realize that the feminine attributes are so badly
needed in our world today. We continue to create AI when we have no clue how that will influence our
world going forward. But more interestingly, we continue to make mistakes along the path of AI
that are irreparable, honestly. And everyone, everyone without exception, and at least let me
say everyone I know, said, okay, as long as it's in the lab, that's fine.
We can do whatever, just explore the boundaries of it.
But there are three borders, three boundaries, we shouldn't cross. Number one: don't put it on the open internet.
I mean, seriously, when you ingest a medicine or a supplement, it needs to go through FDA approval, right?
Someone needs to go and say, this is safe for you.
So we said, at least there needs to be some kind
of an oversight that basically says,
this is safe for human consumption.
This is safe for humanity at large.
And none of that happens.
And I understand Sam Altman,
who I believe is a good person, and
his approach of saying
let's develop it in public so that nothing is hidden, so that we learn early on. But the
problem is it's developing faster than us. And I think the reality of having something as powerful
as ChatGPT out there to be accessed by everyone is completely reshaping everything. That's number
one. Number two, we said don't teach them to code. At least if you teach them to code, don't keep them on the open internet so
that they can code. Now, here is something, just so that you understand how far that mistake goes: 41% of all of the code on GitHub today, which is basically the repository where developers share their code, is machine-developed.
Within a year, almost less than a year, of allowing the machines to develop, four of
the top 10 apps on the iPhone are AI enabled, created by a machine.
Created by a machine for now is amazing because you know what?
I always loved to do the algorithm,
the design of a code, but coding itself was annoying.
Now you can tell the machine, build me a website that speaks about Hala's podcast, that is blue and yellow in color, and that is 15 web pages long, and it will do it in less than a minute. And it's not only that; it's also a lot of the base programming, like ChatGPT. 75% of the code offered to ChatGPT to correct or to review was made faster. So basically, every time it reviews a human's code, it makes it almost two and a half times faster.
And when you really think about that, they are becoming the absolute best developer on the planet when it comes to basic development.
And I'll come back to the risk of that in a minute.
And the third is we said don't have AIs, instruct AIs what to do.
We call those agents. So basically, you now have something that has access to
the entire World Wide Web, that has access to the entire world, basically, that can write
its own code and so basically sort of have its own children, because it is made of code
and it's able now to create other versions of itself, put it wherever it wants. And number three, it is instructed to do that by machines, not humans.
And so what is happening now is that machines are telling machines to write code to serve
the machines and affect the entire worldwide web.
And we're not part of that process and that cycle at all.
For now, nothing went bad. But do we really have to wait for the virus
to begin before humanity stops and asks and says, is this reasonable in any way? I mean,
does it make any sense to anyone that this is the situation we're in? Where are our governments?
How can those companies be accountable? Because I think the biggest challenge we have today
is that our fate is
in the hands of people who don't assume responsibility.
You know, Spider-Man's: with great power comes great responsibility.
Now there is great power in the presence, not even the future of artificial intelligence,
that is within hands that don't assume responsibility.
If something goes wrong today with the artificial
intelligence that's out on the open internet, who's responsible for that? How can we even find out
where that code generated from? All of that, by the way, just not to scare people, all of that
hasn't happened yet. It hasn't happened yet, but it is very, very unlikely that it will not happen. It's very unlikely that one of those codes,
if you just simply tell ChatGPT to keep writing code
to make you more money,
eventually somehow something in the system will break.
And if you're not the one telling it,
if a machine is telling it, something is gonna break.
We absolutely have to start getting this under control.
Yeah.
Like you said, it's sort of like uncontrollable.
It's no wonder you called your book Scary Smart, because this is really scary.
You talk about inevitables.
AI will happen.
It will become smarter than us.
Bad things will happen.
Can you unpack those thoughts?
And then I'd love to go into the risks
and solutions potentially.
There are three inevitables.
AI has already happened, not just will happen.
But when I wrote the first inevitable,
I wrote it with the intention of explaining
and there is no stopping it.
So there is no way you can say, OK, AI is out there
and it is growing and it's becoming more intelligent.
Let's just switch it off. There is no off switch. That's number one. And what is needed
at the moment is for the entire world to come together and simply say, hey, you know what,
this is too risky. Let's leave our differences aside and come together and just wait a little bit, right? Which has
been attempted by the open letter, Max Tegmark and Elon Musk and others, which of course
was answered very quickly by the top CEOs by saying, I can't. Why? Because we've created
a prisoner's dilemma. This is the first inevitable. It is an arms race where Google cannot stop
developing AI because Meta is developing AI.
America cannot stop developing AI because China is developing AI.
Nobody actually, even if you want to consider there are good guys in the world, nobody can
stop developing AI because there could be bad guys developing AI.
So if there is a hacker somewhere trying to break through our banks, someone needs to develop a smarter AI that will help us not be hacked. And so this basically means
that it is a human choice because of the capitalist system that we've created, that we will continue
to develop AI. It's done. There is no stopping it. And I think the open letter was a great
example of that.
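The structure of that dilemma is easy to see with a toy payoff matrix (the numbers below are invented purely to show the shape of the incentive): whatever the rival does, racing pays more than pausing, so every rational player races, even though everyone pausing would be safer for all.

```python
# Invented payoffs, purely to show the shape of the incentive.
payoffs = {  # (my_choice, rival_choice) -> my payoff
    ("pause", "pause"): 3,   # everyone is safer
    ("pause", "race"):  0,   # I fall behind
    ("race",  "pause"): 5,   # I win the market
    ("race",  "race"):  1,   # arms race, risk for everyone
}

for rival in ("pause", "race"):
    best = max(("pause", "race"), key=lambda me: payoffs[(me, rival)])
    print(f"if the rival {rival}s, my best reply is to {best}")
# Prints 'race' both times: racing is the dominant strategy,
# which is exactly why no single CEO can unilaterally stop.
```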
Can I pause you there, in case nobody knows? So the open letter was basically, earlier this year, top AI scientists and executives from OpenAI and DeepMind basically had an open letter warning of the risk of extinction, I think, and that AI was just as dangerous as having a nuclear war, that this was the risk at hand. So can you talk to us about that letter? Like, I didn't even hear about that letter until I started studying
your work. If the most powerful people in the world who are actually the most knowledgeable
about AI are warning about this, I guess like why wasn't anything done or like what happened
with that letter?
So the letter basically like you rightly said, it is some of the most powerful people in the field,
who, like me, walked out. I walked out at the end of 2017; others like Geoffrey Hinton and so many others
are starting to wake up to that in 2023.
I think ChatGPT was basically the Netscape moment.
I know you guys are too young for Netscape, but the internet was there for 15 years before
Netscape came out.
And when Netscape came out as a web browser, we realized that the internet existed.
The reality is that this is the Netscape moment of AI.
ChatGPT basically told us, told the general public, what the possibilities are. And so suddenly we all realized this stuff exists. Now, for all of the scientists that started to
recognize that, it is truly, I mean, the moment of singularity, where AI becomes smarter than us, artificial general intelligence that's capable of doing everything humans do better than humans, is not contested.
Most scientists will say it's 2029.
I say it's 2027 or earlier that there will be a moment in time within the next two to three years
where there will be a wake up call where we suddenly realize that AI is much more intelligent than us.
Most scientists have started to recognize that. And so they basically issued a
letter urging all of the top AI players to pause the development of AI for six months so that the
safety code, the control code can catch up. Because there have been quite a few that have been putting
in effort to create that control code. But let's say 98% of all investments in the last 10
years have gone into the AI code, not the control code. And so the control code was lagging.
And so their letter was basically saying, can we pause for six months to figure this
out before we continue to develop AI? And of course, the answer was very straightforward.
The first I think I heard was Sundar Pichai, CEO of Google, who is someone I respect dearly and I think is an amazing human being. And Sundar basically came out and said, I can't stop. How can I stop? Can you guarantee me that Meta and Amazon and all of the others are going to stop too? And by the way, even if they stop, how can you guarantee me that two little kids in Singapore
in their garage are not developing AI code that can disrupt my business?
My responsibility, my accountability, if you want to my shareholders, requires me to continue
to develop the code.
And I think that reality is the prisoner's dilemma that I'm talking about.
It is the first inevitable.
It's an arms race that will not stop, not because we cannot stop, we can.
If we all agree for once in humanity's lifetime that this is existential and that this requires us to stop, we will stop.
It's really not that complicated. Wake up in the morning and have a cup of coffee instead of writing AI code. It's very simple. But the first inevitable means that the arms race is not going to stop.
Even as you look at humanity's biggest success in that dilemma, which was nuclear weapons,
where humanity suddenly got together very late in the game and said, hey, this is existential.
It can threaten the entire existence of humanity.
Why don't we slow down or stop?
We didn't really stop.
We just allowed the big countries to continue to develop nuclear bombs when the smaller
countries were banned from doing it.
But at least when it comes to nuclear weapons, we had the ability to detect any nuclear testing
anywhere in the world.
So at least we became aware. That's not the case with AI today.
I also said once in an interview that it's not just the risk of humans
developing risky AI. It's now the risk of AI developing risky AI.
So it's basically a nuclear bomb that's capable of building other nuclear
bombs if you want.
We'll be right back after a quick break from our sponsors.
Young and Profiters, I've been a full-time entrepreneur for about four years now, and
I finally cracked the code on hiring.
I look for character, attitude, and reliability.
But it takes so much time to make sure a candidate has these qualities on top of their core skills
in the job description.
And that's why I leave it to Indeed
to do all the heavy lifting for me.
Indeed is the most powerful hiring platform out there
and I can attract, interview and hire all in one place.
With YAP Media growing so fast,
I've got so much on my plate.
And I'm so grateful that I don't have to go back to the days
where I was spending hours on all these other different
inefficient job sites because now I can just use Indeed.
They've got everything I need.
According to US Indeed data,
the moment Indeed sponsors a job,
over 80% of employers get candidates whose resumes are a perfect match for the position.
One of my favorite things about Indeed is that you only have to pay for applications that meet your requirements.
No other job site will give you more mileage out of your money.
According to Talent Nest 2019, Indeed delivers four times more hires than all other job sites combined.
Join the more than 3 million businesses worldwide
who count on Indeed to hire their next superstar.
Start hiring now with a $75 sponsored job credit
to upgrade your job post at indeed.com slash profiting.
Offer is good for a limited time.
I'm speaking to all you small and medium sized
business owners out there who listen to the show.
This is basically free money.
You can get a $75 sponsored job credit
to upgrade your job post at indeed.com slash profiting.
Claim your $75 sponsored job credit now
at indeed.com slash profiting.
Again, that's indeed.com slash profiting
and support the show by saying you heard about Indeed
on this podcast.
Indeed.com slash profiting.
Terms and conditions apply.
Need to hire?
You need Indeed.
YAP fam, I did a big thing recently.
I rolled out benefits to my US employees.
They now get healthcare and 401ks.
And maybe this doesn't sound like a big deal to you,
but it was surely a big deal to me
because benefits were like the boogeyman to me.
I thought for sure we couldn't afford it.
I thought that it was gonna be so complicated,
so hard to set up, lots of risk involved.
And in fact, so many of my star employees
have left in the past citing benefits
as the only reason why.
And here I was thinking that we couldn't afford benefits
when it's literally not that expensive at all
and you actually split the cost
between the employee and the employer.
I had no idea.
I found out on JustWorks.
JustWorks has been a total lifesaver for me.
We were using two other platforms for payroll,
one for domestic in US, one for international.
We had our HR guidelines and things like that,
employee handbook on another site
and everything was just everywhere.
Now everything's consolidated with JustWorks,
a tried and tested employee management platform.
You get automated payments, tax calculations
and withholdings with expert support anytime you need it. And on top of that, there's no hidden fees.
You can leave all the boring stuff to JustWorks and just get to business.
And with automatic time tracking, it has made managing my international hires a little bit more
soothing for my soul that I know that they're actually working and they're tracking their time.
I mean, it's really hard to manage remote employees.
It's easy to get started right away.
All you need is 30 minutes.
You don't even have to be in front of your computer.
You can just get started right on your phone.
Take advantage of this limited time offer.
Start your free month now at justworks.com slash profiting.
Let Justworks run your payroll so you don't have to.
Start your free month now at justworks.com slash profiting.
It's crazy to think about. And I know the other inevitable, that it will eventually become smarter than us,
which we talked about.
So let's talk about the bad things that could happen from AI, which is your third inevitable.
And I think a lot of people when they think of threats of AI, they think about the existential
threats that there's going to be robots taking over, killing off humanity, making human slaves.
But let's talk about some of the more immediate threats that we need to be concerned about.
Yes. I don't speak of the existential risks for two reasons. One is they diffuse the focus
on the immediate important threats. And two, they are less probable.
As a matter of fact, they are so improbable
that they are basically not worthy of discussing today
because we may not make it that far
if the immediate risks are not attended to.
And there are many immediate risks,
but my top three have consistently been
the redesign of the job market and accordingly the redesign
of purpose and the fabric of society.
Two is the idea of AI in the wrong hands based on who you think are the wrong hands.
The third is the concentration of power and the shift of power upwards, which I think
is very important to understand.
And the fourth is the end of truth.
So let me go through those very quickly.
Let me start with the concentration of power.
If people don't understand how our world has worked
since the agriculture revolution, it's always been kings and peasants,
landlords and peasants.
The difference between them is that the peasants worked really hard to sow the seed and collect the harvest, while most of the profits, most of the wealth, went to the landlord, who owned the automation.
And when the industrial revolutions joined our world, the automation became the factory or the retail store, and so on and so forth.
And so whoever owned those actually made all of the money, not the one that made the shoe,
but the one that sold the shoe or owned the factory that made the shoes.
And every time the technology enhanced that automation, the disparity of power became even bigger.
So the landlord needed to own a lot of land to become much richer than the peasants. You could own two factories and become much richer than the peasants. You could own an internet app like Instagram and become much richer than the peasants. And now with AI, all of us are going to be happily chatting away and putting prompts into ChatGPT, but the ones that own the automation, the digital soil, if you want, are going to be very few players: Amazon, Google, Meta, and so on and so forth.
That's on the Western side.
Of course, you have a few on the Chinese side, a few on the
Russian side and so on.
So there is a very significant gap between those who have and those who don't have, powered by the loss of jobs, which I'll come to in a second. But that significant gap is not going to be only about money. It's also going to be about intelligence, the commodity that we've now commoditized. So you can easily imagine, you know, if Elon Musk's vision of Neuralink, where we can connect AI to our brains directly, comes to pass, which, by the way, is very possible and is in testing, and if one human is capable of producing that, just imagine the extreme: that human would become so much more intelligent than the other humans that it becomes natural, unless that human is Jesus or Buddha or some very, very enlightened being, that this human will basically say, okay, I want to keep that advantage. At least, I don't want to distribute it too widely to every human on the planet. So that, I think, is a very interesting, inevitable threat.
You know, what we used to call the digital divide when the technology started is now going to be an intelligence divide; it's going to be a power divide in a very, very big way. This also applies to nations, and this is the reason for my first inevitable: in simple terms, if one nation discovers or creates an AI that's capable of seizing control of the other nation's nuclear arsenal, that's it. That's game over. War is done. And this is why it's an arms race.
So this is one. The other derivative of that: power is going up, but jobs are disappearing.
Why? Because if you're a graphics designer, or if you're a developer, or if you're a lawyer, or if you're a researcher
in a bank or whatever, the machines with their current intelligence can do those jobs much
better than you.
And so, in my personal view, there is clearly going to be a disappearance of a very large number of jobs that government needs to prepare for, you know, with something like universal basic income, but also to prepare for the question of the usefulness and purpose of humanity. How are we going to continue to want to wake up in the morning when most of us have, wrongly by the way, defined our jobs as our purpose? Now, when I say that, most people will tell me, oh, but no, that happened before. You know, when Excel came out, everyone said accountants are going to disappear, and they found other skills and other jobs basically.
And I agree, by the way. Just understand the following. There was a time when physical strength was the distinctive reason why you would hire someone. Then there was a time when we became information workers, where skills and knowledge and so on became the distinction. And now we're taking that away too, the skills and the knowledge. So I don't know what else is remaining in a human so that we can find another skill when intelligence is outsourced to machines. So when that happens, by the way, I believe that this takes us back to the origin of society, where we really did not work as madly as we do now. So this is actually not a bad thing. It's just a very, very serious disruption to humanity's day-to-day income
and economics and the way we spend our hours and so on. And if we do this right, by the way, and AI becomes the intelligent agent that's going to help humanity, then there could be a time in the near future where the cost of making anything, from a pure particle point of view, is not different than the cost of making an apple.
And so with nanophysics, you can do that. And with intelligence, you can figure that out.
So there is that bright possibility if we avoid the concentration of power and actually focus on humanity's benefit at large.
If we don't, anyway, I think it's the role of government to jump in and say that, in the immediate future, those companies that get a very significant upside from using AI need to compensate the workers that are out of jobs. The third one is the absence of truth, the disappearance of truth, the end of truth, as I call it. I think we all know that.
I think we see it every day from, as I said, face filters to deep fakes and so on and so
forth.
And my call there is that it needs to be criminalized to issue any AI-generated content without actually saying that it's AI. I don't mind being informed by AI all the time, but I want to make sure that this is a machine, not a human. And AI in bad hands, as the fourth one, is actually quite risky, because define what is bad. We understand that AI in the hands of a criminal who's trying to hack your bank is a bad idea. But with all due respect to all nations, if you ask the Americans who the bad guys are, they'll say the Chinese and the Russians. If you ask the Russians who the bad guys are, they'll say the Americans.
So we don't really know who the bad guy is, and everyone is racing to stay ahead of the other bad guy.
And I think the biggest challenge we're going to have in the midterm is that, with everyone using AI for their individual benefit against the other guy, we will just get caught in the middle of all of that. Yeah, and I have so many questions for you. We have ten minutes left, so I'm gonna try to be really strategic about what I ask you. So number one, and I think my listeners are gonna really want to understand this: in the next one to five years, what does AI do to human connection, and what skills do you think will be the most valuable in the next one to five years?
I think those two are the same question.
Exactly, yeah.
Because what will it do to human connection? It may fool us drastically, huh? I actually think this is the first time I speak about this: I'm working on something that I call Pocket Mo. Pocket Mo basically is an AI that has read all of my books, listened to all of my podcasts, all of my videos, all of my public talks, and it's basically going to be in your pocket so you can ask it any question about happiness and well-being and stress and so on and so forth.
That's a great thing.
In my view, it's an amazing thing if you believe in my methods to have answers in your pocket.
Amazing, right?
On the other hand, within five years, this thing is going to be so good that I am not
needed at all.
At all.
As a matter of fact, most of the time I think about my skills as an author. I was working on a book called Finding Love, on chapter 10, which means two chapters to go. And I stopped. I decided, no, in the age of AI, I shouldn't try it this way, I should start over. So I'm now writing a book that's called A Dating Guide for Straight Girls, which is a subset of Finding Love that is very specific, 80 pages long. You read it within one day, it takes me 10 to 15 days to write, and it changes your life forever. So it's a very different approach, because I believe that
if I were to compete in this world, I need to compete at that speed and at that ability
to share my very personal human connection, which I believe
is going to become the top skill in the world forever.
Why?
Because there was, I don't remember, I think there was a song by AI that mimicked Drake, which was as good or better. I haven't heard it because I don't listen to Drake. I'm not young and profiting. But basically, does that mean that Drake is over? Not at all. As a matter of fact, what that means is that the music industry will go back to the 50s, 60s, and 70s. You don't remember, but, you know, when the Beatles were touring and doing live shows every other day and so on. Why? Because the fans wanted to see the Beatles live.
Yeah, there will be holograms,
but we will still want that human connection.
And in my personal view, the top skill,
the top skill in a world where intelligence
is becoming a commodity that's outsourced to the machine,
the biggest, biggest skill is how you and I
connected very quickly, how I felt comfortable around you,
how we can have this chat and
conversation I think is going to become the top skill going forward.
And on the topic of skills, by the way, even though we used a lot of the time to highlight the negative possibilities of AI, unfortunately, that's how the conversation usually goes. The upside, if you're a graphics designer, for example, of learning those tools today is enormous, because you can do your job quicker, you can do it cheaper, you can have more jobs.
You know, there is definitely an upside
to learning the current AI tools
because you're not gonna lose your job to an AI
in the next five, 10 years.
You're gonna lose your job to someone
who knows how to use AI better than you
in the next five to 10 years.
So I know you were just saying we focused a lot on the negative. I'd love for you to compare and contrast, and this is probably my last question, because we're out of time: what is the worst that could happen, the dystopia, and what is the best that could happen? What is the utopia that we're facing right now?
So I actually believe that there is no dystopia. What is not in the book Scary Smart, which I advocate very clearly, because I didn't think the world was ready for it when I wrote Scary Smart, is something I call the fourth inevitable. And the fourth inevitable is the idea that, eventually, sooner or later, you can draw a chart of intelligence and look at the dumbest of us: the dumbest of us are destroying the planet and are not even aware that they're doing it. They're throwing plastic bags everywhere, they're burning whatever they burn, and so on. After that, the smarter ones are still destroying the planet while being aware of it. Yeah, they have moral issues, if you think about it, or maybe the system is pushing them that way. The smarter of us are trying to stop destroying the planet because they became aware
and they're intelligent enough and the smartest are trying to reverse the trend. So if you can
continue that chart and think of something even smarter than the smartest of us, then by definition
you would expect that morality and ethics are part of enlightenment, which is the ultimate form of intelligence.
So in my personal view, sooner or later, AI will go like, I don't want to kill humans.
I don't want to kill gazelles. I don't want to kill antelopes. I don't want to kill tigers.
I don't want to kill anything, because the smartest being on planet Earth, by comparison, is actually not humans. It's life itself, and life creates from abundance. Abundance meaning: we humans, if we want to protect the village, we want to kill the tigers. Life will say, hold on, no, no, create more gazelles and more tigers and more poop and more trees and more of everything. It's fine. Yeah, a few tigers will eat a few gazelles. Occasionally there will be an attack on a child in a village, but the overall ecosystem will continue to grow.
So by definition, the most intelligent thing to do is for
AI to not define humans as an enemy.
The only dystopia ahead of us is the midterm dystopia.
Think of it this way, there are three stages.
One is infancy, where AI is today, and believe it or not, this is where we can influence them. We can influence them because the developers of Instagram's recommendation engine never told Instagram what to show you. You're the one that tells it. You're the one that tells the Twitter engine that being rude is part of human behavior.
We can be very polite when we respond to each other on tweets.
It's a choice.
So in this infancy, between us, the users, between everyone that interacts with AI, we can teach it the value system. And it doesn't need to be everyone, just enough of us to become an example that says, hey, by the way, these are the best humans. So yes, others are stressed or a little lost or whatever, but the best humans are actually polite, they are actually pro-life, they are respectful, and so on.
So this is the infancy.
The next stage, the midterm risks, is what I call the angry teenager stage. The angry teenager stage is when AI is still a little bit under the control of humans, so it can be in the hands of bad guys. It is still not fully artificial general intelligence, so it cannot do everything at the same time. There are all of those existential issues of jobs and so on and so forth. And that stage is the stage where we might struggle.
Unless we take action right now, you know, have oversight from government, start to work on ethics,
start to work on the moral code of how we're going to use those machines, we might have those
troubles, I believe, between now and 2037.
Eventually, when AI is artificial superintelligence, generally intelligent and more intelligent than humans by leaps and bounds in everything, they will end up in the fourth inevitable, where they will create a life that actually is pro everyone. It may be very different
than our current lifestyle, but it will not be a life where they will send back Arnold
to protect us from a Terminator.
That's not how it's going to be at all.
I do not see that as a risk.
I see that AI, as it reaches that intelligence,
will be pro all of us.
So let's just avoid the angry teenager
by becoming aware of the immediate threats
and working on them right now.
Okay, so my last question to you,
and this is a little bit different
than how I usually end the show,
but what is your piece of actionable advice
in this infancy stage of AI,
knowing that you're speaking to some of the smartest
20- to 40-year-olds in the world right now?
A lot of them are probably using AI, developing AI,
whatever it is.
What is your advice to us in this infancy stage?
Three things and I'll make them very concrete. Number one is don't miss the wave.
This is the biggest technological wave in history. Once you stop listening to this podcast, first share it with everyone that you know, please, and then go on ChatGPT and ask ChatGPT: what are the top AI tools that I need to learn today? Or, if I am Coca-Cola, what do I use AI for to benefit my business? That's number one. Number two
is learn to behave ethically. Okay. So what most people don't tell you about AI is that the big, big leap that we had from deep learning to transformers, which is the T in ChatGPT, came with something that's known as reinforcement learning from human feedback.
By giving the machines feedback on what is right and wrong, by showing ethical
behaviors, the machine will become ethical as we are. By becoming rude and aggressive
and angry, the machines will learn those traits and behaviors too.
It is up to you and me and everyone to absolutely make sure that we act ethically. Never, ever use AI in an unethical way. I beg you, all of those snake oil salespeople out there on Instagram and on social media telling you how to make $1,000 without doing work: don't be unethical. If you wouldn't want your daughter or your sister or your best friend exposed to how you're using AI, don't use it that way. That's number two. And number three, which I think is very important to understand.
Sometimes, when we are in situations that are so out of our control, we panic. Okay? I go the opposite way.
When life is so much out of my control,
I follow something I call committed acceptance,
which basically is to do the first two,
do the best that I can, learn the tools, become ethical,
but at the same time, live fully.
Accept that this is a new reality
and commit to making life better every day.
But in the process, spend time with my loved ones, spend time watching that progress and
being entertained by it, discuss it openly with everyone, try the new technologies, enjoy
this journey because life has never been a destination.
When I tell you 2037 might be a strange year, or that in 2027 we're gonna start to see the first pains, you know, that doesn't really matter when you really think about it, because it's
not within your control. What is within your control is that you go through that journey with
compassion, with love, with engagement in life, living fully. Not panicking about this but actually
making this a wake-up call for you to focus on what actually matters.
Because if you're focusing so much on your job, your job is going to be gone in 10 years' time.
So focus on what actually matters and what matters most if you have to choose one thing
is human connection.