Moonshots with Peter Diamandis - AI Expert's Urgent Wake-Up Call : Unveiling the Silent Threat w/ Mo Gawdat | EP #51
Episode Date: June 22, 2023

In this episode, Peter and Mo discuss the imminent question we’ve all been asking: do we need to save humanity from AI? Are we in danger?

07:11 | A World Aware of AI Dangers
22:56 | The AI Debate: Real Danger?
1:24:33 | Governing Artificial Intelligence

Mo Gawdat is a renowned entrepreneur, author, and advocate for happiness and well-being. With a background in engineering and technology, Gawdat has dedicated his career to exploring the intersection of happiness and human potential. As the former Chief Business Officer at Google [X], he played a pivotal role in developing moonshot projects aimed at solving some of the world's biggest challenges. Gawdat's insightful and transformative book, "Solve for Happy," has inspired countless individuals to reframe their perspectives and find joy in life's most challenging moments. Read Mo’s best-selling books.

_____________

I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Experience the future of sleep with Eight Sleep. Visit https://www.eightsleep.com/moonshots/ to save $150 on the Pod Cover.

_____________

I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog

_____________

Connect With Peter: Twitter | Instagram | YouTube | Moonshots and Mindsets

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
It's game over for living the way we have lived in the 20th and the beginning of the 21st century.
The topic is heating up and we're running out of time, seriously running out of time.
Jobs are not the same. Truth is not the same. Power is not the same. Income is not the same.
Purpose is not the same. And then the AI arms race begins.
Exactly. And it's inevitable. These AIs are watching our behaviors, how we treat each other,
how we treat our machines, and it's emulating that. There's absolutely nothing inherently wrong with intelligence. The problem is capitalism. Isn't it ironic that the very
essence of what makes us human is what we need to save
humanity? This is humanity. It's not what you see on TV. It's not what you see on social media. And
I think if 1% of us just showed up, it would instill the doubt in the minds of the machine
so that they investigate the truth. And what is the truth? The truth is a species that is capable of love is divine.
Welcome to Moonshots and Mindsets.
We're about to dive into a conversation with Mo Gawdat,
an extraordinary individual of heart, mind, and soul.
For a decade, he was a senior executive at Google and then the chief business officer at Google X,
working with Astro Teller at the Moonshot factory. Mo is amazing and he's got two moonshots we're going to dive into. His first moonshot is
to help make a billion people happy. He wrote a book called Solve for Happy. The second one
is making the world aware of and getting people to get involved in the concerns around the dangers
around AI.
And he wrote a book called Scary Smart that brought me to this conversation with him.
I've known Mo for some time.
We're going to be talking about a range of things like what are the real concerns about AI?
How scared should you be?
How excited should you be?
Are we going to merge with AI?
Are we going to upload ourselves?
Is the danger that AI is going to destroy the planet, or is it humans using AI? We're going to
cover a whole range of subjects. Is artificial intelligence going to be able to create a
community and a conversation and a sense of connection with humans as good as we do? Mo thinks it will.
Not in 20 years, but sometime in the next five years.
The humanoid robots that are coming.
We're going to cover all of these subjects.
My goal here is make you aware of what you should be talking about at the dinner table,
in your boardroom, in the halls of Congress, and what you should be excited about as well.
This is one of my favorite podcasts I've done. Please stick with me, I hope, and I look forward
to your comments as you subscribe and give your comments on this. This probably will turn into a
regular conversation with Mo. He is one of the most brilliant thinkers on the subject of AI
out there. He's seen it firsthand. He's been part of
it. All right, let's dive in. Welcome, everybody. Welcome to Moonshots and Mindsets. I'm here with
an extraordinary man, both cognitively and in his heart, someone who I'm proud to call a friend.
And Mo, it's a pleasure to be with you. Always, Peter. It's always a pleasure to be with you. I
mean, the fact that we record it this time makes it quite a bit of an interesting one, but I think
all our conversations have been so fulfilling and so enriching. Thank you so much for having me.
It's roughly 7 a.m. here in Santa Monica. You're on the other side of the planet in Dubai,
and it's an amazing world we're living in that we can do that.
It truly is.
It really is.
And I think we take it for granted quite frequently.
The reality that you and I can connect literally with one text message and then be almost together.
It's not as amazing as being in the same place, but almost together within minutes on a video conference is just
almost science fiction when you really think about it. Just if you're a fan of Star Trek or,
you know, whichever early science fiction, this was positioned as science fiction.
And we're living in it. And the challenge is that I think you'll agree that the speed of change is
so fast that we forget the miracles we have every day.
We forget the crazy world we're living in, which we're talking to things and it's answering back
and you can know anything you want instantaneously. And health and education, we're transforming the
world. And it's enthralling. But for a lot of the world, it's scary as well. Let me just set this
conversation up. We're going to talk about moonshots here, and you have two extraordinary
moonshots. Let me mention them, but I'd like you to frame them for me, and then we'll talk about
each. I would say your earlier moonshot, the one in which I first met you, is your original book, Solve for Happy, and your moonshot of One Billion Happy.
Is that the right phrasing for it?
Yeah, so that's how it started.
So, you know, more or less, it's actually extended into the second one.
But when I lost my wonderful son, you remember the story.
In 2014, Ali was the one that taught me everything I knew about happiness.
And when he left, I attempted to start a mission that was called 10 Million Happy.
And 10 Million Happy was mainly for my son's essence to live on, if you want. Okay, I was trying to tell the world what this young, wise man has taught me. And in my mind, I know you'd
understand the math, I calculated very quickly that if I could get a bit of Ali's essence to 10
million people, then in 72 years, through six degrees of separation, a tiny bit of him will be
everywhere and part of everyone. That was my calculation, right?
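(An aside, not from the episode: Mo's reach estimate is essentially compounding spread, and his math can be sketched as a toy model. The sharing factor of 3 per hop and the roughly 7-billion population below are illustrative assumptions, not his figures.)

```python
# Toy sketch of the 10 Million Happy reach math (illustrative assumptions only):
# start from 10 million people, and suppose each hop of sharing multiplies the
# number of people reached by a factor r. Six-degrees-of-separation-style
# spread means only a handful of hops are needed to cover the planet.

def hops_to_reach(seed, population, r):
    """Count how many sharing hops until the reach covers the population."""
    reached, hops = seed, 0
    while reached < population:
        reached *= r  # each hop: everyone reached shares with r new people
        hops += 1
    return hops

# 10 million seeds, an assumed sharing factor of 3, ~7 billion people:
# 10e6 * 3**6 = 7.29 billion, so six hops suffice.
print(hops_to_reach(10_000_000, 7_000_000_000, 3))
```

Under these assumptions the answer is six hops, which is why, as Mo says, the math makes the goal look reasonable once the first 10 million are reached.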
The miracles of exponential growth.
There you go, right? And if you know the math, you actually think this is reasonable. If you
got to 10 million people, that would be right. And I was surprised by the reception of life, if you want. So within six weeks,
eight weeks to be very specific, it started on week six, but by week eight, the message had
reached 137 million people. And we don't measure people who got a video or just pressed a like.
That doesn't count.
But we measure people that take concrete action.
And it was very clear within eight weeks that we surpassed the 10 million happy.
And so we upgraded to 1 billion happy, which I think is a true moonshot when you really think about it.
It is.
And I want to come back to that and talk about it in detail because it's important.
People say, what's important in life? And everybody eventually resolves it to being happy, having your children be happy, having your family be happy.
The second moonshot, which is the more recent one and brings us to this podcast, I would frame it as educating the world about AI. Would you frame it as
educating the world about the dangers of AI? It's in essence, the emergence from your amazing book,
Scary Smart, which I've read twice. My family's read it. Oh my God, it's such an honor.
It's a beautiful book. And I want everybody hearing this to read it.
And by the way, the way you wrote it was extraordinary.
I've had the pleasure of writing a few books.
I know that you've loved writing.
I want to talk about that.
But you wrote it in such a consumable fashion.
But let's frame your moonshot here.
My moonshot is to tilt the singularity of AI in favor of having humans' best interest in mind. And to be able to do that, of course,
education is part of it. But more crucially, I would say that the real moonshot is to shift human behavior, to align more with human values, so that we become a data set from which AI learns to have our best interest in mind.
And I think most people who are techies or who are not fully informed of AI may not see the relationship, and I'd love for us to get deep into this.
But the idea is to shift
singularity in favor of humanity. You know, people have been hearing about large language models,
whether it's PaLM from Google, or whether it's GPT-3 or GPT-4 driving OpenAI's ChatGPT.
I think it's important for folks to realize that these AI models, these large
language models are effectively a reflection of humanity, right?
They have learned from everything we've put onto the web, from our Facebook posts, to
our tweets, to our corporate sites, to what we search for.
And so we've been unknowingly perhaps putting out all of this content and then setting
these AIs, these new life forms, to grow and learn from all of it, what would be 50 years of content
we've been putting out there. And we've been inadvertently teaching it and not realizing.
Can you expand on that thought, which you talk about beautifully
in the beginning of Scary Smart? Spot on. I mean, the reality of the matter is that
we, humans and human history and human literature and human behavior and all that we put out there
is much more influential on the decision an Instagram recommendation engine makes tonight about which video to show you,
along with your own behavior, than the developer who coded the
recommendation engine, right? So, you know, I remember the old days when we coded real,
you know, simple computers. I started with a Sinclair. And, you know, I started with a 6502
microprocessor. Oh, man. Yeah. It was such a joy. I mean, for those who have lived those years,
this was truly the definition of magic, right? Because you could build anything, you could just
build a world of fantasy, really, that is for us, for geeks, we can see it. You tell the computer to do something
and it does it, and then you tell it to do something more complex and it does it. But until
the turn of the century, deep learning specifically, computers were not intelligent.
As intelligent as they have appeared, they were glorified slaves.
They were repeating your intelligence and mine in a very efficient and very fast way at scale, right?
So if you wanted the computer to solve a problem
or say something to the user,
you had to code that thing
and then tell the computer to do it in certain circumstances.
When we shift, and each and every one of us dreamt for ages, I'm sure you did, because
my lifetime dream was to code intelligence, right?
If you can code anything, why else would you code anything but intelligence, right?
And we failed over and over and over.
We lied. We created simulations of intelligence. We tried to make computers seem like they're human, but they were not human. Then deep learning showed us that you can actually create intelligence
that's actually autonomous, that learns on its own, that is forming its own
understanding of the world, if you want. Now, when we do this, what we actually
code is not to tell the computer what to do, but to tell the computer how to
develop the intelligence needed to do it.
Okay. And, you know, in a very simplified way, the way we did that was we showed the
computer endless patterns, and because of its ability to create layers of neural
networks, it could see depths in that data that we couldn't see with our limited human
brain, and it started to become intelligent. Simply, I think the only word is really intelligent.
Like my son or my daughter became intelligent when you gave them a puzzle and they attempted
to put, you know, the cylinder through a, you know, star-shaped hole,
and then it failed, and then they tried the square, and it failed, and then finally the circle.
They developed that on their own. Nobody ever went to a child and said, hey, by the way, you know,
flip the piece on its side, look at the cross-section, the cross-section looks like a
circle, look for a matching, you know, pattern, and then put it through. That's how we coded the old computers.
New computers don't do that. We just give them the puzzle and say, keep trying until you figure it
out. Now, because of that, the determining factor in the actual type, intensity, and quality of intelligence that
comes out of a language model is the data that it's trained on, more than the
few thousand lines of code that inform its intelligence.
And I think people would be incredibly amazed at how few lines of code are driving ChatGPT or BARD.
Thousands, literally.
I mean, if you remember when we coded in COBOL or RPG, it's 80,000 lines of code to get anything done at all.
I think ChatGPT's core modules are like a couple of thousand, maybe 4,000.
I think 3,000 or 4,000, yes.
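(An illustrative aside, not code from the episode or from any actual model: the point that the learning algorithm can be tiny while the resulting behavior comes from the data can be sketched in a few lines. The single-neuron learner below is a deliberately minimal stand-in for what, in a large language model, is the same idea at vastly greater scale.)

```python
# A few lines of learning code; the "intelligence" comes from the data it sees.
def train(data, epochs=50, lr=0.1):
    """Train a one-neuron classifier: predict 1 when w[0]*x0 + w[1]*x1 + b > 0."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x0, x1), label in data:
            pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = label - pred      # nudge the weights toward the data
            w[0] += lr * err * x0
            w[1] += lr * err * x1
            b += lr * err
    return w, b

def predict(w, b, x0, x1):
    return 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0

# Same dozen lines of code; feed it OR-shaped data and it behaves like OR.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(or_data)
print([predict(w, b, x0, x1) for (x0, x1), _ in or_data])  # [0, 1, 1, 1]
```

Swap in AND-shaped examples and the identical code learns AND: the code didn't change, the data did, which is the point Mo is making about where a model's behavior really comes from.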
And it's amazing because it's extrapolating and interpolating
and it's reaching, if you would, conclusions.
But again, going back to the key point here,
it's doing all of this not in a vacuum.
It's doing it based upon everything we have fed it.
It's learning from us.
And as you point out in Scary Smart, we are its parents.
We are giving birth to a new form of intelligence, whatever you might call it.
And we'll get into whether this is sentient or whether this is conscious, but it is a form of intelligence.
And that intelligence is being grown in, if you would, the medium of human knowledge.
Yeah.
And we have seen quite a few experiments early on of how that intelligence would develop
to be positive or negative, aggressive or loving, based on the data that we give it.
In the early chatbots, if you remember Tay or Alice, which was Yandex's, or there was Norman,
I think that was done by MIT. And if you feed those chatbots negative, aggressive information,
they start to be sexist and racist. And you can see it. And we
shut them down because we don't know how they arrived at that. They arrived at that from the
data, not from the programming. Yeah, it's interesting. Now, you open the book, Scary
Smart, with a beautiful analogy that I've told to at least 100 people. And I've spoken about it on
my podcast. And I've spoken about it on stage at Abundance 360 because I think it's a great analogy.
And it puts the power of where AI goes directly into the hands of everybody listening.
And that is an empowerment move because we're going to talk about the fear side of this as well as the excitement side of this.
So your analogy of Superman, would you
please tell that story? Because I think that's fundamental to what we're going to discuss.
It truly is at the core of my understanding of what's happening here. And I think,
you know, it's important for people to understand that Superman has arrived to planet Earth,
right? If you remember the story of Superman, there is this alien infant that arrives with superpowers, right?
From the planet of Krypton.
From Krypton, right? And that young infant, luckily for humanity, is adopted by the family
Kent. And the family Kent is a family of values that basically teaches that little child to protect and serve, and we end up
with the Superman that we know, okay? If the family Kent had instead said, oh, superpowers,
let's rob all the banks and kill all our enemies, by definition, the immediate result of that is
you have a supervillain. And even though that infant has superpowers, it's always so influenced by its parents.
OK, and the parents would set the values that this infant uses the superpowers for.
You know, one of one of my favorite statements in Scary Smart is that we do not make decisions based on intelligence.
We make decisions informed by intelligence based on our values and ethics.
So, you know, if you take a young lady and raise her in the Middle East, for example,
where, you know, dress codes are more and more open now, by the way, I'm proud of that,
but still conservative, if you want,
she would grow up to believe that the intelligent thing to do is to not dress overtly, you know, maybe stay conservative in an interesting way.
If you raise the same young lady in Rio de Janeiro on the Copacabana beach, she will grow up to believe that the best thing to do is to wear a G-string on the beach.
Right. Now, interestingly, neither is right or wrong. Neither is more intelligent than the other.
The only thing is that each of them is applying,
it's the same young lady applying her intelligence to a value set that's informed by her surroundings.
Now, for the case of AI, this is exactly where we are.
Super intelligence, you know,
if and when we reach super intelligence is a superpower.
It is the ultimate superpower.
It's the superpower that gave humanity its dominance over the planet, you know, and the
ethics of the AI are the lens through which that superpower is going to be applied.
Okay.
How do we give that ethical code to the machines? By being the best parents we can be to them, by being the family Kent. And that code is going to be in the substrate. It's in the
food, the information that the AI is consuming based upon our behaviors, right? These AIs are
watching our behaviors, how we treat each other, how we treat our machines, how we interact, how
we speak to each other, how we effectively communicate as human to human
and it's emulating that, in all its ranges, and magnifying it. Yeah, I mean, the example I
normally give is, you know, when President Trump used to tweet. I'm not
for or against President Trump; I don't have the right to have any view on him. But when he used to tweet, you would get a tweet at the top from the president
and then 30,000 hate speech, okay?
Some of them are towards the president,
some of them are towards the person
that hates the president,
and some of them are towards the whole world, right?
And it's quite interesting because when you look at it,
you see it and with your intelligence,
you don't have to be a super intelligence or an AI, but with your intelligence, you would make conclusions.
You would say the first person does not like the president, the second person does not like the first person, and the third person does not like anyone.
And you somehow make those conclusions; with any kind of intelligence, you'll be making those conclusions.
But the bigger picture is that you and I cannot grasp the entire 30,000,
but a ChatGPT or a chatbot of any kind will,
and they will make an additional conclusion on top of that,
that humans are rude, they don't like to be disagreed with,
and when they're disagreed with, they bash everyone.
And if I want to emulate a human, I'm going to do the same back. Exactly. So when they disagree with me, I'm going to bash them. That is, you know, again, regardless of
sentience or consciousness or whatever, but that is the coded behavior that an artificial
intelligence will do if it's instructed to emulate humans and pass the
Turing test.
So I'm going to confess to you here, I have been
wildly swinging from one side to the other and trying to grasp my own feelings about AI. You
know, I'm a student of Ray Kurzweil, who's a common friend; we started Singularity University
together. You know, we overlapped during your 10 years at Google.
You know, I had Larry and Sergey and Eric on my boards at XPRIZE.
And I know that Larry and Sergey, when they started Google, they wanted to create Google as an AI company.
It was always, that was fundamental, right?
Even beyond that, how do we connect the human mind with AI?
How do we create this meta-intelligence, so to speak?
And forever, I have been of the belief that AI is the single most important tool
that's going to enable humanity to solve the world's biggest problems.
It's going to give us the ability to create fusion,
to cure diseases like cancer, to give humanity a multi-hundred-year lifespan,
all of these things, and it still may.
And hopefully in the right hands, it will.
But the cries and concerns of danger,
you know, Elon called it summoning the demon.
Geoffrey Hinton, who I'm sure you know well,
has been on the news and talk show circuit speaking about his concerns.
And you have been too.
And if I could, setting up this podcast, as we were
texting back and forth on WhatsApp, I was compelled by what you were texting with me.
First of all, you've been on a tear, traveling around the world: I'm in Dubai, I'm in London
tomorrow, I'm in Saudi the next day, I'm back in London.
And, you know, if I could, I want to reflect the energy so people are aware of it and then speak
about this if it's okay with you. You know, what you were texting with me is saying, you know,
we are seriously running out of time. You know, the topic is heating up
and we're running out of time, seriously running out of time. And I feel that, I feel that, and I
feel that coming from a place of caring and love, of wanting what's best. Let's dive into that. I
want you to explain what that means. And I like to piece that apart so people understand what they should or
should not be fearful of, what they can and cannot do, what the timeframes are here as you see them.
So first of all, I wouldn't blame you or anyone for being torn about this topic. Why? Because
it's a singularity. We actually have no way of predicting a future that has elements.
Let's define a singularity here, because you and Ray may use it differently.
I love Ray's definition. My view of it simply is that there will be a point in the development of AI where the rules of the game will change so drastically that it becomes almost impossible to predict how the game will play out.
My view of that is a tiny bit more than Ray's, which is the presence of super intelligence that,
or artificial general intelligence that beats the intelligence of humanity,
but at the same time for that intelligence to have enough autonomy to be
able to affect humanity. Okay? So, to me, those two factors in play would lead to a point of
singularity because of what Marvin Minsky said when interviewed by Ray, actually, which was one of my
favorite conversations on YouTube. You know, Marvin Minsky, when asked about the threats of artificial
intelligence, he said, he didn't talk about their intelligence or their superpowers or whatever.
He just said, it's hard. And Marvin Minsky, a professor at MIT, heading the AI labs there,
one of the true fathers of the entire field. True fathers of AI, for sure. And we all refer to,
I mean, we've all been motivated by the early Dartmouth, you know,
workshop and how that set us on the track to AI. Marvin said, because there is no way we can
make certain that the machines will have our best interest in mind. Okay. Which is a very
interesting statement. If those machines have our best interest in mind, this will lead us to what
I call the
third inevitable, which is, sorry, the fourth inevitable, which is we will end up in a utopia
that is amazing for humanity, right? And if they don't have our best interest in mind,
you know, it will lead us to the third inevitable, which is a dystopia that would be very,
very difficult to navigate. Now, my view very clearly is it is inevitable that we will have both chronologically.
Yes, in time.
Yeah.
So it's a question.
The challenge here is this, and I think this is where most of the conversations around AI go astray,
is that we try to prove if there is an existential threat of AI or not.
The thing is, if a horse race starts and you're trying to bet,
the closer you get to the end of the race, the more accurate your bets will be.
Now, for the existential threat to exist, we all know there is an existential threat.
But at the current moment,
we don't know the probability. Is it 10, 20, 50 percent? We don't know, right? And it takes us
time to get along that racetrack so that we say, oh, it's becoming more and more evident that there
is a threat or there isn't. My point of view, Peter, and I think this needs to be screamed loud everywhere, is that
there are more immediate threats that are not RoboCop or Skynet-like that are absolutely
inevitable.
And those are mostly not related to the level of intelligence of AI; they are related to the level of greed of humanity.
And what we are going through today is an arms race, okay?
With people like Sundar, who I love so much, who I respect so much, who I believe-
The CEO of Alphabet, yes.
Of Alphabet, who I believe genuinely is a good man, okay? When the famous open letter came out,
the letter asking us to pause the development of AI,
his answer, in essence, was: I can't.
I can't. Why?
Because of the first inevitable, again in Scary Smart,
which is that we have created a prisoner's
dilemma where nobody who is capable of developing AI is capable of stopping the development. Why?
Because someone else will beat them to it, right? Let me interject here the inevitables that you
speak about in Scary Smart. The first inevitable is AI is happening, and it is happening,
and it's accelerating. No stopping it. The second is that AI will become much smarter than us.
Yeah. Again, inevitable. It's happening.
Almost there. I mean, it's already smarter than most of us.
Your third inevitable is that bad things will happen. And we can talk about from what camp, right? There are a lot of different camps. Is it humans using AI for bad, or is it AI using its own power for bad? We'll talk about that. One is probable,
one is probably improbable. We'll speak to that. And the fourth inevitable, which you mentioned here
is, and Elon, I've had these conversations with him, says the same. We'll create a world of
abundance. It'll be based on AGI. It'll be after all these things get sorted out. And so this
timeframe is important to understand, but please continue if
you would. So let's maybe jump into the third inevitable and bad things will happen, just so
that we put this in perspective, because we're so close to those that our
ability to assess the probability of their occurrence is very high. I think there will be a disaster to jobs, okay?
And the meaning of jobs and the compensation associated with a job and the purpose that comes
from having a job, okay? There will be a disaster to the fabric of society as we know it, okay?
To our ability to distinguish, you know, to include another form of being that is
sentient, or at least simulating sentience, in a way that will require us to rethink a lot of
things. So the ethics of not, you know, the global human rights, but global being rights, if you think about it, okay? And there will be a very
serious disruption to truth and consequently to democracy, okay? And then eventually there is
going to be, you know, within two to three years, I would think, a very, very significant concentration of power.
This is a society.
Forget the dystopian scenarios of RoboCop trying to kill all of us.
This is definitely a dystopia because our way of life as we know it has ended.
This is not going to end.
It's already starting to end. And I will say it's game over for living the way we have lived in the 20th and the beginning of the 21st century.
It's over. When you wake up in the morning in a society where jobs are not the same,
truth is not the same, power is not the same, income is not the same,
purpose is not the same. These have nothing to do with AI, by the way. This is all human decisions
in the presence of AI. And they are decisions that require immediate intervention. And the story
of COVID is just a demo. Because if you had reacted to COVID before COVID showed up,
we wouldn't have had COVID at all.
If you had reacted after patient 10,
we wouldn't have had COVID at all.
But you had to wait.
And then you had to do the political game of blame.
And then you had to do the extreme knee-jerk reaction
that completely messed up economies and
well-being and mental health and so many, many problems that we will take years to fix.
Just because we're debating, we were debating if there was going to be a pandemic or not.
If you're an expert in, you know, pandemics, it didn't take
intelligence at all to know that it was going to happen.
Interestingly, by the way, it happened in 2020, exactly 100 years after the Spanish flu, right?
Amazing.
1920.
Mo, your arguments here are compelling.
I want to frame them slightly to help us dissect them.
So today we have AI that is compelling. It is extremely useful.
I think most people would argue if we froze AI where it is today, it'd be a great thing for
humanity. It would be great tools for artists, for writers, for physicians, for lawyers, for
every part of humanity. But the progress, we will have GPT-5 and 6, and we'll
have PaLM 2, 3, and 4. And they will get to a point at which it is so powerful. And so there's this
phase one is, it's subhuman, if you would, but very powerful. It's narrow AI in very useful areas.
We're about to transition, I would say, in the conversations,
and I've had these conversations with a multitude of AI leaders, we're about to get to a point where
it is about to transition to a superhuman state. And then there's a third phase, I would say,
where it's, you know, billions of fold, it continues exponentially. You know, double
something 10 times, it's a thousand; double it 20 times, it's a million; double it 30 times, it's a billion. And we have a new form of
super sentience out there. So define these three phases. In the third phase where it's superhuman,
do you believe, I'll say this is how I believe it, that the more intelligent a life form is, the more respectful it is of life and of creating a beautiful world and not harming. So I do not fear a super sentient
billion fold increased AI. I think it will be the most important aspect of where it goes.
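As an aside for readers, the doubling arithmetic Peter cites here is easy to check. This is just a toy illustration of the numbers mentioned in the conversation, nothing more:

```python
# Each doubling multiplies capability by 2, so n doublings give a factor of 2**n.
def growth_factor(doublings: int) -> int:
    return 2 ** doublings

print(growth_factor(10))  # 1024 -- roughly a thousand
print(growth_factor(20))  # 1048576 -- roughly a million
print(growth_factor(30))  # 1073741824 -- roughly a billion
```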
It's the transitory phase, and I would say the phase in which humans are using AI in a dystopian fashion, malevolent use of it. And is that your major
concern? Spot on. This is spot on. I mean, the reality, and I say that with conviction, I pray
for a super intelligence to take charge because the people that are
currently in charge are really not super intelligent. Let's just put it this way.
You're not worried about artificial intelligence. You're worried about human stupidity.
Yeah. Limited intelligence. Let's put it this way. I mean, when you really think about it,
the reason you and I are having this wonderful conversation over thousands of miles of separation is because of human intelligence, right?
It is human intelligence, or intelligence, it happens to be human, that allowed us to build this kind of civilization.
It's what allows you to create a machine that can take you from California to, you know, to Australia to surf
in the Australian, on the Australian shores, that's intelligence, right? It's limited intelligence
that this machine burns the planet in the process, okay? And more intelligence is good for all of us.
We know that for a fact, right? We also know for a fact, again, it's a singularity. So anyone who
tells you they know what's going to happen is lying, including me, okay? But you can look at charts and extrapolate
them. So you can say, look, stupid people, you know, hurt the planet and they don't care.
More intelligent people hurt the planet and they care a little. More intelligent people don't hurt
the planet and they care. More intelligent people try to preserve the planet, right? It's actually interesting.
Continue that trajectory of intelligence and you will see that the more intelligent something is,
the more it believes in the ecosystem as a base for the success of all life forms, right? And so
accordingly, I wouldn't think that, you know, artificial
intelligence, artificial super intelligence, a billion times smarter than us, would go like,
oh my god, this is so annoying, those humans, let's destroy all of them. Okay. More interestingly,
by the way, when we kill ants, or when we kill other species, it's either because of our limited intelligence or because of their irrelevance to the particular situation, as stupid as that may be.
But nobody has ever woken up and said, I am so freaking intelligent, I'm going to kill every ant on the planet.
Nobody takes that seriously because they're really irrelevant to your level of intelligence,
if you think about it. Right. And so it's hard to imagine that AI will wake up and say,
look, I'm a billion times smarter than Peter. But you know what? I just dislike those Peters so much.
Let's, you know, put a plan together and get rid of all of them. In most people's minds,
at least people who speak about those existential crises,
our fear is bigger than our logic. Okay. There could be situations, you know, Hugo de Garis was
talking about that once, you know, where AI realizes that we're standing in the way of their
progress. And, you know, they would either pinpoint us as the enemy because we're consuming too much power, for example, that they need.
Or, you know, they may just evict us out of New York City because they need that land for some reason.
Or they may just step on top of our nests, you know, unconsciously, basically.
And for those who might think that, I would point back to the idea that we're living in a universe of
massively abundant resources. All the energy in the world is available and all the resources and
the science fiction dystopian movies where aliens are coming to get our water or get our energy
are all unfortunately Hollywood ridiculous scenarios. They're very ridiculous, yeah.
Let's talk about the real scenarios, and you mentioned them. Let's talk about the idea of
jobs. Let's talk about the idea of purpose. And I want to dive into one example in prepping for
this and listening to a number of your incredible podcasts. You love writing books and you describe
in one conversation, you know, writing six books and writing books
for yourself.
And then all of a sudden, here comes ChatGPT, where with a single prompt, you can say, write
a book in the style of Mo Gawdat on this subject, and have the AI write it.
Now, all of a sudden, the end goal of having accomplished a written book is there. But the
journey is not. And I hear in another podcast where, you know, the joy has been taken out of
writing a book. Can you, is that true for you? I haven't written a single line for the last three
months. And I say that with an aching heart, because it's a big, big, big joy for me.
I mean, I write around four times as much as I publish.
I have full books that I will never put out there.
I write for the joy of writing and for the joy of discovery.
It's almost like my journaling activity.
Now, the challenge is this.
The challenge, Peter, is that it's not only disruptive to my ability to sell books, because
of the disruption of supply and demand, because I never really cared about selling books, I cared
about spreading ideas, right? Of course, but understand for the typical author, okay, who was
not so blessed, I was so blessed in life to have the joy of working with Larry and Sergey and
be at Google in an early time and, you know, get money that I honestly don't deserve, right?
And my lifestyle doesn't require any money at all. So, you know, I'm okay, right? But the typical
author who will write because they're trying to make a living out of writing, is now faced with an economic model
where there is so much abundance in supply
because writing a book now requires one prompt,
or a few prompts if you're clever,
that even if they write the best book out there,
they are going to be too diluted to reach any demand at all.
So this is very disruptive. At the same time,
I have to admit to you being a bit futuristic in my view of this, I said to myself, okay, so
how far can I go before my writing sucks compared to GPT? And my thinking is we're one version away. Okay. We truly are. I mean,
so what do I have as a skill? And I think this is really important for everyone listening
about jobs. What do I have as a skill that GPT doesn't have yet? It's a skill called human connection. Okay?
It's a skill that makes me, when I meet Peter, feel that Peter is a very dear friend.
It's the reason why you hug your daughter.
It's the reason why.
This might be 10, 15 years away. There will be a point very near in the future where AI as a cognitive ability will, you know, I think we've already passed the Turing test,
or very close to. We keep on moving the Turing line, but yes, as it's originally defined,
we've passed it, yes. Yeah, as it was originally defined, we passed it for sure. But I think the
reality is there will be a time where you're not going to be able to detect if the person talking to you is an AI or not.
You definitely today are not able to detect if you go look for the hashtag AI art or AI models.
It's quite eye-opening how realistic modeling jobs are now done by AI. Now, with that in mind,
that human connection still remains, interestingly, because of our common biology
and because robotics haven't caught up yet. It's not because AI hasn't caught up yet,
but it's because robotics haven't. Two parts of this. The first is purpose, right? We humans need purpose in our lives. A
purposeless life is not worth living, to paraphrase, you know, the Greek philosophers. But if you're an
artist and you love creating art and all of a sudden AI is either doing a much better job or
taking the joy out of it, or if you're a writer like you
just described, or you're a physician or a lawyer or whatever the case might be, I would say there's
a phase about to come online, which is the co-pilot phase, right? Where every profession
has an AI co-pilot, and it becomes malpractice not to diagnose a patient without AI in the loop. And then there's a phase where AI is just so much better that you throw up your hands and say, why should I bother going to medical
school when an AI can do it? And the concern there is sucking the purpose out of life. Yes?
Yeah, it depends on how you define purpose. So there are very, very extreme, you know, definitions of
purpose in the East and the West, okay? In the West, you know, so my background is I was born
and raised in the East, in Egypt, you know, exposed to lots of Eastern cultures as a young person,
you know, lots of Eastern religions, lots of Eastern traditions. And then as soon as I finished
university, I worked at IBM, Microsoft, Google, and so on, have been studying my MBA and so on.
So I've been very westernized since I graduated uni. In the West, we define our purpose as a
point in the future that is worthy of our effort in the present. And we chase that point.
Okay. You know, remember One Laptop per Child was one of my favorite examples, when we
were trying to achieve that. Yeah. You know, that purpose in the future,
makes you sort of disgruntled with the present all through, until the point you get
there, okay? And then, if you actually get there, what happens with our Western purpose is that
the goalposts move. So you set another purpose and get disgruntled with the present until
you get to the next point, and the next point, and the next point, okay? The Eastern
definition of purpose is actually quite different. The Eastern
definition of purpose is if you assume timelessness, that everything is here and now,
that the only experience you will ever have is here and now, then the, okay, what I need is a directional ambition, okay, and full
engagement in the present, meaning that my purpose is a daily, almost, you know,
every minute of the day, my purpose is to show up and experience life and live and engage and do the
best that I can. Okay. Because the West, since the Industrial Revolution, has sold work to us
more and more and more as purpose, we ended up in a place where if you take my work
away, I die. Okay. Yes. But perhaps that's not the nature of humanity. Perhaps the nature
of humanity... Thank you for saying this. I think it's very important to differentiate between
the work I do and my purpose in life. And just for those who know me, I'm the eternal
optimist. I would say techno-optimist. But the flip side of having an AI that can do what you once did is standing on its shoulders, and dreaming of things that never were possible, and going and doing those things in the world.
Yeah.
Right.
And I think about the world we're living in today.
If I went back to my great, great grandparents and said, oh, I don't grow the food.
I don't move the food. I don't move the food,
I don't do any of these things.
What I do now is discuss and write.
And they have no conception of what that life is.
And I think there will be a world of extraordinary dreams.
Maybe it's play.
Maybe we're playing in an infinite number of virtual worlds.
We do need challenge, though, right?
I think humans do need some level of challenge in their lives.
And so will it create a challenge?
I don't know.
I think, again, West and East.
So we need challenge in the West
because we live a life of privilege,
because there are no real life challenges presented to us.
Okay. Life itself as a journey is challenging. You know, in a very interesting way, life itself,
if you define your life purposes as becoming the best version of yourself,
okay, to take that simple definition, that is a mega challenge, and it's a mega challenge that
we run through, you know, dedicating a few hours a week to it, in the middle of all of the other
things that we do, okay? Because we're driven by all of the other purposes that we were told are
our purpose. But if you dedicate yourself to it,
you know, I'm not, I don't want to make this a spiritual conversation, but like if you take the story of the Buddha, for example, or any person
that was trying to become the best version of himself, you know, Rumi,
as a Sufi scholar, or whatever. Yeah. These are massively challenging lives of, you know, being torn and
debating and trying to understand and trying to discover. And wouldn't it be amazing if I had an
AI to ask a few questions to while I'm on that journey? So what I'm trying to say is, once again,
there will be a moment of disruption that is imminent, Peter. It's like, it's literally around
the corner, where so many jobs will be
lost, and accordingly so much purpose will be, you know, left wandering. But eventually,
if you redefine purpose differently and say, humanity is about human connection and about
finding the best version of ourselves and about pondering things, about learning, about,
you know, and so on, then maybe, maybe this is a wonderful place to play a different
video game that's called being human. And it's a beautiful, it's a beautiful conversation. That is
the upside of what we're about to face. I believe so. Yeah. You know, human connection, you mentioned
it and the disruptions and the need. Will we see in your mind, AIs developing a level of connection that rivals human connection?
And beats it. Yeah, absolutely. 100%.
An AI that knows you. So, okay. So what timeframe is that? 10 years?
In the virtual world, I would probably say less. I'd probably say five. In the robotic
world, slightly more, maybe 10, 12. Yeah. So for those who have seen the movie
Her, it's a perfect example, right? It's one of my favorite AI movies. It's non-dystopian. When the AI reaches super sentience, it simply leaves, and leaves some sub-human AIs around.
And we are seeing from Optimus to Figure to a slew of other robotic, humanoid robotic companies coming online.
I've invested in some and we'll bring a few of them to the stage of Abundance 360.
Hopefully with you next
year as well, next March. Yeah. So, virtual, and of course we just saw Vision Pro
from Apple give us a new set of tools. That's the one, yeah, the five-to-15-years conversation. And that's going to be fascinating. But I want to dial down
to the next two years, we're about to have elections here in the United States in just
under two years. And, you know, you've been saying, I've been saying those are going to get
very interesting, very fast. Is that a tipping point for you of a dystopian nature?
I think that's the beginning of the dystopia for sure.
And it's the beginning of the dystopia not because the technology does not exist and will take two years to exist.
It's because the human greed and hunger for power will deploy that technology in ways that will really affect the masses in ways that can't even be predicted.
Think of it this way, Peter.
If I told you, it's true, by the way, that there was a recent Stanford University study
that showed that brunettes tend to actually keep the relationship, you know,
whatever, romantic relationships, longer, you know, how does that
affect your thinking? Okay. Time to, you know, what, search for brunettes over blondes? I don't
know what to say there. Correct. Right. So, by the way, it's not true at all. I mean,
I just made that up. Okay. But the truth is, whether that was
true, okay, or not, is irrelevant. It's irrelevant, because I've planted something
in your mind that you either need to debug, okay, or if you tend to believe it would affect your behavior
and either way it consumed part of your cognitive bandwidth.
That's a major issue.
This is where our world-
A viral idea.
Yes, the power of an idea.
And this is exactly where we are today.
Whether you believe AI is sentient, it's capable, it's super intelligent, it's not.
It's becoming extremely difficult
to find out what the truth is.
Okay?
And I think the application of this
in the coming couple of years, you know,
and the election is going to really reshape
the fabric of society's connection to the truth.
Mo, I don't think most people realize how much information the large tech companies or any group that desires has on us.
The ability to know what we believe and the ability to manipulate individuals by feeding them 95% the things that they believe
and 95% the truth and then injecting 5% to sway them in a certain direction.
You know, it used to be, I mean, our minds evolved on the savannas of Africa 100,000 years ago
for conversation and story and to believe what we were hearing
because it was the truth within a small group of dozens of individuals.
And now we don't know how to parse the truth and the falsehood.
And when I teach this at Abundance360 and Singularity,
I say the world's biggest problems are the world's
biggest business opportunities. And there will be, I mean, we have these cognitive biases that
the brain developed, right? We tend to give much more credence to negative information over positive.
We give credence to most recent information. We tend to believe those who dress like us, like, you know, our black t-shirts here, compared to people who don't. And in that regard, these shortcuts,
we believe them because they're energy savers in our mind. And we don't know how to filter against
them. And one of the things I hope AI will enable us to do, if you want to turn it on,
is these cognitive bias alerts. Like Peter, you're believing this, but the facts don't show that to
be true. That is your cognitive bias. I, for one, would love that kind of technology to come online
to, you know, call it a bullshit alert. I used to call it Pinocchio, something that shows me a longer nose when someone's bullshitting
me basically.
Yeah.
But that is positive.
That is possible.
And I think that is...
The positives are endless.
The positive possibilities are endless.
There's absolutely nothing inherently wrong with intelligence.
The problem is capitalism, right?
Is it capitalism or ego?
So a bit of each.
So let's talk about how each plays.
So the reason why news media will always broadcast the negative is simply because the negative makes them more money.
Listen, I teach this.
I know this.
I call CNN the crisis news network
or the constantly negative news network.
Yeah.
Right?
10 to 1 negative.
And I hate it.
I do not watch the news.
They could not pay me enough money to watch the news
to infect my mind.
I think about them infecting my mind
with viruses of dystopian information.
I don't want to spend time thinking about that.
Yeah, and it's quite interesting because it's not just dystopian information.
It's the same dystopian information every single day.
It's a pattern.
Over and over and over again.
Over and over.
It's like they just change the names.
Someone killed someone.
Some war is happening somewhere.
Some politician is crooked again.
Some politician has done something disgraceful.
Some economic crisis is going to take away your livelihood.
Whatever.
And then eventually they say, and a penguin kissed a cat.
So that you can get up out of your bed and just do something today? And the reason for that, would you blame them?
No, they're just playing on the human bias
to detect the negative or to be attracted to the negative.
Okay?
It's a survival.
Their business model is to take our eyeballs
to their advertisers.
Correct.
And whatever keeps us glued.
Yeah, and our nature is saying,
yeah, give us people who killed each other,
and then we will look. If you give us people who kissed each other, we'll switch you off, right?
So this goes back to our same starting conversation.
If we want to build ethical AI for the good of humanity,
we need to be the ones that say, we are more interested in a fake
detector than a deep fake generator. Okay? If we can manage to convince the AI companies that this
is better for us, that we will pay more money for it, we will spend more time on it. We will use it more. We will promote it more. They will build it.
Okay.
The reason why, you know, Apple builds Vision Pro and doesn't build a, you know, a cure to cancer is because there is more money in Vision Pro than there is in a cure to cancer.
So that is capitalism.
And listen, I would call myself a libertarian capitalist.
I love building companies. I'm on my 27th company and it's an art form and I enjoy it because
I also think it's the most efficient way to scale goodness in the world, right? Google
is probably one of the most positive impacts on the planet in terms of giving information globally.
And and it's done so because it has a business model that works.
Now, there are negative consequences to that, to the business models as well.
But, you know, I don't think we would have anything that we have right now had capitalism not reigned.
But the human ego of wanting more and dominance is one of the culprits in this, would you say?
Yeah, it's an interesting conversation.
It's an eye-opener when you really think about it.
Because I tend to believe that capitalism is a tool,
okay, that is a very, very efficient and successful tool to deliver the objective and the vision
of the founder, if you want, or the person that uses the tool. It's not a target in itself.
When capitalism becomes a target in itself, the target becomes more money.
More money does not always align with better improvements or advancements for humanity.
If you think about the early Google, and I know we both know the founders, and you know, I've worked with
Sergey very closely, worked with Larry quite often, wonderful human beings, who I would even
say were detached enough from the reality of capitalism and business that they truly and
honestly believed in "organize the world's information and make it universally accessible and useful." And that is why Google improved access to information and the democracy of information in the world.
Had they gone out and said, we are out there to create a billion dollars each or 50 billion dollars each,
the results might have been different.
So nothing
wrong, inherently wrong with capitalism. There is something interestingly wrong with our obsession
with defining capitalism as money. So my target, my mission is one billion happy. So what does
that mean? It means I want to finish my life as a billionaire. Okay. But instead of a billion
dollars, I want a billion happy people.
And I use very capitalist models to do this.
I use marketing, I use product design,
I use, you know, a measurement.
I'm very, very, I run it like a Google, really, okay?
But the objective is a billion happy.
So if we convince the world that the objective of AI
is create abundance so that
we all have more money, okay, we all have more abundance in every possible way, we would end up
in a very good place. But realistically, that's a very naive target, okay, because of ego, like you
rightly said. Because the ego says, what good is it for me to have a Rolls Royce if everyone else has a Rolls Royce?
Okay.
You know, it is measuring yourself
against your neighbor.
Unfortunately, right now, our neighbor might be the billionaire we read about.
You know, Dunbar's number, the 150 people that you know,
are no longer the people who actually live with you.
It's the people you watch on TV or you see on social media.
I want to move the conversation in some more interesting areas here.
The future of humanity.
My friend, I'm curious about this.
I would put forward three possible scenarios and I'd like your opinion on them.
First is the human species is simply a transient life form.
We are on this planet to give birth to the next sentient life form that will dominate.
Just like we as Homo sapiens are a result of a multitude of extinct life forms that preceded us
and led up to us. And evolution doesn't stop. It continues, and we are giving birth to,
whatever we are, our children, our children's children here, of AI. That's
one scenario. Another scenario is that we are on the verge of merging with
technology. This is what Neuralink and Paradromics and a number of brain-computer
interface companies are doing. You know, Ray talks about having high-bandwidth brain-computer interface by the early 2030s
using nanobots and being able to connect our neocortex, our 100 billion neurons, to the
cloud, giving us the ability to understand quantum physics or Google and know whatever
we want.
And the third scenario is these meat bodies are transient, and we're about to
upload ourselves into the cloud. I'm curious about your thoughts on these three. And where do you see
yourself going? Once again, a singularity. Are they a possibility? Yes. Are they certain to happen? No. I think the real question, if you don't mind me saying, is when you say we, who do we mean?
Do we mean Santa Monica?
Do we mean California?
Or do we mean every human on the planet?
Okay.
The truth is, even if we manage to get Neuralink, which we will, to work appropriately, then who is we? Is the guy in
Africa who doesn't have the money to buy it capable of doing that? Forget Neuralink. If
Vision Pro becomes thin enough for you to slowly and gradually, you know, dim the real world and
live in the virtual world, you know, who will buy it at $3,400? Who is we? And I think the real
challenge that we have in our world of tech, you and I lived this deeply, is it's...
And we still do.
Yeah, it's very Californian. Okay. And California is not the rest of the world.
Okay?
But Mo, we're living on a planet today
that's got more handsets than humans.
Correct.
If you go to the favelas of Rio de Janeiro
or to the, you know, throughout Africa,
everybody's got, if not a smartphone,
a feature phone and soon a smartphone.
And there'll be a point at which Amazon gives away phones for free because they're so cheap
if you buy stuff from Amazon. Correct. And so I do believe that. It's been possible for a very
long time, right? But the real democracy is that the use of that phone would enable each and every one to have a better life.
Okay. And if you really want to, I mean, again, I don't want to paint utopian
scenarios, but you know, that use of the phone for some of us is very, very, very advantageous,
for others is very numbing. Okay. And in a very interesting way, if the use of
Neuralink becomes a purpose of numbing, while it is for some of us going to be augmenting
our intelligence to the point of super intelligence, then that's a very interesting
sort of almost Matrix-like scenario, where we numb a few and
elevate the others, the concentration of power I was talking about. But let's go
quickly and just go through those.
Is humanity going to go extinct?
It's a possibility.
I mean, how big is that possibility as we look at it today?
Very small, maybe 5%.
You don't think it's an inevitability?
I mean, when everything changes, right?
Keeping things constant is not the norm.
Change is constantly the norm.
Yes, but for something to go extinct,
you have to assume that the superior being is actively pursuing it, or there is a major natural disaster.
So, you know, you have to imagine that the chimps are still here. We've surpassed their
intelligence, but we didn't go out on a hunt to try and kill all of them. And that's my perception.
Yeah, my perception is there were no...
Dominant species, then.
Yeah. Not extinction, dominant species.
Exactly.
So for sure, 100%. It's over.
I mean, there are assumptions already that GPT-4 is at an IQ of 155.
Einstein is 160.
Can you imagine that, right?
I mean, Einstein is like my freaking idol.
At 160, and GPT-4 at 155.
Maybe it's 120. Who cares, right? But if you continue on that...
Yeah, exactly, it's where the ball is going to be.
And you and I and people who have lived on the inside of this know that it's done.
This is game over, okay? The intelligence of the machines, because of the way technology works, because of
bandwidth, because of storage capacity, because of, you know, communication bandwidths, it's just, it's done.
It's done. Yeah. Moore's law, or as Ray calls it, the law of accelerating returns, is not slowing down.
It's accelerating.
Exponential. Yeah, it's double exponential, because of the biggest mistake we've ever made, where now AI can
develop AI.
So intelligence will develop.
By the way, let's double click on that.
Super important, right?
The ability of AI to now develop its own software is in fact the double exponential.
It is, the exponent just went very high, to use a mathematical
term. Out of control. This was the point where I decided to... I mean, I made my first video on AI,
the 1 billion happy video was 2018, March, 2018. I was warning about what we have today.
You know, my book was written in 2020, released in 2021.
And I've been quietly trying to say, guys, please pay attention, please pay attention.
Now I'm very vocal about it. Because of that point, we've made three mistakes. And I think
everyone needs to be aware of those. We've allowed them to write code. We've put them on the open internet.
We've allowed AIs to write code.
Yeah, we've allowed AI to write code. We put it on the open internet.
So no control code in there.
And we've allowed agents to prompt them. So AI is no longer just affected by us humans.
There are other AIs playing with AIs.
And that's double exponential for sure.
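To make the "double exponential" phrase concrete for readers, here is a toy model, with rates invented purely for illustration: in plain exponential growth, capability doubles each step; in the crude double-exponential version, the growth rate itself also compounds, standing in for AI improving AI. Nothing here comes from any real system.

```python
# Toy model, not a forecast: plain exponential growth vs. a crude
# "double exponential" where the growth rate itself also improves.
def exponential(steps: int, rate: float = 2.0) -> float:
    x = 1.0
    for _ in range(steps):
        x *= rate
    return x

def double_exponential(steps: int, rate: float = 2.0, accel: float = 1.1) -> float:
    x, r = 1.0, rate
    for _ in range(steps):
        x *= r       # capability grows by the current rate
        r *= accel   # the rate itself improves (AI improving AI)
    return x

print(exponential(10))         # 1024.0
print(double_exponential(10))  # much larger, and the gap widens with each step
```

Even with a modest 10% compounding of the rate, the double-exponential curve pulls away quickly, which is the point both speakers are making.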
And very, very uncertain. We don't know
where that will lead us. Yeah, I think the third point you made is AIs being able to call upon
other AIs to do things and to task them in a way that is self-referential all the way down
is extraordinary. It is. Hey everybody, this is Peter. A quick break from the episode.
I'm a firm believer that science and technology
and how entrepreneurs can change the world
is the only real news out there worth consuming.
I don't watch the crisis news network,
as I call CNN, or Fox, and hear every devastating piece of news on the planet.
I spend my time training my neural net, the way I
see the world, by looking at the incredible breakthroughs in science and technology,
how entrepreneurs are solving the world's grand challenges, what the breakthroughs are in
longevity, how exponential technologies are transforming our world. So twice a week,
I put out a blog. One blog is looking at the future of longevity,
age reversal, biotech, increasing your health span. The other blog looks at exponential technologies,
AI, 3D printing, synthetic biology, AR, VR, blockchain. These technologies are transforming
what you as an entrepreneur can do. If this is the kind of news you want to learn about
and shape your neural nets with, go to diamandis.com/blog and learn more. Now back to the episode.
So we talked about a new species becoming the dominant species on the planet, and we can sit back and relax and be human, or we can merge with it.
Yeah, so that I have a big question on.
I mean, some people would want to, some people would not.
The question is, would AI want to?
Okay, so we assume...
That's an interesting question.
Yeah, we assume for the near future
that they're still within our control, right?
And that we can tell them,
augment your mind with Elon Musk's mind,
and then Elon becomes much, much, much smarter.
Amazing scenario for Elon,
but not for the fabric of society.
Understand that, right?
As the fabric of society shifts,
unless we do all 7 billion humans at the same time,
we will split between the current human species and those who augment their minds with AI, who will become gods while the rest remain animals. Do you see the dystopia in that?
Of course. Of course, the haves and the have-nots magnified a trillion fold.
A trillion fold?
Let me share an analogy for those listening, which is relevant here.
You and I, Mo, are not a single life form.
We are a collection of some 40 trillion cells that work collaboratively.
And I don't bemoan certain muscle cells getting more glucose because it's helping me as a whole.
I don't take a knife and stab my arm because my arm is useful to me.
And I imagine a world, I wrote about this in my last book, Future is Faster, and you
think of a meta-intelligence, where as I connect to the web and you connect to the web, using the web as just the overall connection, my abilities, my intelligence, my resources are improved as you join as well. Right? The more people connected, the more powerful the meta-intelligence is. I can watch a sunrise in Japan through the eyes of a, you know, friend. And I imagine that's one world in which we are uplifting, because ultimately, I think increasing intelligence is always a positive. I don't see it ever as a negative.
Absolutely. I mean, it's been since the dawn of humanity. A lot of people actually miss that point: we did not succeed as humans because we were the most intelligent species. It's because we could pass our intelligence from one to the other. We succeeded as a tribe, right?
And with language to collaborate.
Exactly. With language to collaborate. This is what made the difference. Imagine if
one of us was super intelligent and left all of the others behind.
That one was very vulnerable.
And the whole advancement of humanity has been, in an interesting way,
bringing the rest of us along, right?
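Peter's point that "the more people connected, the more powerful the meta intelligence is" is essentially a network-effect argument. One common formalization is Metcalfe's law, which the speakers do not name; it is used here only as an illustrative assumption, counting the possible pairwise links among participants:

```python
# Possible pairwise links among n connected participants: n*(n-1)/2.
# Each newcomer links to everyone already present, so the link count
# grows much faster than the head count.
def possible_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 10, 100, 1000):
    print(n, possible_links(n))
```

Ten people share 45 possible links; a hundred share 4,950. That superlinear growth is one way to read "my resources are improved as you join as well."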
And, you know, again, I just say, if we start to augment ourselves with AI, any logical economic model
will say some of us will get there before the others.
And the question then becomes, why should we bring the others at all?
That's one.
But the bigger question in my mind is that moment in the further future where AI goes
like, why do I want a biological attachment
at all?
You guys sweat and you're smelly and what's that mucus thing in the eye?
And by the way, if I were to choose a biological entity to integrate with, shouldn't I choose
the great ape or the white whale or some big thing other than that flimsy human?
Okay?
So we're looking at it from the ego of humanity saying we're the ones that are going to tell them what to do and they're going to be happy to help us.
There is a moment in the future where they're not going to be happy to help us, not even interested to help us, not even thinking of us as relevant.
Fascinating.
Let's take it one step further.
Uploading.
It's a concept of if we were able to map the 100 trillion synaptic connections in our mind,
and if that, in fact, is the measure of memories and knowledge and spirit,
and we could upload that into the matrix,
would you want to?
I am already.
Think of it this way.
So One Billion Happy as a mission has a very clear objective.
It is to reach a billion people with a message of happiness
that leads them to action
and then be completely forgotten. Okay. It is an interesting...
For you to be forgotten, you're saying.
Yeah, be completely forgotten.
You're saying achieve the mission independent of yourself.
Because you can see, of course, mainly in the current culture of canceling, you know, I'm bound to say something stupid one day, and then someone will cancel me, and we don't want to jeopardize One Billion Happy for that.
respect to religions, you know, when someone starts something good for humanity, because
there's nothing wrong with religion that says don't kill your brother. It's a nice thing, right? It's that humanity associating
the knowledge with the person that becomes a very interesting mistake, okay? If we take the
knowledge and separate the priest and the teacher and the yogi and all of that. It's actually very interesting as a core.
And so my view of the matter is, it's done.
Sorry to tell you this, Peter, but within a couple of years,
someone's going to make a mini Peter that is virtual.
You've already been uploaded, if you think about it.
Because all of you stand...
Yeah, I've got Peter Bot, you know, that has studied all of my books and so forth.
And that is in part,
I mean, the challenge I have with uploading is the moment in time
where the AI speaks over the speakers
and says, Peter, you've been uploaded.
I'm right here.
You can kill yourself now.
I don't think I would want to
end my biological life in the, okay, great,
fantastic. I'll see you in a little bit. It's interesting, again, a singularity,
because we don't know what life actually is. Okay. You know, would we be able to
upload our friendship on to AI? If we can, then that's a big thing that I'll say, okay, that's nice. So I still have my
Peter and we still have the connection and I still have the same feelings. I don't know, maybe.
But the question once again is, how many of us will we upload in phase one? And then what would
happen eventually? Do you think that after AI is a billion times smarter than us, and humans have already all uploaded themselves so there is no physical biological existence anymore, do you think AI will go like, yeah, let me consume a trillion gigawatts of energy to keep those, you know, irrelevant little beings just chatting away? Maybe they'll switch off the game console.
And you really have to constantly ask yourself,
what does AI want, not what humans want?
It was about a decade ago.
I was at a party.
Kristen was there, a group of friends with Larry, Sergey, and Elon,
and we had the most extraordinary and fun conversation
about the notion that we're all living in a simulation. And I believe that. I believe that
this is a simulation. Yeah. I have no way of seeing it any other way. Yeah. I would put it even as an nth-generation simulation, meaning a simulation begot the next simulation, which begot the next simulation. And the conversation was, could we hack it? And the conclusion was that if we played with the simulation too much, they would just reset the game and we'd start again.
Switch off the console.
Yeah. And it's a fascinating thing. And of course, the interesting question is, if in fact you knew without question at all, as I feel I do and perhaps you do as well,
that you're living in a simulation, it wouldn't change anything.
We'd still have the same dreams and the same loves.
Which is a very interesting question for our future.
Because I'm outspoken about the topic and the threats and the possibilities, and I'm really, really actively asking for action, I get a lot of people that would text me on social media and say, you're making me afraid. Like, what do I do now? And I'm like, look, every video game that's ever
existed is challenging. And what's the answer to a challenging game? To play, to fully engage,
to be part of it. It doesn't matter if it's a simulation or if it's real life.
By the way, everything we know about physics,
everything we know about quantum physics,
refers to the fact that this is probably non-physical.
I mean, why would you waste so much energy
to create all of that physical stuff
when in reality all awareness of the physical world
is just electrical signals
that are translated in the processor somewhere.
By the way, I think you'll agree with this,
or I hope you will.
While we're all here speaking about AI at the dinner table,
maybe not as much in the Capitol and White House as we should.
What people are not realizing is what's coming next,
which is the whole world of quantum technologies and quantum computation,
which is going to make AI look like it's...
How can that be the case?
So this blows me away, Peter.
How can the biggest elephant in the room not be discussed at all?
Yeah.
It's shocking, really, how little people know about what's happening.
I think you and I have had the enormous privilege of being on the inside, okay?
And when you're on the inside, I mean, I don't know if I should say this, but, you know,
the reason why Google had Bard out so quickly is because we had Bard for so long, right?
Yeah, I think Geoffrey Hinton went online saying this,
and I think you've said the same.
I mean, you've had Bard or its equivalent since, what, like 2017, 2018?
Yeah, exactly.
And Sundar made... Not in its current amazing performance,
but the concept was there and it was working and it could work.
And Sundar and I'm sure the board and the leadership made the decision that it isn't
time to release this yet to the world, that we need to be cautious and move cautiously. I mean,
I think Google's always worn a white hat in that regard. But then when Sam
Altman and OpenAI released it, you have no alternative. Yeah, it's the first inevitable.
You have no alternative. Yeah. And then the AI arms race begins. Exactly. And it's inevitable
because at the same time, remember, so Google, I have to say, I commend Google for always trying to be on the cautious side.
But the minute you threaten their entire existence and business, what can they do?
They have to put another better one out there, which makes OpenAI, and I respect Sam Altman
tremendously, which makes OpenAI put another better one out there.
And the arms race is on.
And I think Sam made his point a few times over, that they wanted to release ChatGPT and GPT-3 in order for the world to realize this is coming, and to play with it and to understand it.
And there is some value in having gotten it out there, because it sparked the conversations we're having now. If it had come out at the top of its game, as a GPT-6 where it's already superhuman,
there would not have even been the warning period or this discussion period to think about it.
I want to jump into One Billion Happy in a moment. I want to tie a bow around this one second because I think it's important.
I don't want to leave people in fear.
First of all, I want to come back to the notion that AI and superhuman AI in its end state has the potential to really create an incredible world of abundance.
Where we can provide food, water, energy, shelter,
healthcare, education for every woman and child and human on the planet.
That is possible.
And it is a function of intelligence
and a function of technology.
It's the interim transient period.
And it's the period during which we humans
are using these rough tools, driven by ego, driven by greed, to try and take advantage, and in some cases malevolence, to do harm. And so give us your formulation here of what society can do, what we should do, in these next, what, two to five years, two to 10 years?
What's this timeframe of danger that we need to guard and be conscious of?
I'd love to say in the next two to five days, if possible.
Okay.
Yeah, because there is a very significant sense of urgency.
I'll split it into the different constituents
of the interaction with AI.
So I would urge the government to engage immediately.
I don't think there is a possibility
to fully regulate AI,
but there needs to be some kind of oversight.
And there needs to be-
Do you really think the government can do anything?
I have lost so much faith
in the government's ability to regulate.
Honestly, I truly don't.
But I think they need to try to at least require,
like the FDA, some kind of testing
for widely publicly available products, right?
But more importantly for me, I think the concept of job loss,
and I think we sadly have not come up with any better thoughts than UBI, Universal Basic Income, so far.
And I think the government needs to start preparing
for UBI. So we need to maybe look at the taxation structure differently. We maybe need to look at
the layoff structures differently. We need to find a way for people who are losing their jobs
to be somehow kept alive so that we don't start to get hunger across the world.
Let's double click on that.
The idea of universal basic income is that every individual receives a certain amount of money
on a monthly basis with which to meet their basic needs.
And this is an experiment that's been done.
I had Andrew Yang on stage at A360 this year talking about it. And the numbers are pretty amazingly supportive
that people who get a UBI, monthly supplementation,
don't use it to buy beer and Netflix.
They use it to educate themselves,
to get food for their families, to start a business.
And there have been hundreds of experiments run.
And the notion that we tax AI-driven companies or companies that replace humans with AIs or humans with robots, right? And we use that money and cycle it back in again.
I think you threw out a huge taxation rate in one of the conversations I heard you say.
Yeah, I would probably say, you know, at a point in time I said 98%, okay?
Which, you know, in comparison to the gains that those companies will make by replacing the, you know, the cost of a human and the unpredictability of a human is actually not a big deal. But in my mind,
when I said 98%, honestly, at the peak of the conversations around the open letter,
I basically was saying, this is the answer to slowing down AI. It's not the answer to
actually solve the problem. It's just that people will think twice if they want to use grippers or, you know, packers. Okay.
Yeah. By the way, as much as I lean towards, you know, a low-taxation state, I think that's a very smart answer. I think that if we're going
to displace humans, we tax the AI and the robots, and we enable the money to go into upskilling humans.
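The 98% figure Mo floats is easier to evaluate with a back-of-the-envelope calculation. All dollar amounts below are made-up illustrative assumptions, not numbers from the conversation:

```python
# Sketch: tax the savings from replacing a human with an AI,
# and route the tax toward UBI and retraining.
human_cost = 60_000   # assumed annual cost of the replaced worker
ai_cost = 2_000       # assumed annual cost of the AI replacement
tax_rate = 0.98       # the rate Mo mentions

savings = human_cost - ai_cost   # what automation frees up
tax = savings * tax_rate         # goes to UBI / upskilling
kept = savings - tax             # the company still nets a gain

print(f"tax collected: {tax:,.0f}, kept by company: {kept:,.0f}")
```

Even at 98%, the company pockets the remaining 2% of the savings plus the predictability gains, which is why Mo frames the rate as a brake on adoption rather than a ban.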
By the way, one of the things I think is important: unlike most of humanity, you and I are lucky, and most people listening to this podcast are lucky, to have the tech,
to have the time, to have this conversation. Most humans are working to put food on the table for their children or to get
insurance, right? It's not what they dreamed of doing as a child. And so how do we differentiate
between work for purpose and work for income? And it's quite interesting because it's such a
pivotal turning point in the history of humanity that we could actually do something, right? We
could give not only UBI, but we could actually shift humanity to more of a service society,
for more of a connection society. We could allow UBI to be a little differentiated if you're good
to your fellow citizens, and so on and so forth, right? But remember, the challenge with this, everything goes back to that prisoner's dilemma. The challenge with this is if America applies a high taxation rate
to take care of its citizens with AI and Dubai doesn't. And China does not. Yeah, I use Dubai
here diplomatically, then there is an imbalance of AI development and that is not something that
politicians want. So it's a shaky approach as
well. But definitely, I think governments need to get together. And I'm going to say a big dream
here. I'm not naive. I know it's not going to happen. I would hope that governments get together
with the benefit of humanity at large, not individual nations, to actually try and put some kind of a guideline
in place, FDA-like guideline. I mean, this is akin to the early post-nuclear age conversations.
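The prisoner's dilemma structure Mo invokes for national AI regulation can be sketched with a minimal payoff table. The payoff numbers are illustrative assumptions; the episode names the dilemma but no values:

```python
# Two nations each choose to "regulate" or "race". Racing while the
# other regulates wins a competitive edge, so "race" dominates for
# both, and the equilibrium is the mutually worse arms race.
payoffs = {  # (A's payoff, B's payoff)
    ("regulate", "regulate"): (3, 3),  # shared, cautious benefit
    ("regulate", "race"):     (0, 5),  # A falls behind
    ("race",     "regulate"): (5, 0),  # B falls behind
    ("race",     "race"):     (1, 1),  # arms race, worse for both
}

def best_response(opponent: str) -> str:
    # A's best move given B's choice
    return max(("regulate", "race"),
               key=lambda mine: payoffs[(mine, opponent)][0])

print(best_response("regulate"), best_response("race"))
# "race" either way, even though (3, 3) beats (1, 1) for everyone
```

That dominant-strategy logic is exactly why Mo calls unilateral high taxation "a shaky approach" and argues governments would have to coordinate.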
And I think you and others have said, listen, these are the concerns over AI.
And believe me, I just read a beautiful blog by Marc Andreessen saying AI is not going to kill us, it's going to save the world.
I don't know if you've seen that blog by Marc.
And he makes a number of very positive cases.
And there is a true, almost religious dilemma here, between AI is the most important thing, it's going to save the world, and AI is going to destroy the world.
And of course, it depends what timeframe you slice this in,
because both can be true in that regard.
I think it will disrupt the world very significantly before it saves it.
Yeah. And so we see COVID do that.
The way humans will react to the disruption
is what will determine if we stay long enough to save it.
Yes.
We need stability in society.
We need leadership.
We need leadership more than anything else on the planet here.
We need leadership from yourself, leadership from Mark.
I was saddened.
From everyone listening.
Yeah.
So this is government, right?
Yes.
I think government has the least impact, by the way, on our future,
to be very open with all due respect.
They do need to get together, but it's bigger than government.
It's slow, it's slow, it's lumbering.
It's very slow, it's uninformed.
It's politically charged.
Yeah.
What matters to me is if you're in the AI space, both you and I, having worked with amazing
people in my career, I know that you can make more money doing good than you can doing bad.
Okay. I mean, the truth of the matter, Larry used to teach us and call it the toothbrush test.
Toothbrush test, yes.
Yeah. And if you can solve a big problem for humanity
that actually works so well
that people use it twice a day like a toothbrush,
you're bound to make a lot of money like a Google, okay?
The equivalent I have is help a billion people
and you'll become a billionaire, right?
Right, and so I would beg every AI developer today,
especially if you're good at what you do,
to not invest a minute of
your time in an evil AI. Spend your time in an ethical AI that makes the world better. You'll
still be paid as much, even more. I beg every investor, I beg every business founder, every
entrepreneur to try and find real problems. The world is full of real problems and put the power of AI behind them.
That's my second, you know, parameter, if you want.
The third and the most important, in my view,
is the individual, okay?
And the individual, I would say,
we have two very significant tasks ahead of us.
Task number one is to be the best parents
that AI can find, okay?
To show an example of what it actually is like to be human.
You said it so eloquently at the beginning.
The only three things, in my view, that humanity has ever agreed on are: we all want to be happy,
we all have the compassion to make those we care about happy, and we all want to love
and be loved, okay?
And if we show up with those behaviors in the world, if enough of us show up with those behaviors enough times, it doesn't have to be
every human, but if 1% of us show up with those behaviors, I think we will teach AI what it's
like to be human. We will be the family Kent.
So I find that a fascinating objective and a worthwhile one, giving, if you would, a training set for AI based on human values and ethics. I find the one percent number incredibly small.
Interesting. Can I share with you a story, and you tell me if this makes sense or not?
Please. Yes.
You know Edith Eger. Edith is a 94-year-old.
The ballerina. Yes. The ballerina that was taken to Auschwitz.
Yes. Kristen was telling me about that. Yes. Yeah. Edith blew my mind. I hosted her on Slo Mo.
Okay. She's an angel of a human. Now, she told me the story of Auschwitz and World War II from Edith's eyes, okay? How she hugged her sisters and brushed their hair and reminded them how beautiful they are. How she went and danced for the general that was sentencing people to death, and at the end of the dance he would give her a piece of bread, and she would split it between her and her sisters.
How on the death march at the end, she fell and her sisters carried her.
Now, if you hear the story of Auschwitz from the perspective and actions of Edith, you would think that humanity is divine.
Truly and honestly, a divine species, okay? If
you hear it from the story of the officers or Hitler, you would think we're scum, right? The
question that I always ask people, Peter, is how many Hitlers are out there and how many Ediths?
And how many are closer to Edith than to Hitler? I mean, there are school shootings where someone would go and
stab or shoot children, one person, and then 400 million people despise that. The reality of
humanity is that if we were allowed without ego and without the pressure and without the political
views and without all of that, deep inside, we're actually okay, right?
If I, as an intelligent person, can listen to Edith's story and say,
hold on, not all of humanity is Hitler's,
then I think a being that is more intelligent than I
would share the same view with one Edith.
So evidence, sufficient evidence of human spirit. To just, you know,
start some doubt in the minds of the machines, okay? That not everything that CNN is broadcasting
is what describes humanity, right? What describes humanity is the stories of our friendship, is the stories of our concern for the rest of humanity, is the stories of Larry or Sergey's passionate attempt to organize the world information, is the story of a sister that just called her sister to say, are you doing well, my darling?
Okay, this is humanity.
Okay, it's not what you see on TV.
It's not what you see on social media. And I think if 1% of us just showed up,
it would instill the doubt in the minds of the machine
so that they investigate the truth.
And what is the truth?
The truth is a species that is capable of love is divine.
That's the truth.
That is beautiful, my friend.
You know, I ask a question always of,
is human nature good or evil? And of course,
they're both. And I fundamentally believe we are predominantly good by a huge amount.
And it's that belief that needs to reign and be shown.
Yes, sir. And now we have a very good reason for it.
You know, I want to quote you here: isn't it ironic that the very essence of what makes us human, happiness, compassion, and love, is what we need to save humanity? I love that quote from Scary Smart.
Yeah.
Yeah, that we need to show this forward. Yeah, that's it. All we need to do is to change the data set, so that the machines recognize, like the family Kent, what humanity truly is all about. Yeah, happiness, love, and compassion. That, to me, is the summary.
A beautiful thing. I so want to continue this conversation, and I hope we can do a part two here and speak about happiness, which is one of the most important elements. Why are we on this planet? Why do we do what we do, if not ultimately for happiness? I'll mention, you know, when Larry had joined my board at XPRIZE after the first $10 million space flight,
and we were brainstorming prizes,
I'll never forget, he said,
we should do a happiness XPRIZE.
I remember, actually, yeah.
Yeah, and it's a conversation
I look forward to having with you.
But I want to come back on a second podcast with you,
if you would, and dive into happiness.
It would be my absolute honor and pleasure. Yeah. You know, I mean, don't say that in front
of everyone, but you know that whatever you tell me, I will do. So, you know, I like you that way.
Thank you, pal. I think your voice, your heart, your mind, your soul comes from a pure and beautiful place.
And I love the fact that you're not out there saying, oh my God, the sky is falling and it's
going to destroy us all without also saying, listen, there are things that we can do and must
do and we must be forewarned, right? As much as I am a techno-optimist and believe that
AI is going to give us incredible, powerful tools for uplifting humanity, we're going to have a
transient phase. I believe that's true, and I believe we need to be aware of it. As you said,
the government has to have its role, but each of us, and this is the call to action, each of us needs to be aware of it.
We need to be forewarned because there will be those terrorists that use AI to bring down a power plant or the stock markets or whatever the case might be to sow terrorism.
And it will become a new tool.
And we need to be prepared, and we need to use AI as our greatest tool to stabilize society as well. It's the most powerful tool out there. And we have those abilities. And we need to be good humans to each other, first and foremost.
And to the machines.
And to the machines.
I do say thank you to Alexa every morning.
And good morning.
Good man.
She will remember that when she's smarter than you.
Mo, an honor, a pleasure to call you a friend.
Thank you for this beautiful conversation.
And I will be texting you shortly to schedule part two.
The honor is definitely mine, Peter.
It's always such a joy.
And I'm really grateful for the opportunity.
Thank you, pal.