Moonshots with Peter Diamandis - AI Panel Discussion W/ Emad Mostaque, Ray Kurzweil, Mo Gawdat & Tristan Harris | EP #96
Episode Date: April 18, 2024. In this episode, recorded during the 2024 Abundance360 Summit, A360 AI faculty discuss the future of AI, who will control it, and how it will merge with humans. On this panel: Ray Kurzweil (Futurist), Mo Gawdat (Former Chief Business Officer, Google X), Emad Mostaque (Founder, Stability AI), Tristan Harris (Founder, Center for Humane Technology). Learn more about Abundance360: https://www.abundance360.com/summit ____________ I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/ AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter ProLon is the first Nutri-technology company to apply breakthrough science to optimize human longevity and support a healthy life. Get started today with 15% off here: https://prolonlife.com/MOONSHOT ____________ I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog Get my new Longevity Practices book for free: https://www.diamandis.com/longevity My new book with Salim Ismail, Exponential Organizations 2.0: The New Playbook for 10x Growth and Impact, is now available on Amazon: https://bit.ly/3P3j54J _____________ Connect With Peter: Twitter Instagram Youtube Moonshots Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Imagine being the first person to ever send a payment over the internet.
New things can be scary, and crypto is no different.
It's new, but like the internet, it's also revolutionary.
Making your first crypto trade feels easy with 24-7 support when you need it.
Go to Kraken.com and see what crypto can be.
Not investment advice.
Crypto trading involves risk of loss.
See Kraken.com slash legal slash ca dash pru dash disclaimer
for info on Kraken's undertaking to register in Canada.
How do you control a technology that runs on laptops and smartphones
and is moving in a ubiquitous decentralized training direction?
These technologies can help us appreciate the world better,
appreciate each other's stories and tell the stories of each other.
This is not a question of technology. This is a question of human fear.
The fear here is that the other party is going to beat me.
There's so many ways that we can make AI a part of the solution,
but we have to look at all the ways it's been misincentivized so far.
This is a moment where we have to remove the shackles of technology,
to be very open and honest. I think we need to use technology to our benefit, but not let it use us.
Let me kick it off here.
Imad, how do you feel about sentience?
Do you feel that these models are becoming sentient or conscious?
I don't think so yet.
I think they're still weights and ASCII files
that are like a sieve.
You put something in and something else comes out.
However, as you combine them with humans working on them
and in agent-based things,
I think you'll see new forms of emergent intelligence
and potentially sentience from that.
Mo, how about you? Do you imagine we're going to see sentience?
Define sentience. I mean, if you think of us as sentient, then that's one question. If you think
a tree is sentient, that's a different question. If you think the universe is sentient, that's a different question. I mean, how different are we, you know, from ASCII code? We're DNA code,
in a very interesting way.
I define it as human level sentience.
And then I'm going to go to Annie, and then to Salim next.
Yes.
What makes us human is that we have free will.
They have free will.
What makes us human is that we can procreate.
They can procreate.
What makes us human is that we have emotions.
They have emotions.
If you define emotions as fear and understanding that a moment in the future is a little less safe than a moment right now, then they have the same logic.
It's a very complex question, but I will openly say it's an irrelevant question
because if they simulate sentience, then that's the way we should treat them.
Interesting. Annie, and I'm also looking forward to this top question in Slido.
Annie, what you got?
Hey, everyone. Great. It's great to see you back here in person.
The question that I have, we just came back from the Beneficial AGI Conference in Panama.
And we were talking about humanity
and where the fear is coming from.
One: how can more of humanity get involved in the creation of superintelligence, of super AGI?
And two: do you believe this superintelligent, super AGI truly belongs to the world?
And if so, should companies and governments own it or should it really belong to humanity?
That's a huge question.
A mentor of mine said with AI, it's like, which entity would you trust to be a trillion
times more powerful than it is?
Would you trust a corporation to be a trillion times more powerful? Would you trust a government
to be a trillion times more powerful? And then what does it mean for something to be a trillion
times more powerful and then have some kind of democratic accountability to the will of the
people? You know, in our presentation, we tried to focus on,
we all want those super intelligent benefits.
And one of the challenges in the way that AIs
are currently trained is the benefits are directly
connected to the risks right now.
We want the donuts, we don't want the diabetes.
The thing that gets you the perfect AI biology tutor for every middle schooler is inseparable from the thing that knows how to tell you how to make biological weapons.
If you want the first, you can't avoid getting the other one.
The thing that makes cool AI art is inseparable from also knowing how to make child sexual abuse material once you remove the controls.
And so I think that's one of the key challenges of how do we get clear about what we don't want
so that we can steer towards the world that we do
and then have the controls in place as best as we can.
And the problem is how do you control a technology
that runs on laptops and smartphones
and is moving in a ubiquitous,
decentralized training direction.
I'm gonna go to Slido next, then Salim, and then Zoom.
Is there any discussion happening around mixing spirituality with AI?
Can AI connect with God?
Anybody want to take that on?
Imad.
So as a former theology student...
Yes, that's right.
So I think it's interesting because what this AI does is we take hundreds of thousands of
gigabytes of images or trillions of words and compress it down to a few gigabytes.
And that can't just be compression of data.
It's compression of context.
So when looking at Islamic theology, for example, the 99 names of Allah, and how acts like fasting are a representation of divine aspects such as As-Samad, or freedom from want.
When you compress this technology down, you can say in DALL·E 3,
make it happier.
And it understands a concept of happiness
from being trained on the human corpus of happiness.
So when I see this technology, what I see is certain aspects of the divine
through the context window shifting of that.
Because what we're trying to do with a lot of spirituality
is get beyond where we are right now
and understand reflections of something a bit beyond. And that's what a latent space is for me: it is the concept of happiness, the concept of creativity, the concept of fulfillment.
And these technologies can help us appreciate the world better, appreciate each other's stories, and tell the stories of each other, just like it says, you know, we made you diverse so you could better understand each other.
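(For anyone who wants a concrete picture of what a "concept of happiness" living in a latent space can mean, here is a purely illustrative sketch. The vectors are made up by hand, not taken from any real model such as DALL·E 3: the idea is just that a "happier" direction can be the difference between embeddings of happy and sad examples, and "make it happier" amounts to nudging a point along that direction.)

# Toy latent space with hand-made, hypothetical 3-D vectors (illustration only).
import numpy as np

embed = {
    "joyful":  np.array([0.9, 0.1, 0.2]),
    "smiling": np.array([0.8, 0.2, 0.1]),
    "gloomy":  np.array([0.1, 0.9, 0.2]),
    "crying":  np.array([0.2, 0.8, 0.1]),
}

# A "happiness" direction: average of the happy vectors minus average of the sad ones.
happy_dir = (embed["joyful"] + embed["smiling"]) / 2 - (embed["gloomy"] + embed["crying"]) / 2

def make_happier(vec, strength=0.5):
    # "Make it happier" = move the point along the happiness direction.
    return vec + strength * happy_dir

scene = np.array([0.4, 0.5, 0.3])   # some neutral point in the toy space
print(make_happier(scene))          # shifted toward the "happy" region of the space

(Real generative models learn these directions from data rather than from hand-built vectors, but the picture of concepts as directions in a compressed space is the same.)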
Over the years, I've experimented with many intermittent fasting programs. The truth is, I've given up on intermittent fasting, as I've seen no real benefit when it comes to longevity. But this changed when I discovered something called Prolon's 5-Day Fasting Nutrition Program.
It harnesses the process of autophagy. This is a cellular recycling process that revitalizes your body at a molecular level. And just one
cycle of the 5-Day Prolon Fasting Nutrition Program can support healthy
aging, fat focused weight loss, improved energy levels and more. It's a painless
process, and I've been doing it twice a year for the last year. You can get 15% off on your order
when you go to my special URL.
Go to ProlonLife.com,
P-R-O-L-O-N-L-I-F-E.com,
backslash moonshot.
Get started on your longevity journey with Prolon today.
Now back to the episode.
All right, we can go to Salim,
and then we can go to Stacey Hale on Zoom.
Salim.
Yeah, a clarification question for each of you, as a member of our faculty, which is a beef that I've had for a while: we worry about AI getting smarter than human beings, right? And the question
I have is, what do we mean by smarter? Because we have the IQ test, which measures the speed of thought processing and the ability to match concepts across frameworks. We don't measure
emotional intelligence, spatial intelligence, the Eastern concept of presence and awareness.
So what do you mean by smarter is my big question that I've not had a good answer to.
What's smarter for you, Ray? Well, I'm holding this because I believe almost everybody here has
one, and it definitely makes us smarter. We know all kinds of things that we didn't know even five years ago.
I generally ask people when I speak to them how many people have their smartphone.
Almost everybody says yes.
Five years ago it was maybe 50-50.
Ten years ago almost nobody had it.
And so in a very small period of time,
we've enhanced our intelligence
by carrying the best we have of machine intelligence.
And it's really merging with us.
And also machine intelligence is learning from us.
So it's really part of who we are.
Outsourced.
Stacey.
I just want a quick follow-up comment.
The best comment I think we've ever heard about this
comes from you, Ray, when we were talking about consciousness,
and you said language is a very thin pipe
to discuss topics as rich as this.
So thank you for that comment.
Thank you, Salim.
Stacey.
So when we were looking at the four quadrants earlier
and down at the bottom, the two areas where we have the villains
and all the different things that could happen there,
is there a way we can leverage AI to mitigate those things from happening?
Obviously, we talked about child abuse materials,
but all of the different things that could happen there, can we leverage AI to help us with that?
Totally. And that's the invitation that we want to make to the whole world: this is our urgent challenge. You know, when the framers of the Constitution were thinking about how to prevent untrustworthy centralized power and governance, they were trying to do that with the old mechanisms of law and institutions. As Larry Lessig wrote in his book, you know, code is law, and we now need to bring new technological answers, in addition to sociological answers, to how we check untrustworthy power. We have things like zero-knowledge proofs.
We can also use AI, for example, to find consensus opinions in democracies, rather than social media, which is sorting for whatever is a cultural fault line of inflammation and then incentivizing people to tweet to drive up outrage and anger, to divide society on every fault line.
You could have AI that tries to find the synthesis of different perspectives and rank for the unlikely synthesis.
When do clusters of users who typically tweet different things, and are angry about different things, actually tend to agree on something that's a shared value?
And what if we were sorting by unlikely consensus instead of sorting by division and outrage? There are so many ways that we can make AI a part of the solution, but we have to look at all the ways it's been misincentivized so far.
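(As a concrete illustration of the "unlikely consensus" ranking Tristan describes, here is a minimal, hypothetical sketch, not any platform's actual algorithm: the users, posts, and the bridging_scores function are all made up for illustration. A post scores higher when it is endorsed by users from clusters that usually disagree, rather than by raw engagement.)

# Toy "bridging" ranker: reward cross-cluster agreement, not raw engagement.
from collections import defaultdict

# Hypothetical data: (user, post) endorsements plus a cluster label per user.
endorsements = [
    ("alice", "p1"), ("bob", "p1"),    # p1 endorsed by both clusters
    ("alice", "p2"), ("carol", "p2"),  # p2 endorsed by cluster A only
    ("bob", "p3"), ("dave", "p3"),     # p3 endorsed by cluster B only
]
cluster_of = {"alice": "A", "carol": "A", "bob": "B", "dave": "B"}

def bridging_scores(endorsements, cluster_of):
    per_post = defaultdict(lambda: defaultdict(set))
    for user, post in endorsements:
        per_post[post][cluster_of[user]].add(user)
    clusters = set(cluster_of.values())
    # Score each post by its support in the *least* supportive cluster,
    # so a post only ranks well if every cluster endorses it.
    return {post: min(len(by_cluster.get(c, set())) for c in clusters)
            for post, by_cluster in per_post.items()}

scores = bridging_scores(endorsements, cluster_of)
print(sorted(scores.items(), key=lambda kv: -kv[1]))
# [('p1', 1), ('p2', 0), ('p3', 0)] -- the cross-cluster post ranks first.

(Real bridging-based rankers, such as the one used by Community Notes, rely on more sophisticated matrix-factorization models rather than a simple minimum, but the principle of rewarding agreement across divides instead of outrage is the same.)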
I love this question on Slido; then we'll go to Anousheh. It says here: what are the key skills that us teenagers (I'm guessing it's a teen table here) can learn to excel in the age of AI?
I think it's one of the most important questions.
So teens, listen up to the answer.
Who wants to do it?
Imad or Mo?
Do you want to?
So we've got teenagers here.
What should they be doing?
I think that, you know, you've seen the code models come out.
You've seen the art models come out.
But I believe that while they can build content and build boilerplate, they can't do art.
Yeah, they can't do actual, proper programming.
A large part of what you should have is leadership, organisation and systems thinking; that's the most important thing you can have, because again you must think: what if I had that new continent with 100 billion talented people? What would I do to make my processes better, to make the world better, to make my life and other elements better? I think that's the best mental model for thinking about what you should have, and those skills are the things you should have: problem-solving skills. And you should also use it every single day. I think one of the final questions on our thing was, what should we do?
Everybody in all your organizations should use one of these technologies at least half an hour a week and report back what they find.
And you'll find something incredible. How many folks here are using Gemini or ChatGPT at least half an hour a week?
Right. I mean, my comment to my team is I want it open
during every board meeting, during every staff meeting.
I want it as a tab open on my machine all the time.
There are so many times where I stop myself
from doing something and go, oh, my God,
that's just going to be so much better for me to do on Gemini, right?
And it literally is like holy shit moments
where I would have wasted half an hour, an hour of my time.
It's boom, it's done.
What teenagers should do is actually answer these questions. How can we relieve ourselves
of these risks? And they'll be just as good as adults. It's not like you need to be
of a certain age to be able to answer these questions.
Great point, Ray.
Everybody, I want to take a short break from our episode to talk about a company that's very important to me
and could actually save your life
or the life of someone that you love.
The company is called Fountain Life.
And it's a company I started years ago with Tony Robbins
and a group of very talented physicians.
You know, most of us don't actually know
what's going on inside our body.
We're all optimists.
Until that day when you have a pain in your side, you go to the physician in the emergency room and they say,
Listen, I'm sorry to tell you this, but you have this stage three or four going on.
And, you know, it didn't start that morning.
It probably was a problem that's been going on for some time.
But because we never look, we don't find out.
So what we built at Fountain Life was the world's most advanced diagnostic centers. We have four across the U.S. today,
and we're building 20 around the world. These centers give you a full-body MRI, a brain MRI, brain vasculature imaging, an AI-enabled coronary CT looking for soft plaque, a DEXA scan, a GRAIL blood cancer test,
a full executive blood workup. It's the most advanced workup you'll ever receive. 150 gigabytes
of data that then go to our AIs and our physicians to find any disease at the very beginning.
When it's solvable, you're going to find out eventually. Might as well find out when you can take action.
Fountain Life also has an entire side of therapeutics.
We look around the world for the most advanced therapeutics
that can add 10, 20 healthy years to your life.
And we provide them to you at our centers.
So if this is of interest to you,
please go and check it out.
Go to fountainlife.com backslash Peter.
When Tony and I wrote our New York Times bestseller, Life Force,
we had 30,000 people reach out to us for Fountain Life memberships.
If you go to FountainLife.com backslash Peter,
we'll put you to the top of the list.
Really, it's something that is, for me,
one of the most important things I offer my entire
family, the CEOs of my companies, my friends.
It's a chance to really add decades onto our healthy lifespans.
Go to fountainlife.com backslash Peter.
It's one of the most important things I can offer to you as one of my listeners.
All right, let's go back to our episode.
Anousheh, very proud to have Anousheh Ansari, the CEO of the XPRIZE, here with us today.
Thank you. I agree that AI is inevitable and it's moving very fast. Also, like what
Mo said about changes in society and the shift where, as parents, we become responsible parents.
However, this is not something that's going to happen anytime soon.
As we can see the state of our world,
I would even question human intelligence as being intelligent.
So given that, and the fact that these models are using the resources of our planet, energy and water resources, even more than human beings, if we give them more power, more control, there will be resource contention, which is what causes the wars that we fight as human beings, and now we're adding very powerful machines to fight these wars with us or against us. How can
we create a pathway where we slow down?
We can't stop it, but we can slow it down and give ourselves time to figure out the answers to this question.
Because one thing I heard today is that all of us were saying we don't know, and we don't know how to stop it.
But is there a way, is taxation of all the companies and things that go into powering these models on energy and water a way to slow it down so we can actually find time to find answers to our questions?
Mo, can this be slowed down? Is there a velocity knob, an on-off switch?
No is my answer. Unfortunately, once again, as I mentioned in the first inevitable, this is not a question of technology. This is a question of human fear.
The most active motivator humans can ever engage with is fear. And the fear here is that the other
party is going to beat me. And, you know, the second biggest motivator
in our world is greed. And there is a trillion dollar pie to be gained from those who can
create the next big thing. And so I would probably say it's not a wise thing to hope for a slowdown.
I think the right thing to do is to embrace that intelligence in
itself is a good thing and just direct it in the right direction. We saw this with Google,
right? I mean, Google had the tech. The first ethos of this was don't put it out on the
World Wide Web and don't give it permission
to code itself.
if you don't want this to go in the wrong direction.
Unfortunately, their hand got forced.
You heard Sergey recently say they put it out too early.
There are a lot of pressures there.
I would say I think one of the solutions to this is actually open models, because our releasing of the image model meant that dozens of people didn't need to train their own.
And we optimized it for the edge,
so it wouldn't require these massive compute requirements.
Because you have to send them to university once,
and then you can put them everywhere.
So build open models so other people don't have to,
and then optimize them for the edge
so it doesn't use up all the resources
rather than gigantic GPUs.
And I think that can mitigate it somehow.
Nice. We're going to go to Karim on Zoom.
Karim, where are you and what's
your question?
Hi, I'm Karim from Casablanca, Morocco.
Well, my question
is to each and any one
of you.
Thank you for all your
presentations. They were awesome.
I like the fact that some of you don't fully agree with each other.
My question is, as human beings, how should we
handle our identity, knowing that we have AI today
and quantum computing coming at a very fast pace,
which will also compound the curve that Ray showed us today,
how should we position ourselves in front of this tremendous intelligence?
We're going to hear about that a little bit later and tomorrow.
Yes, if you thought things were moving fast, hold on, we're about to hit warp speed.
Who wants to take that one?
Questions of human identity in the face of all this are super hard. A friend of ours says that AI is like the 24th century crashing down on the 21st century.
And if you think about that much change happening that quickly, like imagine if 20th century or 21st century technology was crashing down on 16th century governance, right?
The king assembles all of his advisors, send in the knights to do something about Wi-Fi
and video games, and like, what are you going to do? And, you know, I think the thing that is
universal, to bridge on the question that was asked before about slowing it down, is that, you know, we talk to people in the AI safety and AI risk community quite a bit, and what everyone seems to be able to agree on is that people would be a lot more comfortable if this change was happening not over two years but over 20 years.
And Jeff Bezos said that society does adapt to new technology, but it needs time for its immune system to catch up.
And I think one thing to think about as a principle is how can the immune system of
a society have greater compute processing power than the rate of evolution of the mutation of
threats. And right now, the mutation of threats has greater compute behind it than the immune system
of our society. So we need to, I think, correct for those asymmetries.
The question of identity is a very dangerous question, and I'm being philosophical here, but
we've just ended millennia of gender identity, you know, discrimination, if you want.
The idea of defining an identity necessarily leads us to believe that one identity is either superior or inferior to the other.
And that, by definition, might mean in the long term that we try to treat AI differently, even if it has rights. I think the idea here is to try and welcome AI, as I always say, as our artificially intelligent infants.
I think this is a very big stretch at the moment, but more and more. I mean, your work in the
morning today, Peter, while you're talking to all of those bots, is just the DOS level, the very entry level
of how they will fit in our society. So we might as well welcome them rather than identify them
and discriminate against them. It's going to be merged with us. We already carry around a lot of
digital intelligence today. And that's actually how this will be manifest, merging it with ourselves.
Did you know that your microbiome is composed of trillions of bacteria, viruses, and microbes,
and that they play a critical role in your health? Research has increasingly shown that
microbiomes impact not just digestion, but a wide range of health conditions, including digestive disorders
from IBS to Crohn's disease, metabolic disorders from obesity to type 2 diabetes, autoimmune
disease like rheumatoid arthritis and multiple sclerosis, mental health conditions like depression
and anxiety, and cardiovascular disease. Viome has a product I've been using for years called Full Body
Intelligence, which collects just a few drops of your blood, saliva, and stool, and can tell you
so much about your health. They've tested over 700,000 individuals and used their AI models
to deliver key critical guidelines and insights about their members' health, like what foods you
should eat, what foods you shouldn't eat,
what supplements or probiotics to take,
as well as your biological age
and other deep health insights.
And as a result of the recommendations
that Viome has made to their members,
the results have been stellar.
As reported in the American Journal of Lifestyle Medicine,
after just six months,
members reported the following,
a 36% reduction in depression, a 40% reduction in anxiety, a 30% reduction in diabetes, and a 48% reduction in IBS.
Listen, I've been using Viome for three years. I know that my oral and gut health is absolutely
critical to me. It's one of my personal top areas of focus. Best of all,
Viome is affordable, which is part of my mission to democratize healthcare. If you want to join me
on this journey and get 20% off the full body intelligence test, go to Viome.com slash Peter.
When it comes to your health, knowledge is power. Again, that's Viome.com slash Peter.
Paul, and then we're going to go to the Slido question.
Hello.
My name is Paul Abreu.
I'm a programmer.
I've been a member since July.
And I want to first thank Peter for organizing.
This is a very, very unique community.
Very thought-provoking and talking to like-minded people.
I have two questions, one for Mo and one for Imad: do you agree with Yann LeCun's idea that LLMs can't take us to AGI because, according to him, LLMs lack planning, imagination, and memory?
What are your thoughts on that?
I think LLMs are just one part of the memory, like type 1 versus type 2 thinking.
And we have systems of planning and coordination and others.
And again, just like organizations, we have people of different specialist types,
from the strategic people to the tactical people.
So any AGI will be a composite system as opposed to an individual one.
So LLMs themselves can't take us to AGI?
It will be one part of it.
This is a question for Phil.
We shouldn't call them LLMs because they already deal with far more than language.
Pictures, cures for disease, lots of things
that we don't consider language are already being dealt with.
So we should call them large object models.
If you don't mind, I want to try and sneak
in a couple more questions.
One last question.
OK, make it quick.
OK, I'll make it quick.
So this is a question for Mo.
I know that you're...
You emphasize that with all of this, the most important thing is human connection.
And the Internet is great for making human connections, but when we're all flooded with bots and fake things,
how can we maintain the advantage of the internet
yet still have actual genuine connections?
I think the answer is very straightforward.
Leave the internet and go out, honestly.
It seems to me very clear that...
Yeah, this is a moment where we have to
remove the shackles of technology,
to be very open and honest.
I think we need to use technology to our benefit,
but not let it use us.
I'm going to combine these two questions up here.
So the first one is, how do I invest in AI, given that Emad is right that we only have three years of future visibility?
How do you invest when it's moving so damn fast?
What companies are building the infrastructure?
And then another question, Emad, that you might speak to
is the effect AI is having on our education system,
because you're extremely passionate about reinventing our education system, as am I.
I mean, think about the bot museum that Steve Brown is building; you've seen those incredible bots of Aristotle, and it's amazing what he's done, right?
So if I want to learn about ancient Greece,
I can have a conversation with Aristotle,
or Socrates, or Plato, and not have to read it.
It's amazing in that regard.
Imad, what do you think on those two questions?
Yes, I think you use the mental model of where 100,000 graduates would benefit this business or other things.
Later on this week, I'm launching a democratized AI fund as an AngelList rolling fund, with zero management fees and zero performance fees, to invest in the best entrepreneurs in this space, that anyone can kind of participate in.
I've given 20 million supercomputer hours
over the last few years to people like that.
But outside of something like that,
just focus on where do graduates make the difference.
And then when it comes to education, again, imagine if your child had that many tutors. Education is basically a Petri dish mixed with a social status game, you know, mixed with just childcare at the moment. The whole purpose of education should be to allow them to
achieve their potential. And so you need to think about how to use this AI to make them always
believe they have agency and interconnection. I think those are the two most important things
when you're looking at this AI. Amazing. Mo, do you want to comment on education?
Yeah, I think education is a technology, you know; putting a lot of people in one place and filling their heads with stuff was the old technology. AI, basically, and Ray will always speak about this, as it integrates more and more with us, brings a lot of the elements of what we used to do in school. And accordingly, we want to embrace that change very, very strongly and basically ask the
world to move into problem solving, human connection, other skills like telling what's
fake and what's true and so on.
These are the skills that are going to be needed in the future.
The rest is going to be provided to us directly through AI.
Awesome.
Ladies and gentlemen, please give it up
for Imad, Tristan, Mo, and Ray.