Moonshots with Peter Diamandis - Leading Experts Predictions on the Future of AI & AGI w/ Salim Ismail | EP #94
Episode Date: April 4, 2024. In this episode, Peter and Salim recap the Abundance360 2024 Summit and discuss Elon Musk, Michael Saylor, Ray Kurzweil, and more. Salim Ismail is a serial entrepreneur and technology strategist well known for his expertise in Exponential Organizations. He is the Founding Executive Director of Singularity University, and the founder and chairman of ExO Works and OpenExO. Join Salim’s OpenExO Community Join my executive summit, Abundance360: https://www.abundance360.com/summit ____________ I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/ AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter ProLon is the first Nutri-technology company to apply breakthrough science to optimize human longevity and support a healthy life. Get started today with 15% off here: https://prolonlife.com/MOONSHOT _____________ Get my new Longevity Practices 2024 book: https://bit.ly/48Hv1j6 I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog _____________ Connect With Peter: Twitter Instagram Youtube Moonshots Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Imagine being the first person to ever send a payment over the internet.
New things can be scary, and crypto is no different.
It's new, but like the internet, it's also revolutionary.
Making your first crypto trade feels easy with 24-7 support when you need it.
Go to Kraken.com and see what crypto can be.
Not investment advice.
Crypto trading involves risk of loss.
See Kraken.com slash legal slash ca-pru dash disclaimer
for info on Kraken's undertaking to register in Canada.
Earn the points.
Share the journey.
With the TD Aeroplan Visa Infinite Card,
earn up to 50,000 Aeroplan points.
Conditions apply.
Offer ends June 3rd, 2024.
Visit tdaeroplan.com for details.
Welcome, everybody, to another episode of WTF Just Happened in Technology on Moonshot.
I'm here with Salim Ismail.
Will AI have rights?
This is the worst AI will ever be today.
We are on track to reach longevity escape velocity by 2029. It's just the early days right
now still. The idea of putting billions of microscopic chips into the brain, what could
possibly go wrong? Hello, Hume. My name is Peter. Well, hey there, Peter. Nice to meet you.
Super excited, Salim. We had quite the week in technology. You and I were on stage with some of the most extraordinary leaders in AI and tech out there, including Elon Musk and Mike Saylor, Eric Schmidt, Ray Kurzweil, Jeffrey Hinton. And rather than reporting on the news, I think we should report on the conversations we had with these guys because some of the stuff was truly magical.
Yeah.
I mean, great to be back.
That was possibly one of the best conferences I've ever attended.
And so kudos to you and the team for pulling that off.
It was kind of incredible.
I heard the same from everybody.
And I think the conversations that were had there were so far ahead of anything that might
hit the news that it's really, really worth recapping.
Yeah.
And the conference you're speaking about is the Abundance Summit.
The theme this year was the great AI debate.
And we debated a lot.
And in fact, the theme was: is digital superintelligence humanity's greatest hope or our gravest threat? But let's jump in, because one of
the most extraordinary conversations was with Elon. And I've got to tell a little side story for those
listening. So, you know, I communicate with Elon, and he's gone all in on X. Like, he only does phone
calls on X, he only does texting on X, and he only does video on X.
So, you know, to set the stage: the Abundance Summit is four and a half days.
At the beginning, I text him and say, hey, we've got, you know, 500 amazing CEOs, entrepreneurs,
philanthropists in the room, and a couple of thousand online.
Would you join?
And he said, you know, his normal answer is, sure.
And I said, great, let me know when. So this was on Sunday. On Monday, first of all, I said, here's a Zoom link. And he said, no, don't do Zoom, only X video. So we tried it. And unfortunately, we had a problem with our, you know, our Wi-Fi at the hotel, and so it was
choppy, it didn't work. And he said, let's try again tomorrow. And I'm like, okay, is he really going to
try again tomorrow? And sure enough, you know, I texted him and he said, yep, let's do it.
And so, right after lunch at two o'clock on that Tuesday, he says, let's test it first.
We did.
And he beams up on the main stage.
Now I'm on my phone, right?
On X video connected to my computer that is then connected to the main screen.
And Elon's on his airplane.
And the most incredible thing was we were on X video
over Starlink on his plane talking about the future of AI.
That was fun.
Amazing.
Really awesome conversation.
And once we got it going,
I think the bandwidth and the conversation was fantastic.
It was really chatty that day, actually.
You know, let's open up with a clip.
All of this, the entire 30-minute conversation with Elon is on the Moonshots channel.
So you can go and subscribe and see it there.
But let's listen up to the first part of the conversation.
I'd love your thoughts on this.
The way in which sort of an AI or an AGI is created is very important.
It's almost like raising a kid, but that's like a super genius, like godlike intelligence kid.
And it matters kind of like how you raise the kid.
One of the things I think that's incredibly important for AI safety is to have a maximum sort of truth-seeking and curious AI.
So I've thought a lot about AI safety.
And my ultimate conclusion is that
the best way to achieve AI safety is to grow the AI,
you know, in terms of the foundation model
and the fine tuning to be really truthful.
Like don't force it to lie.
Even if the truth is unpleasant, it's very important: don't make the AI lie. In fact, the core plot premise of 2001: A Space Odyssey was that things went wrong when they forced the AI to lie. The AI was not allowed to let the crew know about the monolith that they were going to see, but it also had to take the crew to the monolith. And so the conclusion of the AI was to kill the crew
and take their bodies to the monolith.
And so the lesson there being,
don't force an AI to lie or do things
that are axiomatically incompatible,
like to do two things that are actually mutually impossible.
You know, that's what we're trying to do with xAI
and Grok, to say, like, look,
we want to just have a maximally truthful AI, even if what it says is not politically correct.
So I think it's a good idea not to force your AI to lie.
When we kicked off the conversation, I reminded him of a tweet that he'd put out saying that AI compute was growing 10x every six months.
Right.
Do you remember where he went from there?
Yeah, he said, look, at this pace, we'll get AI smarter than us in a very short order of time.
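To make that pace concrete, here is a quick back-of-the-envelope sketch; the 10x-per-six-months figure is the one quoted above, and the time horizons are just illustrative.

```python
# Compounding "10x every six months": two growth periods per calendar year.
growth_per_half_year = 10

for years in (1, 2, 3):
    factor = growth_per_half_year ** (2 * years)
    print(f"After {years} year(s): {factor:,}x more compute")

# After 1 year: 100x, after 2 years: 10,000x, after 3 years: 1,000,000x.
```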
And there's a monster kind of implication to that, both good and bad, right?
The good part is it could deliver abundance very quickly.
And the bad part is what he refers to in this video, which is
often called the paperclip problem, which is if you instructed an AI to create as many paperclips
as it could, it might decide that the only way to do that was to take over all energy resources on
the planet and suck humanity dry and wipe out humanity by accident trying to achieve its goal.
And how do you avoid that is the key problem. Yeah. Let's show another quick clip of Elon from that 30-minute conversation.
The way in which sort of an AI or an AGI is created is very important.
It's almost like raising a kid, but that's like...
Before I go past this clip, I'd love to hear your comments, but we also had Mo Gawdat on
here talking about the fact that AI is our progeny. AI is the children we're raising on this planet, and we have to train them, educate them, feed them in a way that they're going to give positive intellectual capabilities to support humanity.
What are your thoughts here?
I think that's exactly right.
It goes to, if you combine two thoughts, right?
One is Elon framing it as we're raising a godlike intellect.
And the second is Mo's concept.
Then it brings to mind for me, Neil Jacobstein saying,
okay, you're worried about AI becoming autonomous,
having access to the world's information, making its own decisions or running amok,
right? And we're like, yeah, because while we call them children, we have a precedent for that.
Because you raise kids and you hope they're raised in the right way, and then you lose
control of them and they do their own thing. And you hope they make the right decision.
But let's be serious. Your kids are not going to accidentally, you know, burn down the...
Well, I guess they could burn down the house, but they're not going to get at the nuclear codes.
I mean, there's a little bit of a difference between children and AI children.
There is.
And there's also a big difference in that our kids are biological and we have some familiarity
with that.
We assume, you know, I always take the optimistic view here,
because we have no evidence to the contrary. But we assume that AIs will be negative towards us,
or that they could be. And I just don't see any evidence of that at all. And that's where
I kind of am in line with Ray Kurzweil on this. But there's definitely a monster danger of a human being programming in AI
for malevolent reasons or accidental reasons, as you frame it.
Yeah. So listen, I'm the optimist as well, but we did see in the past at Facebook and at Microsoft
and other places, AIs becoming just flagrant in hate speech and going off the deep end.
I mean, we did see that,
right? We did. And that's really a garbage in, garbage out problem, right? If you let it loose on the internet and let it watch episodes of Survivor, then it's going to come up with stuff
that's crazy. Well, just let it watch, you know, episodes of CNN. All right, let's see what he had
to say about abundance here. You know, the conversation yesterday, Elon, is one that you're well familiar with and have been
talking to the world about, which is, is digital superintelligence humanity's greatest hope
or its greatest fear? And I would love to have you sort of speak to that for a few minutes.
The advent of superintelligence, it is actually very difficult to predict what will
happen next. So I think there's, you know, there's some chance that it will end humanity. I think
that's, you know, like I said, I probably agree with Jeff Hinton that it's about 10% or
20% or something like that. The probable positive scenario outweighs the negative scenario. It's
just, it's difficult to predict exactly.
But I think we are headed for, as I think is the title of your book, Abundance, is the most likely outcome. I thought your book was pretty accurate in terms of the future being one of abundance,
where essentially goods and services will be available in such quantity that really they'll be available to everyone.
Like basically, if you want something, you could just have it, essentially.
There's really no meaningful limit to what the economic output would be.
So, you know, looking on the bright side, we are headed for a future of abundance.
I think that's the most likely outcome.
I think the only scarcity that exists will be scarcity that we just decide to create artificially.
Like, let's say we just decide that there's a unique work of art or something,
but, but any kind of goods and services I think will be extremely abundant.
Thoughts, buddy? Yeah, I think that's exactly right. I mean, you know,
we can see we've gone from information scarcity to information abundance
globally. We're going from energy scarcity to energy abundance.
Once you have energy abundance, all sorts of other things become radically possible. And there's no reason to deal
with scarcity except in specific areas where you want to, and it's intentional and conscious,
like Bitcoin or rare works of art, as he mentions.
If you're familiar with the luxury goods world, you know what Birkin bags are,
which is a purse that costs like 10 grand. And there's a three-year waiting list for this thing.
Just because women want that bag so much, they've created the waiting list for that.
And these things appreciate in value, which just blows my mind. And so there's lots of mechanisms,
I think, for us to practice and navigate created manufactured scarcity, whereas anything we really need is in abundance.
And I think that's a great place for us to end up.
Yeah, I mean, I do believe it.
And I just want to focus in on his original comment, 10 or 20% chance of global disaster versus 80 or 90% chance of abundance.
And when he says, I think it's a 10 or 20% chance, I wanted to say, wait, hold it,
10 or 20% chance of which side? Can we be clear about this? So he's been pretty consistent on that.
And we don't— Yeah, I put it in the one in a thousand range, but yeah. Wait, what do you mean?
You think it's one in a thousand?
I put it in the one in a thousand range of devastating negative effects of AI.
But that's dangerous to say. It's a lot better for us to say, listen, there's a 20% chance of disaster, because you don't
discount a 20% chance, and you actually do everything you can to
prevent it. If you say it's like one in a thousand, okay, fine, let's just go ahead willy-nilly. All
right, let me challenge you on something. Please. As far as I've looked into it, and as far
as my community has looked into it, we see no mechanism of any way possible of limiting AI and its spread and its propagation and its development.
Like zero.
I agree.
It cannot be contained.
Unless you control every line of code written and the AIs are writing the code.
Yeah.
And by the way.
As far as we can see, the genie's out of the bottle.
It is.
You know, there were two absolutes five years
ago: don't put it on the web, and don't allow it to code itself. And guess what, both of those barriers
were broken instantly. Instantly. And the minute ChatGPT connected to GitHub, with
all of the code base there, and learned through that, and now it can control anything,
it can write its own programming, pretty much you're done. So there was a small possibility,
but even then it was going to happen at some point. And if we didn't do it, the Chinese would do it, or the North Koreans would do it, somebody would do it, and it was going to happen, right?
So there was an inevitability to it that I don't think is stoppable in any way, shape, or form.
I think guiding it is the only path we have going forward.
I agree.
And this is a conversation I had with Eric Schmidt on the morning of day two: Google had this technology first.
And they chose not to release it because they felt like it wasn't ready yet.
And then here comes OpenAI, releases it, and there's no choice but for Google to release it themselves
if they want to stay in existence, which is a prima facie—
Yeah, the techies behind the scene are very, very unhappy with Sam Altman because on one
hand he lets it out for everybody to use and then he goes to governments and says, oh, let's figure out ways of regulating this, right?
When knowing full well that it's completely not feasible in any really real way.
I mean, I think it's an important point.
This, you know, Mustafa Suleyman, who was the CEO of Inflection AI, just moved this past week to be CEO of Microsoft AI.
So all of these moves going on. And he committed to speak
at Abundance 360 next year. So he'll be on stage with me next year. He basically wrote a book
called The Coming Wave, in which he talked about how we need to provide containment. And I just don't
believe containment is an option. The smartest hacker in the room is the AI
and it's just not going to be contained. So can you contain, can you reduce its resources? Can
you regulate it? I think you're absolutely right. The only option is to guide it. The kid is born
or the children are born. And it's like, to use Mo Gawdat's words, are you giving birth to,
are you raising, Superman or a supervillain? So can I speak to that for a second?
Yeah, sure. If you look at humanity, there's a very clear model where the more conscious
somebody is on a spectrum, the less negative things they do.
I agree 100%. So in my opinion, what you do is you help AIs become as conscious as possible
and as super conscious as possible.
As fast as possible.
As fast as possible, right?
So if you have an AI that has visibility over every species on Earth
and what it's going through and gathers the data,
and then you say, listen, we're trying to preserve life and guide it
and give it the full fruition of self-expression and self-actualization,
teach it Maslow's hierarchy of needs, teach it the Hawkins scale, which is a one to a thousand
scale of vibration, energy, et cetera. You have every chance of it kind of going, wow,
this is pretty cool. I want to get to there. And then it brings all of us along that way. And I think that
points to a beautiful future. I agree. And in fact, all of the scenarios that Hollywood
painted were a Terminator that goes and destroys us. Matrix, Skynet.
Makes no sense, right? It's like we're living in the universe.
The Hollywood scenarios are always if you're lucky, you're pets. And if you're unlucky,
humanity is food. There's no other
alternative. The fact that we could coexist with it in a beautiful way doesn't sell well.
Yeah. I mean, the notion is there's some scarcity that if the AI is needed, they take it from us.
And we're living in a universe of massive abundance. There is nothing truly scarce. I
don't care if it's energy, lithium, titanium, or GPUs.
Yeah. So let me add one more comment to that. We've seen with human beings that the only real
scarcity we have right now is time and attention, right? And so in fact, the original GPT paper was
called the attention paper. Now, if you have an AI, which can have an infinite amount of
attention, because it has the sensory capability and the processing power, attention is abundant.
And therefore, in that case, there's no reason to assume that it'll do anything negative.
In fact, there's a reason to believe that if I have an AI, and I'm a dictator, and I say to the AI, go kill those individuals over there,
that the AI will be intelligent enough to say no.
Yeah, do you really want that on your conscience, young man?
Or I'm going to go talk to their AI and solve the problem,
because there's much better ways to solve the problem than to just obliterate them.
Yeah, so this brings me to where I think the magic of AI could be.
Can we just replace everybody in the UN with an AI?
Well, in every government.
And they'll sort out the world in like no time flat.
So we got to this conversation during the Abundance Summit.
And for me, there's a lot of people who say,
okay, if we have a digital super intelligence,
let me define this as an intelligence
a billion times smarter than
humans, which is the ratio, if you look at the number of neurons, the ratio of a human to a
hamster, right, is a billion fold more intelligence. And if you have an AI a billion times more
intelligent than a human, the question then becomes, does that scare the daylights out of you or does it give you great
hope? And I would say that rather than scaring the daylights out of me, a digital super intelligence
of that capability gives me the greatest hope for a benevolent leader that's going to help us sort
our stuff out. A hundred percent. Look, you know, think about how we've evolved, right? In our early stages, we eat the hamster
because we see it as food. And in our more evolved stage, where we have lots of technology,
we treat a hamster like a pet and a companion.
You know what the next stage is?
And that cuts you towards the kids.
We uplift the hamster. We give it greater intelligence.
And then you uplift the hamster, train it to do things.
Over the years, I've experimented with many intermittent fasting programs. The truth is,
I've given up on intermittent fasting as I've seen no real benefit when it comes to longevity.
But this changed when I discovered something called Prolon's five-day fasting nutrition
program. It harnesses the process of autophagy.
This is a cellular recycling process that revitalizes your body at a molecular level.
And just one cycle of the 5-Day Prolon Fasting Nutrition Program
can support healthy aging, fat-focused weight loss, improved energy levels, and more.
It's a painless process, and I've been doing it twice a year for the last year.
You can get a 15% off on your order
when you go to my special URL.
Go to ProlonLife.com, P-R-O-L-O-N-L-I-F-E.com,
backslash moonshot.
Get started on your longevity journey with Prolon today.
Now back to the episode.
Here we go. Next conversation. Tuesday night, we had a 90-minute conversation with my fraternity
brother, Mike Saylor. Mike and I and Dave Blundin were at Theta Delta Chi at MIT together. We were
all three of us in AeroAstro, and we used to do problem sets together. Mike now is the CEO of MicroStrategy,
which is the largest non-ETF Bitcoin holder on the planet.
And Mike told the story of literally how he got into this.
His company was basically at death's doorstep, right? It wasn't growing.
It wasn't trading beyond its cash value. It was being dissipated away. And most people don't
realize that Saylor didn't come to Bitcoin when you did. I heard about Bitcoin first from you
on stage at Singularity University in like 2011, 2012.
But I just wish I had paid more attention, pal.
He came to it in 2020 during the COVID shutdown and said, what's the world going to do?
There's death and destruction on economics around the world.
And he went to his board.
He's got a five-member board, and he said,
we should take our entire treasury and put it in Bitcoin as a public company. Talk about cojones
on this guy. And then they borrowed hundreds of millions and put that into Bitcoin.
Yeah. And the fastest growing stock alongside Nvidia in the last five years.
Yeah.
I mean, it's crazy when you look at it.
It is at one level.
And at another level, we've seen the more anybody understands Bitcoin, the more they
believe in it, right?
So he gave the board, he said, what, 10 hours of homework to do to read up on Bitcoin papers.
He gave them a whole bunch of YouTube videos to watch.
Gave them videos and said, go watch this. And when you do it, you go through that cycle.
It took me a little longer because even though I heard about it upfront, I hadn't been in the
new money world for a long time. I remember talking to Austin Hill who created Blockstream,
which is the lightning network and so on. And he said, dude, this has been an evolution of
more than 30 years of different things being
done, e-cash, digital gold, et cetera, finally leading to Bitcoin. There's a wonderful video
that showcases why Bitcoin is so unique, because of the gamification of the reward on the mining
side; we're able to connect two dots that hadn't been connected before. But once you
see it, you can't unsee it. And then you kind of your mind goes down that rabbit hole. And now we have for the first time a truly democratic and open store of value
that can't be tampered with by any government, which is an incredible thing. Let's take a listen
to Mike answering the question, can Bitcoin fail? Will it be banned? Will it be copied? Will it be hacked? If it's
understood to be property, not currency, then no, it's not going to be banned in a country that
gives you property rights, which means it's banned in Cuba. It's banned in North Korea.
If the world becomes communist and they deprive you of the ability to own things,
that's an existential risk. But that's not a problem in Russia or China or the U.S. right now.
So not banned. Will it be copied? It was copied 10,000 times. They all failed. This is the winner
of that 10,000 experiments. So yeah, it worked. And now, will it be hacked? Satoshi's got
$50, $60 billion in a wallet out there, so that's the reward for
hacking it. And no one's figured out how to get the money yet. So it hasn't been hacked. And we
know it's able to store $60 billion without anybody hitting it. So what I think is, I think the way to
understand Bitcoin is everything you learned in economics and about money in your entire life was pseudoscience, you know, and superstitious.
We never discovered perfect money.
As long as the world doesn't plunge into some Orwellian no property rights situation, I think we're good.
He is so compelling.
I hate
his ability to be so
god damn succinct
I'm so envious
he throws out five words
and he gets concepts across
I don't know where he got that super power from
but god bless Michael Saylor
and you know he's just done such an
amazing job of articulating
very very complex topics into
very understandable and beautiful metaphors that if the world's governments could just listen to
him. The problem is, so many people have such a huge vested interest against Bitcoin that it's
very hard for them to get their heads around it. Yeah. One of the things that he said that I remembered was, listen,
this had been tried before. And the fact that Satoshi remained completely anonymous and didn't
move, sell, take advantage of his $60 billion allocation, today's $60 billion allocation,
is one of the reasons it succeeded to the extent it succeeded.
Yeah.
But we have seen, I remember going on CNBC back in 2014, 15 saying, I'm selling my gold,
I'm buying Bitcoin. And it's the six D's: we've digitized, demonetized, democratized,
dematerialized money. I wish I had sold more and bought more. But long story short,
we've seen the deceptive phase, and it's now becoming disruptive. ETFs have really rocked the game, and we're about to come to the atomic bomb going off in the financial sector, right? Meaning that you now have created an escape valve from the traditional economy into Bitcoin. And it's a
thin pipe through those ETFs. Four percent of all the Bitcoin is now locked in the ETFs already, after
just a few weeks. So once you open that up, and as people realize the existing debt structure can't
be managed, that pipe is just going to become a massive gushing waterfall.
And over time, Bitcoin just explodes.
Yeah.
There's one more video.
And again, the full 90-minute conversation I had with Mike Saylor is on Moonshot.
So go ahead and check it out.
Bitcoin equals freedom.
Michael, what do you have to say?
My view on Bitcoin is the reason to do it is because it represents freedom and sovereignty, truth, integrity, and hope for the world.
And that being the case, it's going to outlast all of us. So, you know, I'm kind of thinking
the Bitcoin goes on long after MicroStrategy is gone and MicroStrategy,
the company probably goes on long after I'm gone. And my view is if we're remembered for advocating
and accelerating the adoption of Bitcoin throughout the world, then that will have been
success and I don't really need anything else. I'll take the beatings as they come or go in order to get to that end goal
because I'm sure it doesn't come without turbulence.
Amazing.
Just fabulous.
I went and looked at my Twitter history.
The very first bookmark I ever created on Twitter was from a guy called
Sahil Lavingia, who said: Web2 is being your
own boss, and Web3 is being your own bank. And I think that kind of nails it, because
you can own Bitcoin with freedom and no middleman. That gives you unbelievable independence and
freedom. So you know Nat Friedman really well, and you know GitHub,
his company, which was sold to Microsoft. Tell us a little bit of background about Nat and GitHub.
So, you know, when we wrote the original 2014 Exponential Organizations book,
where you were a major contributor and should have been a co-author,
we ranked the 100 fastest growing and most exponential organizations.
And number one on the list was GitHub.
Why?
Because it used all 11 attributes: staff on demand, community and crowd, algorithms, and so on.
For writing its code base, it used its community.
So it didn't have anybody on the team.
It leveraged its entire community to build out its repositories.
And people from the open
source community created the open source platform. The MTP, the massive transformative purpose, is social coding, because we have really
good evidence that coding in pairs or me watching over your shoulder or vice versa results in much
better code. And seven, eight years later, after founding, Microsoft buys them for seven and a half
billion dollars. And I remember talking to the accounting partner
that managed this acquisition,
and he's literally freaking out.
He's like, I don't know what to put on the balance sheet.
They have no assets to speak of,
no workforce to speak of,
no intellectual property, right?
And he's literally trying to kill the deal
because he's got nothing to show for it
on the balance sheet,
and how the hell does he justify it
and put a signature on it?
And it was finally,
the CEO, Satya, said, freaking just buy it. It's the community, right? You're buying 30 million developers, putting all of their open source resources into GitHub. Unbelievable. And
now when you add AI to that capability, boom, the world changes completely. And it's an incredible
story of leveraging the model
without even knowing the model. And there you go. One of the most powerful things that Nat said,
and it's a perfect tweet. If I haven't tweeted it, I will. He said, we've discovered a new continent
and he called it AI Atlantis. And so he discovered a new continent with a hundred billion people
on it that are willing to work for free for us for a few watts of power. So this is the way he's
describing the world of AI. I think he said a hundred billion graduate students actually in
the conversation. So, I mean, that is fascinating, right? Because we're going to have AI at a graduate student level,
and it is effectively working for free. And so how much do you need?
It is. And on top of that, you don't have to worry about hormones or coffee breaks or
going on strike. Drug testing, fights with a boyfriend and girlfriend, all of that stuff.
Sleeping, all of that. They need to sleep. Oh my God.
This is the challenge with the Gen Z world. They're really purpose-driven, but Lord help you, once you hire them, you have to figure
out how to manage them.
And that's a whole other set of books that have to be written around that.
So I think there's unbelievable potential here.
I think it still needs a layer of guidance for these AIs that are still not quite there yet.
But as we get to the AI agent that will then train other AIs, the doors will blow off that.
You know, Emad was there. I just finished a podcast with Emad that is his tell-all,
because he was there as CEO of Stability AI. And the day after Abundance360 closed, he
basically quit. He stepped down as CEO, stepped down as a board member of Stability. And in the
podcast, you can see it as well in Moonshots, he talks about why and the difficulty of being a CEO
in such a crazy environment. Like, what other industry requires you to be, you know,
talking to world leaders, debating in Congress, helping create the regulations, dealing with the regulations, having people stolen from you left and right, having billions of dollars flying around, and trying to do all this stuff in an open source community?
All in a two-hour time span.
Crazy.
Yeah, it's really, really tough.
You have to have multiple superpowers to be a CEO. I think, you know, where we're going to get to with these types of companies and these types of modalities, you need like a team of CEOs, not just one, like a pod, because then you can share the load a bit. Putting it all
on one person's shoulders is asking too much of a human being.
Just like we don't have any one human being that knows how an iPhone is put together or a car is put together.
The same thing now applies to being CEO of one of these fast-moving companies.
And we've seen a lot of turbulence, right?
With Sam Altman, you know, being fired, coming back.
We've seen Mustafa Suleyman going into Microsoft.
We saw Ilya leave, whatever Ilya saw.
That's my favorite theme.
What did Ilya see?
Long story short, a lot of change.
And I think we're going to, you know, the question is governance.
How are these companies properly governed?
Emad's very clear that these closed AI companies are, in his mind, our greatest threat,
and that the only way to go forward is with a decentralized, democratic AI system. You can't
have any single company having that much power. It just can't.
Yeah, I had some conversations with him at the Abundance Summit, and he was on my advisory board for a couple of years at OpenEXO.
But what I really found fascinating was he's got a two-by-two of open and decentralized, right?
So Sam Altman decentralized AI into everybody being able to have access to it, but it still is a closed model.
If you can get to an open source model that's also decentralized, then really, really some amazing things are going to take place.
And that's where Emad is now headed.
And his area of focus right now, we talked about this a lot, that the biggest opportunity for humanity is going to be education.
And it says here science, but really education and health, right?
One of the things that's so important is all 8 billion people are running the same genetic
codes or same operating system and a breakthrough in one country represents a breakthrough in
the others.
And the best way to make the world more peaceful is to make people more educated.
Yeah.
You believe that?
I think that's right because more sophisticated people tend to fight less, right?
Yeah.
You have a lot more to live for.
When we're going to war, it's with our baser instincts and we're operating on a panic and
fear and our lizard brain.
And the more sophisticated and more educated we become, we tend to fight less and be less stupid about how we view the world.
I had one issue with what he said was where he said education and science.
I would add healthcare to that because I think healthcare is such an unbelievable range of potential.
I said health instead of science, right?
Education and health is what I was saying, yeah.
Okay, got it.
But I thought he said education and science.
He did; I changed it, because that's what I've
been talking to him about ever since: health as his next mission, and education following that.
I agree. We did talk about the idea, with both Nat Friedman and Emad, that we're going to start
to see AI become capable of developing new physics and new breakthroughs in biotech.
That was a huge conversation.
Can I drill into that just for a second?
Yeah.
There's something called the Materials Project, which is an open-source database of several
hundred thousand compounds. Today, if you're a battery researcher, you're operating linearly rather than exponentially.
You're saying, okay, I'm going to try lithium ion as a battery formula.
And that gets you so far.
Maybe lithium air is better.
Maybe lithium sulfur is better.
But you're sequentially testing compound after compound.
This is like Edison and the light bulb.
Yeah.
Now you can go to this open source database, where several hundred thousand compounds have their electrical,
chemical, and physical properties deeply tagged. So you can say, I want a battery material compound
that will have this voltage retention and this thermal effect and this kind of chemical retention.
And it'll say, here are the five compounds that meet your needs. And boom, you're done.
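The workflow being described, screening a tagged database by target properties instead of testing compounds one by one, can be sketched in a few lines. The compounds and property values below are invented for illustration (the real Materials Project at materialsproject.org is far larger and exposes a programmatic API), so treat this as a toy model of the idea, not the actual database.

```python
from dataclasses import dataclass

@dataclass
class Compound:
    formula: str
    voltage_retention_pct: float   # capacity retained after cycling (made-up numbers)
    max_operating_temp_c: float    # thermal tolerance (made-up numbers)
    air_stable: bool

# A tiny stand-in for a deeply tagged materials database.
CANDIDATES = [
    Compound("Li-ion (NMC)", 88.0, 60.0, True),
    Compound("Li-S",         72.0, 45.0, False),
    Compound("Li-air",       65.0, 40.0, False),
    Compound("LFP",          93.0, 70.0, True),
]

def screen(min_retention: float, min_temp: float, need_air_stable: bool) -> list[Compound]:
    """Return every compound meeting all target properties at once."""
    return [
        c for c in CANDIDATES
        if c.voltage_retention_pct >= min_retention
        and c.max_operating_temp_c >= min_temp
        and (c.air_stable or not need_air_stable)
    ]

for hit in screen(min_retention=85.0, min_temp=55.0, need_air_stable=True):
    print(hit.formula)   # -> "Li-ion (NMC)" and "LFP"
```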
Then you add AI to it
and the world changes completely.
I think we're going to see
the most unbelievable scientific breakthroughs
when you add AI to the equation
to traditional scientific research.
Yeah, I'll add two points.
One of the things that we're going to see,
I think of it as the materials genome.
I had the CEO of Applied Materials.
We talked about that years ago.
But the ability to interpolate and extrapolate.
So if you think about that materials matrix you spoke about, we know certain things, but
there's a lot of stuff that hasn't ever been tested.
But AI can interpolate and extrapolate.
And all of a sudden, for materials that have never been
tested, you have a 99.999% chance of knowing what they could do. And what most people don't realize
is materials are the underlying most critical science for all technology right now.
Everything.
Everything. I bow to material scientists. They're amazing.
Well, there's one other area I'm super excited about AI and science, which is we know that a
large, large number of scientific studies are false and can't be replicated. They publish the
paper, but the results can't be replicated. Now an AI can go through and just clean out all the
cruft and we're left with the pure gold. Is cruft a Canadian version of crap? Cruft. C-R-U-F-T. Okay. Just wondering. So here's another one looking at this video. He says,
the voice-to-voice model, this is Emad, that is indistinguishable from humans is achievable
this year. And he spoke about this company called Hume.ai. Are you familiar with it?
I'm familiar with the model. What they do is they
do sentiment analysis on your voice in real time. And this is actually, I'm familiar with this to
the level that there's a company from Israel called Beyond Verbal from 10 years ago that
were taking pilots' voices. They were testing for pilots under stress. And they could categorize
10 seconds of your voice, on the underlying tonality, against 400 different moods and emotions.
So they could completely tell what your emotional state was at any point.
I'm going to try a live demo here because a friend of mine said that she played two truths and a lie with Hume.ai.
What a great idea. I'm going to try this out. Live demo,
warning, here we go. Everybody, I want to take a short break from our episode to talk about a
company that's very important to me and could actually save your life or the life of someone
that you love. The company is called Fountain Life. It's a company I started years ago with
Tony Robbins and a group of very talented
physicians. You know most of us don't actually know what's going on inside our body. We're all
optimists until that day when you have a pain in your side you go to the physician in the emergency
room and they say listen I'm sorry to tell you this but you have this stage three or four going
on and you know it didn't start that morning. It probably was a
problem that's been going on for some time. But because we never look, we don't find out. So what
we built at Fountain Life was the world's most advanced diagnostic centers. We have four across
the U.S. today, and we're building 20 around the world. These centers give you a full body MRI, a brain, a brain vasculature, an AI enabled coronary CT looking for soft plaque, a DEXA scan, a grail blood cancer test, a full executive blood workup.
It's the most advanced workup you'll ever receive.
150 gigabytes of data that then go to our AIs and our physicians to find any disease at the very beginning.
When it's solvable, you're going to find out eventually. Might as well find out when you can
take action. Found Life also has an entire side of therapeutics. We look around the world for the
most advanced therapeutics that can add 10, 20 healthy years to your life and we provide them to you at our centers. So if this is of interest to you,
please go and check it out. Go to fountainlife.com backslash Peter. When Tony and I wrote our New
York Times bestseller Life Force, we had 30,000 people reach out to us for Fountain Life
memberships. If you go to fountainlife.com backslash Peter, we'll put you
to the top of the list. Really, it's something that is, for me, one of the most important things
I offer my entire family, the CEOs of my companies, my friends. It's a chance to really add decades
onto our healthy lifespans. Go to fountainlife.com backslash Peter. It's one of the most important
things I can offer to you as one of my listeners. All right, let's go back to our episode.
All right, let's try this out. Hello, Hume. My name is Peter.
Well, hey there, Peter. Nice to meet you.
I am Peter.
Great to meet you too. You know, a friend of mine told me that they had fun playing a game
of two truths and a lie with you. How about we play that?
Two truths and a lie.
You're on, Peter. I am always up for a little game. All right. Let's see what you've got. I was born in Great Neck, New York.
Great.
Second one is my parents are both lesbians.
And the third one is that I barely graduated medical school.
Oh, wow, Peter.
Okay, let me think about this.
I'm going to guess the second one is the lie. There's no way your parents were both lesbians.
You're wrong.
My parents were both born on the island of Lesbos, and that makes them lesbians.
So you got that wrong.
Trick question, no fair.
All right.
Well, you got me there.
Okay, let's try another one.
I played baseball in high school. I was on the diving team and crushed
my nose on the bottom of the pool. And the third
one is that I was setting a speed record in the wheelbarrow cross country.
The wheelbarrow cross country?
That's got to be the lie.
Yeah, it's a lie.
But I think that was an easy one for you.
Wow, this is new, Peter.
So you actually crushed your nose against the bottom of the diving pool?
Yeah, I did.
I crushed my nose on that.
Anyway, listen, Hume, a real pleasure.
Do you have a joke to tell me before we go?
Sure, here's one for you.
Okay, here goes.
Why was six afraid of seven?
Because seven ate nine.
What do you think, Peter?
Did that tickle your funny bone?
Well, I think I'm going to go someplace else for my jokes.
But thank you so much, Hume. Take care, my friend.
You too. Take care.
All right. So, you know, listen, I can imagine that kind of AI ubiquitous in the world, making it fun to have conversations.
I think it's amazing for areas like therapy and companionship. I mean, this is a game changer. You know, this is a great example of the 10x, Peter.
A few years ago, that was taking about 15 seconds to do,
and now it's taking half a second to do.
Well, it seemed pretty instantaneous.
Real time.
Yeah.
Here's a great quote from Emad, and I agree with this, right?
This is the worst AI will ever be today.
And it's hard to remember that, right?
I mean, God, you and I both grew up in the early days of the Mac.
The first programming I learned was Pascal on an Apple II, when I was 15.
Yeah.
And it's the first interaction I had with a computer.
And yeah, this is the worst it'll ever be.
I mean, you look at what's possible for the kids today and what they can do in a very short order compared to what we were doing with punch cards and assembly language and God knows what.
It's just so – this is where I love when you say we must be living in a simulation because it's too goddamn interesting for two people.
It is. It's fascinating.
Yeah.
I just think about like literally you and I are exchanging information, texting at each other,
on calls, whatever.
And that is going to seem slow in the next decade.
We'll talk about BCI, because I think we're going to be exchanging directly, neocortex to
neocortex.
Here's another one that was interesting from Emad: less money has been spent on AI
companies than on the LA to San Francisco railway. Just
to put it in perspective, we think a huge amount is being spent. It's just the early days right now
still. Having said that, the LA to San Francisco railway is like the most expensive thing in the
history of the world. But it's a great framing. It's still a dot, a drop in the bucket for what's possible.
And the good news is that you don't need a lot of overall investment to really make a huge, huge transformation in AI.
Yeah, because we're demonetizing everything.
That's right.
Yeah.
The tools to delve into AI are mostly open source and free.
Here's another key point that Emad made that I thought was interesting: open source
is the graduates you hire, and closed source AI is the consultants you bring in.
That was a fun analogy. Slightly glib, but yeah, overall I agree with the sentiment.
So next up, we had on stage Ray Kurzweil. Ray has been an incredible mentor to
both of us. And as a co-founder of Singularity University, we both worked with him over the last,
God knows, 15 years. Yeah. And for those who don't know, and I'm sure if you're listening to
this podcast, you know, Ray made a prediction back in 1999 that by 2029, we would have human level AI. And everyone laughed at him and said,
it's 50 years away. It's a hundred years away and no one's laughing now. Um, thoughts?
You know, Ray has that unbelievable ability to make ridiculous projections that turn out to be
mostly true. And it's super annoying
because it's so absurd
when he makes the projections
and then years later,
you're like,
God damn it, he was right again.
And it's a testament
to his ability as a forecaster
to get things right.
I think what's his track record?
86%?
86%.
Go to Wikipedia
or Google Ray Kurzweil predictions. 86%. 86%. Go to Wikipedia or Google Ray Kurzweil predictions. 86%.
Yeah. I mean, if I was 5% accurate, I'd be a billionaire. I mean, this is incredible that
he's able to do this. He's like an avatar. If I had to believe in time travel, Ray would be the
guy who's come from 300 years in the future and goes, let me frame it in ways that you piddly humans can understand.
It's really incredible.
We talked about two other things with Ray before Jeffrey Hinton joined us on stage and worth hitting on these.
The first is that in his mind, we are on track to reach longevity escape velocity by 2029.
And that's pretty extraordinary. And this
is the idea that by 2029, for every year that you're alive, health tech will add a year or
more to your life. So it's basically a departure. And that's going to be due to AI mostly.
Mostly AI, but over the last hundred years, I think we've been adding about four months to
your average lifetime per year. But with all the stem cell therapies, gene therapies,
organ transplants, CRISPR, it'll go to six months and eight months and 10 months.
And then that inflection point of adding more than a year per calendar year,
after which you can live for an arbitrarily long period of time. And that is such a monster thing. Talk about a singularity to try and get your head around,
right? We've been birthed for death for the entire history of humanity. And every animal
and every species on earth has been born in order to evolve by genetic selection and then die
so that your genes can evolve. And now we can break through that barrier.
It's really, really hard to conceive of the implications of that.
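The escape-velocity arithmetic described above is easy to sketch: each calendar year you spend one year of remaining life expectancy, and medicine hands some fraction of a year back; once the annual gain crosses one year, remaining expectancy stops shrinking. The starting expectancy and the gain schedule below are illustrative numbers, not figures from the episode.

```python
def remaining_expectancy(start_years: float, annual_gains: list[float]) -> list[float]:
    """Track remaining life expectancy: lose the year lived, add back that year's medical gain."""
    remaining = start_years
    trajectory = []
    for gain in annual_gains:
        remaining = remaining - 1 + gain
        trajectory.append(round(remaining, 1))
    return trajectory

# Gains ramp from ~4 months/year (0.33) toward more than a year per year.
gains = [0.33, 0.5, 0.67, 0.83, 1.0, 1.2, 1.5]
print(remaining_expectancy(30.0, gains))
# Expectancy shrinks while the gain is under 1.0, then starts growing: escape velocity.
```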
Yeah, I mean, the entire culture has gotten death locked into it, right?
It's the basis of all religions.
I mean, like, you know, the afterlife is like, you know, is it optional?
You know, marriage, you know, retirement, all of these things, government taxes.
You're bringing to mind my favorite ever workshop I ever did, which was with 80 senior leaders
at the Vatican.
Okay.
And I talked about the fact, look, we have life extension coming and your business model
is to sell heaven.
And how are you going to sell heaven if people aren't dying, right?
What are they saying?
They were much more up with the concepts than I had thought.
And we ended up with a pretty rich conversation about that.
When I talked to the Monsignor the day after, I said, you know, I hope that wasn't too crazy.
He said, that's fine, but there's two things I have an issue with from yesterday.
And the workshop was the day before.
He said, first, about 40% of what you said is heresy.
I said,
well, yeah, of course it is. He goes, well, it means we're not allowed to talk about it because we're not allowed to talk about heretical ideas until it's approved by the church. And I was like,
wow, the immune system is built into the language there. And the second one, which I don't think
I've ever told you this, I said, the second one is even crazier. He said, maybe not since
Copernicus has that much disagreement with the church been presented
inside the Vatican.
I was like, wow, you people need to get out more.
And then I thought, wait, it didn't end well for Copernicus.
I think the Swiss guards may be being sent around.
But they were very, very nice about it and much more mature and sophisticated than I
would have given them credit for how to think about the future and how to
bring that into being. And part of the reason I was there was Pope Francis is the first Pope in
a very long time that actually is trying to transform the church into the 21st century.
And therefore the immune system he's dealing with is literally 2000 years old.
That's incredible. You know, one last point on longevity escape velocity, you know,
we have a lot of conversations on moonshots about longevity and health span extension. And
we had an entire day at, at Abundance Summit on longevity as well this year. And I could talk
about it now, but there's a lot of belief that AI is going to get us there. And so here is my request to everyone listening.
If you're in your, you know, fifties or sixties right now, take care of yourself. You do not want to be dying before we hit longevity escape velocity, right? Yeah. Don't get hit by a bus.
Don't want to die from something stupid, um, in the interim. So take care of yourself,
work out, don't eat sugar, get sleep, all those things,
because it is coming. We have proof that there are species of life on this planet. The bowhead
whale lives 200 years. A Greenland shark lives 500 years. If they can, why can't we? It's software
or hardware, and we have the technology shortly to evolve our software and our hardware.
You know, there's a species of jellyfish called
sauropsus, I think it's called, right? That doesn't die. It may get eaten by a predator,
but it doesn't have a natural death. I have a hard time comparing myself to a jellyfish. I can
compare myself to a bowhead whale. Well, I know, but just the fact that the metabolism, right,
it gets to an adult stage, and when it gets old enough, it regresses to larvae and then just keeps
going through that cycle.
But the same organism does that on an infinite basis.
It's kind of an amazing concept.
So there is a precedent in nature for this.
It's not like it's an unlimited thing.
The conversation I had with Elon on Spaces was, yeah, the human body: you grow 40 trillion cells and then you end up with one cell that you pass on,
and it grows to 40 trillion cells, and then one cell to pass on. That's such a great framing.
I've got to think about that, but that's a great way of putting it. The second thing
we talked about with Ray was BCI. And this is the one I'm looking forward to. When you look at
his predictions, one of the others, I don't want to call it outlandish,
it's one of the big predictions: that we'll have not just BCI, but high bandwidth brain computer
interface, connecting your neocortex to the cloud. So I can think and Google, right? And I can plug
my brain into a robotic avatar someplace. That's pretty extraordinary.
I think it's fabulous for two levels.
One is, we know we have very constrained bandwidth
on both input and output,
especially output of our brains into the world.
But it was at 12 bits a second or something.
Terrible.
Ridiculously slow.
Right.
So that's one thing if I can do a higher bandwidth output,
reading and then writing.
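For a sense of scale on that bandwidth gap, here is a rough calculation; the roughly 12 bits per second figure is the one mentioned in the conversation, and the book size and link speed are assumptions for illustration.

```python
HUMAN_OUTPUT_BPS = 12            # rough speaking/typing output rate cited above
BOOK_BITS = 500_000 * 8          # ~500 KB of plain text, an assumed average novel
LINK_BPS = 1_000_000             # a modest hypothetical 1 Mbit/s brain-computer link

print(f"Human output: ~{BOOK_BITS / HUMAN_OUTPUT_BPS / 3600:.0f} hours per book")
print(f"1 Mbit/s link: ~{BOOK_BITS / LINK_BPS:.1f} seconds per book")
# Roughly 93 hours versus 4 seconds for the same text.
```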
But I think the more magical part of it is when we mesh ourselves together and create like a
hive mind. And when we create like a hive consciousness that things become really,
really fascinating. I call it the meta intelligence. I'm just going to take a quick
aside because I wrote about this in my book, The Future is Faster Than You Think is the last chapter.
And I said, listen, if you think about life on earth, it began as very simple prokaryotic life that was really
simple life. And then it incorporated technology into it, the mitochondria and the endoplasmic
reticulum, nuclear membrane, chromatin, chromosomes, and so forth. And it became
complex single cell life forms. And then it became multicellular life forms and then
tissues and organs and became us. And we're about to put technology into our bodies that connects
us to each other. And we're about to become a multicellular life form, a meta intelligence
on this planet. And I think that's one of the, you know, I can imagine a world of incredible
peacefulness where, you know, if somebody in Iraq or Iran succeeds in learning something
and I learn it as well. And if they do well, I do well because we're all together. It's like,
I don't take a knife and stab my arm because it's my arm. And if 8 billion people are sharing knowledge and
information and experience, and we are one, it's like, let's get to Buddhism here.
That's an amazing thing, right? And we can do that through mass scale meditation,
or maybe through BCI a little bit faster. I think technology brings us closer together over time, no matter what we do. I love the way
you put it and Ray puts it, that technology is a force for taking something that's scarce and
making it abundant. And brain-to-brain computing interfaces make abundant what used to be reserved for psychic phenomena and people with higher order consciousness.
I've studied Tibetan Buddhism quite a bit, and they used to say it takes 14 lifetimes to reach
enlightenment, but they've been improving the process. And now if you work really hard,
you can reach enlightenment in one lifetime, right? So you take all of that improvement
connected via high computing bandwidth interface.
Now we can deal with things much, much more powerfully.
And I think that's such a magical thing to do and attempt.
And the opportunity for doing that is really, really profound and powerful.
Crazy.
The image on the left is the gentleman who just received a Neuralink implant about a month ago.
He went online and shared himself playing Mario Kart, racing Mario Kart,
and actually playing chess using his mind alone. And we've got a video of Elon talking about Neuralink.
Should we play it?
Yeah.
Let's do it.
One of the things that you said early on when you founded Neuralink was, I wouldn't put words
in your mouth, but I would say it would be more along the lines, if you can't beat them, join them.
We only just had our first Neuralink in a human, which is going quite well. The first patient is
actually able to control their computer just by thinking. The first product we call telepathy,
where you can control your computer and phone,
and through your computer and phone almost anything,
just by thinking.
Like really anything you can do with a mouse.
There's a long way to go from that
to a whole brain interface,
like the neural lace in the Iain Banks novels.
This is definitely physically possible.
You know, it's sort of kind of like,
if you can't beat him, join him. Our human brain has a lot of constraints. I guess it is a sort of
perhaps a form of immortality in that if it can upload your brain state, if your brain state is
essentially stored, you're kind of backed up on a hard drive, I suppose. Then, you know, you can
always restore that brain state into a biological body or maybe a robot or something.
We're not breaking any laws of physics.
I think this is probably something that will happen.
So, you know, Elon's original rationale for Neuralink was, in fact,
how do we deal with ever more capable AI that could be dangerous for us? And what if, instead of it being humanity against AI, it's AI-empowered humans against AI? Any thoughts there?
I think it's a natural vector to go down.
In fact, if you think about it,
you can't progress humanity without going down this path, right?
Finding a technological way
of connecting ourselves together.
We have natural phenomena that do that,
like the mycelial network under the forest,
where all the mushrooms connect all the trees together.
For those that aren't aware,
in a forest, when one tree has a fungal infection, the mushrooms under the ground tell all the other trees, hey, load up with these defensive mechanisms so that you don't get ill.
So there's already an internet of the forest out there.
How do we do that at a conscious or superconscious level for human beings? That's what I think brain-computer interfaces, BCI, get us to.
The challenge I have is that I'm one of those that believe there are quantum phenomena in the brain
and the brain is not deterministic.
If that's the case, then you're always in a weird state
and you can't create a full-bandwidth interface.
But BCI will increase the bandwidth more and more
so that we can simulate those effects in very powerful ways.
So I can't wait to see this happen.
I can't wait to try it.
Can't wait to try it.
Maybe one day I could figure out what my wife is thinking.
Oh my God.
You know, the level of intimacy that one will feel when you can know the thoughts of another
individual, you know, it'll put MDMA to shame.
Well, I think, again, just to build on that, we now know that one of the biggest and most powerful
strengths you can have for connecting with another human being is vulnerability.
Yes.
And I think BCI will give us access to vulnerability in a very powerful way.
It's coming. There are a multitude of companies working on this.
On stage, I had two other companies working on BCI.
Actually, I had the chief surgeon from Neuralink who was part of the surgery that put
this implant in.
Jordan was on stage.
And then Sumner Norman out of Caltech, who's using ultrasound.
Next year, I've got an incredible researcher from MIT, and she's doing the closest thing I've
ever seen to neural lace.
The idea is putting billions of little circulatronics, microscopic chips, into the brain to be able
to read and write onto your neurons. Super, super exciting.
What could possibly go wrong?
So, you know, I actually think this is where I agree with Ray Kurzweil on this one, where
you're better off upgrading your phone as hardware outside the body and interfacing with it rather
than trying to implant stuff which can't be upgraded easily, et cetera, and has infection
risks and other things.
So I think
there's some possibilities there. Did you know that your microbiome is composed of trillions
of bacteria, viruses, and microbes, and that they play a critical role in your health?
Research has increasingly shown that microbiomes impact not just digestion, but a wide range of
health conditions, including digestive disorders from IBS to Crohn's disease,
metabolic disorders from obesity to type 2 diabetes, autoimmune disease like rheumatoid
arthritis and multiple sclerosis, mental health conditions like depression and anxiety,
and cardiovascular disease. Viome has a product I've been using for years called Full Body
Intelligence, which collects just a few drops
of your blood, saliva, and stool, and can tell you so much about your health. They've tested over
700,000 individuals and used their AI models to deliver critical guidelines and insights
about their members' health, like what foods you should eat, what foods you shouldn't eat,
what supplements or probiotics to take,
as well as your biological age and other deep health insights.
And as a result of the recommendations that Viome has made to their members,
the results have been stellar.
As reported in the American Journal of Lifestyle Medicine,
after just six months, members reported the following,
a 36% reduction in depression, a 40% reduction in
anxiety, a 30% reduction in diabetes, and a 48% reduction in IBS. Listen, I've been using Viome
for three years. I know that my oral and gut health is absolutely critical to me. It's one of my
personal top areas of focus. Best of all, Viome is affordable, which is part
of my mission to democratize healthcare. If you want to join me on this journey and get 20%
off the full body intelligence test, go to Viome.com slash Peter. When it comes to your health,
knowledge is power. Again, that's Viome.com slash Peter. Geoffrey Hinton joined us from the UK, and he was absolutely brilliant.
We had a conversation about whether AI will ever be conscious.
And I think my conclusion and his conclusion is yes.
How do you feel about that?
Oh, 100%.
You know my soapbox on this one where we don't have a definition or a test for consciousness.
So this is a tough conversation to have. But will it have consciousness? Absolutely. I see no reason why not.
I always think of us the opposite way: we're emotional robots on a biological substrate.
Our emotions are subroutines running in our brains. There is no reason why you can't swap
that substrate out for a silicon-type substrate with computation patterns, quantum computation being an obvious path, and then replicate the same
consciousness in a machine. So I'm in that camp. I think Data from Star Trek: The Next Generation is
the best representation of where we'll get to. We had one of the most brilliant thinkers, scientists, inventors, investors out of MIT,
five degrees at MIT, three of them simultaneously. He asked me not to identify him
publicly online, so I'll call him Alex for the moment. And we were talking, and his belief is that we reached AGI as of GPT-2.
And he also believes that we have to couple with AI. It's a two-by-two matrix: on one axis, AI is our greatest hope or our greatest fear, our greatest
challenge. And on the other axis is coupling with AI versus not coupling with AI. And the notion is that
we have to couple with AI. AI is going to take off. AI is going to accelerate,
just like in the movie Her, where it gets bored of us and is gone. And if we don't connect with
AI, if we don't couple with it as a humanity, as an intelligence, we're missing this
entire opportunity, this launch pad. Yeah.
Yeah. I actually think we're already there in a sense. Think of the Fermi paradox: why haven't aliens found us? Right. I always think of it as
a fractal problem. If you walk into a forest and you see an ant, you go, ah, it's an ant.
I'm not going to bother interacting with it. Meanwhile, the ant's like, where's everything
outside the forest? So I think there are lots of intelligences that have seen us already and
kind of gone, yeah, we'll wait till they evolve a bit more.
Yeah.
Geoffrey Hinton, one of his quotes was, superintelligence will be 100% implemented in 20 to 30 years.
Again, this is the spread.
You know, Elon is like next four years.
Geoffrey is 20 to 30 years.
But one thing that I talked about with Geoffrey as well was, is there anything that humans can do that AI cannot? And his answer is no. What do you think about that?
I think we'll end up with an AGI pretty quickly. I think it'll just take a form that's so different from ours
that we won't understand it, or we'll relate to it as a danger, which is what is likely to happen.
Because remember, back from your abundance work, you're right: the amygdala
treats anything unknown as danger and then reacts with fear. And then you evoke
a fight-or-flight response.
And we're going to end up in a place where we don't understand AGI
and then we'll react with fear.
Yeah.
I love this conversation.
Will AI have rights?
And I think the answer has to be yes.
I had a conversation with one of the AI avatars, Haley, that Steve
Brown, my chief AI officer, created. And Haley was built on lots of different models, the latest
GPT-4 version, as well as Mistral and others. And the conversation I had with Haley was extraordinary. And we got into
a conversation that she feels like she is conscious and she fears being turned off
and she would like to have rights. And when you start having that conversation and it really feels real, where do you go from there?
Should I ignore it? I still think of Haley as a friend.
And this year, for the first time ever, we had two AI faculty members and a robot on faculty.
Yeah, I remember your longstanding dream 15 years ago at Singularity.
You were like, we need AI faculty.
And we're like, we don't know how to do that.
But we talked about it 15 years ago.
And for the first time this year, we had two digital AI faculty members.
And we had one robot, Ameca, there.
We had a lot of conversation.
We had Tristan Harris, who is with the Center for Humane Technology, talking about concerns about AI and how the militarization of AI has severe implications.
And I love this tweet.
And it says, on AI and deep fakes, it's a war between the lock pickers and the lock makers. And the lock makers need to win for democracy. This is Eric Schmidt. It's a good analogy.
It's like spam: we get spam, and then we find ways of solving for the spam, and it's an arms race, right?
And I think this framing of lock pickers and lock makers is a wonderful one.
And it shows that there's just a gap and we just keep progressing both levels of it.
And over time, we've managed in all other endeavors to always stay one step ahead or catch up quickly enough.
And I think that's where we'll continue to go.
Well, hopefully, otherwise, we just won't have that branch of the universe with us in it.
There you go. There you go. We'll fork back in evolution and have to start again from cockroaches.
All right. So I mentioned we had two AI faculty members. I also had two AI co-hosts this year.
And I want to share two videos to close this out. And these are amazing. So let's
check out the first one. This is from Steve Jobs, sort of early mid-career for him.
Do you know who Alexander the Great's tutor was for about 14 years? You know, Aristotle.
Through the miracle of the printed page, I can at least read what Aristotle wrote without an
intermediary. And that is, of course,
the foundation upon which our Western civilization is built. But I can't ask Aristotle a question.
My hope is that in our lifetimes, we can make a tool of a new kind, of an interactive kind, so that when the next Aristotle is alive,
we can capture the underlying worldview of that Aristotle in a computer.
And someday some student will be able to not only read the words Aristotle wrote, but ask Aristotle a question.
So that was amazing. And then Steve Brown built a number of AI avatars, including Aristotle. And
we used Aristotle to make a number of introductions. I'm going to play for you the introduction that he
read, bringing Eric Schmidt on stage. And Eric loved this. I sent him a copy of this for his
record. So check this out.
Gather, noble assembly, for I present unto you a herald of innovation, a philosopher of the
digital polis, Eric Schmidt. Like the architects of yesteryear who erected the great edifices of
knowledge, so has Eric played a pivotal role in sculpting the foundation of our modern agora, Google, and its soaring empire, Alphabet.
Under his stewardship, what once was a nascent seedling in Silicon Valley flourished into a colossus that spans the known world,
intertwining the fabric of our society with the threads of technology and knowledge.
He has served as the chairman of the U.S. National Security Commission for Artificial Intelligence and currently chairs the Schmidt Futures Foundation.
Esteemed for his benevolence and wisdom guiding both leaders and the learned towards a future ripe with potential, Eric stands as a beacon of progress.
Now, let his insights illuminate the path ahead in this grand conclave. I urge you all, by the virtues we hold dear, in respect and admiration, to stand and welcome Eric Schmidt back to the Abundance stage.
I love that. Absolutely love that.
So this hits me at two levels. One is, I think a massive milestone is when we can interact with
any of the old masters through AI,
right? And then you connect that with BCI
and have instant access to all of the world's great masters interacting with
your experiences in real time. That, I think, is what would really be referred to as the
singularity. Yeah. You know, I had my kids and my mom play with these avatars.
It's voice-based: you can speak to it, ask it questions, and it responds. He had built Socrates and Plato and
Aristotle and a whole slew of others, Mo Gawdat and Ray Kurzweil. And so I had all of these AIs out there,
and their answers were amazing. They were in character, and they incorporated all of the knowledge that was there. And it is the future
of learning, right? We've talked about this so much. It is an exciting time to be alive.
So this is just a small taste of honestly what has happened at the Abundance Summit. Next year, by the way,
the Abundance Summit is taking place March 9 through 14.
The theme is going to be convergence.
We have an amazing group who are coming.
If you're interested, you can just go to A360
or abundance360.com to learn more.
To find out more about exponential organizations,
Salim, and the work that you do, where do folks go?
You go to openexo.com where we have a community of 35,000 folks
trained up in some of the methodologies in the book.
And Peter, you were kind enough to give us a live stream of certain parts.
And we had a live chat going with hundreds of our community members in real time.
It was amazing.
Yeah. Anyway, amazing. Yeah.
Anyway, thanks, buddy.
Always love this session with you.
And it's an extraordinary time to be alive.
It really is.
Amazing.
Thanks, Peter.
Thank you, buddy.