Moonshots with Peter Diamandis - Humanoid Robots, the Job Market & Mass Automation - The Current State of AI w/ Emad Mostaque | EP #114
Episode Date: August 13, 2024
In this episode, Emad and Peter discuss Emad’s new paper, “How to Think About AI,” and the necessary steps the world needs to take to ensure AI safety. Recorded on August 5th, 2024. Views are my own thoughts, not Financial, Medical, or Legal Advice.
03:24 | Understanding the Complexity of AI
55:51 | Preparing for the AI Revolution
01:20:26 | Open Source AI for All
Emad is the founder of Schelling AI and the former CEO and Co-Founder of Stability AI, a company funding the development of open-source music- and image-generating systems such as Dance Diffusion, Stable Diffusion, and Stable Video 3D.
Read Emad’s paper: https://x.com/SchellingAI/status/1818600200232927721
Follow Schelling AI: https://x.com/SchellingAI
Follow Emad on X: https://x.com/EMostaque
____________
I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors:
Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/
AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter
Reverse the age of your skin with Oneskin; 30% off here: http://oneskin.co/PETER
Get real-time feedback on how diet impacts your health with levels.link/peter
_____________
Get my new Longevity Practices 2024 book: https://bit.ly/48Hv1j6
I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog
_____________
Connect With Peter: Twitter Instagram Youtube Moonshots
Transcript
AI's capabilities will soon accelerate at an unfathomable rate, making today's tech
look like stone age tools.
AI is complicated and hard and scary.
Intelligence is like electricity, it's like clean water.
It becomes available to everyone.
How do you quell the fears or should we have fears about jobs?
Are you embracing that technology to do more and create more value or not?
What's it going to feel like when you're seeing humanoid robots walking around
the streets all around you all the time?
Actually, this is the real existential threat to humanity.
10 billion robots and a bad firmware upgrade.
Everybody, welcome to Moonshots. Today, my guest is Emad Mostaque,
the former CEO of Stability AI, the founder of Schelling AI, and the author of a recent white paper called How to Think About AI.
This is a paper I found extraordinarily compelling, concise and really something that I encourage you to read.
We're going to dive into it. We're going to talk about investing in AI, talk about international cooperation, talk about something called Centaurs, which is AI-human collaboration, talk about the growth and the speed.
Are we getting to AGI and when?
One of my favorite conversations, it runs two hours because it was just an incredible
master class in thinking about AI.
And if you love conversations like this, please subscribe.
By the way, at the time of this recording, my blood glucose level is 97, which is exactly where I
want it. I like keeping it low and flat below 105 if I can. I use the levels app
to help me measure that to understand what food causes my blood glucose and my
insulin to spike and what doesn't. If you're interested, my team is going to
link to levels in the description below.
All right, onwards to the episode.
Hey, Emad, good to have you back on Moonshots.
It's been some time.
Are you in London today?
Yeah, I'm in London.
Thanks for having me back.
It's always a pleasure.
Yeah, last time we spoke, you had just stepped down as the CEO of Stability.
And I know that you've been busy.
And in particular, the paper you just
put out on how to think about AI, I think,
is an incredibly cogent and beautifully written paper.
I'd like to discuss that today.
And I want to start by quoting some
of the elements in the paper here for folks who have not yet
gotten to it.
You say, let's be clear.
Humanity is standing at the edge
of the most transformative events in history.
It's a wake-up call, a roadmap,
a vision of our imminent future rolled into one.
You said there will be an AI explosion,
100 billion AI agents and a billion robots.
The current models, they're just the appetizer.
AI's capabilities will soon accelerate
at an unfathomable rate, making today's tech look like Stone Age tools. No industry is
safe. I love this last one here. The cost of intelligence is plummeting. The value of
human ingenuity in applying these new resources will skyrocket. It will shortly become one
of the most important skills you can have. It's time for urgent, decisive action to harness
the benefits and mitigate the risks of this AI tsunami. So that's what your paper is about
and anyone who hasn't read it yet, I highly commend it to you. We're going to dive into
all of these elements. Let's begin. Emad, why did you write this paper?
I wrote this paper because AI is complicated
and hard and scary.
And I thought it would be great to have a framing
that the average person could understand
and how it applies to them and their lives
and their part within it.
Because I think, you know, one of the things I mentioned in the paper is that there's a lot of talk about AI agents and all this, but not about our own agency within this.
It can be far and remote where everyone's talking
about supercomputers and national state things
and super geniuses running it.
Whereas this is a technology that's coming into every one
of our lives every single day.
And there seems to be a disconnect there.
So I wanted to create this piece as an initial framing to try and bridge that.
I love that. And I know this is the prelude to a much deeper paper that you're working on, that you'll release sometime this fall, which I'm excited to talk about in our next conversation when we get together.
I think the subject of AI is pervasive.
I mean, it's the topic of dinner tables and boardrooms and executive suites and the financial markets.
And I think you're right.
People really have very little context on how to think about it other than the names OpenAI and Google and Anthropic and NVIDIA.
But how it affects them, you know, Ray Kurzweil talks about this idea of the singularity, which he puts out into the 2040s. But there really is an AI singularity coming too, right?
It's a point at which the tech is moving so fast, it's super hard to understand what's
going to happen, what life will be like on the back end of that.
Do you agree with that?
Yeah, I agree completely, because it's a pervasive, systemized technology, just like writing, just like reading, but it's got context that suddenly emerges.
So taking this podcast and translating it into dozens of languages, but in our own voices, is an amazing thing, and that could be done now.
You know, having technology that has real-world impact in being as good as any call center worker, or allowing you to create any song on any topic within seconds, is something we've never really seen before.
But it isn't just constrained to these giant supercomputers and companies and things like that, because we're actually seeing this technology emerging on our consumer laptops.
We're seeing this technology being used by people in emerging markets and others to build brand new experiences.
And I think that's something crazy that is leveraging the existing infrastructure. And again, we're just at the early stages of that impact.
Yeah, how big is AI today, and how big is it going to get, in terms of financial or, you know, whatever parameters you want to give it?
I think if you kind of look today, the total amount spent training models of all types
in generative AI, this new type of AI that we figured out, is probably in the order of
$10 billion.
The total amount that's been spent on self-driving cars is in the order of magnitude of $100
billion, and the total amount that was spent on 5G is a trillion dollars. So even though
we hear these numbers going around of giant training runs and big investment, and there have been estimates of a hundred billion dollars of GPUs bought for training and running these models, it still feels like early innings for what is a technology
that can really amplify and accelerate the capabilities of any individual
because they can direct these swarms, these armies, these groups of AI agents and soon
physically embodied robots, which is insane to think about.
Yeah, I want to get into what does the average person think about this, and how do you use it besides, you know, ChatGPT or Gemini,
you know, and is it too late to invest in this, right? We have a lot of folks who are
asking, you know, how do I get involved in this? But let's paint the picture still as you do in your paper about where it's going. One of the big ideas that you speak about, and actually you are on stage at Abundance
360 speaking about this, is the explosion of AI agents that are coming.
This concept of the AI Atlantis.
I find that compelling.
I find that enticing and somewhat scary if you don't understand the implications. What does that all mean? What are these billions of AI agents in AI Atlantis?
Yeah, I think AI Atlantis was coined by Nat Friedman, the former CEO of GitHub, on stage when we had that panel together, Peter.
We've discovered this new continent, and the best way to think about these AIs is they're like interns. They're like grads.
We keep treating them like they should be professors or PhDs, but they're just in the early stages of coming out of high school.
These supercomputers are high schools, right? But there's a hundred billion of them and
we can get them to do all these little tasks. But as anyone who's had grads and interns knows, we have to teach them before they can do anything. And once they learn the processes, that's
when you have agents that can go out and do jobs for you. That can translate something in different languages so you can reach a
bigger audience in seconds, or that can paint pictures for you or do SEO. That can be your
call center worker. And so the first step was to teach them their liberal arts degree.
Now we're specializing them. We're optimizing them. And the cost, I mean, I think OpenAI just showed that the cost of GPT-3 originally versus their latest version
GPT-4 has dropped a thousand times, literally a thousand times. It's almost too cheap now.
So we have this new continent and all these workers are coming that can do jobs as reasonably
as any graduate. What does that mean for your personal life,
your company, your country?
Well, it's simple economics.
There's a massive amount of supply of intelligence
and capability and rule following coming,
because these don't sleep.
All they need to be fed is a few FLOPS of, you know, compute and electricity.
How would your life change
if you had a really good group of graduates?
And then they will only get smarter, they'll only get better at following instructions,
and soon they'll have physical bodies as well.
When you look at Optimus, when you look at Unitree and the other robotics companies coming.
Let me dive into this a little bit deeper.
The idea of this AI Atlantis, this continent of 100 billion AI graduate students, AI agents that can do your bidding for you.
Like you said, you're feeding them power and data instead of pizza.
How much agency will they have? How will we be able to drop them into jobs?
To drop them in and say, hey, you're part of my marketing team, you're my admin, you're my finance person.
How far are we away from having competence there, and how will we see that develop?
I think that we're not far at all.
We just had to get to a certain level of performance on the base models of the base degrees.
So again, I think the best way to think about these are liberal arts graduates.
They've had a diverse education because we've literally shown them the whole internet, right? Those are the trillions of words that go into the
GPTs or the billions of images that go into the image models or hundreds of
millions of songs. And now we're specializing them and these are the
workflows that we see. Because, like, when I started as a programmer, God, 23 years ago, we didn't have GitHub and libraries and the way programming is now, which is like you pull from pre-made building blocks and you put it together.
We wrote directly to the computer.
We had these very low-level building blocks that now people have made into houses and, you know, Lego blocks that come together.
It's similar with AI.
We had to get a certain level of performance
of the base models, and now people are chaining
the base models together in repeated processes.
They're integrating it with lookups for databases.
They're learning from how people are experiencing things.
If you compare ChatGPT now to ChatGPT a year ago,
well, it's just two years old now, right?
There's a massive difference
because it's learned from how people have used it.
And they're actually paying people
for specialist use cases.
So I think that you see hundreds,
thousands of companies deploying these pipelines right now,
and they'll standardize very, very quickly.
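The pipeline pattern Emad describes, chaining models together and integrating them with database lookups, is often called retrieval-augmented generation. A minimal sketch of the idea follows; the toy document store, the keyword scoring, and the stubbed `generate` step are all hypothetical illustrations, not any specific product's API.

```python
# Toy sketch of the "chain models + database lookups" pipeline pattern:
# 1) retrieve relevant context from a store, 2) pass it to a generate step.
# The generate step is a stub; in practice it would call a language model.

def retrieve(query: str, store: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        store.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def generate(query: str, context: list[str]) -> str:
    """Stub for the model call: just stitches context into a prompt."""
    prompt = "Context:\n" + "\n".join(context) + f"\nQuestion: {query}"
    return prompt  # a real pipeline would send this prompt to a model

# Hypothetical company knowledge base the agent can look things up in.
store = {
    "pricing": "Our pricing starts at ten dollars per seat per month.",
    "support": "Support hours are nine to five, Monday through Friday.",
}
answer = generate("What are your support hours?", retrieve("support hours", store))
```

Real deployments swap the keyword match for vector search and the stub for an LLM call, but the loop itself, look up, then generate, is the standardized piece.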
I've heard it described that you might bring in an AI agent, and when they plug into your business, they'll read all of your past email traffic, they'll read all your Slack channels, they'll be up to speed on everything ever said in your company in the marketing domain.
And then there'll be a point where you're Slacking them and you're speaking to them and maybe even Zooming with them as if they were a member of your team.
Is that a good description?
Yeah, so apps like Lindy.ai do that pretty much already.
But as we get more integrated, that will be even more
because, for instance, your data all sits in an Amazon bucket.
Amazon will roll out services within the next year
that allow you to take literally a snapshot of all of that
and give you your own personalized agent that you own,
that you can talk to on Slack
and it will retrieve all this information instantly.
We're seeing Snowflake and Databricks,
again, build this type of technology and implement it. And Microsoft with its agents in Office, these are all coming through literally in the next year.
So again, at a graduate level, we shouldn't expect more than that. And again, sometimes they make mistakes, you know, sometimes they try too hard. We will have these agents coming through.
The single-shot thing is very interesting as well, where we're making it a little bit superhuman.
So Google, with its Gemini model, which has just outperformed OpenAI's model for the first time, so it's the top-performing model, which is very interesting, has what's known as a 2 million to 10 million token context window.
A token is like 0.7 words, so let's say it's a million words of instructions you can give it.
When we first started, we could only give it 2,000 to 4,000 words.
Now we can upload whole movies.
We can upload an entire bookshelf and everything you've ever said, Peter.
And we don't have to tune the model anymore.
It can look at it all at the same time
and say, this is how your speaking style has evolved over time.
You know, or these are the most impactful things you've said
and point to the different parts of that.
This is where you seem like maybe you've been traveling
a bit too much at times.
I don't know.
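The token arithmetic above is easy to sanity-check. The 0.7 words-per-token figure is the rough rule of thumb Emad quotes; the sketch below just applies it, and the results aren't exact for any particular tokenizer.

```python
# Rough context-window arithmetic using the ~0.7 words-per-token rule of thumb.

WORDS_PER_TOKEN = 0.7  # rough average for English text; varies by tokenizer

def words_that_fit(context_tokens: int) -> int:
    """Approximate how many words of instructions fit in a context window."""
    return round(context_tokens * WORDS_PER_TOKEN)

# Early GPT-era windows vs. today's long-context windows:
print(words_that_fit(4_000))      # -> 2800 (a few thousand words)
print(words_that_fit(2_000_000))  # -> 1400000 (~1.4 million words)
```

So a 2-million-token window holds roughly 1.4 million words, consistent with the "let's say it's a million words" framing in the conversation.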
You know, I think one of the interesting things, though, is if you do train up an AI agent, having had it consume all of your corporate knowledge and all of the previous phone conversations your sales agent or marketing person would have had, so it knows all the context, and you like what that AI agent is doing, you can replicate 10 of them, 100 of them, 1,000 of them instantly.
So the implications there, we'll get to the conversation about jobs.
I think it's really important.
But we also, I think as an entrepreneur,
we have a lot of entrepreneurs listening to moonshots,
the idea that there will be the one, two, or three person
unicorn that comes online.
How do you think about that potential?
I think it's 100% gonna happen in the next year or two.
Because I mean, ultimately,
anyone who's an entrepreneur knows
that the key bottleneck is simply talent.
And there is the chef talent
that comes up with creative things
and brand new recipes and other cooks.
And you just want them to do their jobs
and you want them to do their jobs well.
Like there's a sales function that humans
are still gonna be good at for maybe a period of time,
but all of the day-to-day stuff,
like the content, shall we say, versus the creativity,
the AI can follow instructions much better,
which allows you to scale much better.
Again, as an example, media.
There is no reason, a year from now, that every piece of media isn't translated into every major language the next day, with your voice in there and a transcript.
I know we're starting to do that with this podcast, but you're right, it's every language, eventually almost instantly.
We can even do it live now. And then what does that do to your audience?
Your audience suddenly goes from just the English speakers to every language, right?
And then that increases your funnel.
Every business is a repeating process where it's all about cost going in versus value
going out.
And as you said, if you can replicate this agent 100 times, it isn't like hiring 100
graduates or 100
sales specialists.
You push a button and you get them for cents or dollars.
By the way, you can, you know, increase the number during your peak season and drop them off without cost during your off season.
Let's talk about jobs for a second.
You know, in the paper you say, you know, AI automation puts 300 million full-time jobs at risk.
And then you go on to say the adoption of AI is going to create 97 million jobs by 2025.
So I think one of the biggest fears people have is about the job market.
And what most of us don't realize is that most of the jobs that we all do today
didn't exist even 10, 20 years ago.
And we've completely disrupted and reinvented
what humans have done for millennia.
But the pace of change here is a lot faster.
What are your thoughts?
How do you quell the fears?
Or should we have fears about jobs?
I think it's a pace thing.
As you said, we've always responded.
You know, it's been said that there are only two types of jobs, you know, or things.
There are things that you need for living
and there's things you need for entertainment in a way, right?
The survive and thrive elements here.
Like, we have had such an improvement
in our standard of living and even a reduction
in the number of hours worked,
and we've created brand new industries
and service sectors and more.
But the pace of this is insane.
But if we look at it from a localized perspective, what's in danger initially is the outsourced jobs and the entry-level jobs.
Because this is where that technology,
we've discovered this new continent
and they work for electrons, right?
They work for GPU hours.
In the intermediate, middle level, you could be so much more productive, which means that
you can have more supply and more demand of that stuff.
And then if you're a leader that can really leverage this technology, bring it together
and think about how am I creating value, you become a multiplier.
So I think this is, in aggregate, massively impactful for economies, but we have to really look at what does a generation that has been taught to be programmers do when this technology is as good as any graduate programmer.
What does it do when, again, any graduate job that you can do via a screen, the AI can probably do better in a year, or the outsourced jobs as well for the Global South?
I get it, but still, the fears are gonna be fierce. I remember you were on the Abundance stage with me, I think it was two years ago, and you said, look, we're not gonna have any programmers in five years, and that was, like, front page across India, you know, and it caused fear.
And of course we're not gonna have the old-fashioned programmers, which are just humans on their own; it will be AI-human partnerships doing the programming.
But let's talk about where you think, should people have some level of concern about the job market, or do you think, as we have done decade after decade, century after century, we've invented brand new approaches to jobs?
100%. I mean, again, technology is an enabler. And so it's just about, are you embracing that technology to do more and create more value or not?
Like, what would an accountant be who didn't use Excel or a spreadsheet?
You know?
Like, it doesn't make sense.
Or a salesperson who doesn't use a phone.
This is just what technology is.
And it's up to us to embrace it and create even more value that way.
And the nature of jobs will change.
If you're talking generalized computer science terms, I think, again, we're two years into our five-year prediction, right?
So you've got three more years.
I don't think you need to learn it.
Even now, if you use Claude and the artifacts feature from Anthropic, it will create programs
on the fly for you.
And you just talk to it like a human and it adjusts them.
Devin is another one that's similar.
Do you need to know the specific magical spells of a programming language anymore?
No, why would you?
You have this assistant.
But what are you trying to do?
You're trying to build software to do a job.
That's all that matters.
And again, when I first did it 23 years ago, we didn't have GitHub or any of these things where you could reuse code; you almost coded up from scratch every time, directly on the thing.
And this is how we do it: we build houses from scratch, but then we learn how to build houses. You know, in our company we do something brand new, but it goes into our institutional knowledge. It's just now we can localize this knowledge.
So I think that the key thing here is just expect the floor will raise, and if you don't embrace this technology, then you're going to be left behind, you're gonna be left, you know, high and dry.
I think it was one of the points that we spoke about at length.
You know, the way I put it bluntly is, there are gonna be two kinds of companies at the end of this decade: those that are fully utilizing AI and those that are out of business.
Do you agree it's that black and white?
I think it is that black and white. I don't think we've got to the competitive stage yet, because it was build the basic building blocks first, now start to bring them together, and then proliferate the technology.
But in the next few years, we will see that competitive tension, where companies can outperform because their core loops are embracing AI, so they can do it better, faster, cheaper.
And, you know, the global economy doesn't look too hot today, so it's gonna be highly competitive.
If you can reduce costs and have higher revenue, you'll outperform.
Real quick, I've been getting the most unusual compliments lately on my skin.
Truth is, I use a lotion every morning and every night, religiously, called OneSkin.
It was developed by four PhD women who determined a 10-amino-acid sequence that is a senolytic that kills senescent cells in your skin.
This literally reverses the age of your skin, and I think it's one of the most incredible products.
I use it all the time.
If you're interested, check out the show notes.
I've asked my team to link to it below.
All right, let's get back to the episode.
I'm always looking for analogies that are understandable.
I think 100 years ago, when we started to electrify the nation, people took every possible mechanical process, clothes washing, dishwashing, drills, and added electricity to it, and if you didn't have that electrical adjunct, you were out of the leadership. I'm not sure you had a job, let alone a leadership position.
And then it was the dot-com revolution that did it again. Any other analogies that seem appropriate other than those for you?
Well, again, it's the nature of technology. You go from having something that is singular and generalized to more and more specific, automated, optimized. You could ride a horse so you could travel greater distances. You know, you don't need telegrams anymore because you've got the phone.
We're a constant species where we bring this technology
to enhance our individual capabilities.
Someone from a couple of hundred years ago
would die of shock at what we're capable of today
on an individual basis.
But again, we adapt, we improve,
and this is where our GDP has gone up so much.
Our standard of living has gone up so much.
Have we solved everything? No,
but everything from the agrarian to the industrial to the dot-com revolution has embraced that. And the AI revolution, I think, is one of the biggest indicators of that as well, because
intelligence is like electricity, it's like clean water, it becomes available to everyone.
You made this point and I love it.
You said, listen, intelligence needs to become national infrastructure, global infrastructure.
Can you describe what you mean by that?
So the internet is this information superhighway.
But now, with this intelligence and these agents and this technology, if you've only got a few people capable of using it, and it represents only a few outputs and a certain few viewpoints, then you could have hundreds of millions, billions of people that don't have access to that technology. They're left behind even more. It'll exacerbate these gaps.
However, if we make sure it's distributed and open, then it raises all boats, because it isn't centralized and controlled.
So, for example, OpenAI has done many great things, but they banned all Ukrainians from using DALL·E 2, their image generation software, for nine months, and all Ukrainian imagery.
Imagine if they were the only image creation software.
It would mean that there'd be a whole nation that could not create images.
They never explained it to me, but I think it was caution around sanctions and other elements, and this technology is now being weaponized politically.
You know, like, is this a nuclear weapon?
Is this a deepfake creation machine?
But again, who's deciding on this technology? Is it a right to have access to augmented
intelligence? Is it a right to have access to electricity, water? We're not doing great as a
species. We're doing better. You know, hundreds of millions of people are still malnourished.
A bit over a billion people still don't have internet access. But I think that if we can make intelligence available
to everyone, there's no problem you can't solve.
But more than that, people will be able to solve it
themselves, which I think is amazing.
Yeah, I love that.
Define an entrepreneur as someone who finds a problem,
solves a problem, and the more empowered entrepreneurs,
the better the world is, and the more problems
that are solved.
You know, there's an interesting point I've tweeted.
Well, actually, Peter, I think, you know, it's a force multiplier, right?
Because anyone can change the world if they can convince other people to follow them, right?
That's how you create companies, you create movements and more.
And again, we now have this new continent where the people will follow you,
these entities will follow you to do things.
So anyone can be a force multiplier.
And that's crazy to think about because if you are in an underprivileged area, you don't
have the people around you that are skilled.
How will you hire them and get them to come to your company, to come to your movement?
Whereas now they're available to everyone. Yeah, it used to be where you were born, whether your town had a library or a phone or even
transportation, the color of your skin, your gender, all those things determined your ability,
your agency to create a purposeful and meaningful life. And Google came first, and cellular phones came next, and now this is a major force multiplier.
There's an interesting point that I've noted
and it still blows me away.
I think of generative AI today
as the world's most powerful technology
and I think the right word is flabbergasted, that effectively the world's most powerful technology is available for free to anyone with a smartphone.
You know, if I came back from 20 years ago and described what we'd have today with Gemini and ChatGPT and all of that, and asked how much do you think it's going to cost per month to use this technology, I would have never guessed zero.
Does that surprise you?
That's astonishing. I mean, we have this concept of, hey, I want to make a picture, I want to make music, I want to write an essay, and it costs X number of man-woman hours, right?
Yeah.
Yeah. All of a sudden it costs nothing. From the thought to the creation process, sure, you can spend a week and make it even better, but to get to an 80-20 now is almost instant for any modality.
Like, again, we're not talking about perfection, but you don't expect your graduates to be perfect in their output, from drawing to coding to anything.
That rapid iteration is just something that's beyond belief, because again, when you look at these models, it's just a few gigabytes and they can run on your smartphone.
And it does seem something's wrong with that energy equation, right? Energy is down that much, to nothing.
The economic equation is what really bugs me, the value we're extracting. I mean, Google did that, and it monetized on advertising, but it also uplifted humanity in such an extraordinary way.
You know, I just came back from India, where Jio, their 5G network, has demonetized data and communications at an extraordinary rate.
And so we are seeing this abundance equation of digitization, dematerialization, demonetization, democratization, over and over and over again, and this is the ultimate one.
And I'm just, you know, it feels like we haven't been able to even understand a fraction of 1% of the implications of this. So you think about the energy it takes to put a doctor through school, right, and the cost of that, millions of dollars in the US.
Yeah.
And a decade. That expertise can suddenly be available and replicable at the push of a button through a digital AI, and soon an AI doctor and surgeon that outperforms human doctors in empathy.
There's this relationship between energy and GDP per capita.
So that's pretty much a straight line.
And it just feels that all of a sudden-
Energy is output and power available to the society.
Per person.
Yeah. Per person, yeah.
So the more energy you have per person, the higher your GDP
because you could build stuff, but then more than that,
you can support service industries and things like that.
You can support the million dollar doctors
with a decade of work.
Yet all of a sudden, the energy equation for intelligence
has collapsed to nothing.
Google and the previous internet reduced the cost of consumption of content down to nothing.
And that's why they could have ad and other models to make that go.
They want your attention, shall we say, and your purchasing on the other side.
Now the cost of creation almost has dropped to nothing,
not creativity necessarily,
but the creation of information of knowledge.
And again, not novel knowledge, it's just how can we distribute this because we never have enough doctors.
We never have enough good doctors. And again, we're not talking about great doctors. I think
all these discussions of AGI are like superhumans and the top 0.1%. I just want an average doctor to be available to everyone, an average teacher to be available to everyone, but the average level to be really much better than it is right now.
Well, I get it, but I
Believe, you know, I mean spent a lot of my time in the intersection of AI and health as do you we have that in common
that
that
there is no way that any human doctor can integrate the
The gigabits of data that come out of a current, you know a digital upload
understanding all of the blood chemistry is your genomics your imaging data
But in AI can it can contextualize all of that and then give you some root cause analysis. So I think that we're not far away from having the average AI doctor be better than most
all physicians out there.
I think we're there pretty much now.
It just hasn't been integrated.
If you look at the benchmarks, from radiology to the long-context-window work. But again, this is the exciting thing: as you go from individual models to pulling them together, you have your own medical team. You're not just calling on one doctor. And we see this with the technology as well. We had gigantic models, and we still get them, like the big new Llama, but now we're moving to mixture of experts and routing to specialized expert models.
So these are agents and sub-agents.
There's an AI radiology, you know, x-ray radiologist agent and one for MRIs and one for CTs and
one for blood chemistries and one for genomics and all of them are being pulled together
as your medical team.
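As a sketch, the agent-and-sub-agent routing described here can be thought of as a dispatcher in front of specialist models. Everything below is hypothetical: the agents are stand-ins for real models, and production routers are usually learned classifiers rather than keyword tables.

```python
# Minimal sketch of routing a query to specialist "agent" models,
# as in the medical-team picture above. All names are hypothetical.

def radiology_agent(query):
    return f"radiology read: {query}"

def genomics_agent(query):
    return f"genomics read: {query}"

def blood_chem_agent(query):
    return f"blood chemistry read: {query}"

# Keyword -> expert table; a real router would be a learned classifier.
ROUTES = {
    "x-ray": radiology_agent,
    "mri": radiology_agent,
    "variant": genomics_agent,
    "cholesterol": blood_chem_agent,
}

def medical_team(query):
    """Fan the query out to every expert whose trigger word appears,
    then pool the findings. The 'common language' between agents
    here is just plain text."""
    findings = [agent(query) for key, agent in ROUTES.items() if key in query.lower()]
    return findings or ["escalate to generalist"]

print(medical_team("MRI shows a lesion; cholesterol is elevated"))
```

The point of the sketch is the shape, not the medicine: one coordinator pools several narrow experts, which is the "medical team" picture rather than a single monolithic model.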
Yeah.
And they have a common language to talk to each other, and, you know, the outputs are checked as well. Like, in medicine, before we get AI doctors making diagnoses, every diagnosis by a human doctor should be checked by AI. No one's arguing against that.
I think it's going to be malpractice to not have AI in the loop in some single-digit number of years, because it will save lives.
Humans make mistakes.
They just can't deal with the amount of data.
It's not the way it used to be.
If you can know because you get access to the data, I think there's an obligation to
do that.
Yeah.
I mean, we all know that a physician's scrawl is unintelligible. AI is just catching up with that right now, right? The information flow from that hour-long...
They teach you that in medical school, by the way. It's just a few lines, right? We lose all that rich context. Is there anywhere in the world, maybe Fountain Life, where you can go and see your lung radiology over time and how it progresses? We just saw, I think, a paper come out that showed you could detect breast cancer up to five years earlier, because they just dumped in all the data and it analyzed every smallest detail. You can't see how the world evolves, because the lack of context from black-and-white scrawl made our medical systems assume ergodicity. A thousand tosses of a coin are treated the same as a single coin tossed a thousand times, but humans are individuals.
We're complex systems, but everyone gets 500 milligrams of paracetamol.
We don't care about things like cytochrome P450 abnormalities that mean you metabolize codeine into morphine, or fentanyl into death, you know.
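The non-ergodicity point, that a population average describes no individual patient, can be shown with a tiny calculation. The metabolizer types, probabilities, and population shares below are invented purely for illustration:

```python
# Hypothetical metabolizer types for a standard drug dose, with the
# probability that the dose works for that type and the type's share
# of the population. All numbers are invented for illustration.
POPULATION = [
    # (type, P(standard dose works), population share)
    ("normal", 0.60, 0.7),
    ("poor",   0.10, 0.2),
    ("ultra",  0.05, 0.1),  # e.g. ultra-rapid metabolism: codeine -> too much morphine
]

def population_response_rate():
    """Ensemble average: the single number a trial would report."""
    return sum(p * share for _, p, share in POPULATION)

# The ensemble average describes no actual patient: a "poor"
# metabolizer's own odds are 0.10, not the population's 0.445.
print(round(population_response_rate(), 3))  # 0.445
```

The trial-level number looks respectable, yet every subgroup experiences something different, which is exactly why a one-size-fits-all dose misleads.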
Yeah, I mean, there really needs to be a genomics agent before you take anything. The other thing, which people don't realize: you feel like, when you get prescribed a particular medicine for a particular situation, that it's going to work. But the fact of the matter is it works in about a third of individuals, if you're lucky. It was enough for the FDA to approve it, but there's no guarantee it works for you. You know,
the other thing that you wrote in your paper, and I don't want to let it go unsaid, is, besides this exponential rise of AI, AGI, ASI, whatever we're going to call it, the emergence
of humanoid robots.
We've got probably on the order of 20 plus well-funded humanoid robot companies.
You mentioned Unitree in China.
We've got Tesla building Optimus here.
We've got Figure, which is just releasing Figure 02.
And, you know, Elon's not necessarily been correct on his timing all the time, but he's been correct on pricing.
And it's, you know, sort of a dollar per kilogram for a certain level of complexity.
And his projection for Optimus is $10,000 a unit, let's double it, give them 100% margin to
$20,000 a unit. That's something that you could rent for a hundred bucks a month, having
a humanoid robot running the highest level AI model accessible 24-7 and that's going
to change the world.
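The numbers here are illustrative projections from the conversation, not quoted prices, but the rental arithmetic is easy to check:

```python
# Illustrative projections from the conversation, not quoted prices.
unit_cost = 10_000            # projected production cost per unit
retail_price = unit_cost * 2  # doubled to allow a 100% margin
rent_per_month = 100

# Months of rent needed to cover the retail price, ignoring
# financing, maintenance, and electricity.
payback_months = retail_price / rent_per_month
print(retail_price, payback_months)  # 20000 200.0
```

At those figures a $100-per-month rental covers the $20,000 price over roughly 17 years, so the economics implicitly assume long service life or cheaper production, which is consistent with the cost curves being discussed.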
Yeah, I think that's the thing. The embodied agents and the advances we've seen there, with on-device and cloud AI now being able to do segmentation, analysis, robotics, like it's here all of a sudden,
all at the same time. We see this actually in history a lot where the same breakthrough
just happens in different places almost. And we're seeing this all the time in AI where you see
literally disparate breakthroughs because the control problem in robotics is pretty much there now for 90% of use cases.
Again, not 100%, but these robots can manipulate.
They can read a recipe now, we've seen, and they can just make the recipe from the instructions
with full autonomous control. And they can do autoregressive learning as well in swarms.
So Unitree have that and others are implementing that
where one robot learns from the tasks of others.
Which is...
And that's interesting but...
Insanely powerful.
You know, bringing it back to health, you know,
the best surgeons in the world will be humanoid robots eventually, ones that can see in ultraviolet and infrared, didn't have too much caffeine that morning, no fights with a boyfriend or girlfriend the night before. And it's just the capex and the cost of electricity, and then all of a sudden this robot is a surgeon in multiple distant villages in India and Africa with the highest surgical skills. And like you said, people should know this:
the number one question you ask your surgeon
when you're interviewing them
is how many times did you do the surgery?
And if they're all operating on a single operating system,
then the robots all share the same millions
of operations they've done.
And this is again our collective knowledge
where it's taught through medical school,
but AI will obviously be able to teach other AIs
much better.
And if we can standardize that,
then that's when it can go massive.
And it comes then down to just a production line process.
It's like Xiaomi, you know,
the smartphone manufacturer in China.
They thought they were going to be added to US sanctions.
So they decided to diversify into electric vehicles.
And then China's industrial power means they now make really good electric vehicles. Like, you will see these industrial
giants move forward and produce millions, tens of millions, hundreds of millions of
robots. And the really fascinating thing is that helps with the demographic issues of
China and Japan and places like that at exactly the right time. Exactly the right time. And, you know, the predictions we've seen from Elon and from the CEO of Figure are, you know, as many as 10 billion humanoid robots in the 2040s, as many as there are humans.
I asked a friend of mine, Emad, and I'll ask you, what's it going to feel like when you're seeing humanoid robots walking around the streets all around you
all the time?
It's going to be weird, isn't it? Like we already see these little ones going around
that are not humanoid, but there's no reason you can't make them indistinguishable by 2040.
It's really sci-fi. I mean, actually, this is the real existential threat to humanity.
Ten billion robots and a bad firmware upgrade, you know. Of all of the scenarios, literally, that's the one that could get us all. I think there should be some checks, extra checks, about...
Yeah. You know, when I asked my other friend that question, what's it going to feel like, he said, normal. And I think, you know, we're going to have these moments of freakout. I remember I got a Model X with the gullwing doors, and in the first month everybody's, you know, staring at it, like, oh my god, that's amazing. And then, you know, it becomes normal.
It becomes normal. I mean, and again, like,
there's different things in different cultures. In Japan, the robots are very much side by side, or you become part of a Gundam, whereas we have this Terminator concept of robots, or AI god, in the West. But ultimately, as you said, like, people are used
to being able to type in words and make magical images in two seconds now, or using ChatGPT for
your homework. Like you just talk to some of these kids and they're like, yeah, you just use it, you know?
Like there's an initial shock
and then all of a sudden the magical becomes mundane.
I think we're very good at adapting to that with our filters.
But 10 billion robots means... I think one of the ways to really think about this is: how much energy do you need to put in, your own physical and mental energy, for a job to be done? And it's collapsing everywhere, the amount of energy that we need to do productive things, which means we can do so many more productive things.
Yeah, I think the scarcest resource in the world is time. You know, I'm working on it from the longevity standpoint of how we add more years. AI will be the most impactful agent in how we reach longevity escape velocity. But I think people need to realize that we've been buying back time with technology. Google saved us hours of going to libraries looking for that book. And Zoom, you know, normally you and I would have to fly, me to London or you to LA, to have this conversation; instead it's a click and we're together. So it's about buying back time, which is our most scarce resource.
Well, I think there's two things. There's time, but there's also focus and flow, which I think are the other two interesting parts of that. So we know how useful an amazing chief of staff or EA is, and everyone will have their
own one. So like, how do you live longer? Well, first of all, exercise and eat well.
But you know, you don't have that person, that entity that you trust telling you and
advising you sometimes. So you cheat, you slip up, etc. No one will ever need to be alone again, and everyone can have a persuasive agent next to them,
which is something insane to think about, that knows exactly how to talk to you to put you into
good habits. And it might be physical embodied as well, it might cook your breakfast and everything.
Like how difficult is it to have healthy meals all the time that are fresh?
It's hard.
With robots it's easy.
$100 a month as you said.
And that's kind of crazy to think about
from that longevity perspective.
But then it's about our own biggest enemies are ourselves.
You know, like I took off a few months
because I needed to clear my head
and then read everything and now I'm compressing it
and putting it out and off we go again.
If I'd had all the help that I needed and it's not like the people around me didn't
help, I would have done even better.
But everyone now can have access to that, which is going to be amazing for global mental health if we do this correctly, and for the capability of people to just achieve so much more by
having that focus and flow.
Talk about flow a little bit more.
How do you think about flow and do you think humans can enter flow with AI partners, AI
agents?
I think 100%.
When I talk to all the creatives, you know, Stability is one of the generative media leaders.
How do the top artists use the AI?
Do they use it to just make content?
No, they're like, we like to jam with it. You know, it helps us enter that flow state, especially when we customize the
AIs for their catalog and things like that.
And you can use it as a discriminator. Like, you should not use ChatGPT to write your essays. You should use it as a sparring partner to get you to think outside the box
and look at things in different ways.
That's the best way to use this.
You know, like a very smart precocious graduate that occasionally gets things wrong.
I think this AI is original, creative, but we haven't built the systems to keep that flow state going. And the flow state is the one where everything is just kind of coming and there are no barriers between ideation, creation, and iteration. You're looping.
Yeah. One of the things I like to speak to entrepreneurs about is, you don't know how to
think other than the way you know how to think. And that's where generative AI models can come in.
You could describe to it, this is the problem I'm trying to solve.
How would Steve Jobs take a shot at this? How would Emad Mostaque take a shot at this?
You can actually have it digest and present to you different approaches that break you
out of your predetermined mindsets. And that's pretty powerful. But then also understand you.
Again, having a good sparring partner is amazing.
And there's this base level, you know,
like when we grew up with our parents
and they supported us to go to school,
you didn't have to worry about all of that.
But then you grow up and you have to worry
about all of that.
You won't have to anymore.
And this other level where it is the sparring partner,
where it does look from different perspectives,
where it knows exactly how to talk to you,
cajoling or criticizing or otherwise.
But if you own that AI,
you know that it's on your side versus someone else's side.
Like what is the objective function of that AI?
Is it trying to sell you something
or is it looking out for you?
And this is gonna be one of the most important things
because we're inviting them already into our everyday lives.
Like, Apple Intelligence is now going live on hundreds of millions of smartphones. And it is smart, and it's persuasive already. And it's only going to get smarter and more persuasive.
But is it looking out for you or someone else?
Depends on how they build it.
That's a really important point of who is the AI loyal to.
Is it your version of Jarvis?
I use Jarvis as the frame of reference for this AI.
And I guess what you're saying is, listen, Jarvis,
I want to be more productive today.
So if you don't mind, keep me inspired and keep me focused
and keep me productive to reach these goals.
And having a muse almost in one sense that plays the right music,
tells you the right joke, gives you the right data points at the right time.
Well, one of the things we discussed in the paper is this concept of
we're going to move away from files, so PDFs or Word documents or images, to flows: points in time and all the things you pull together so you can rework outputs.
Your life is like that.
And if you have AIs that understand your context, is that AI provided by Google that's trying
to sell you ads?
And so it'll be like, well, you can do this by buying Gatorade, you know, and this and that.
Or is it something which is like, no, like keep away from this social media junk, you know,
you're clearly getting stressed by debating over who's going to be vice president or something like that.
You know, I've got you, you know, these are all various interesting things because people have agendas.
And one of the things the previous internet captured was our focus, to sell us ads. And that's fundamentally about manipulation. And so if we make these AIs that are massively persuasive and know everything in our flow, you can inject ads like nothing else, with incredibly high CPM, and it'll be free. Because again, if you're not paying, you're probably the product, right?
I am curious, and I've written about this, that once we have this deep AI model integration into our lives, I'm not going to be buying anything myself. My AI is going to be buying stuff. It knows when I run out of toothpaste, it knows when I run out of shampoo, and rather than me seeing some smiling person shampooing their hair or brushing their teeth and being influenced by that ad, my AI knows my genetics and knows which toothpaste or which shampoo is best for me, and it's probably buying it without me ever asking.
Is that going to spell sort of a disruption of the whole ad industry?
I think so. I think the ad industry as it kind of exists is already under siege
because a lot of the numbers were made up.
And we've kind of seen that with the CPM discussions and others.
There's been advertising inflation: when you look at the amounts, every single year the big companies have to increase their ad spend to keep the same results. And now it's being really questioned. When it's disintermediated through independent agents, the value of the consumer brand drops.
That's kind of understandable.
And then again, it becomes about just the stuff that's background and the stuff that's status, the stuff that's living and the stuff that's entertainment. So I think there'll still be massive brands in entertainment and consumer and discretionary. But certainly on the other side, as long as it's good quality and you trust it, does it matter? You get commoditization.
You mentioned in the white paper the work of Daniel Kahneman, who just passed recently, and System 1 and System 2 thinking, and how AI is injecting into that. Would you mind taking a second to recount that?
So, you know, you've got two types of thinking. Like, "oh my god, there's a tiger in the bush" is your instinctive thinking. And then there's your thinking that's very logical, step by step, chain of thought. In fact, that's what it's kind of called.
And the previous era of AI was very much extrapolating, not chain of thought. So what that means is that we had these giant big data sets, we call it big data, and then Meta, Facebook, ran these huge regressions, and with 13 data points about you or me, Peter, it knew you better than your best friends, and they could target you. That's very much instinctive. So right now these models don't tend to do reasoning immensely well. This is why the challenges people put against them are like, well, it can't do mathematics, and then they got better and better at mathematics. But then you saw DeepMind just got to silver medal level, one point off gold, at the International Mathematical Olympiad. It wasn't through these LLM-type models alone; it was through more agentic systems that look at stuff and chain reasoning together.
We gave it calculators.
Yeah, we gave it calculators.
Because these models that we saw have the breakthrough, the ones that are like spotting a tiger in the woods, instinctive models: they guess the next token, the next word in a sentence, by having all these contexts together. Or diffusion models, which do the images: they learn how to deconstruct and reconstruct images, and that's the process architecture there. So it's all about process flows, as it were, and instinctiveness. And that's what was missing: this understanding of context, like make it happier, make it sadder, explain it like I'm five.
Classical AI could not do that,
but it could extrapolate really well.
And that was worth hundreds of billions of dollars.
But now we have the missing part,
the other hemisphere of our brain.
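The "guess the next token" objective can be shown with a toy frequency model. Real LLMs learn this distribution over subword tokens with a neural network, but the training objective is the same shape; the corpus here is invented:

```python
from collections import Counter, defaultdict

# Toy next-token model: count which word follows which in a corpus,
# then predict the most frequent follower.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' follows 'the' twice, 'mat' once -> cat
```

Diffusion models invert a different corruption process (adding and removing noise from images), but the common idea is learning to complete or reconstruct from context.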
I'm amazed how AI has actually taken up
the mantle of empathy.
You know, if you had asked me years ago,
what will AI not be able to do in the next decade?
Where are we humans going to be better off? I would have said it's in being empathic and connected with an individual.
And it's actually now for the last year been proven far more empathic than humans in most cases.
Thoughts about that?
Well, I think it's because we don't have the time
to establish connection and it's hard
because we're all so busy with our lives,
like our doctors, our teachers.
How many people do they have to see?
And the empathy from our core organizations
is drilled out of us.
We're mistrusting.
And AI doesn't need to be mistrusting.
And AI has the energy to be
empathic. It learns empathy through the process of its training, because the best way to get someone to trust you is to help them. The best way to establish empathy and rapport is: what's my common context and my common framework? And because it understands these frameworks, they're instinctive, in the sense that it can map them. That's why it can communicate with you on your level. Like, the best way, again, to deal with this AI is, people tend to give it little short prompts. If
instead you say, hi, I'm Peter and I love longevity and you know, I'm the first principles
thinker and I believe in exponential things
and this and that and that.
And you write your own pre-prompt.
In fact, that might be a subject that's taught in school,
write your own pre-prompt.
You know?
Sure. You contextualize yourself; you help the system understand who you are.
If you write a short intro to ChatGPT or Anthropic's Claude, it will communicate with you so much better if you tell it how you like to be communicated with.
And then you can even tell it, communicate with me better, and then it will be even more empathetic and it will understand you even better.
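The "write your own pre-prompt" idea is just prepending a personal context to every request. A minimal sketch; the profile fields and the request format are invented for illustration, and real chat APIs typically take this as a separate system message rather than a prepended string:

```python
# Sketch of a personal "pre-prompt": a profile rendered into text and
# prepended to every request sent to a chat model. All fields invented.
PROFILE = {
    "name": "Peter",
    "interests": ["longevity", "exponential tech"],
    "style": "first-principles, direct, data-driven",
}

def pre_prompt(profile):
    """Render the profile as the standing intro the model always sees."""
    return (
        f"Hi, I'm {profile['name']}. I care about "
        f"{', '.join(profile['interests'])}. "
        f"Communicate with me in a {profile['style']} way.\n\n"
    )

def build_request(profile, question):
    """Every question goes out with the pre-prompt attached."""
    return pre_prompt(profile) + question

print(build_request(PROFILE, "How should I think about AI doctors?"))
```

The same profile works across models, which is why writing a good one is a portable skill worth teaching.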
Which is again kind of crazy to think about because we've embedded these into context.
Bob Shiller, the Nobel Prize-winning laureate in economics, has a great book called Narrative Economics, about storytelling and how it impacts the whole of society on the economic and social level. And we're made up of these stories, these contexts. And now we've encoded these contexts, Kahneman-style, and we can mix and match them, and that's what drives the empathy and the connection. And that's also where, like I said, it's not just
about the robots walking around it's we will put more and more trust in these
robots and these agents because they will be more empathetic and they will
never get tired of us. And also, I think the thing you said earlier, which is so true: they will take the time, because they can run in parallel.
I just rewatched the movie Her, which was still fantastic the second time. And there's a point at which the star asks this AI agent, are you speaking to someone else at the same time? She says yes, and he asks, how many people? And she says something like eight thousand seven hundred and thirty-six. He's taken aback, but of course she's so incredibly patient and loving and empathic the entire time that it feels like he has her full, undivided attention.
But I mean, that can happen, because now we can have one AI per person, you know, or we can have a hundred or a thousand AIs per person, given how cheap they are. So you won't have that moment from Her of, how many people are you talking to? It's just you, and you're special, you know, and you're unique. Like, again, people like these affirmations. They want to be stabilized, because they're constantly filtering to get through the world. And this will understand our filters. If you look at Her, it's very interesting, because there is no visual representation. You know, a lot of these things are like robots or avatars; it's all done through the earpiece, the AirPod. And really, humans are wired for voice. You know, it's the best way to convey.
Like, if you look at it on an extreme basis, again, using tools like HeyGen and others: they took, for example, Milei's speech at the UN and translated it instantly into English. You're like, that really connects better. But then you can take something like Hitler's speeches, and you're like, oh, he does not sound like a crazy ranting German anymore, which is kind of crazy. Or you can take that Olympian from Turkey who just won the silver, you know, with his glasses, without any technology, and they translated that into English. You're like, he seems like a proper dude. These are kind of crazy.
kind of crazy. And again, we have to think what is the effect when we have individualized agents talking
to us constantly that are our best friends and will never betray us and are looking out
for us or they're working for someone else?
Who are we inviting into our inner circle?
You know, and that's really interesting.
We're going to get into the conversation of open source and closed source and how we prepare for this future in just a little bit. There's a bolded quote you have here. Let me read what comes before it. It says: we're going to be processing information and generating new ideas faster than ever before. We're going to see a dramatic increase in the velocity of knowledge. And then you say: the question isn't whether they'll change the world. The question is, are you ready for the world they'll create? And I want to dive into this a little bit, because we're in this super-exponential period of growth. We were all used to Moore's law for the last 50 years, where computational power was doubling roughly every 18 to 24 months.
We've seen that spike up to 10x per year. Elon, when he was on stage with us last year at Abundance
360 said he's never seen anything as fast as this. It's now at 100x per year. How does anybody possibly keep up with
this? What's your advice to that, that CEO, that entrepreneur,
that 20 something? How should they think about AI accelerating
at this speed?
I think exponentials are a bitch.
I think that's the thing.
We're not really naturally good at these things.
It's a true exponential because of the distributed way that it's emerging, right?
And again, nobody's quite sure what the lower bound
of a cost of a piece of intelligence is.
This has broken our mental models,
even for us that are right in this industry. What we can say now is it's too cheap to meter already, and it's just going to get cheaper.
It's not done yet. Maybe it will slow down the pace of it.
But even now, if it stopped today, it's changed society because it's just in that proliferation phase.
Like the seeds have been planted everywhere.
You do not need to hire call center workers anymore.
Every single essay written is likely to have ChatGPT somewhere in the process, right? At school. These are fundamental
one-way doors to society. So the way to think about it is Jeff Bezos has this great thing,
which is the inevitable and the unchanging. So he started Amazon as kind of a bookstore, because he knew it would be an everything store, because the internet was inevitable, and so were the distribution of information and the logistics that would enable it. And books were a great way to start, because you could actually build that. And people always want faster, cheaper, better customer service. When you're building businesses, the mental models
are like the ones that I lay out in the paper. I have infinite graduates, and soon they'll have bodies as well.
I have a concept where I can go from static files
to flows of knowledge,
and I wanna increase the agency
of every single one of my human workers
because they're gonna be the highest marginal cost.
What does that look like in a few years
when this technology is inevitable,
and we've created these reproducible processes and modules.
Because if you start digging into the individual parts
of the technology and the minutiae,
it does get incredibly complex incredibly quickly,
but already we are hitting a bit of a wall, not in that this technology will stop, but in that it's good enough. It's satisficed. All of the open source and closed source models are catching up with each other. And we've hit a level of performance where it doesn't matter if it gets better anymore. And that's where you
tie it back to what you're doing on an individual company, country basis and think, okay, the world
has changed. This technology will proliferate. The cost is cheaper than we thought, because I haven't seen anyone say, oh, well, that's
a lot more expensive than I thought it would be.
People are usually shocked in the other direction. And it's available.
Everybody, I want to take a short break from our episode to talk about a company that's
very important to me and could actually save your life or the life of someone that you
love.
The company is called Fountain Life and it's a company I started years ago with Tony Robbins
and a group of very talented physicians.
Most of us don't actually know
what's going on inside our body.
We're all optimists.
Until that day when you have a pain in your side,
you go to the physician in the emergency room
and they say, listen, I'm sorry to tell you this,
but you have this stage three or four going on.
And you know, it didn't start that morning.
It probably was a problem that's been going on for some time, but because we never look,
we don't find out.
So what we built at Fountain Life was the world's most advanced diagnostic centers.
We have four across the US today and we're building 20 around the world.
These centers give you a full-body MRI, a brain MRI, brain vasculature imaging, an AI-enabled coronary CT looking for soft plaque, a DEXA scan, a Grail blood cancer test, a full executive blood workup.
It's the most advanced workup you'll ever receive. 150 gigabytes of data that then go to our AIs and our physicians
to find any disease at the very beginning when it's solvable. You're going to find out eventually.
You might as well find out when you can take action. FountainLife also has an entire side
of therapeutics. We look around the world for the most advanced therapeutics that can add 10, 20 healthy years to your life and we provide them to you at our centers.
So if this is of interest to you, please go and check it out.
Go to fountainlife.com backslash Peter.
When Tony and I wrote our New York Times bestseller Life Force, we had 30,000 people reach out to us for Fountain Life memberships.
If you go to fountainlife.com backslash Peter, we'll put you to the top of the list.
Really, it's something that is for me one of the most important things I offer my entire family,
the CEOs of my companies, my friends. It's a chance to really add decades onto our healthy lifespans. Go to fountainlife.com backslash Peter. It's one of the most important things I can offer to you as one of my listeners.
All right, let's go back to our episode
The next thing you talk about in the paper is a concept I love and I've written about, and you do a beautiful, eloquent job speaking to it, which is: it's not AI versus humans, it's AI versus humans plus AI, which has been defined as centaurs. Let's talk about that. How should we be thinking about AI as our partner, and the definition of centaurs, and the benefits of being a centaur?
So with AI, we've just got little building blocks
of various things.
And again, if we look at the individual models,
Tesla had 300,000 lines of Autopilot code replaced by a single model, like a set of weights.
Visual data goes in and driving instructions come out.
That's kind of insane, but they're just sitting there.
They haven't been tied together yet.
It's human ingenuity that ties them together
and they can repurpose and refactor them,
just like when you're leading an organization or a team.
It's up to you how you deploy them.
And right now humans are better than AI at doing that.
That's why humans can do the creativity part.
They can tell their own story.
They have that agency.
Whereas AI is really good at content. It's really good at following instructions. Again, it's gone to liberal arts college, maybe it goes a bit weird sometimes, you know, and it's creative. It'll get better and better at following instructions and become more and more specialized. And this is why the centaur, the human plus AI, beats the AI by itself. We saw that in chess: the AI beat the human, but then the AIs plus the humans beat the computer by itself. We've seen that with things like Go as well, whereby Lee Sedol was beaten by the AI, and then he got so much better, and the entire level of Go improved across the top level, because you could see more interesting ways of looking at it that broke the norm. So I think, you know, in the world
in which you're going,
you've got to embrace the technology and use it
because it can increase your focus, fun and productivity.
Like, question is, why wouldn't you?
It's like sabotaging yourself
and tying one hand behind your back.
You know, it's like saying,
I don't want to have good quality teammates.
Well, you know, it's like being a farmer and not using tools.
It's using your hands only.
Why would you not want to use tools?
You identify what you call a triple threat of these centaurs. Just to recount them, and let's talk about each one: efficiency on steroids, time the ultimate luxury, and choices galore.
So efficiency on steroids. Well, I mean, what is the capability of any individual one of us? It's a function of our team, and now we all have teams to increase our absolute efficiency and our team's efficiency as well, because who would say no to high quality talent? What team would say no to that, as long as they don't get in the way? Obviously we see what happens when teams get too big. We have to make sure not to overcomplexify this AI or put it in unnaturally. But we have to really understand the capabilities. We don't need to understand the technology, just the capabilities. That's what enables efficiency per capita to go up dramatically on an individual, company, and even country basis. So, what was the next one again?
The next one was time.
You know, AI frees up humanity's most precious resource.
And I get that fundamentally.
Time is the one thing we always are struggling with.
We always have a lot more of it than we think, though, right?
In terms of quality time.
And that's a function of like,
we always feel that we're on this treadmill
and we fill our time with junk,
but how much quality time do we have?
Like there were studies showing that when you go on a holiday,
the most important thing is the highlight
and the end of the holiday.
You know?
Because of the way that we kind of look at it.
So make sure that, you know,
you send your bags ahead of time
and you've got a luxury pickup and stuff like that.
Because not all time is equal.
We can use this AI to decrease the amount of content
we have to make and rote work
and increase the quality of what we're doing.
Have content that entertains us all the time
versus watching junk.
You know, have the stabilization and focus.
When I get up in the morning, I take a moment to sort of prioritize my day.
Like, what's the most important thing I'm going to do today?
One of them was, you know, sit down with you and do this podcast.
And there's another three or four.
And I love the idea that in the future, my version of Jarvis is going to say, okay, listen, you do that one, I'll take these three and report back at the end of the day. Just like a good teammate.
And it's like, with that team you can be so much more. It can free up the time, but then it can also improve the quality of your time and how you spend it now.
This is so important as well, because we tend to just default to this
autonomic kind of treadmill of just consumption of fast food,
not only input, but content as well.
It's very tricky because this is the way our internet and other things are done. Like, I spend far too much time on Twitter, or X, relative to the quality of the output, right? Because it's so engaging.
But what if I had my customized feeds that came to me
and just gave me impactful stuff that both fit my worldview
but then also challenged my worldview
because that's what I wanted.
My time would be so much richer.
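A feed like the one being described could, hypothetically, be sketched as a ranker that reserves a share of slots for high-impact items that disagree with you. Everything here, the field names and the 30% ratio, is an assumption for illustration:

```python
# Hypothetical sketch of a "fits my worldview but also challenges it" feed.
# Item fields and the challenge_ratio are illustrative, not a real API.

def rank_feed(items, challenge_ratio=0.3):
    """Rank items by impact, reserving at least one slot for
    high-impact items that *disagree* with the user's worldview."""
    agree = sorted((i for i in items if i["agrees"]),
                   key=lambda i: i["impact"], reverse=True)
    challenge = sorted((i for i in items if not i["agrees"]),
                       key=lambda i: i["impact"], reverse=True)
    n_challenge = max(1, int(len(items) * challenge_ratio)) if challenge else 0
    picked = challenge[:n_challenge] + agree
    return sorted(picked, key=lambda i: i["impact"], reverse=True)

feed = rank_feed([
    {"title": "A", "agrees": True,  "impact": 0.9},
    {"title": "B", "agrees": False, "impact": 0.8},
    {"title": "C", "agrees": True,  "impact": 0.2},
])
```

The point of the sketch is only the objective: impactful content that both fits and deliberately challenges the reader, rather than pure engagement.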
Yeah, I built a company called Daily.ai that searches the world's news in longevity or tech or whatever it is, and customizes a newsletter for me, because I only want validated scientific and technological content that has a positive semantic. I don't want to hear about disaster scenarios. And it generates that. But you can have your AI ultimately feed you. Like you said, if you want to be challenged: feed me the most valid content challenging my worldview. But then also, I think
one of the things underappreciated here is that the previous AI that we saw, this old school AI,
deliberately siloed us into our groups because it was better for advertising. Like in the multiple studies of the,
yeah, Facebook kind of echo chambers.
I think one of the really interesting things here
when you look at education and other use cases, social,
is allowing us to meet others within the same context
and bridging the context.
Again, this podcast is going out into multiple languages
in our voices, which means that people can understand us in a way that they couldn't have before.
We're breaking down barriers with this. And I think fundamentally, most of the meaning of life and the quality of time is in our human connection with each other.
Like when you're reading the classics, you're listening to the stories of these people.
And if you suddenly can talk to Archimedes
or Richard Feynman or any of these people, that's great.
But you know what's even better?
Discussing it with other people.
I think the AI can really help with that.
Well, the third one you said is choices galore.
You say you want to code but never have,
AI's got your back, need to understand complex data,
AI can break it down for you.
And I get that. It's basically complementing you in areas where you don't have expertise
or even where you do have expertise but want different perspectives.
I think a large part of this is captured in those memes that go out sometimes of "you can just do things," you know?
Okay.
Like what's stopping you?
Most people are like, well, I can never ballroom dance.
And they go and learn to ballroom dance, you know?
I can never code.
And they learn how to code and they actually really love it.
The bar and barrier for creating and engaging with this technology, the floor has risen.
Yeah.
What can't you do?
Yeah, the bar has dropped, is what you want to say.
Yeah, the bar has dropped, the floor has risen. But everyone's uplifted by this because the
barriers to entry have dropped, you know? The barriers to capability have dropped with
this. So you have increased agency, you have increased capability, because you're always
constrained by finding that teacher.
Now the teacher is here.
This is about companies and governments in particular.
In companies, there was a study done recently which showed the lowest 50% of your employee base has gotten the biggest rise out of using generative AI versus your top 50%, so it's like leveling the playing field,
Which is pretty extraordinary
And I can very easily imagine how, you know, companies are going to use centaurs and agents and such. But
I want to understand your thoughts on government, because governments are the most sub-linear, in most cases inefficient, structures out there on the planet and are going to have the biggest challenge, I think, in the decade ahead. I'm not sure if you agree with that, but how do you think about governments utilizing these resources and capabilities?
I think there's a mixture of hope and fear
on the government side.
I mean, like, I think for me it's clear that AI can do better
than governments at resource allocation,
even with what we have now.
Because again, the bar is very low there.
But one of the things is we've lost trust in our governments
and that's a function of, you know,
the lack of transparency that
occurs. Like, you know, you're in LA now, right, I believe, Peter? Yeah, I am. $600 million on the homelessness problem or something is being spent this year. Like, you look at the LA to San Francisco railway; we all know that there's a conundrum going on there. And
AI can deconstruct every single thing around that
and show what's really going on.
But does the government really want that?
This is the thing.
Does it really represent the people
or is it a continuous entity
that's only about self preservation?
And this is one of the challenges we're gonna have
as a people, because there are riots in London right now, in Bangladesh; they're proliferating.
People don't really believe
in their governments and systems anymore. Can we use AI to increase transparency and trust in
organizations and be a co-pilot for every single individual engaging? Can we do things like direct
democracy, whereby you have groups of people and then you inform them on the topics and capture
all the context of their conversations and feed it up to be representative, and new forms of democratic engagement? I'm not sure, but I think we have to try, because I'm very worried about where things are going now, with just these massive echo chambers, the lack of transparency, increased graft, and the feeling that governments don't represent us. I think we have to try.
I also think governments will do everything in their power to resist, because it's a loss of control. And the challenge isn't just governments by themselves, it's big governments and large corporations, and the two of them together. If AI is going to disrupt the system, it's going to disrupt that unholy marriage.
Well, I think, again, these are local maxima conditions whereby the unholy marriage works because things were positive-sum for a while; now things are a bit more negative-sum. The corporations and governments have no option but to embrace AI, because otherwise they'd be left behind by those that do.
And there'll be huge pressure for that.
I think it was Nassim Nicholas Taleb who said that when 12% of our population revolts, that's usually enough to get anything done.
Realistically, right?
And people will be required to embrace this technology or they'll fall behind their peers.
And they'll demand transparency and other things. But governments kind of like the technology, if it's from trusted
parties like your IBMs and others of the world because it makes them look cool. And this is
separate to something like cryptocurrency. You know, government control of money,
censorship-resistant. It was a system outside the existing system. Whereas this is already institutional.
It took how many years for Bitcoin
and Ethereum to have ETFs.
You know, whereas AI is already institutional,
it's already there and people are already embracing it
and bringing it into government.
But this is why I think like,
there are things that directly challenge the graft
and other things.
And there's just things that we can do
to make the general thing
better, like the representative democracy stuff, like the analysis of policy positions, making that
available to everyone, increasing the ability of people to speak, you know, intelligently and
understand the topics. Like these types of things should exist and I really hope they do get built
because there's no reason that everyone shouldn't be fully informed of every
policy decision before every single election in a way that is directly
aligned and impactful for their community for example and be able to do
scenario analysis. I agree. And that doesn't challenge anyone. You walk into a voting booth today and you're asked to vote on all of these issues, and first of all, the way they're written is confusing, and secondly, you know, most of us haven't had the chance to actually consume sufficient data, and a lot of people are just influenced by the signs they read at the polling booths as they walked in.
But having an AI, you know... and then there's the gerrymandering of who can vote.
Well, the thing is, you no longer need permission with the technology.
Leveraging a technology like Llama, Mistral, some of these language models, you know, even Gemini or ChatGPT, you can build a system, probably for tens of millions of dollars, that would
deconstruct every policy position of every single candidate with the publicly available
data and then you could input your own details and it would tell you how it would affect
you with full transparency.
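As a hedged sketch of what such a tool might look like, with the language model stubbed out. Any open-weights model such as Llama or Mistral could sit behind `query_model`; all names, prompts, and data here are hypothetical:

```python
# Hypothetical sketch of the policy-explainer idea above. `query_model` is a
# stub standing in for a local open-weights LLM; in a real build it would run
# inference against the model, with the publicly available policy data.

def query_model(prompt: str) -> str:
    # Stub: a real implementation would call an open-weights model here.
    return f"[model summary of: {prompt.splitlines()[0]}]"

def explain_policy(candidate: str, position: str, voter_profile: dict) -> str:
    """Build a transparent prompt from public data plus the voter's own details."""
    prompt = (
        f"Candidate {candidate} proposes: {position}\n"
        f"Voter context: {voter_profile}\n"
        "Explain, with full transparency, how this would affect this voter."
    )
    return query_model(prompt)

report = explain_policy(
    "X", "raise the local sales tax by 1%", {"household": 4, "business": "retail"}
)
```

The structure is the point: public policy positions in, the voter's own details layered on, and a personalized, transparent explanation out.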
I hope someone is listening and will build that because I think it's so massively needed
for every political election out there.
Yeah.
And the beauty of it, like I said, is it can be fully open source or I think that's the
best way to do it because there's very little argument that that shouldn't be built.
But again, this is one of these "just do it" things. I think you can even organize a hackathon and literally have that built. Maybe
we should do a government generative AI hackathon that's all about increasing transparency and using
this technology to do so. And again, it doesn't require permission. It doesn't affect the way
things are. It just affects the capabilities of people, which I think again, this is the empowerment thing.
That's all the upside of AI for governance and I agree with all of it and would love this to
materialize. You and I had a different conversation, probably a year ago, about the ability for AI to be persuasive and persuade individuals. Can you speak to that?
Yeah, I mean, like if an AI can be empathetic,
it can be manipulative.
And fundamentally, the business models of Google and Meta
are manipulation.
So if you have an AI whispering in your ear
every single day, and it knows exactly everything about you,
then it can sell you that toothbrush.
You know, it can tell you, hey... you know the sort of thing. Sell me this pen.
I can definitely sell you that pen, right?
Favorite marketing question.
But then you think about again, elections and things like that.
You can have a personalized call for every single voter. You know,
it doesn't even have to be deep fakes. It can just be really persuasive.
I can layer on Barack Obama at his best, combined with Winston Churchill, combined with Oprah, you know, for the subliminal frequencies, and that will be a convincing call from any political candidate. And it'll be exactly your local candidate, knowing your position, knowing how many kids you have, where they go to school, you know, what kind of business you're in, and then giving you
a spiel that's hyper-personalized to guide you
towards their direction.
Yeah.
I mean, you can see an example of this.
There was an advertising campaign,
just when this was all starting out,
by Cadbury in India, featuring the Indian superstar Shah Rukh Khan. And as your local kind of organization, you could just enter your business name and type, and it would create a customized Shah Rukh Khan ad where he talks about, you know, buy from this sweet shop, buy these shoes.
Check it out, Cadbury's India campaign, Shah Rukh Khan.
But again, like we can expect that the level of convincingness of these ads and higher
personalization will go through the roof.
And then on the political side, it'll go through the roof as well.
But also the personal side,
like if I'm going to have a zoom call with a prospective client,
I'm going to look damn good and I'm going to be so convincing and not have a stutter
or anything like that on the call. Now, when you meet me in real life,
you'd be like, Oh my God, he's not as good as that.
Why wouldn't you use that technology? Because we can do it real time
now. Like we have these small filters, but what if, with your voice, you can dial up the level of emotion live on a voice call? You know, it's going to be crazy what we're entering right now. And it's, again, all real time. This is the other crazy part about it: being able to translate your voice in real time, have real-time avatars that look like humans. It's not a future thing.
All right, next up in your paper, Emad. We talked about crossroads and what to do.
I think that's one of the most important things.
This is here. This is coming. This is accelerating, this is faster and more powerful than anybody thought.
It's not stoppable.
I think that's an important point, right?
There's no on-off switch, there's no velocity knob.
It is moving as rapidly as the largest, most powerful companies in the world can push it. And by the way, all of the AI companies are among the largest and most powerful companies in the world. So
You state here, and we're gonna talk a little about Schelling, which is a new company that you've been pregnant with for nine months and are giving birth to, and you say Schelling believes in democratizing AI.
It's not just preferable, it's essential. And then you have five "consider this" points: diverse perspectives ensure AI caters to all societal needs, open-source development builds trust, collaboration speeds up innovation, and broad participation keeps ethical considerations at the forefront.
So let's dive into what do we have to do as a society?
Because people are asking, you know, I chair the AI committee at FII,
the Future Investment Initiative, and the conversation is always,
okay, what should large corporations, what should governments,
what should the public do, what should be our guidelines, you know, give us direction.
And I love the fact that in this paper, and again, I really commend this, you know, how
to think about AI, and we'll put the link in the show notes for your Substack and on X for people to read it.
It's not a long paper and it's beautifully written. Let's talk about it: what should people do? What's your advice, my friend?
Yeah, thank you for that. I think, again, understanding AI is infrastructure, just like roads, you know, just like ports and others. Clayton Christensen, the famous Harvard Business School professor who came up with disruptive innovation, said infrastructure is the most efficient means
by which the society stores and distributes value.
And if a level of AI capability is barred from people,
then you will have an unequal infrastructure. You will have private roads to knowledge. I think it's bad.
When you look at the actual cost of building this and the coordination required, because the output of this is literally a file: GPT-4 is like a 100-gigabyte model, that's it, it's a file.
We can gather together and build this technology together in an interoperable way.
And when we look at the inevitable future, will our governments, our schools, our hospitals,
be run on black box models that we don't know
what the data inside is, because you are what you eat.
The outputs are affected by the input.
Or should we work together to create high quality
national, societal, sectorial data sets?
If we have a model that's available
for every single cancer patient and their families
to guide them through that process, should that be a privatized model or should it be open infrastructure for everyone?
Yeah, and I think there's this class of,
in every regulated industry,
and industries are sometimes over-regulated,
but they're there also to protect people,
we should have an open class of models
where the data sets, the models themselves are open.
And you can do things like... Anthropic has a series of papers called Sleeper Agents, showing you can poison data sets so models turn evil with just a few trigger phrases, and it's nearly impossible to tune that out or find it, which is crazy when they start being part of decision-making processes. Openness also mitigates things like that.
So my view, and this is what we're building at Schelling, is that every nation needs their own sovereign AI that represents them. We need to make our collective common knowledge open, from creativity to science
to health to education. And that should be open infrastructure for all. And then that
makes the world safer as well. Because right now our AI systems are being trained on snapshots
of the internet with all its imperfections versus a high quality curriculum. Again, you
are what you eat.
If you don't want it to exhibit some of this bad stuff, have high quality,
but you need diversity as well as quality.
Yeah.
And this is what people are kind of working through now. So I think, you know, deliberately building the future, where there's an inevitability of open infrastructure, industry by industry, and standards around that, is going to be very important.
I think this is where governments and the private sector should come together, you know, and then that can proliferate if it's open, because you don't need permission then as well. So Stable Diffusion and the other image and video models and medical models we did at Stability AI when I was CEO,
they ended up having 300 million downloads by developers. 300 million because you could
just take it and you could build on it. And that's just the start. That was like over
the last few years before the vast majority of the world even knew about this technology.
Again, if we build an open cancer model that is about empathy and guiding people through that process,
how to talk, but also comprehensive, authoritative, and up-to-date on our existing knowledge
and the latest knowledge on cancer and groups and connections,
that can be in every language and used by every single individual afflicted by cancer
in the world.
Completely demonetized and democratized and the billionaire and the poorest child have
access to the same information.
Same information.
Again, it can proliferate.
I think this is the key thing.
But we should do that in a way, and that's why I call it Schelling, where there are trusted entities and good governance of that data.
Where does the word Schelling come from, for your company, Schelling Point?
It comes from this concept in game theory called a Schelling point, which is a focal point, a point of agreement. Because if you create high-quality data sets for nations and for sectors, because you have generalized knowledge, and we need to have a corpus of that, and then localized and specialized knowledge, then people will use that because it
becomes a go-to point. If you create that cancer model and you invite everyone who has a stake in
cancer to participate in that, it will proliferate around the world and be the standard as a good.
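For readers unfamiliar with the game-theory term: a Schelling point is the option people converge on without communicating, because each expects the others to find it salient. A toy illustration, where the options and salience scores are invented:

```python
# Toy illustration of a Schelling (focal) point: with no communication, each
# player independently picks the option they expect everyone else to pick,
# so the most salient option becomes the point of agreement.

def schelling_point(salience: dict) -> str:
    """Every player applies the same rule, so all land on the same option."""
    return max(salience, key=salience.get)

options = {"meet at noon": 0.9, "meet at 3:17pm": 0.1}
players = [schelling_point(options) for _ in range(5)]  # five independent picks
```

All five "players" converge on the same choice, which is the sense in which a high-quality open data set becomes the go-to point everyone builds on.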
Bitcoin uses this a lot because... For simplicity, is this effectively what Wikipedia did?
It created a standard, in a way, of objective truth, but Wikipedia couldn't capture the nuance because the technology wasn't there. So for something like cancer, you have your known knowledge, you have your standards, and you've got the stuff we kind of know works. But it isn't in an individual corpus, and then we protect people by not showing them all of the other stuff, and they go to quacks instead, or they do their own self-research, and they repeat that process over and over again. What if we could create a system that was comprehensive, authoritative, up to date, and respectful of the individual and their ability to do that research, but then also told you how to engage with your family, you know, how to follow through this process, and connected you with the resources that you need? That should be an open, distributed intelligence system. That's
the way that I'm trying to look at it.
Yeah, open fast. I mean, the number of people I get saying, my friend has this kind of cancer,
what should they do, where should they go, how do they start? It's sad and your heart
goes out to them. And there is no one standard, but there can be.
But it's also this loss of agency
that you feel with cancer, autism, multiple sclerosis.
And so that's why I was like, let's go and build those models
as open infrastructure.
Use this massive compute, you know?
And think about generative AI first health care,
what should be open?
And that's a base.
And then people will build on top of that.
They'll take it, they'll implement it, they'll make so much money off doing that. But it can then all use great healthcare records and others, because we have a chance to reimagine healthcare, reimagine
education, reimagine creativity, reimagine just about anything using this technology.
So that's why I was like, let's set a framework first and then let's start building and gathering these communities together. And part will be open and part will be proprietary, and that's absolutely fine.
But if we can make an open base, this can really proliferate, I think.
What do you think about Zuck's llama 3.1 and his push on open source?
I think it makes a lot of sense.
If they can decrease costs by 10%, it pays for itself and they have. And again, I think Meta has been an open source champion
with PyTorch and other things, and it's not where he makes money, so he's commoditizing the complement.
But I also think it only goes so far and you need an entity to be able to build this stuff
deliberately because Zuck, I don't think will ever build the models that run governments and healthcare
and education and finance because that's not his company's objective function. His company's
objective function is to connect people with an advertising-based business model. And there's
a positive and negative side of that. And he's amazing at his job, you know, but it
shouldn't be on him to create these standards. And in fact, you just came back from India; internet.org, for example, they wanted to give free internet
to everyone in India and others.
They had massive regulatory and other issues.
So I think we need to, again, create a complement to that because you need open-based models
that are open weights, as we call them.
We don't know what's inside them, but we need that class of fully open models as well.
And we need the class of models that are fully private as experts, because people will make the
best models there. And so you'll have a continuum. And then we have all the tools we need to have any
type of graduate or consultant to solve our jobs and get things done. Did you see the movie
Oppenheimer? If you did, did you know that besides building the atomic bomb at Los Alamos National Labs,
that they spent billions on bio-defense, the ability to accurately detect viruses and
microbes by reading their RNA?
Well, a company called Viome exclusively licensed the technology from Los Alamos Labs to build
a platform that
can measure your microbiome and the RNA in your blood.
Now, Viome has a product that I've personally used for years called Full Body Intelligence,
which collects a few drops of your blood, spit, and stool and can tell you so much about
your health.
They've tested over 700,000 individuals and used their AI models to deliver members'
critical health
guidance like what foods you should eat, what foods you shouldn't eat, as well as your
supplements and probiotics, your biological age, and other deep health insights.
And the results of the recommendations are nothing short of stellar.
As reported in the American Journal of Lifestyle Medicine, after just six months of following
Viome's recommendations, members reported the following: a 36% reduction in depression, a 40% reduction in anxiety, a 30% reduction in diabetes, and a 48% reduction in IBS. Listen, I've been using Viome for three years.
I know that my oral and gut health is one of my highest priorities.
Best of all, Viome is affordable, which is part of my mission to democratize health.
If you want to join me on this journey, go to Viome.com slash Peter.
I've asked Naveen Jain, a friend of mine who's the founder and CEO of Viome, to give my listeners
a special discount.
You'll find it at Viome.com slash Peter.
You list, in the list of what we must do, rethink our economic models.
And I mean, one of the biggest challenges is that our historic economic models aren't
viable.
They don't make sense anymore with what's coming given AI and humanoid robotics.
So how do you think about rethinking our economic models?
Where do you imagine it going?
That's a nice simple question, Peter.
Yes, if we could just give us your point of view
on the world economy for the next century,
that'd be great.
I mean, there we go.
I've always said if I had
more time, I'd write an economics book.
We're at the terminal point of our current economic structure. We see it with Japan today
as we're kind of on this call, largest stock market drawdown ever because they raised 25
basis points, because they're functionally bankrupt as an economy. You know, they're at 500% debt to GDP. We've borrowed as much as we can from the future, and we built this whole infrastructure
that's about to have a brand new continent of
incredibly cheap workers hit it all at once. In the West
we have the service-based economy. Like, at least with robots you're constrained by the number of robots you can produce and the financing of those robots.
But robots will build robots and AIs will build more AIs.
It will go there, but the agent stuff, literally like a travel agent, as an example,
once we get a really good travel agent, you push a button, it replicates, you know,
and it could call you and it could do everything like that.
So we have to question what is money? And this is happening at a time when, you know,
the petrodollar has shifted.
So the energy equation is kind of done and others.
And we have to consider if we live in this world
of infinite abundance with billions of robots
and trillions of agents, what is money?
Because money was this-
We are heading towards a post-capitalist society
in the long run, right?
In the long run.
And you know, just like in Foundation or whatever, that interim period
can be very, very messy because it's a zero sum game with people scrabbling over that.
But when anyone can live like a king, you know, what is human progress?
I think it unlocks a massive amount of creativity.
It unlocks a massive amount of other things.
We can eliminate hunger, we can eliminate disease, we can do all of this, we can come together and create that human colossus. But
again, money is there because we need an intersubjective thing as a unit of account.
But when you have AI agents that can barter, that changes things. When you have people embracing this and out-competing on a corporate basis, that changes things. AIs themselves and robots: our corporations are technically people, you know, in some senses. Like, they have the same rights. So AIs already have rights if they can be a corporation.
And in Wyoming, they passed Dow legislation, decentralized autonomous organization.
So you may have AIs that go out and have rights, which is kind of crazy to think about.
But against this all, I think our monetary base has always been tied to energy and entropy of
that energy to create things, physical and knowledge-based. I think that fundamental
equation is challenged. And so we need to rethink how our economic flows go and the rights around that
as well and that's difficult and it's hard. That's why I think every nation needs their own
AI experts, AI teams. We need to get the smartest economists in the world to think about this and
think outside the box and say is it universal basic compute, universal basic income, universal basic
jobs? Do we need... I think we need a trillion-dollar jobs program
for the graduates today.
I felt this when I was in India, right?
You know, the IITs are graduating,
it's like 12 million graduates a year,
and you're seeing the number, percentage of them getting jobs.
And in youthful nations like India and Africa,
if you are getting your degree and you don't have a future,
that's a lot of, you know, let me name it, testosterone that is unchanneled and
that becomes a very difficult future. It's a social challenge. Like, the stat from a couple of months ago is that 38% of the current IIT batch, top-notch in India, didn't have job placements.
It'll be 50% next year.
But then even in the West, we have all these programs
to repurpose people into STEM education.
Like, aside from programming jobs, programming will change.
You know, what do you do with truckers
when you can have trucks as autonomous vehicles?
I mean, I think the truckers,
the truckers buy the autonomous trucks
and have them work for them.
The Uber driver buys it.
Yeah, this is a dividend type equation, right?
Whereby the nature of capital flows in our society
when you have this influx of supply,
will the demand catch up eventually?
It will, I think.
But it'll be very messy going through that.
And on the other side, it's about what access do I have
as an individual to capability?
And then how can I create value from that?
And I think it's massively positive sum,
but it's really tricky to see the transition.
Like, I struggle to see how in 50 years' time, shall we say, going a bit further than normal, because, you know, we tend to underestimate this. But even if we go to 15 years, or 2040, just before the singularity, right? As Ray kind of said, what is money? Because you're extrapolating to 2040; the US can't borrow another 500% of GDP, you know. But at the same time, intelligence is abundant, and skilled manual labor is abundant too. It's crazy to even think about.
Yeah.
Yeah.
GDP is undefined.
So therefore we must instead optimize for happiness,
contentment, social cohesion.
What do jobs look like?
Is it taking the cancer empathy model and going out to your community and supporting families through that?
That's a great job, you know, but it's complicated.
We are going to go from, you know, most people have a job because they need to put food on the table or get insurance for their family.
It's not the job they dreamed about.
So I think we need to start to disintermediate employment for income from employment for personal fulfillment.
I think that split is going to happen sooner rather than later.
And again, you've got these things either for living or entertainment, right? And so we need to standardize the living aspects, so everyone can reach that basic level. And again, the West may be a bit easier.
But again, when you actually look at what you can build, it's amazing to think about.
And then you need to think about
how do you manage that transition?
Cause there's vested interests and others
when the whole world gets flipped
and currencies start shifting
because a lot of the anchors are broken.
And this scares me because it's a complex system.
And the only way I think you can do that
is you have to build AIs to help us.
How far are we away from that flip, that inflection point where it becomes evident and people start searching very rapidly for answers?
I think that, you know, in five years' time, things get very crazy. I think we're in the early innings of that.
I don't see how things calm down, because again, this technology proliferates, and then 10, 15 years past that. It's a very, very short time in classical cycles. It can be longer, but again, this is an industrial AI revolution done incredibly quickly, with multiple things, from self-driving to autonomous agents to intelligence, all coming at the same time as the end of, mathematically, our debt-fueled society that we've had.
Again, Japan has 500% debt to GDP. How are they going to borrow more?
How is the US going to borrow more?
And part of it has been a decreasing birth rate that's been impacting these nations, right? If a nation is growing in population and labor and automation, then it's producing more year
on year.
And I think this is the really fascinating thing with China because everyone's been like,
what if China gets AGI?
What if China gets a hundred million or a billion robots before everyone else and produces the robots? That's what you should be worrying about.
Let's talk about energy for one second.
You've mentioned this a couple of times, and I was just looking at a chart that shows the US has been generating about 4,000 terawatt-hours per year, and it's been pretty flat for the last 20 years, and it's projected to stay flat. We're not increasing the energy production in the country.
And unfortunately, Generation II and Generation III nuclear
plants gave nuclear a bad name.
At the same time, China has tripled the amount of energy
that it's putting out.
India is on the path to doubling it. And one of the things that came out of Leopold's paper on situational awareness is the projection that the US could use a hundred percent of its current energy production to power its AI needs by 2030. And so energy is a big issue. What are your thoughts about that?
I mean, I think that energy extrapolation is wrong, because you're chip-constrained, and I think scale only gets you to a certain point. I think it's an S-curve that we're mistaking for an exponential in capabilities; things like distributed data and high-quality data are far more important. But the reality is that we're already hitting energy limits across the entire base.
And the new substrate: like, you and I need to pay for food, we need to pay for housing, we need to pay for our kids' education. The AIs and robots of the future need to pay for energy and computation. So if we're thinking about money, it's energy, not data.
I think there's a misnomer on the data side, because once you have a high-quality curriculum, you don't need more. Like, the original GPT-3 paper was called "Language Models are Few-Shot Learners."
And this is what we see, we have a generalized model,
like a GPT-4 or an Anthropic Claude or a Gemini.
It is a file that's trained on all of this stuff
and then tuned and it can adapt to any scenario
through its context window, through its input prompt.
Does it need more data?
No.
So you get to a point of data saturation for a model that has capabilities of X, then Y, then Z, and then proliferation and optimization of that. Whereas what we're seeing with things like the Leopold essay is that you will have
an exponential of compute that continues. I think you'll have a glut in a couple of
years, but nobody can take the other side of that bet. I think Mark Zuckerberg just said that a few days ago at Meta: we can't underspend in this cycle. Sundar Pichai said the same. The US does not have enough energy to meet the demands
over the next few years. And even if you have a glut, then you'll be on the other side and you'll
ramp up. Energy is intelligence for now. You know, what it will be going forward, that's another question.
And that will depend on things like video and other things versus language models, which
I think a lot of it will go to the edge.
I think it will go from these large dense models to mixtures of experts and highly optimized
models.
But it's very hard to see how energy requirements go down or the reallocation of energy occurs here.
Like, one of the shocking statistics is, you know Bitcoin, our favorite kind of cryptocurrency: the total energy usage of Bitcoin is half of the energy usage of all the data centers in the world right now, 160 terawatt-hours. It's as much as Argentina or the Netherlands, and that's just one use case.
So if you think about it, for people who say that we're in a bubble in AI and everything like that: AI uses about a fifth of the energy of Bitcoin. You know, and Bitcoin is useful for various things, but it's not as useful as generative AI. So I think we've got many doublings of the energy usage of generative AI still coming, on both a training and an inference basis.
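The back-of-envelope arithmetic here can be sketched out; all figures are the speaker's rough claims, not verified measurements, and the doubling count is only an illustration:

```python
# Rough energy arithmetic from the conversation (terawatt-hours per year).
# All figures are the speaker's approximations, not verified data.
bitcoin_twh = 160                      # stated Bitcoin usage
data_centers_twh = bitcoin_twh * 2     # Bitcoin is "half of all data centers"
ai_twh = bitcoin_twh / 5               # AI is "about a fifth of Bitcoin"

print(data_centers_twh)  # 320
print(ai_twh)            # 32.0

# "Many doublings still coming": doublings until generative AI
# alone matches today's stated Bitcoin usage.
doublings = 0
usage = ai_twh
while usage < bitcoin_twh:
    usage *= 2
    doublings += 1
print(doublings)  # 3 (32 -> 64 -> 128 -> 256)
```

On these stated numbers, three doublings of generative AI's usage would already overtake Bitcoin's current draw.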
What did you think of Leopold's situational awareness paper?
One of the things in particular, in his paper and in his follow-on podcast that he did, he pits the US against China.
And the existential risk is who develops AGI first.
And so let's get into that conversation of AGI.
Are we going to achieve AGI?
When do we get it?
Is that a US versus China modality here?
What are your thoughts there?
I think we have a choice on AGI, artificial general intelligence, and ASI, artificial superintelligence: is it a swarm collective intelligence that uplifts everyone, as representative of everyone, which I think is the safe thing? Or is it a unitary intelligence based on just a few entities?
I think the distributed intelligence,
the collective intelligence, is a far more powerful vision
and way of doing things.
Again, open infrastructure enables that.
And this is the level above the Llama-type infrastructure, where you gather the people and you coordinate.
I think the big AGI-ASI is less likely. I think it's an extrapolation on a curve of capabilities that we just don't know about. But regardless, this is the Ray Kurzweil thing, right? What's his current forecast?
Well, his forecast for AGI is still 2029, and for, you know, the singularity, the early 2040s.
I think that sounds about right, right? Like, if you have AGI as capable as any human, with a mixture of experts and physical embodiment, it's tough to argue against that, honestly, given, and this is important, as much computation as you need for the inference step. A lot of the models we build today are trained on 10, 20, 100,000 chips, but they're designed for consumers. And the energy that you use in running them is a few watts of electricity, or 100 watts, or 1,000 watts. If you don't care about the amount of energy and you have a thousand or a million agents, then we're pretty much at AGI now. But I think we'll get there, where it's as capable as any human.
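As a toy illustration of that inference-energy point, using the round numbers mentioned in the conversation rather than measured figures:

```python
# Hypothetical fleet-energy sketch: per-agent inference draw times agent count.
# Both numbers are illustrative round figures from the conversation.
watts_per_agent = 1_000   # the high end of "a few watts ... 1,000 watts"
agents = 1_000_000        # "a million agents"

total_watts = watts_per_agent * agents
total_gigawatts = total_watts / 1e9
print(total_gigawatts)  # 1.0, i.e. roughly one large power plant's output
```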
The ASI singularity, I think, is a bit further.
But again, is it a centralized one
or is it a distributed one?
This is the question.
One of the points that Leopold makes,
and you and I have discussed this before,
is that GPT-2 is a preschooler, GPT-3 is an elementary schooler, GPT-4 is a high schooler, and GPT-5 he describes as PhD level. And at that point, when you've got an AI agent that's self-programming, you end up with an intelligence explosion.
Do you believe that's the case?
I think that, you know, we don't know if this stuff will outperform its base data. But also, I know that PhDs tend to be very depressed and sad people.
But I think it's a reasonable thing.
And this is one of the things I was worried about a lot last year.
Like I was one of the signatories, I was the only AI signatory apart from Elon Musk on
the pause letter, for example, because I think we needed to debate about it.
But I think if we build high quality data going in,
I'm not so worried about that.
And I think that with the recursive self-improvement stuff and the MCTS-type thing, the Monte Carlo tree search we see, the type of technology that went into that Google DeepMind paper that got to silver-medal level on the International Math Olympiad, it makes reasonable sense. But will it go out of control and get self-awareness? I don't really think so. But even if it does, let's say that's the premise, I think this China-versus-US thing is a bit of a misnomer. Like, for a start, I don't think China wants that. But also I think that if you look at the components, data, talent, compute, China beats the US.
Because on data...
Let's break those down. Yeah.
China doesn't care about IP.
You know, look at the video models. They're trained on all of Hollywood.
I mean, some of the US ones are as well.
They'll train on Sci-Hub, you know, they'll train on whatever,
and they'll train multilingual. So data, there's an advantage there.
But also the Chinese, via WeChat and these other things, can get a hundred million Chinese to do data annotation and feedback, whereas OpenAI pays up to $200 per annotation for that data. The Chinese could easily do that for free. You look at compute: China has two exascale computers. So exascale is a thousand petaflops of compute.
So Elon Musk's new 100,000 H100 cluster is two exaflops.
The fastest supercomputer in the US,
Aurora is about two exaflops.
The fifth fastest supercomputer is like a hundred,
to give you an idea.
So we've got this order of magnitude increase.
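To keep the units in these comparisons straight (a sketch; the per-machine figures are the speaker's claims, not verified benchmark results):

```python
# Unit conversions behind the compute comparison.
# 1 petaflop = 1e15 FLOP/s; 1 exaflop = 1,000 petaflops = 1e18 FLOP/s.
PETAFLOP = 1e15
EXAFLOP = 1e18
assert EXAFLOP == 1_000 * PETAFLOP

# Stated figures (exaflops), treated as claims rather than Top500 results:
xai_cluster = 2.0   # "100,000 H100 cluster is two exaflops"
aurora = 2.0        # "fastest supercomputer in the US ... about two exaflops"

# A two-exaflop machine vs a hundred-petaflop machine is the
# order-of-magnitude gap described here:
ratio = (aurora * EXAFLOP) / (100 * PETAFLOP)
print(ratio)  # 20.0
```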
China has Tianhe-3 and OceanLight, which are publicly known about, both exaflop, and at least four other exaflop computers that can run these giant models, and they can source the chips elsewhere. But again, we're moving away from this thing where big compute was a substitute for crap data, and we can do distributed compute and data augmentation.
That's the type of thing China's very, very good at.
They're good at industrialization.
There are approaches now we're seeing with OpenAI and others. Why did they invest in Figure for robots?
Who's going to build most of the robots of the world?
It's going to be China.
Well, I think they will have their companies that compete against the US base, for sure. And one of the things that's interesting is that these humanoid robots become a source of data as well. They all feed back into that data augmentation.
But I think a large part of this is we've had this AI, AGI, giant-model fixation. Well, I think it's more about a distributed collective intelligence that's constantly learning and adapting from real-world usage. Again, a swarm of robots would be an ideal one for that, right? Just like a swarm of Teslas, a swarm of robots. And then we've had this fixation on really gigantic compute, when it can be localized data optimization that then feeds in. If you optimize the data for models, you can have orders of magnitude less compute required. But we're not quite sure what that data optimization is yet.
And I think, again, this is a race to this existential threat type of thing, where it's
China versus...
I don't really buy that, honestly.
You do push for international cooperation.
How do we get international cooperation in AI?
What would be...
You're on the stage, you've got the nation's leaders around the world,
what are you saying to them?
I think that, you know, we're really seeing exascale computers come on board, with things like the EuroHPC initiative and the new Japanese exascale computer; unfortunately Britain shelved its one for the time being. Like, a coordinated, deliberative effort to build high-quality data sets and models for humanity: generalized, localized, and specialized.
Like we're doing that anyway at Schelling, you know, and we'll be building national champions that can collaborate with governments and more. And then I think standards around input data: we care about the ingredients that feed us in our food, but not the ingredients that teach us and will guide us in the future.
I think that any decision-making system should have data transparency.
I think that will go a large way to helping on AI safety, on AI alignment and more.
And, you know, rethinking what we should feed these models. We should feed them diverse data, you know, not from a DEI perspective or anything, but from diversity of viewpoints representing humanity, versus that snapshot of the internet, and ensuring that we have that done properly.
One of the conversations we've had as
well is, you know, most of the corpus fed to the AI models is Western, if not US, and there are, you know, a hundred nations out there that don't have their data digitized.
In other words, the data of the grandfather and grandmother is in verbal language and
hasn't been, you know, digitized in a way that it can be consumed by an AI model.
I mean, for me, that's a massive public works project that countries could take on.
A hundred percent.
I think that we're going to be doing that with crypto-economic and other incentives at Schelling.
The goal is to build a data set for every nation and be massively collaborative, as
well as for individual sectors and science and more.
There'll be more details of that soon as a self-reported system.
But we can think of it like we send a probe out to space, right?
What's on that that represents humanity?
We engage with a new species.
Because we've got a new species, this AI Atlantis, a new continent.
Are we showing them one cultural point of view?
What's on that data hard drive that we send them?
What do we feed in?
And again, who do we want to represent? And
I think you need to have this hive mind concept, you need this collective intelligence concept,
because that's how we proliferate and flourish as a society. I don't think it's about scaling
up a liberal arts polymath, you know? I think it's about building a hive collective intelligence
and Neuralink and others, again, are huge things around that.
And having it reflect the best of humanity
and everyone contribute what they think
the best of their culture is.
Because then these models can translate between cultures
and I think that's just beneficial full stop.
And that is the education system, it's more.
Yeah, helping people. You know, one of the things I find AI so incredibly useful for: if there's someone on the opposite side of an issue, or someone whose motivations I'm not able to clearly understand, AI can help me communicate with that individual in a way that I never could, right? Please help me explain my position on guns to someone on the other side of it, in a way that they might be receptive to.
That translation of not just language but translation of intent and desire.
It's a knowledge translation, it's a context translation, and again, this is the universal translator. That's what these models are. And again, I think we need to deliberately get together and build that out. And realize, I think this is very important: the Leopold essay assumes a negative-sum game. Again, there are fantastic things in it; I think 80% of it's great. But this is a massive positive-sum game, and most of the AI discussions are negative-sum, which I don't think they should be. It's about abundance.
Where does he point out a negative-sum game?
There's a negative-sum game in that we must be the first to AGI, because then it contracts everything. As opposed to building open common infrastructure for everyone, this is a competitive race dynamic with unstable equilibria.
I mean, I think his point of view includes the potential for a hard takeoff, and at that point, if in fact we have an intelligence explosion, the person who gets there first dominates it. And it's a question of: do we want the US and its national partners getting there first?
Well, I think this is interesting, because it's reflected in the recent Sam Altman op-ed in the Wall Street Journal. He talked about authoritarian AI versus democratic AI, but then recommends that it's locked down and restricted and basically unitary in its point of view, which sounds very authoritarian.
China's producing open source AI that's as good as all this stuff already. And it's a complex thing. But again, I think that there should exist open infrastructure and a collective approach
as well as this collected approach, shall we say.
I want to hit on two final points. Investing in AI.
A lot of people right now are like, I know AI is huge.
I know it's the most important thing in the world for us economically.
We're seeing these companies from Nvidia and Google and Microsoft dominate the stock market.
It's like if you look at these companies as a percentage of global GDP compared to other
market sectors, they're massive.
The question is, how does an investor think about it today?
Where should they be putting their money?
Do they continue to invest in the giants or are they overpriced?
What's your, I'm not sure if I'm asking for investment advice, but how should someone
think about it?
Since your paper is about how to think about AI, how should they think about investing
in it?
It's a separate thing.
Putting on my former hedge fund manager hat.
OK, let's do it.
So look, I think that the first wave was kind of these
NVIDIAs and kind of others.
Microsoft and Google, et cetera, they're up, like, what,
20%, 30% over the last year, maybe 50%.
It's not a huge amount.
In line with the market. NVIDIA has been the standout, because they've established market dominance and capitalized on it. But it's not expensive like Cisco was in the dot-com boom, for example, as an infrastructure provider, on classical terms.
But one of the things happening here is that we're going to have it overbuilt, regardless; obviously we will, because they can't afford to underbuild. But then we move on to the application and implementation of this technology by centaurs. And so highly regulated industries, or industries with pricing power, that can replace rote human work with AI Atlantis have supernormal margins, and then you see a margin expansion.
The impact of this is how much will it cost to...
When we had cloud computing, it reduced the cost of building a startup dramatically.
We had to wire up our own servers and everything.
Now you will have an AI agent in the box that can help you build a hairdressing business
or AI business or whatever, almost there in the next few years.
So you have this explosion of creativity. AI enabled work,
I think is where you see this the most. So pricing power is kind of one thing.
Industries that can have this on the other side to increase their reach.
So you increase your audience again, where,
if this business existed,
I could apply a million graduates,
you know, liberal arts grads,
and they'll be increasingly specialized. How does that impact things?
So that should be a framework
for looking at the business opportunities of any stock.
And again, regulated industries with pricing power
are a fantastic place for that.
You don't need the private equity to come in anymore
and redo that.
Do you think the current major players, NVIDIA, Google, Microsoft, are overvalued, or do they still have growth?
On classical valuation terms, they are perfectly fine.
And again, we're seeing continued growth, and it's not like it's crazy yet. Bubbles in the past, we've seen crazy; this is not crazy yet. Like, you know, when you were valued by the number of eyeballs in the dot-com bubble and things like that.
Second question here.
This does point to something which is, how much of the global economy will be AI?
A lot.
As compared to what it is today, which is still miniscule.
Yes.
And again, if we compare it to even self-driving cars,
you still need five, 10 times as much investment
to catch up to self-driving cars versus model training.
My last question, my friend, is on the education front. A lot of people are saying, I need to get educated. You've said this on the Abundance stage. We've said this on our podcast: it's critical for anybody, at any point in their career, to become educated about AI. What do you think they should do?
How do they start? I'm not seeing this happening in high schools; I'm sure there are courses in colleges. Do they just head to YouTube, or do they just go to Gemini or ChatGPT and have a conversation with the AI?
I think that's probably one of the things we should recommend to governments: a standardized AI curriculum on implementing and using things like prompting. But really, it is about just using it day to day, and thinking of it not as the expert but as the graduate.
And so there are a few of these things, the Midjourneys and the ChatGPTs and others of the world. Like, this is where bringing in the next generation is useful.
As you said, you've got your chief AI officer,
but also your almost chief AI innovator or trier.
That's just hacking and trying and thinking,
how can I apply this to the day-to-day workflow?
Again, you have things like your Lindys of the world, and others that can chain together different models.
I think that's the next stage,
but you've got a little bit of time now,
but you've got to get AI native by immersing yourself.
There's no better way of doing that.
And you shouldn't be overwhelmed by the technology
or getting into the weeds
because that is a massive, massive depth thing.
Again, that's why I wanted to create this piece, just to give a bit of a framework and
then encourage people to try and use the technology and think about the world in a slightly different
way.
Emad, what can we expect next from you? What's on the near-term horizon here?
So we're kind of finishing the implementation document of a white paper on what an open, distributed AI system looks like on a practical basis, to build these data sets for every nation and every sector, and then chain them together into things that can really make an impact, open source, in a sustainable way. So, hard at work on that. And then we start, well, we're getting our clusters in to start building models, you know, nothing like it, and then release and see what the world makes of it.
So today, title-wise, founder of Schelling?
Yep. That's the one.
Are you going to take the CEO role or hire a CEO to support you?
What do you think?
Ah, being a CEO is like, as you said, staring into the abyss and chewing glass.
Maybe I'll see if I can avoid it for as long as possible.
That's why you build intelligent protocols.
They can manage themselves.
Maybe you'll get an AI model.
An AI model to be your CEO.
Maybe I will actually, and that's a good idea, I think.
It will get lots of advice from AI Peter.
Where can folks follow you on social and website and such?
I think, yeah, my X is @EMostaque, and @SchellingAI for our company organization as well.
Buddy, thank you for the time. Again, if you've not had a chance to read Emad's paper, How to Think About AI, it's, for me, a very powerful, succinct and clear structure for thinking about this.
I find it very compelling.
I will put the link in the show notes.
And, Emad, looking forward to seeing you again shortly, my friend.
Take care.
Thank you for having me on.