Moonshots with Peter Diamandis - EP #36 Stability AI Founder: Halting AI Training is Hyper-Critical For Our Security w/ Emad Mostaque
Episode Date: April 6, 2023In this episode, Emad and Peter hop onto Twitter Spaces to discuss the latest petition to halt AI training, if language models will lead to AGI, and what could happen if the power of AI is abused. ... You will learn about: 20:08 | Should We Be Slowing The Development Of AI? 44:36 | How Do We Stop People From Abusing The Power Of AI? 59:28 | Is There A Competition With AI Across Economies? Emad Mostaque is the CEO and Founder of Stability AI, a company funding the development of open-source music- and image-generating systems such as Dance Diffusion and Stable Diffusion. Generate images with Stable Diffusion. _____________ I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsor: Levels: Real-time feedback on how diet impacts your health. levels.link/peter _____________ I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now:  Tech Blog. _____________ Connect With Peter: Twitter Instagram Youtube Moonshots and Mindsets Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Will you rise with the sun to help change mental health care forever?
Join the Sunrise Challenge to raise funds for CAMH,
the Centre for Addiction and Mental Health,
to support life-saving progress in mental health care.
From May 27th to 31st, people across Canada will rise together
and show those living with mental illness and addiction that they're not alone.
Help CAMH build a future where no one is left behind.
So, who will you rise for?
Register today at sunrisechallenge.ca.
That's sunrisechallenge.ca.
Maple syrup, we love you, but Canada is way more.
It's poutine mixed with kimchi, maple syrup on halo-halo,
Montreal-style bagels eaten in Brandon, Manitoba. Here we take the best from
one side of the world and mix it with the other. And you can shop that whole world right here in
our aisles. Find it all here, with more ways to save, at Real Canadian Superstore.
Everybody, Peter here. I just spent the last hour with Emad Mostaque, the CEO of Stability AI,
talking about the recent petition that's been going around to halt or pause the development
of large language models. It's early April. I just had him on stage last week at Abundance 360,
but that wasn't the subject. This conversation was around the fears, the hopes of large language
models. And some of the points we discussed was his belief that the next six months are
hypercritical, that we have six months to figure things out. Why? There are a couple of very
important reasons that he discusses in this next segment. I'm also going to talk about how this
technology is going to transform the world
of education, of healthcare, of supporting governments around the world. One of the best
conversations I've had with Emad in a long time. And also he's one of the most brilliant CEOs and
also the only private AI company CEO to sign that petition. So listen up. It educated me, got me feeling hopeful.
Hope it does the same for you.
Just for folks who may not know you,
Emad is the CEO and founder of Stability AI,
truly one of the most extraordinary AI companies,
large language model companies out there
with a mission of being an open,
a truly open AI company to transform and support humanity across health and education,
and to uplift humanity.
So, Emad, I love what you do, and I'm proud to be a supporter.
So thanks for joining today.
It's my pleasure.
Yeah. So let me set it up for everybody. There's been a huge amount of conversation around AGI of late, and I titled
this session AGI Threat or Opportunity. Large language models go fast or go slow, open versus
closed, and there's a lot to discuss. And, you know, my view on this has been shifting over time.
And I'm going to be curious about whether yours has as well.
There's a lot of talk about artificial general intelligence.
Where are we really right now in the development?
How fast are things truly moving?
And should people be concerned about it or excited about it?
And other questions are, can it be regulated?
Can it be slowed down?
Is that possible?
And then I think an important conversation really to highlight what stability is doing
is open versus closed.
And then one final conversation, which is, can we actually get to AGI
using large language models, or do we need to have another sort of structural transformation
to get there? So shall we begin? Yeah, let's talk about the future of humanity.
I love that. I love that. And you and I share the same vision of really creating abundance around the world, uplifting every man, woman and child and really using AI for global education, global health, dealing with the grand challenges and slaying them.
You know, let's kick it off with how fast are things moving in your opinion?
And should people be concerned about the speed of progress right now?
Yeah, I mean, I think every day those outside of this are like, what's going on?
And those inside of this are like, what's going on, right?
It's like Everything Everywhere All at Once.
Not only is it kind of all the smartest people in the world piling in on this,
but 80% of the research in AI now
is in this generative AI foundation model field.
And if you look at the research paper graph on arXiv
for ML papers, it's a literal exponential.
It's that people are implementing, exploring,
and understanding these things like nothing else.
The final bit that's really interesting
is that nearly half of all code on GitHub now
is AI-generated.
And the average coder using Copilot,
this was a Microsoft study, is 40% more efficient.
So people are using the AI to build better AI.
And that becomes a bit of an interesting feedback loop, right?
Against that, there is the hardware question.
And so there was a paper in 2017,
Attention is All You Need,
which kind of led to this boom
about how to take AI
from extrapolating large data sets
to instead paying attention
to the important bits of information,
just like we do when we basically
start creating principles
or heuristics, right?
It turned out that was very amenable
to scale.
So the original thing on OpenAI
and others was just applying scale,
more and more compute, which happened literally at the same time
as this whole GPU thing that Jensen had been building for years at Nvidia.
So the previous generations of supercompute chips were GPUs,
the NVIDIA ones in particular, stacked on top of each other.
But as you got to 1,000, 2,000 chips, you stopped being able to scale,
because the information couldn't get back and forth fast enough across them.
That has been fixed now as of this new generation that's about to hit.
The NVIDIA H100, the TPU V5s, it basically scales almost linearly
to thousands and thousands of chips.
To give you an example, the fastest supercomputer in the UK is 640 chips.
You know, NASA has about 600 as well.
And now you're building the equivalent
of 30,000 to 100,000 chip supercomputers
using this new generation technology
because you can stack and scale.
It's insane.
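To make that scaling story concrete, here is a toy model of why training stalls when the interconnect is slow and scales nearly linearly when it is fast. The cost constants are illustrative assumptions, not figures from the episode.

```python
# Toy model of multi-GPU training speedup (illustrative only; constants
# are made up, not from the episode). Per-step time is the compute share
# per chip plus a communication cost that grows with cluster size.

def speedup(n_chips: int, comm_cost_per_chip: float) -> float:
    """Ideal compute scales as 1/N; communication overhead grows with N."""
    compute_time = 1.0 / n_chips              # perfectly parallel share of the work
    comm_time = comm_cost_per_chip * n_chips  # all-reduce-style cost, grows with N
    return 1.0 / (compute_time + comm_time)

for n in (100, 1_000, 10_000):
    old = speedup(n, comm_cost_per_chip=1e-6)  # slower interconnect: stalls ~1-2k chips
    new = speedup(n, comm_cost_per_chip=1e-9)  # faster interconnect: near-linear
    print(f"{n:>6} chips  old-gen ~{old:7.0f}x   new-gen ~{new:7.0f}x")
```

With the slow interconnect the speedup peaks around a thousand chips and then gets worse, which is the bottleneck Emad describes; with the fast one it keeps climbing almost linearly.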
And we've got about six months before that hits.
And then GPT-4 level models will be available
to just about, well, not anyone, but more and more companies.
GPT-3, when it was trained, I think took three months.
That was two years ago.
On the new supercomputers, you can train four of those a day.
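A back-of-envelope check on those training-time claims, using the standard roughly 6 x parameters x tokens FLOPs estimate for transformer training. GPT-3's parameter and token counts are public; the cluster sizes and utilization figures below are my illustrative assumptions, not numbers from the episode.

```python
# Back-of-envelope for "GPT-3 took ~3 months; on a new cluster, several a day."

PARAMS = 175e9        # GPT-3 parameters (public figure)
TOKENS = 300e9        # GPT-3 training tokens (public figure)
FLOPS_NEEDED = 6 * PARAMS * TOKENS   # ~3.15e23 FLOPs, standard 6*N*D rule

def training_days(n_gpus: int, flops_per_gpu: float, utilization: float) -> float:
    cluster_flops = n_gpus * flops_per_gpu * utilization
    return FLOPS_NEEDED / cluster_flops / 86_400  # 86,400 seconds per day

# ~1,000 V100-class GPUs (~125 TFLOPs peak) at 30% utilization:
print(f"old cluster: {training_days(1_000, 125e12, 0.30):.0f} days")   # ~97 days, i.e. ~3 months
# ~30,000 H100-class GPUs (~1 PFLOP FP8 peak) at 40% utilization:
print(f"new cluster: {training_days(30_000, 1e15, 0.40):.2f} days")    # ~0.3 days, several per day
```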
That's extraordinary.
And so how many large language models are there going to end up being, you think?
I think there will only be a few.
I think that these things are quite complicated.
They're actually like game consoles.
Like a lot of the ChatGPT functionality
was already present from the model over a year ago,
but it just wasn't in nice format
and usable and testable.
So you can explore the capabilities
of that game console or game engine.
The way I put it is,
you look at the games at the start of the Wii U or PlayStation lifecycle
versus the games at the end of that lifecycle.
Exploring that space is very important.
And people don't want dozens.
So like we released Stable Diffusion, our image model, one of the most popular pieces of open source software ever.
Only five companies in the world that we know of have created their own versions.
Because why not just use the one that you have?
Sure.
To put the popularity in
context, it beat Bitcoin and Ethereum in cumulative developer stars in about two months,
and the whole ecosystem has now overtaken Linux, which is insane for six months in, you know. And yet
only three companies built their own, because why would you? So I think in the future it'll probably
be just a handful of companies building proprietary models, you know, Microsoft, OpenAI, Google, DeepMind, NVIDIA, a few others maybe. And then there will be the leaders in the
open models, in terms of models where you can see the code, the weights, the data, etc., which is
essential for private data. So, Emad, the question ultimately is, it's great to have powerful large language models, and the next generation that are coming online are amazing.
And the question is, do we need more than that? Are GPT-4-equivalent large language models enough to give us sort of the benefits to humanity without bordering on the existential threats of AGI?
And ultimately, the parallel question to that is,
are large language models sufficient to get to AGI, in your opinion?
Yeah, so we have to define what AGI is first, because different people have different definitions, right? An AI that can do just about anything a human can do? Well, you know,
half of humans are below average intelligence, right?
Okay, okay, mathematicians, I'm not quite correct on that. But still, you know what I mean. Like, the bar for average output is not that high. And you see GPT-4 passing the bar exam and the GRE,
maybe it'll go to Stanford soon, you know, like, it's getting there for these specific things,
but it's not a generalist yet. The way that we have it is, you know,
as we scale these models, they show emergent properties. And we feed them crap, the whole
internet at the moment. So what we have right now are these models that are like really
talented grads that occasionally go off their meds, that we fed with crap. And that's the base model.
then what we do is we do this thing called reinforcement learning with human feedback
where we tell it what a good and a bad response is.
We take these really creative models because you're taking petabytes of data,
hundreds of terabytes of data,
and you're compressing it down to just a few hundred gigabytes.
Like Stable Diffusion took 100,000 gigabytes of images and the output is a two-gigabyte file.
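For reference, the compression ratio implied by those two numbers:

```python
# The compression ratio Emad describes for Stable Diffusion: roughly
# 100,000 GB of training images distilled into a ~2 GB weights file.
training_data_gb = 100_000
model_gb = 2
print(f"compression ~{training_data_gb / model_gb:,.0f}:1")  # ~50,000:1
# At that ratio the model cannot memorize its data; it has to learn
# general structure, which is the point being made here.
```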
Of course, they can't know everything,
but you're telling it that it must know these things. So you reduce its freedom, its creativity,
because these aren't thinking machines, they're reasoning machines, inferring principles on the fly,
right? And then you're putting that mask on the front to re-acclimatize them to humans.
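For readers who want the mechanics: a minimal sketch of the reward-modeling step inside RLHF that Emad is describing, using the Bradley-Terry pairwise preference loss on toy data. Real systems train a large neural reward model and then fine-tune the LLM with PPO; this toy linear scorer on made-up "response features" only shows the core idea.

```python
import numpy as np

# Humans rank pairs of responses; a reward model is trained so the
# preferred response scores higher (Bradley-Terry pairwise loss).
rng = np.random.default_rng(0)
w = np.zeros(3)                         # reward model weights (toy, linear)
true_w = np.array([1.0, -2.0, 0.5])     # hidden "human preference" direction

for step in range(2000):
    a, b = rng.normal(size=3), rng.normal(size=3)        # two candidate responses
    preferred, other = (a, b) if true_w @ a > true_w @ b else (b, a)
    # p(preferred beats other) under the model, via the logistic function
    p = 1.0 / (1.0 + np.exp(-(w @ preferred - w @ other)))
    # gradient ascent on the log-likelihood of the human preference
    w += 0.05 * (1.0 - p) * (preferred - other)

print("learned reward direction:", np.round(w / np.linalg.norm(w), 2))
print("true preference direction:", np.round(true_w / np.linalg.norm(true_w), 2))
```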
But you start with a very fragile base. And this is the concern on the scaling,
because what happens if, as you scale, it learns to lie and other things, and it starts
being agentic? We don't know. To get to AGI, though, in terms of something that can be as capable as
a human across a generalized set of principles, but is better at reading manuals, you don't know
if this is enough. It may be, or it might not be. Well, there was a very seminal paper by, sorry...
yeah I'm just going to say the thing that is interesting, of course, is the rate at which these large language models are surpassing humans and coding capability.
And I guess the question is, will it get recursive and allow them to improve on their own models?
Yeah, you could say it's already happening now with the kind of overall ecosystem, just like Bitcoin provisions humans to buy ASICs.
This whole kind of conceptual thing,
it's really made programmers 40% more efficient, from that study, if you kind of take it as a
human-computer hybrid type of thing. So if you get to the level of a human, that's one thing,
and then ASI is when you go superhuman, which we don't really understand what it looks like.
There was an interesting paper by Meta called Cicero, where a language
model was combined with strategic planning, and it outperformed humans in the game of Diplomacy.
So they could convince humans on things.
And that's today.
So we don't know where this takeoff point is, where it can recursively self-improve.
We don't know when it gets to human level.
It's just we can see it's as good as an average human, like GPT-4 out of the box without tuning can pass the level three Google programmer exam.
And it can pass the, you know, the medical licensing exam, you know, something which
I never finished doing.
I got through part two, but not part three.
It's extraordinary.
And it's, that's this year.
And of course, humans are not getting smarter.
But every year we're going to see continued progression of all of these large language
models.
Can we take a second?
I always tend towards the abundant side of the conversation.
And I'm clear all the amazing things that these large language models are enabling and
will enable.
Let's just take a second to list people's concerns.
What are people threatened by, just so we
can address those individually for a moment? What would be top of your list?
Relevance. So, you
know, we talk to a lot of incredibly talented creatives and artists and programmers and others.
The top-level people love this. There was an MIT study recently that showed the third to seventh decile got like 20 to 30% better at writing reports,
but the top 5% got way, way better.
And this is what we typically see with people and tools.
The top creatives love this.
Those who are not so good, it raises the average level.
So you have to get better.
And that's difficult for some people to embrace.
Like Lee Sedol being beaten by AlphaGo at Go,
the average level of a Go player
has shot up dramatically, and he's got way better, you know? It's not humans versus AI, it's human
plus AI versus AI in that sense. Yeah, or versus humans, right? Like again, you either have to
embrace it or you can get irrelevant very quickly. And there's certain jobs that are sticky, you know,
and there's certain jobs that are not.
So regulated industries are quite sticky,
but still, something like Indian BPO
for basic programming is probably not.
And so this is a real danger.
It's a real threat.
There are questions around, you know,
the data sets being used
because they typically are from scrapes
of literally the entire internet.
What are the rights around that?
There are questions around the output.
Is it computer generated?
Is it copyrightable?
You know?
And then there's finally the question of,
are we building something more powerful than a nuke
that is an existential threat to humanity?
You know, what is a super intelligence like?
Yeah, I remember Sam Altman wrote a blog and he said,
I'm going to just quote from it.
He said, some people in the AI field think the risk of AGI and the successor systems
are fictitious.
We would be delighted if they turned out to be right, but we're going to operate as if
these risks are existential.
And so now the question is, if in fact you're operating as if the risks are existential,
If in fact you're operating as if the risks are existential,
why are you building potentially existential tech?
Right. I mean, that's an interesting conversation debate to have.
Yeah. I mean, like you mentioned, on ABC recently, because there's only a few people that can build this tech, right?
And he was asked if there's a 5% chance that this tech could wipe out humanity
and you could push a button to stop it, would you do it?
And his answer was, I would push a button to slow it down.
We can wind it back.
We can turn it off.
And that didn't make me feel too comfortable.
Because my base assumption, actually, I was talking to the head of a trillion-dollar company, the founder, earlier this week,
is that AGI will probably find us boring,
just like in Her.
My favorite AI movie,
the one that was the most realistic and the least dystopian, Her.
That's great.
Well, you know, broken hearts,
just like when Replika turned off
their boyfriend-girlfriend feature on Valentine's Day.
They broke tens of thousands of hearts. I think that would be the case, you know. I spoke to Elon last week, and he said
he thinks if we make the AGI curious, then why would it have any reason to harm us? You know,
because again, we're interesting in some ways, I don't think we're interesting. But I could be
wrong. Are you familiar with Mo Gawdat, who was a COO at Google X?
He wrote a book called Scary Smart, which I read recently.
And his basic thesis is the large language models are our progeny.
They're learning from reading our content, which they are.
And they're learning from how we interact with each
other and how we treat our machines. And if we're good parents, and you and I are both parents of
biological organisms, if they're good parents and they see us respecting each other, they'll tend to
follow suit. But if we are disrespectful and harming each other, they might tend to, at least in the
early days, tend in that way.
Do you, you know, people don't realize large language models are a reflection of human
society to a large degree.
You know, I'd be sympathetic with that view.
So like when Web3 came about, we tried to create a system outside the existing system
and all the money was made and lost at the intersection.
With these models, they are literally trained on our collective consciousness on the Internet.
And it's the equivalent of taking a super precocious kid,
taping his eyes open and showing him everything.
No wonder they turn a bit weird at times, right?
So I think we should probably be feeding it better stuff and teaching it better stuff.
But right now, we're at the bulking-before-cutting phase. So that's why, you know, you might have seen these memes
about the shoggoth, you know, this eldritch weird tentacle being. That's what it comes out of the
oven looking like, this deep learning thing. And then you have reinforcement learning with human
feedback, where you tell it how to be human, you know, so the shoggoth becomes more and more human,
but there's still something lurking behind. Feeding it better stuff, I think, is what we've got to do now, rather than racing ahead. Because,
again, like, I think we'll probably be fine, but I'm not sure. And these things are becoming more and
more capable, because as you scale you see emergent properties, like lying. Yeah, you know what happens
when you've got, like, in the GPT-4 paper, they said, you know, we experimented in a closed-loop system, giving it some money and telling it to go and make more money.
And it was hiring TaskRabbit people and things like that.
I said, that's not a good idea.
You don't know what happens there.
I don't know what happens.
It's trying to create jobs for us.
But then again, this is the amazing thing.
So again,
we don't officially, publicly know details about GPT-4,
but NVIDIA said they designed their new H100 NVL chips for it.
That's just two 80-gigabyte chips stuck together.
It's 160 gigabytes of VRAM, and it has GPT-4 with all its capabilities in it,
which implies it's about a 200 to 250 billion parameter model.
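The rule of thumb behind that inference is parameters roughly equal VRAM divided by bytes per parameter. A quick sketch; the 160 GB figure is from the conversation, and the precision options are my assumptions.

```python
# Rough rule of thumb: max parameter count ~= VRAM / bytes per parameter.
vram_bytes = 160e9
for bytes_per_param, label in [(2, "fp16"), (1, "int8"), (0.5, "4-bit")]:
    print(f"{label:>5}: ~{vram_bytes / bytes_per_param / 1e9:,.0f}B parameters max")
# At fp16 that's ~80B parameters; fitting a 200-250B model in 160 GB would
# imply aggressive quantization, so treat the inference as a loose estimate.
# Serving also needs memory for activations and KV cache, so these are
# upper bounds either way.
```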
A single flash drive.
That's crazy.
So what does that mean when you start chaining them together
and you get them to check each other's outputs and things?
Tell you what, it gets a lot better.
And there's a lot more that I don't particularly want to discuss,
but I don't know what the upper limit is, even at this stage.
And this is before, again, this massive ramp up to compute that's coming,
whereby, like, I know of clusters of H100s.
So the fastest supercomputer in the world right now is, what was it?
Not Sierra.
There's a one exaflop computer.
Our cluster is probably the 10th fastest public cluster in the world right now.
Now, the new supercomputers are 20 times faster. Like, there is an insane pickup that then will lead
to emergent properties. And this becomes, again, I don't know what happens, we don't know what happens,
nobody knows what happens. I think it'll probably be fine. I could be wrong. This episode is brought
to you by levels one of the most important things that i do to try and maintain my peak vitality and
longevity is to monitor my blood glucose. More importantly, the foods that I eat and how they
peak the glucose levels in my blood. Now, glucose is the fuel that powers your brain. It's really
important. High prolonged levels of glucose, what's called hyperglycemia, leads to everything
from heart disease to Alzheimer's to sexual
dysfunction to diabetes, and it's not good. The challenge is, all of us are
different. All of us respond to different foods in different ways. Like, for me, if I
eat bananas, it spikes my blood glucose. If I eat grapes, it doesn't. If I eat
bread by itself, I get this prolonged spike in my blood glucose levels. But if I dip that bread in olive oil, it blunts it.
And these are things that I've learned from wearing a continuous glucose monitor and using the Levels app.
So Levels is a company that helps you in analyzing what's going on in your body.
It's continuous monitoring 24-7.
I wear it all the time.
It really helps me to stay on top
of the food I eat, remain conscious of the food that I eat, and to understand which foods affect
me based upon my physiology and my genetics. You know, on this podcast, I only recommend products
and services that I use, not only for myself, but my friends and my family, that I think are high quality and
safe and really impact a person's life. So check it out, levels.link slash Peter. I'll give you
two additional months of membership, and it's something that I think everyone should be doing.
Eventually, this stuff is going to be in your body, on your body, part of our future of medicine.
Today, it's a product that I think I'm going to
be using for the years ahead and hope you'll consider as well. So let's talk about the other
question, which has been the debate over the last week or two, which is, can we slow down
development? Should we slow down development? You know, I remember asking a question when I was,
God knows, in college. I said, if Einstein understood that E equals mc squared would lead to the nuclear bomb,
could he have stopped thinking about it? Would he have stopped
developing it, right? So the question, that you mentioned earlier, about Sam Altman being asked, if there's a 5% chance, would you
halt it or slow it down?
Can we regulate? Can we even regulate against this?
Because we live in a nation or a world of porous borders
and technology very quickly becomes globalized.
And we might slow it down,
but I don't know if other nations around the world
would slow it down.
So what do you think is possible here?
Well, the question is, would you turn it off?
Not would you slow it down?
And he shifted it to I'd slow it down,
which I thought was interesting.
There's only a few entities even now
that can train these large models,
like actually a handful.
And if you look at, again, the OpenAI AGI paper,
they said it's time to offer transparency to governments,
transparency on governance.
So one of my big issues with OpenAI and DeepMind and others
is there's no transparency on the governance of these systems that could overthrow our
democracy.
And that could kill us all.
Again,
I don't think they will,
but I could be wrong.
I think that there isn't a slowdown in thinking,
but there can be a stop on training before this big ramp up because,
you know,
people came back to me and said,
what if China creates an AGI?
And I was like,
you know,
they don't have the chips.
They don't have the know-how. We've seen no evidence of that. And they don't want to overthrow their system.
They want to perpetuate it. And do you know what the best way for them to get an AGI is?
It's to send someone with a flash drive into DeepMind or OpenAI
and just download it. Again, it fits on a flash drive. Why would you bother?
It's much cheaper to do it that way. So now's the time to put in proper governance, proper oversight.
Again, the OpenAI blog on this lists a whole bunch of things
that they say they'll do in the future.
I think the future is now.
You should do it now.
And you should put proper security systems in place
if you think it's that dangerous,
so you don't get leaking files going everywhere
around these really rough models,
which I don't think are an optimal way to do things, by the way,
but we can discuss that later.
And you couldn't do this six months
ago, you couldn't do this three months ago, and in
six months it would be too late. So now
it's time for a public debate about this,
about systems that could take away our freedom,
that could kill us all,
and everything in between.
Did you actually
sign that petition that was being
circulated last week?
I believe I was the only private company CEO in AI apart from Elon to sign that.
Yes.
Amazing.
I didn't agree with everything in there.
But again, I think now is the time.
And is a six-month slowdown the right thing?
How do you define that versus a pause?
So you're never going to get a pause, because of competitive pressures.
And for me, six months is the amount of time you need to get used to these new next generation systems that are landing that can scale almost infinitely.
So I was like, it's kind of the right thing.
And again, you know, even six months ago, that was when Stable Diffusion first came out.
It's been, what, three, four months since ChatGPT first came out.
You needed the permeation around the internet, and you're seeing things now, like it actually being banned in Italy,
right? You're seeing the haves and have-nots, and you're seeing competitive pressure, where
this is now big business. It's affecting trillion-dollar companies.
It's not going to stop. So now's the time to start discussing this stuff,
the public, governments. I remember when Uber was banned in... and I was like, OK, we're going to start to label countries as pro- and anti-technology.
I mean, it's kind of insane to think that you can ban these types of technologies and still remain competitive on a national or global stage.
Can we flip it over one second?
The endpoint of all of this technology
is to make us, in one sense, superhuman,
to allow anyone to be educated,
to be a creative agent,
to be extra healthy,
adding decades onto one's life.
You know, is the goal here
of this level of AI development to allow us to work less, to be more productive, to create abundance on the planet?
What's your vision of where we're going here?
I think everyone that's building this, maybe if there's some defense company exceptions, is actually doing this with good intentions, right?
Because they want to create something that can help humanity.
Now, some people want to automate humanity.
And, you know, like literally,
you look at some of the statements of some of these leads,
and they're like, well, yeah, this will transform money.
And then we can redistribute it.
And I'm like, oh, and you're in charge of that.
Other people are like, we want to augment humanity.
So I'm very much in the augmentation camp, where I believe humans plus computers can beat anything.
And I'm like, this allows us to scale humans.
We've solved that problem.
It's still early stages, like the iPhone 3G stage.
But any child in the world soon in the next few years will be able to have their own personalized tutor or personalized doctor.
And then this allows us to scale society as well
because the Gutenberg Press was an amazing thing
because it allows us to take stories down
and we're driven by stories, but it's lossy.
All those notes you're taking in your meetings
and everything like that are lossy.
Now the new systems, you look at the new Office
and Teams and all the copilots that are using that,
it fills in the gaps.
It's no longer lossy.
It automatically writes your emails
and summarizes stuff for you.
It can be that grad,
and it's generally on its meds.
It's going to be pretty awesome, I think,
and a real changer to the way
that we all collaborate
and achieve our potential.
But if it's centralized
and that's the only option,
then it probably won't go so well.
I think we've had a history of that.
You know, what happens
if new technology is controlled by a few hands?
Power is really attractive.
Really, really attractive.
Well, and this is the power that's going to drive
to a new global set of trillionaires
and I think a complete transformation of every industry.
So let's talk about open versus closed.
You very much, well, let's go back now,
eight-plus years ago, when Sam Altman and Elon
talked about OpenAI being truly open,
with a mission to help guide the development of AI
for humanity.
And it began as a nonprofit.
And then Elon put $100 million in.
He's pissed right now.
It's turned into a for-profit and is anything but open.
And I love the fact that if you go to open.ai, it doesn't point at OpenAI.
It points towards Stability AI, which is very interesting.
But you made a conscious choice to build an open platform. Can you speak
about that?
Yeah, I thought that ethically, it's the correct thing to do.
And it was a better business as well, because the value in the
world is in private data and knowledge, and people want
control. So again, if these are grads that occasionally go off their
meds, using OpenAI or Google or Anthropic or anyone like that is like
hiring it from McKinsey, but you want to have your own ones.
But then I think this technology had to be distributed.
Like when I came in, as a relative outsider,
I've only ever been to San Francisco once before October,
and I have a non-conventional background.
I talked to a lot of AI ethicists who were like,
well, we have to control this technology and who has access to it
until it's safe.
And I was like, when will it ever be safe for Indians or Africans?
I don't think it ever will be.
And it reminded me a lot of the old colonial mindset, you know,
and access to technology.
And I was like, if this can really activate every child's potential,
you know, and, you know, working with the XPRIZE for Learning winners
and deploying this with Imagine Worldwide,
my co-founder's charity into refugee camps,
I was like, how can I hold it back from these kids?
By the way, just for those who don't know, many years ago,
Elon Musk and Tony Robbins funded something called the Global Learning XPRIZE.
It was a $15 million prize asking teams to build software that could teach children,
in this case in Tanzania, reading, writing, and arithmetic on their own.
And Emad was a partner in one of the winning teams.
And you're now taking it much further, much faster,
which is amazing.
Yeah, we did the RCTs, the randomized control trials
with UNESCO and others.
And so in refugee camps around the world,
76% of children get to literacy and numeracy
in 13 months with just one hour a day with this basic software. What if they all had a ChatGPT? It will change the
world, right? And so we figured out how to scale that to every child in Malawi, and then across
Africa and Asia in the next few years, with the World Bank and others. And again, it was
interesting to have this discussion, because things are very insular when you have power and
control, and it's very tempting to keep power and control. And this is, you know, one of the things: how do you view
the world? Are we stronger together and we can deal with any problem, or do I need to be in the lead
and control it if it's a powerful technology? Because are people fundamentally good or bad?
So first off, that's a scary thought. I mean, I fundamentally believe that humans are good by
their basic nature and that the majority of humans are good and want to make the world a better place.
But we do have to guard against those that have sort of a bent arrow.
And that's why I look at what is the world's infrastructure run on?
It runs on Linux.
It runs on open source, MySQL.
That's the most resilient.
Windows is not resilient,
which is a very interesting analogy, I think.
So how do you make money as an open source company?
Oh, it's pretty straightforward.
Every nation wants to have their own models
because this is a question of sovereignty.
So we're helping multiple nations do that.
And then every single regulated industry
wants their own models.
So we're helping them do that too.
Working with cloud providers, system integrators,
and others, lots of announcements to come.
Private data is valuable.
Public data is less valuable
and a bit of a race to the bottom.
So I think people are looking
on the wrong side of the firewall.
And again, we'll see that going forward,
but they're not going to stop.
Microsoft, Google, NVIDIA, others aren't going to stop.
They'll keep scaling.
And now that there's actually revenue that can be generated from these models,
you're going to move from research to revenue.
Hundreds of millions, billions will be spent on training.
And again, I think the scaling paradigm is one of them,
where you scale these things up.
I think it's an incorrect one to get us to, let's say, a level of AGI.
So the major thing that I'm interested in is the human
colossus: humans coming together to solve the world's problems, augmented by technology. Yeah, I
think the collective and swarm intelligence will beat an AGI any day, and that's what I'm interested
in. I think by arming everyone with their own AIs, every single person, company, country, culture in
the world, we build a new intelligent internet that can solve any problem,
because a lot of our problems
are information coordination
and this technology enables that.
So a very different view
from most of the other CEOs
of companies in this space.
Global, collective, augmented
as opposed to artificial
general intelligence
that can replace humans,
shall we say.
Let me just make a mention.
We'll be going to questions
from those watching and listening in a few minutes. So, Emad, you know, are GPT-4 and its equivalents enough for us to achieve what we want? Do we need to go to
yet another generation?
Can we build enough intelligence without
tipping over to AGI?
Nobody knows. Nobody will
ever be able to tell you that there's less than 12
months to AGI because you don't know until it gets there.
When it gets agentic.
I used to hate
all the doomsayers
on AGI. My
standard statement was always, listen, I'm not worried
about artificial intelligence. I'm worried about human stupidity. And to a large degree, that
remains the same, right? And what we're going to see with chat GPT and all of these large language
models are people's stupidity being amplified and people's brilliance being amplified as well.
And so the question is, does it go out of bounds
on the low side to cause us any real issues? No, it's a dangerous point right now, because they're
not quite intelligent enough to align, and you have amplification of these things, where
people have powerful tools, right? I think when it does get intelligent enough, I think there's
no way that we can perfectly align it, because alignment is orthogonal to freedom. We all know people more capable than us, and the only way to make sure they do exactly as we want is to
take away their freedom. I don't think a super AGI will like that necessarily, but again, I think we
can be kind of boring. So I think that we're in this dangerous space now between those two things,
when it can't be self-correcting until it can say, you know, Peter, that's stupid, why are you going to do that? You know, don't be a dick. And, you know, these are dangerous, powerful technologies
that can be downloaded on a flash drive. You know, again, why would you bother training it yourself when
you can just take it and use it in really original ways? Again, I don't want to get into too much detail, but
it's probably not just deep learning. We've already implemented reinforcement learning, and there's all
sorts of ways that you can create much better systems.
Like the simplest one, just get two GPT-4s to check each other's answers, and it becomes better.
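A sketch of that cross-checking loop. The call_model function is a hypothetical placeholder for whatever LLM API you use, not a real endpoint; the loop structure is the point.

```python
# Sketch of the "two models check each other" idea mentioned above.

def call_model(prompt: str) -> str:
    # Placeholder: wire this to your LLM of choice (hypothetical, not a real API).
    raise NotImplementedError("wire this to your LLM of choice")

def answer_with_cross_check(question: str, max_rounds: int = 2) -> str:
    answer = call_model(f"Answer carefully:\n{question}")
    for _ in range(max_rounds):
        critique = call_model(
            f"Question: {question}\nProposed answer: {answer}\n"
            "List any factual or logical errors. Say OK if none."
        )
        if critique.strip().upper().startswith("OK"):
            break  # the checker found nothing to fix
        answer = call_model(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nWrite a corrected answer."
        )
    return answer
```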
Yes.
Right?
It's surprising that.
You know, I have to say, having you as the CEO makes me feel safer, at least for Stability AI.
I feel you have a mission to build a strong and viable company while uplifting humanity and doing good in the world, which is important.
I think that has to be at the core for all of these companies, because the technology is so fundamentally powerful.
Yeah, I think realistically, Sam has an amazing mission, Demis has an amazing mission, Dario has an amazing mission.
Everyone who is a CEO of the companies that could build this type of technology is quite mission-driven. It's just that none of us should be in control of it. I don't know how it should be,
but I think this is the time for a public debate around it.
And, you know, everyone should realize that this is taking off crazily.
And that's what democracy is about, you know.
I remember when I was in medical school years ago, restriction enzymes were first coming online. And there was a huge amount of fearmongering about designer babies and being able to clone children. And it was, you know, we're playing with godlike powers,
and we're going to lead to killer viruses. This is in the late 80s, early 90s. And there was
concern about regulating and could this even be regulated because you could sneak out the restriction enzymes in a few picoliters of fluid.
And what came together was a series of summits called the Asilomar conferences in which the heads of the industries got together and self-regulated versus the government coming down on them.
Have you seen that conversation taking place here at all?
I think it's starting to emerge now because we need to self-regulate as an industry.
We need to increase transparency of our governance, you know, and how we make decisions.
Like last year, OpenAI with DALL-E 2 forbade every Ukrainian from using it.
And if you typed in a Ukrainian term, it threatened to ban you.
Why did they make that decision? Who knows?
Is there any redress?
Nope.
You know?
And so we have to be really mindful about things like this and set industry norms.
And now I think it's time to come together.
This is a big deal.
I think we want to do it before it's forced upon us.
But at the same time, the discussion needs to be widened here because this affects every
part of life.
It's kind of insane.
We've never seen anything like this before, right?
And we have to do it quick, quick, quick.
And this is the last window that we have before, again, everything takes off.
So that is one of, I remember when we first met, I asked you, is this time different?
Because we've been hearing about this conversation forever.
And so you're pretty clear, definitively, this time it is fundamentally different. Dude, I
mean, like, any coder that uses GPT-4 is like, wow, look at that. I'm, like, multiple times better,
right? It's like, this thing can write better than me, it can draw better than me, it can paint better
than me. Like, I think people don't want to believe that things are different. I recently went and
gave some talks. I said this is a bigger economic impact than the pandemic,
because all information system flow changes.
Like things that involve atoms don't change,
but so much of our life is about information.
And this has made a meaningful difference in that.
Just play with it, try it, go in depth,
and you can't go away with any other answer, I think.
Let me sort of go down a few of these. We've talked about education,
the potential there. If you would, five years from now, what's your vision of the potential
for generative AI in global education? Every child has their own AI that looks out for them
and customizes to them. Are you an auditory learner, a visual learner? Are you dyslexic? That's all information flow, and you can fix all of that, right?
And it brings students together to work together as well because most of school
right now is a childcare system mixed with a status game.
That's why I think, Peter, we were having discussions before and an incredibly smart
young kid asked a question, you know, my school says
that it's cheating to use ChatGPT. And I'm
like, it's not a competition, education, right? We made it a competition. Education is about actualization.
And so I think these things will fundamentally change things, when everyone's got their own
amazing teacher, the Young Lady's Illustrated Primer from The Diamond Age. How exciting is that?
And a hundred percent, we can get there in five years for every kid in the world.
Well, just about every kid.
That's amazing.
That will change the world.
Fundamentally, for the cost of electricity, which is getting cheaper all the time.
Yeah.
So let's put it this way.
GPT-4, if it does run on two H100s, as NVIDIA has indicated, is 1,400 watts.
The human brain is 25 watts.
How crazy is that?
A lot of room for improvement. Yes, especially when we stop feeding it junk. All right, let's go to health next. You and I are both passionate
about that. You've done a lot in your life in the health area. And I'm clear, I mean, one of the
beautiful things is that all eight billion people on the planet are biologically identical, so
something that works for someone in India or Iraq or Mexico is the same
that works for you in San Francisco.
What's your vision there?
Yeah.
People are people just like education.
If you have open source education that recursively goes,
healthcare is the same.
We used to increase the information density and organize knowledge.
So when my son was diagnosed with autism,
they said there's no cure,
no treatment.
I had to build an AI system to analyze all the medical clinical trials and do a
first-principles analysis of what could cause it, and then we did drug repurposing to ameliorate the
symptoms. That should be available to everyone. You know, for anyone who's had a condition which is
difficult, the process of finding knowledge is so complex, you know? And so our system is ergodic.
It treats a thousand tosses of the coin the same as a thousand coins
tossed in a row, right?
Like, a percentage of the population has a cytochrome P450 mutation.
It means you metabolize things quicker.
So your codeine becomes morphine.
Those are the ones that die of fentanyl.
How do we not even know that, right?
And so we have all the tools in place now to have your own personal doctor,
you know, to have all the knowledge in medicine aligned
so we can see across and have that information set.
And I think as we do this with longevity, with cancer, with Alzheimer's,
within 10 years, we'll be able to cure all of these
because the knowledge is all there.
It's just not all in the right place.
So everyone's trying to do their own things. The knowledge on clinical trials, the information leakage,
it's ridiculous. We can capture every piece of information on that. Yeah, people don't realize
all of the failures in these trials are extraordinarily valuable information that is
not captured and not utilized at all. The other thing, just to hammer it home, because those of you who know my work, I'm focused
on longevity, age reversal, how do we make us live longer, healthier lives.
This decade is different, and we're going to be understanding why certain people age,
how to slow it, how to stop it. A lot of it's going to come out of the work that companies like Stability and others are doing.
Then we have quantum technologies coming soon, which are going to put sort of fuel on the fire here.
My guess is, what, within five years or less, it'll be malpractice to diagnose without AI in the loop.
Yeah, I think it'll be humans plus AI. It won't be AI that diagnoses. Again, copilots for everything
is going to be the way, I think. Just like self-driving cars, I think, are imminent in terms
of getting to that Level 4, Level 5. And I think, you know, in five, ten years,
you'll have to really prove yourself to drive on the road. You know,
AI is better.
You have an army of graduates that will become associates that will pass the bar, you know,
that will pass their medical exams over the coming years, and you can infinitely
replicate them.
Yes.
What people don't realize is every time an AI learns something,
it shares it with every other AI out there as you update the models.
And AIs are getting better every year, while we humans pretty much plateau after a certain point.
The other thing I love is...
Yeah, I think the thing I need to really drive home here is, when you're trying ChatGPT and GPT-4, that's just one of them.
Imagine if there was a hundred of them checking each other.
No doubt the outputs would be even better than you've seen.
And then,
ten of them learning everything about you.
So right now, we're, like, single-threaded.
Soon we'll be parallelizing these things.
And just like if I had a hundred grads that were exceptional,
you know,
and generally good, all working around you, and you
wouldn't have to, like, oversee them, your life would be better. When we were on stage
at Abundance 360, we were talking about the interesting subject of who would you hire
into your company, and what background you would require if you're hiring someone into your company,
and what is the most important skill that people will need in this world of exponentially
growing AI? And your answer was passion, all the way around.
I mean, like, this,
we've had 15-year-olds contributing to open-source codebases, and 60-year-olds,
right? Because you don't know where it comes from.
You have to be passionate and throw yourself into this because it's going so fast.
If you're not passionate, you won't be able to keep up.
And then you bring the latest breakthroughs to your company
and kind of communicate it.
And you can use the technology to communicate it even better,
which is kind of awesome.
It will automatically create the presentations and slides for you.
So I think passion is kind of the key thing,
plus a level of intelligence and capability.
But if people aren't passionate,
they're not going to be able to keep up here i think that's the key thing yeah hey everybody
this is peter a quick break from the episode i'm a firm believer that science and technology and
how entrepreneurs can change the world is the only real news out there worth consuming i don't
watch the crisis news network i call CNN or Fox and hear every devastating
piece of news on the planet. I spend my time training my neural net, the way I see the world,
by looking at the incredible breakthroughs in science and technology, how entrepreneurs are
solving the world's grand challenges, what the breakthroughs are in longevity, how exponential
technologies are transforming our world.
So twice a week, I put out a blog. One blog is looking at the future of longevity, age reversal,
biotech, increasing your health span. The other blog looks at exponential technologies, AI,
3D printing, synthetic biology, AR, VR, blockchain. These technologies are transforming what you as an entrepreneur can do.
If this is the kind of news you want to learn about and shape your neural nets with, go to diamandis.com/blog and learn more.
Now back to the episode.
Listen, I would love to open it up for some questions if you're open for that.
Yes, I am.
Amr Awadallah.
Hey, Emad, how are you?
First question, Emad, is how's your noggin doing?
It's good, man.
I had a bit of a fender bender,
so my brain was jiggled for the last few days.
I think it's back to normal now.
Okay, please make sure to go check on it
and make sure it's all in order
because we need those neurons continuing to fire
for a few more years to come.
So this is my question to you, and I'm very curious about your thoughts on this one.
We are clearly on a trajectory now where we're going to have Cambridge Analytica multiplied by a billion times.
I assume you're familiar with Cambridge Analytica and what they did, right?
Yeah.
and what they did, right?
Yeah.
So now imagine on a website like Character AI where I can go and create characters
that are digital replicas of people,
like there is a digital replica of Elon Musk.
And then if it was modeled very correctly,
then I can now start playing arguments against it,
just like AlphaGo played itself.
And I would play it against itself
for billions and billions
and billions of iterations until I find the perfect way to get them to do whatever I want.
So it's AlphaSubjugate. I can subjugate any citizen to my will. And you said earlier,
it's very tempting to keep power and control. How are we going to react? How are we going to
solve that? That's a very big problem, in my opinion. I'm curious about your thoughts on it. Like,
how do we stop that from happening?
That's going to happen.
No, it's inevitable, right?
And again, it's like it's here pretty much now.
Creative ways of using this for mass persuasion are there.
Like, you'll literally get robocallers that call up with the voice of, like, your
grandmother, saying it's an emergency, now.
And you can't tell the difference.
Like, this is going massive.
And voice is very, very convincing.
So I think the only way you could... I think the only way you can do this is you have to build your own AI to protect you.
And you need to own those AIs, and they need to look out for you, honestly. And then trusted
authentication schemes. The Twitter tick on steroids.
We have to move fast, fast, fast,
because you said armies of replicants are coming here,
as well as being able to gamify that particular thing.
We shouldn't talk too much about it though, because yeah.
So countermeasures. So you're saying it's countermeasures, right?
It's just like antivirus: virus, antivirus.
So we need a countermeasure, at speed, that catches this
before it becomes much worse. So is that something that any of us are working on? Should we
unify efforts to work on that? Yes, we've got some things in that space.
Yeah, AI antivirus, there we go. So to close out that, you know, I'm fond of
saying the world's biggest problems are the world's biggest business opportunities. And believe me, as that issue comes online,
there will be multiple entrepreneurs looking to slay it.
So, Emad, with Stability AI, one of the initiatives is OpenBioML. So I was wondering if you could
speak about, you know, best case, where's that going? Where are we going to be in five years,
if sort of your best-case vision turns out for that? What will that look like?
Yeah, so OpenBioML is our computational biology one. So we're one of the main backers of OpenFold,
to do kind of AlphaFold-type things with a twist; DNA-Diffusion, to kind of see how DNA folds; and LM work to see chemical reactions
based on language models.
There's a whole bunch of things that I just think the opportunity is massive because, again, the data quality is poor, but now we can create synthetic data
and we can analyze data better than anything.
We can figure out how chemicals interact.
The work of DeepMind on AlphaFold was amazing.
And now we can do things in silico.
So we can test chemical reactions, drug interactions, and others
that when
combined with a knowledge base of all the medical knowledge in the world and research, and the ability to
extract patterns from it that are superhuman, I don't think there's anything we can't cure,
honestly. But we've got to build this as a public common good, in my opinion, and give these tools out
to the experts to use and unify. We have to do this very intentionally.
So OpenBioML is just at the start now.
And we also have MedARC, which is the healthcare equivalent of that,
doing, like, synthetic radiology with Stanford AIMI
to create more data sets of rare lung diseases and things like that.
And it's working great.
So I think community, commons is the way.
And by the way, all of this is how we create an abundance of health around the planet,
right?
The best diagnosticians on the planet are going to be AIs and the best surgeons will
be robots driven by AIs.
And that then becomes available everywhere.
And I just want to hit on a point that I think is important for folks to hear.
A world in which children have access to the best education and the best health
care is a world that is more peaceful, which I think is one of our key objectives here. How do
we uplift every man, woman, and child on the planet? I wanted to ask a personal question here,
and it's something I haven't had a chance to ask Peter yet about AI. And Peter, I know that you and
seemingly Emad generally have an abundance
and optimistic mindset. I understand your views are changing. Is the potential threat of catastrophic
job loss concerning to you both? And if so, how do you potentially suggest people address finding
meaning in their lives without consistent work or careers? I'd love
to hear your thoughts, Peter, as I know you're typically really focused on this from an abundance
mindset, and also yours, Emad. Well, Peter, you want to start? So, yeah. So, listen, I think
we're lucky. Almost all of us here taking the time to listen to this conversation are extraordinarily
lucky. We're doing a job. I don't know what everybody's doing, but I'm pretty much guessing
we're doing jobs that we love, that we dream about, that we're excited about. The majority
of humans on the planet, unfortunately, are not doing what they love. They're doing what they need
to put food on the table, get insurance for their family. So one of the things I think about is how do you
use these extraordinarily powerful technologies to self-educate and to self-empower to go and do the
things that you love, to augment yourself, to have a co-pilot if you want to be a physician, a teacher,
a writer, whatever it might be. And that trains you on the job as you're doing the job. So it's going to allow people to dream a lot more.
You know, I think the only fear I have about AI today is what I would call its toddler into adolescence phase.
I think, you know, when AI is in its earliest AGI state, if we get there, before it's developed fundamentally sort of an ethical, emotional capability. I think the more intelligent in general that systems are, the more empathic and the more good-natured they will be. I don't think... we live in a universe of infinite resources.
I don't buy any of the dystopian Hollywood movies where they have to come and grab all of the whatever
off the planet Earth.
That's bullshit.
There's so much, you know, an abundance of everything.
I think we just have to deal with the early stages
where humans are using it in a dystopian fashion
or the toddler doesn't know its own strength.
Yeah, I think that people will create new jobs, and they'll do it very fast. I think
the pace will be picked up by a mixture of open and closed technologies as well. You see the
innovation around stable diffusion, new language models coming out and other things as well.
But I think I'm really excited about emerging markets in particular. I think they'll leap to
intelligence augmentation, just like they leapt over the PC to the mobile, you know,
and that will create ridiculous abundance and value there because they embrace
technology because they need to and they want to.
And there's amazingly people there that just haven't had the opportunity.
We'll now have the opportunity with this. So I think, you know,
I think it's a bigger economic disruption than the pandemic.
I don't know which way, but I believe it will be positive, to be honest. And you'll see points
added to the GDP of India and other countries once they get going. This brought up a lot of
questions, really great conversation. The one that I think is most pertinent to me is, after hearing Emad talk, and I think Peter made the comment that he's
thankful that you are one of the leaders of this and leading your company, because you clearly have
good intentions. You sound like a giver, not a taker, in the Adam Grant Give and Take sense. How
do we control against the other players that have their fingers on the proverbial button,
right? Because, like, if you have people with really bad motivations, and takers versus givers,
imagine Trump was smart enough to be a CEO of one of these companies. I mean, that is terrifying.
I prefer not to.
How do we control against that? And part of my answer to that, I think... I'm an investor, an early-stage investor, and I think it is incumbent upon investors to be picking good actors versus bad actors in who they back. So anyway, this is a really concerning note, because I'm glad that you're one of the good guys, but I'm sure there are also bad guys.
I think actually the most dangerous are the good guys, in some ways.
Like most of the evil in the world is done by people who think they're doing
good.
Not me,
but you know,
you never know.
I think that the key thing here is transparency.
If you are using powerful technology that has the ability to impact the
freedom of others,
you need to be transparent and you need to have proper governance and other things.
Like, you know, we build out in the open and I think that there needs to be some mechanisms
for that.
Again, OpenAI listed a whole bunch of them in a blog post without saying when they'd do it. We need to implement that now.
So that's really important.
You know, it's not just being open or closed.
It's really being transparent and having an understanding of
how the technology is being developed and utilized. And then having investors and governments involved. Listen, I never depend on governments for much of anything.
But, you know, it's not bad to have a set of requirements that society holds your feet to the fire on. And this is the last time we can do that, in my opinion, this next period of six to twelve months.
That is the single most important thing I've heard said, Emad, putting that time frame on it, because of, you know, the clusters of H100s and the new capability that we're adding.
and what most people don't realize is that we're in this situation today with
these large language models and deep learning,
because we've seen massive growth in computation over just the last few years, and a massive amount of labeled data out there over the last few years. The ideas for deep learning have been around for 30-plus years, right? But it's just now that it's capable, and it's adding, you know, fuel to the fire. So, yeah, everybody just listen up: the time frame is now.
But just a quick note: how do you make that a forcing function, other than just a letter? I mean, the letter, that's great, but is that really leverage? What's going to cause that, and what happens if they choose not to, and we don't pause for the next six months?
I don't think there'll be a pause. I think they'll continue, right?
Like there's nothing enforceable within that period,
but I think that they will feel more and more pressure to come and build
industry standards.
And I think there will be policy responses literally like we've just seen in
Italy.
ChatGPT is now banned in Italy. And on the flip side, you don't have ChatGPT in Saudi Arabia. Why? Because OpenAI decided not to. So we need to have some standards around this sooner rather than later, and it'll be a mixture of public pressure, government pressure, investor pressure, and more, I think. It's not easy, man. It's not easy.
And then, Emad, one of the things I truly hope is that your voice is heard. One of the reasons I wanted to do this Twitter Spaces conversation with you is I really want to hear your voice in this world of AI.
I know how brilliant you are.
I know how giving your heart is.
I want people to hear from you. You know, we keep on hearing from Sam and Elon and others like Geoffrey Hinton and folks at DeepMind.
But I'm excited to hear you lead in this industry.
I think, again, everyone needs to speak up now.
It's the time.
Hello. Thank you so much for having me. Peter, Emad, I just had a question around community.
And I was wondering, you know, with all this kind of stuff happening with automation and things are going to change,
do you think we can start leaning more into humans finding purpose in in-person events?
I just want to hear your thoughts on that.
Like, what role do in-person events have in this, you know, new future?
I love AI.
No, I think we're a pro-social species, right?
And AI can help us connect better with people around us.
So, you know, I mean, we're coming out of this COVID weirdness and now getting used to meeting people in person.
I think it's super important.
You know, I think there was an interesting thing as well, like the FS study in Argentina, where rather than giving direct cash handouts, they actually did universal basic jobs, pick your job for your community, and people loved it, because they had purpose, they had meaning, you know. And it brought loads of women into the workforce, and they were like, don't send us back to direct cash handouts. People like being with people, fundamentally. And so we've got to use this technology to increase engagement with other humans, rather than that WALL-E-type future of the fat guy with the VR headset, you know, just being wheeled around.
I think people like stories and stories bring us together and this allows us to tell better stories.
Yeah, up until now, connecting and building community has typically been almost a random process: if you happen to be in the right place, if you happen to read the right thing or sign into the right community. Imagine systems that are able to proactively gather people who are great matches and didn't know they were. So I think we're going to see a lot of things
accelerate in the directions that we choose. So what do we want?
And be careful what you ask for.
Again, we're moving from the book and the Gutenberg phase,
which is massively lossy,
to being able to capture some of the complexities of humanity.
Like, you know, when you watch Moana,
why does Maui pick up the sun with a fishing hook from the ocean?
It's because it's all they ever need.
And we can start to capture these stories of the world so we can better understand each other and engage,
or vice versa.
This is up to us now.
I have two very quick ones; pick your poison.
You can answer one or both of them.
But one is on the pause in development.
So what do you think the implications are
for the economic competition between countries?
And like, should we even be looking at it this way?
And then the other question is that the conversation is really sharply focused on AI right now.
But I think the real power is going to be
in the converging of different technologies.
So based on what you're seeing,
what do you guys think is going to be the next big tech
to reach mass adoption the way that we've seen with AI
in recent months that's really going to accelerate
this sort of convergence with AI and create really interesting things?
Cool. So on the first question, I'd say there are only two countries in the world that are actually basically building this AI, and that's the UK and the United States. Like, you're not seeing GPT-4-level models in any other countries. And so the pause is basically about those two countries.
On the second one, I'm not sure.
I'm going to put it up to Peter because Peter's got a much wider view than me.
So what's interesting is I think about something called user interface moments.
When Marc Andreessen created the web browser, it became a user interface on top of the internet, and it made it accessible and usable, right? We can see these throughout time; even the App Store was a user interface moment.
The ultimate user interface moment is, in fact, AI. So if you don't know how to 3D print, and you don't know how to use any graphics programs,
but you know how to describe what your intention and desire is, you could speak to your own version
of Jarvis. And I think all of us are going to have our own version of Jarvis, our own personal AI
that we've hyper-customized that we give permission to know everything in our life
because it makes our life better. And you can say, listen,
I'd like to create a device that's got a handle on it that looks like this. You describe it physically.
And then in your AR glasses or VR glasses, you're seeing it come together, being shaped as you
describe it, right? That technology is here today. And then you say, that's it. Print. And then it
says, great. And then you say, add it to the store and it's available for anybody for free or for a penny.
So AI is going to be the ultimate user interface for all exponential technologies from computation, sensors, networks, robotics, 3D printing, synthetic biology, AR, VR, blockchain.
And that's when it starts to become interesting because it used to be that you needed very specialized knowledge.
Now, the knowledge you need is what's in your mind, what's your intention, what's your desire.
And AI becomes your partner in implementation, going from mind to materialization, if you will.
I wanted to get Emad's and Peter's thoughts on something called the Factorio paradox. It's in the game Factorio, where once you learn to automate everything and build factories that are smarter than your factories, you would think that the work for the human decreases, but actually your work increases, because now you have the whole map to expand to. And I was wondering
if there is a possibility that this might also happen
with the adoption of AI tools,
that human labor might actually become
even more expensive and more rare.
As we kind of see now, I mean, we've been using these tools; Codex has been out for a year. Half the code, as you said, is now generated by AI. Yet we're still finding it hard to hire people.
So what do you think?
What do you guys think about that?
I think that's great.
And I've spent thousands of hours on Factorio, so I'm very familiar with it.
And I think it's the thing: human discernment is still very important, and human guidance is still very important. It's just that as we build new technologies, they lift us up, right? And they adjust where our point in that cycle is.
So, you know, I think it will replace them. Like I said, there'll be no more coders, no more coders as we see them right now. When I started coding, God, was it 20 years ago? 22 years ago, gosh, when I was 17, we didn't have GitHub or that stuff. We started using Subversion, which had just come out. You kids have it easy these days, you know, those of you who are listening, with all this kind of thing.
So we get better and better abstractions of knowledge, and the role of the human changes. And it becomes more valuable, because you can do so much more, and it enhances our capabilities. I think, like I said, that's the exciting thing, rather than, oh my God, no one's got anything to do, they're just there getting fat and watching their own customized movies, right?
You know, there's another part of the conversation
we haven't discussed,
which is the coming singularity.
Whether you believe it or not,
I feel we are moving very rapidly in that direction.
Ray Kurzweil described it, actually Vernor Vinge described it first, and Ray projects it to be some 20, 25 years out: the point at which the speed of innovation is moving so rapidly you can't project what's coming next.
And there's another book that I love called The Zero Marginal Cost Society by Jeremy Rifkin, which talks about what happens when we have AGI and we've got nanotechnology and we've got abundant energy, right? Everything becomes possible pretty much all at once, everywhere. And we were making that joke on stage, Emad, you know, where I have effectively a replicator, a nanobot. Materials
are effectively abundant. Energy is abundant. Information is open source.
So it starts to become an interesting world.
And you know what I find even more interesting?
It's going to happen during our lifetimes.
We can get into a long conversation about whether we're living in a simulation or not,
but it's the next 20 years.
It's the next five years when everything we're talking about today is playing out definitively. But in the next 20 years, as we're adding decades onto our healthy lives, as brain-computer interfaces start coming online... we thought things moved quickly this year. They're going to move. We're going to see, I think, the estimate that Ray Kurzweil talked about, and that we've talked about at Singularity University for a while: a century's worth of progress in the next 10 years.
So what does that look like?
And the biggest concern is, I think, that governments don't do well in this kind of hypergrowth, in this kind of disruptive change. Young kids will do reasonably well, but governments, you know, governments and religions are structured to keep things the way they are. Any reaction to that, Emad?
Yeah, I mean, I think they perpetuate the status quo.
I think in decision-making under uncertainty, you minimize your maximum regret. That's why I set up Stability, to offer stability in this time, so we can help out governments and companies and others, and standardize the open-source models and other things that everyone builds on.
It's the foundation for the future.
I'm glad, yeah.
Yeah, and then I think that, you know, we're setting up subsidiaries in every single country, and they'll eventually be owned by the people of that country.
We've got interesting things coming that we'll announce.
The interesting thing you said there, like, why is it happening like this?
It's not Moore's law, right? This is like Reed's law, Metcalfe's law.
It's like a network effect that's happening right now.
And that's why you're getting this acceleration, because network effects are really exponential,
where everyone's trying out this thing and exploring this new type of technology and
sharing information back and forth quicker than anything you've seen.
And all these technologies happen to be mature at the same time.
So I can't see that far out.
Everyone says, why do we say 20 years?
We just pull it out of our butts.
We just put a thumb in the air, right?
All we know is that things are never the same again.
You will never be able to set an essay for homework again, at any school in the world.
And there's more and more of that that's coming.
But these things also take time to fit into our existing system.
So as you noted, you know, programming,
Copilot's been out for a year.
It's got a lot better, but it takes time to integrate into workflows.
It's not like it goes and takes over everything at once.
You know, how long does it take to get into things?
Like, 1.5 million people still use AOL, you know. Lotus Notes is still used around the world; I think it makes hundreds of millions a year. But this may change quicker than anything we've ever seen before. It still takes time, and it's very exciting.
Yeah, it has changed quicker than anything we've seen before, and there will be something that moves 10 times faster than it did.
Yeah, it is the most extraordinary and exciting time ever to be alive, other than perhaps tomorrow.
Emad, I just want to say thank you. I know your heart, and I know your mission, and I'm grateful for what Stability AI is doing. And thanks for joining me today.
And thank you to everybody listening and for the questions.
It's my pleasure.
You know, I'm thankful to the community and I think it's going to take all of us to guide this properly, you know, so just embrace it, dig in.
It's the most exciting thing ever.
And if you haven't followed Emad on Twitter yet, please do.
He does tweet on a regular basis.
And all right, pal, I'll talk to you in the days ahead.
Thank you, everyone.