Moonshots with Peter Diamandis - Outcomes of an AI Future W/ Emad Mostaque | EP #55
Episode Date: July 20, 2023. In this episode, Peter and Emad discuss the transformative impact of AI on various sectors, including journalism and Hollywood. They delve into the challenges and opportunities presented by AI, such as the potential for AI to enhance truth in journalism, the implications of AI-assisted professionals, and the concerns around a post-truth world due to deepfakes and AI-generated content. 09:53 | Seeking Fundamental Truths in Politics 33:16 | Reimagining Business Models 51:14 | AI Outperforms Doctors in Diagnosis Emad Mostaque is the CEO and Co-Founder of Stability AI, a company funding the development of open-source music- and image-generating systems such as Dance Diffusion and Stable Diffusion. Check out Stability AI’s latest release: SDXL 0.9 _____________ I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Experience the future of sleep with Eight Sleep. Visit https://www.eightsleep.com/moonshots/ to save $150 on the Pod Cover. _____________ I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog _____________ Connect With Peter: Twitter Instagram Youtube Moonshots and Mindsets Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Coding is changing dramatically. There will be no coder that doesn't use AI as part of their workflow.
5G and Starlink.
The dry Kindle for this fire has been set.
Is this more important than 5G?
By orders of magnitude.
Orders of magnitude.
There is no kind of pure, independent, completely rational person because we're not robots.
So medicine is changing dramatically.
Is your doctor AI enhanced?
Lower insurance premium, lower copay.
That is something across every industry. It's going to be happening.
10 years out, what is a lawyer?
What is an accountant?
What is an engineer?
They are all AI assisted.
The entire knowledge sector is transformed.
And it enables you to do whatever you want to do.
Learning should be a place of positive growth and joy.
Where's the difference there?
Where's the difference?
And I think the key thing here is empathy.
We're in this beautiful home in the Hollywood Hills.
And a lot's happened in the last six months
since we were recording our last podcast
and you were on the stage at Abundance 360.
And I think this is epicenter
for a lot of people's concerns right now.
A transformation potentially in Hollywood. When you were speaking about that, you said it's going to change dramatically. And would you say we haven't even seen a small portion of the change yet? I think we are at the foot of the mountain, as it were. I kind of compare it to being at the iPhone 2G to 3G point. We've just got copy and paste. We haven't seen what this technology is really capable of yet, and it hasn't got everywhere.
It's like everyone's talking about it, but not that many people are using it.
There was a recent study done that showed only 17% of people in the US had used ChatGPT, despite the fact it can do anyone's homework. I mean, it seems like an incredibly small percentage, I guess because in the community I hang out with, everybody's using it. Well, that's the thing, we live in our monocultures. But then a third of the world still doesn't have internet, right? And you'd think the first internet they get will probably be AI enhanced. Yeah. And then you think about this technology proliferating, it's when it becomes enterprise ready. Enterprise adoption, company adoption, always takes a while.
But by next year that happens.
And I think every company everywhere that has anything to do with knowledge work
will implement this at scale.
And that's a crazy thought.
And when it's embedded in the things you use every day
and you don't know it's part of what you're using.
I think that's the thing.
Technology, you don't need to know that it's 5G or this.
The internet works faster. You can watch movies quicker.
You know, in this case, you write something in Google docs.
Now they just rolled it out.
You can say, make this snappier and it will do it automatically.
Technology doesn't need to be there as technology in the front.
It's all about use cases and the use cases are now maturing.
And again, I think next year is the real takeoff. Now everyone's feeling it, if they have anything to do with it, like something is coming. In this conversation, I want to really think through how this is all affecting every industry. And let's start with two industries: one is journalism and the other is Hollywood, which we're sitting in the midst of. One of the concerns I have, and I know you share it, a lot of people do, especially with elections coming up in, you know, 18 months, in 2024, is: what is truth? And are we going to enter a post-truth world? Can you talk about your thoughts on journalism and how AI is impacting it? So I think you've seen shifts in journalism over the years,
but we're all familiar with kind of some of the clickbait journalism that we see now.
AI can obviously do clickbait better than humans.
And that's one kind of extreme.
This whole fake news, deep fake kind of stuff, that's a real concern.
And that's why we have authenticity mechanisms now.
We embed watermarking, you know,
we partner with GPTZero and other things to identify AI.
But on the other side of it,
there's a real challenge coming
because AI can also help with truth.
It can help you do proper analysis
and expand out the reasoning for things.
It can identify biases within.
So journalism as it stands is caught between two things. To get clicks, to get ads, they went a bit more clickbait and they focus on sensationalist headlines, even if it's with unnamed sources and things. On the other hand, someone's going to build an AI-enhanced system so that for any article you read, you can find all the background material, and that suddenly becomes a source of truth. So it's kind of a pincer movement, and journalists and news sources will have to figure out: where are you in this? How do you compete to provide value? Yeah, are you BuzzFeed on one end, which is, you know, mostly clickbait all the time, or are you trying to be the New York Times and deliver well-researched journalism? But the entity that competes with the New York Times, that will come,
and who knows, it might be the New York Times itself,
can use AI to enhance great journalism,
write in any voice, do all these things,
and give fully referenced facts that you can explore.
The other side is that we're going to trust this technology more and more,
just like we trust Google Maps, just like we trust other things,
such that it's like, why have I got a human doctor without an AI? Why have I got a journalist who isn't using AI to check everything, and their own implicit biases? And I think that part is actually quite misunderstood as something that's coming, because again, humans plus AI outcompete humans without AI. I believe in that and I see that. I think, you know, it's interesting. In my life, and those I know, AI hasn't replaced the things that I've done. It hasn't actually even saved me time per se, because I'm still spending the same amount of time. It's allowed me to do a better job at what I want to do, which is the end product. Yeah, well, I mean, that's because you do a little bit of everything, so you'll always feel like that. True, I fill every moment of the time creating something for one of my companies. Well, I mean, there was an OpenAI report that was done where they said that between 14 and 50 percent of tasks will be augmented by AI, will be changed by AI. Because again, I think a lot of the other focus is on these automated systems, the Terminators, you know, the robots. Whereas realistically, the way this AI will come in is to help us with
individual tasks, rewriting something, generating an image, making a song, adjusting your speech to
sound more confident. Yeah. You know, and we'll get to the dystopian conversation,
because I'd like to hear what you think is real versus hype.
I think the audience needs to understand what should they truly be concerned about and what shouldn't they.
I mean, that is, you know, being able to trace back and have a truth mechanism.
We can talk about what, you know, Elon's looking to build as well on the truth side.
But it's fascinating when the truth becomes blurry.
Yeah, and there's not always an objective truth
because it depends upon your individual context, right?
Yeah.
And we didn't have the systems to be able to be comprehensive,
authoritative, or up-to-date enough to do that until today.
Well, we can actually go to the root source of the data
and see, is it valid?
Maybe it's a blockchain enabled validation mechanism.
Maybe it's got that authority, that authentication.
Maybe it's, you mentioned Elon,
community notes on Twitter that AI enhanced
that can pull from various things and show the provenance.
So you've got provenance.
Again, you've got authority.
You have comprehensiveness.
You have up-to-dateness.
The future of Wikipedia is not what Wikipedia looks like today.
But that future becomes something that can be integrated into other things.
So what you'll have is, for any piece of information,
you'll be able to say, this is the bias from which it was said,
these are the compositional sources and more.
So for example, there's a great app that I use
called Perplexity AI.
So when you go to GPT-4 or Bing,
you write stuff, it doesn't give you all the sources.
Perplexity actually brings in all the sources
at a surface level,
but it references why it said certain things with GPT-4.
That's just going to get more and more advanced, so you can dig into as much depth as you want
and ask it to rephrase things as, what if that article there wasn't true that fed this?
Or what about this perspective if I want it to be a bit more libertarian?
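For readers curious what that kind of source-grounded answering looks like in practice, here is a minimal sketch: retrieved snippets are passed to a chat model, which is instructed to cite them inline so every claim can be traced back. This is not Perplexity's actual pipeline; the client, model name, and sources are assumptions for illustration.

```python
# A minimal sketch of Perplexity-style answering: pass retrieved source snippets to a
# chat model and ask it to cite them inline, so every claim can be traced back.
# Assumes the openai Python client (v1.x) and an OPENAI_API_KEY in the environment;
# the snippets and model name are placeholders, not anything from the episode.
from openai import OpenAI

client = OpenAI()

sources = [
    {"id": 1, "url": "https://example.org/study-a", "text": "Summary of study A..."},
    {"id": 2, "url": "https://example.org/report-b", "text": "Summary of report B..."},
]

def answer_with_citations(question: str) -> str:
    context = "\n".join(f"[{s['id']}] ({s['url']}) {s['text']}" for s in sources)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only the numbered sources provided. "
                        "Cite them inline like [1], and say so if the sources are insufficient."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_citations("What do these sources say about masking?"))
```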
Do you think it's possible to actually get to a fundamental truth in a lot of these areas? I think it depends on the area, right? Some areas there are fundamental truths: this happened or it didn't happen, even though you see deniers of various things. You know, a lot of stuff is probabilistic when you're thinking about the future. But even something like climate, you see a lot of deniers of the real problem that we have, with it being very difficult to persuade them because it becomes part of their ideology, almost. But with this technology you can say, look, literally here is the comprehensiveness. So, like, Jeremy Howard and Trisha Greenhalgh did an analysis of well over 100 mask papers and did a meta-analysis on their effectiveness for COVID, and that helped change the global discussion on masking, because they actually bothered to do a comprehensive analysis. What was the result? Well, the result was that masks work for respiratory diseases. So many people just refuse to believe that masks have any value, but let's not go down that road. You know, one of the things I found interesting was the idea of a GPT model being able to translate your points of view for someone else, to make them receive them better. Like, if you're hardcore to the right and you want to convince someone about your issue, having ChatGPT or one of Stability's products generate a rewritten version of that language so the person can hear it better. I find that an interesting and powerful tool. Yeah, I think this is the thing, it's all about your individual context and what resonates with you, because information exists within a context. Yeah. So if it's going to change the state within you, you need to understand your point of view. So if we think of these as really talented youngsters, these AIs that go a bit funny,
what would you want? You would want someone to sit down and say, well, I'm listening to your
point of view in your context and my point of view in my context, and let's find some common
ground and then we can work from there. Much of politics isn't really about facts, it's about
persuasion. Because facts, when you have a diametrically divergent context, are very difficult to do.
So, you said being able to rewrite something from one context to another is important,
but then you have to understand the context.
And that's what these models do really, really well.
We can take a piece and we can say rewrite it as a rap by Eminem,
or in the style of Ulysses by James Joyce.
And it will do that because it understands the essence of that.
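As a concrete illustration of the rewriting Emad describes, here is a minimal sketch that asks a chat model to rewrite a passage in a given voice or from a given perspective while keeping the underlying facts. The OpenAI-style client and model name are assumptions for illustration, not anything specific the speakers use.

```python
# A minimal sketch of perspective/style rewriting with a chat model: keep the facts,
# change the framing. Assumes the openai Python client (v1.x) and an OPENAI_API_KEY;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def rewrite(text: str, instruction: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rewrite the user's text as instructed. Preserve every factual claim; "
                        "change only tone, framing, and style."},
            {"role": "user", "content": f"Instruction: {instruction}\n\nText:\n{text}"},
        ],
    )
    return response.choices[0].message.content

article = "City officials approved the new transit budget on Tuesday."
print(rewrite(article, "Rewrite this as a rap verse."))
print(rewrite(article, "Rewrite this so it resonates with a fiscally conservative reader."))
```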
You know, I think people don't realize,
and I want to just hit this point again,
we've talked about it somewhat,
our minds are neural nets,
our brains are neural nets,
100 trillion neurons,
and everything that you bring into your mind,
this conversation that we're having,
what you're watching on the TV news,
newspaper, what's on your
walls, the people you hang out with are all constantly shaping the way you see the world
and shaping your mindset. It's one of the things I think about in the future of news media is an
individual actively being able to choose what mindset they want to work on. Like I'd like to
have a more optimistic mindset, like to have a more moonshot
mindset and more an abundance mindset.
And then being able to have that information fed to you in a factual
fashion that allows you to, instead of what the crisis news network delivers, which is constantly negative news. Yeah, it's constant dopamine effects on your fight-or-flight response. Yeah. You can say, make it from this point of view. You don't change the facts, but even the way things are worded. Or balancing, right? I mean, I don't need to see every murder. Tell me what the companies that got funded, tell me the breakthroughs that occurred, you know, the science that's occurred today. Yeah, but again, everything, it depends on how you portray it, right? You know, there are murders. Take a murder, for example. Yeah. There are the facts. There is the, oh my god, there'll be a million more murders. There is the case that it's a very sad thing. There's the case that, you know, the police are working super hard to solve this and we need to reach out to the families and come together as a community. These are all different aspects of the same terrible action. Yes. Which have different levels of positivity, negativity, clickbaitiness versus community. Right. Facebook did a study many years ago whereby they had a hypothesis: if you see sadder things on your timeline, you post sadder things. This is why independent review boards are very important in ethics as well. And so they did it, and guess what, 600,000 users were enrolled in that study without their knowing. If you see sadder things on your timeline, you post sadder things, was the result. So like I said, there are some real ethical considerations. But we know this. We know that if we're always bombarded by crisis, we will be in a crisis mentality.
We know if we surround ourselves with positive people and positive messages, we will have
a positive mentality.
And it's very insidious and not insidious.
It's kind of almost passive the way we absorb it.
I love one of the facts. I'm writing one of my next books on longevity practices, and in a study of on the order of 20,000 individuals, those who had an optimistic mindset lived an average of 17% longer.
I mean, just your mindset shift, 17%.
This was true both in men and in women, slightly more in women. And so how you think impacts everything, and how you think is to a large degree going to be shaped by the media, and AI is going to shape that. So it's a powerful lever that we all need to be paying attention to. I think it is, but then, you know, you have to consider, what is the plasticity, for example, of our children as they grow up? Yeah. Like, we're going to have nanny AIs. Yep. What's that nanny going to teach? Is that nanny going to teach fight, flight, happiness, this, that? What about in places like China, what are they going to teach? There is a huge amount of neuroplasticity that will be influenced by the decisions we make today.
Yeah. I mean, listen, you have young kids, I have young kids as well. And I think about the fact that school today is not preparing our kids anywhere near for the future, right? I mean,
I don't think middle school and high school traditionally is preparing them for,
my kids are 12 right now. How do you feel about that? No, I mean, I think school is not fit for purpose. It's a petri dish social status game and, you know, like childcare. Because again, let's think about school. What do you talk about at school? You talk competitive tests, because we can't capture the context of the kids, right? We can't
adapt to if they're visual learners, auditory learners, dyslexic
or otherwise. And it narrows it down and you're told you cannot be creative. You're told you
can't have agency.
Colour inside the lines. Learn these facts.
Literally you colour inside the lines. You learn these facts. Every child will have an
AI in the West. Hopefully soon the whole world will have an AI with them as they grow up. And again, is that AI positive and constructive, or is the AI a tiger AI that's kind of aggressive?
Get your work done!
Get your work done!
Strive harder!
Is it Peloton AI for education?
Maybe that's the pivot for Peloton, right?
There's a whole range of things, but our kids are so sensitive as they grow. And again, in a school environment, they're told they have to be competitive, and there's only a few people that are worthy who are at the top. And that's why you have cliques, sub-cliques and others. And
that's reinforced by our social media now as well, because you need something to fill the meaning.
I think we have to be much more intentional and think, what information do we want going to our
children? Like many people listening to this podcast will have banned social media from our kids.
How do you feel about that?
I think that is probably a sensible thing, because it's a slow, dumb AI that optimizes for adverse things.
And again, it's not the fault of the social media companies.
It's just how they are.
You know, it's the tiger and the scorpion.
Yeah.
I mean, I have not allowed my kids to have a mobile phone, and I've told them they can have one when they can afford it, when they go to college. But, you know, it's going to be somewhere between now and then.
But I agree.
I think social media shouldn't be part of the repertoire.
But again, what is social media?
It's kids looking for status and trying to, you know, influence each other.
It was meant to bring our communities together stronger.
Yeah.
Maybe perhaps early on. You know, probably where we see the strongest communities is actually in video games and guilds and kind of things like that.
A lot of this is again, you've got X number of people posting positive things
and you're like, why is my life not positive like that?
Yeah.
But social media does have its advantages.
And the question is, can you tease out the positive versus the negative when
you can finally customize it for each individual or are you going to reinforce silos
to the nth degree? I'll give you a perfect example. I was just meeting with a dear friend of mine, Keith Ferrazzi, who's absolutely brilliant, and he had met last week with the King of Bhutan, yeah, which is known for its happiness. And they were having a conversation about how they measure their economy, gross national happiness, in that regard.
And when social media entered the country,
it began to plummet.
Teen depression and suicides began to climb.
It is a very measurable real thing.
And that's not the subject of this podcast, but AI can do what for that area? Well, I mean, again, we have to think about it in terms of mental infrastructure. I like that. We don't have enough. Like Clayton Christensen said, infrastructure is the most efficient means
by which a society stores and distributes value. Claude Shannon, the father of computer science,
said, information is valuable inasmuch that it changes the state. We do not think at all about
our mental infrastructure and what's supporting it. If we're lucky or if we try hard, we can
build a group of supportive people around us. And where do we go when we have issues? We go to that
group. Yet so many people feel alone.
So many people feel like, again, this rise in suicide,
or they feel not good enough, because it serves the slow, dumb AI of our existing systems.
So I think that, actually, we need to take some time out to think.
What information do I want going to my kids?
We have concepts like deliberative democracy,
whereby you get a group of diverse people from different backgrounds, you give them the facts, and they go and make a decision,
just like you have jury trials.
You know, one of the most important things I think there is: A, it's getting an understanding of the context of each person, which I think AI can answer. B, it's just actually, literally, having time to think.
When was the last time you thought
about your information diet and what you're feeding yourself and your kids?
I think about it a lot. I think because that's what I teach. So I'm very clear. I do not watch
the TV news. I don't even watch the newspapers. I have very filtered information that comes to me,
which could be argued to be an echo chamber. But, you know, I'm focused on: these are the scientific breakthroughs, this is what's going on in longevity, this is what's going on in exponential tech and solving problems. So I'm as critical about what I take into my mind from an information source as I am about what I eat, because you are what you eat, and you eat information and then you absorb it, right? But then, you know, as you said, the echo chamber thing, I believe we should also deliberately
show counter viewpoints as we're raising our kids and get them to argue the opposite.
I think debate is one of the most beautiful forms, especially when you flip sides.
Exactly. Because organisms also grow through hysteresis.
Yes.
You know, when you're put under pressure, when you're forced to do something out of the normal.
Otherwise, as you said, you'll become increasingly siloed.
But there are very few people, again, who think deliberately about this.
And it's something, again, I think you and I would probably urge all the listeners to think about.
Are you challenging your priors?
Are you giving the right information diet for yourself, for your kids? Yes. And then thinking about this technology: as you use GPT-4 or Claude or StableLM or any of these things, what if you took that article and viewed it from a different perspective? Or what if you tuned it to only be positive, the news? There's another part too, which is we all
have these cognitive biases, right? These cognitive biases were wired into our brain over the last, you know, hundreds of thousands of years as an energy-efficiency mechanism, because we can't process all the information coming in. So we're biased by: does the person look like me, speak like me? Is this recent information versus old information? Paying 10 times more attention to negative information than positive information. I can't wait to have an AI where I can flip the switch and say, turn on bias notifications. And it says, you're looking at this in a biased fashion, Peter. Here's another way to look at it.
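Peter's "bias notification" switch could be prototyped today as a simple prompt that asks a model to flag likely cognitive biases in whatever you are reading. A rough sketch follows, using the same assumed OpenAI-style client; the bias list, model name, and output format are illustrative assumptions only.

```python
# A rough sketch of a "bias notification" pass: ask a chat model to flag likely
# cognitive biases (negativity, recency, in-group, confirmation) in a passage and
# suggest a reframing. Assumes the openai Python client (v1.x); model name is a placeholder.
import json
from openai import OpenAI

client = OpenAI()

def bias_check(passage: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You review text for likely cognitive biases (negativity, recency, "
                        "in-group, confirmation). Reply with a JSON object containing a "
                        "'biases' list (each with 'name' and 'evidence') and a 'reframing' string."},
            {"role": "user", "content": passage},
        ],
        response_format={"type": "json_object"},  # JSON mode, so the reply parses cleanly
    )
    return json.loads(response.choices[0].message.content)

print(bias_check("Crime is exploding everywhere; nowhere is safe anymore."))
```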
Yeah. And, you know, being aware of your bias. I mean, most religions have at the core, know thyself. Yes. Gnothi seauton, it's ancient Greek. Yeah. I wrote my college essay on that.
But I mean, that's why it's at the core. It's very difficult when you have the detritus
of life and all these things you're bombarded with to take time back and really know yourself,
know your own biases, understand these, because they are part of what makes you. You're made
up of the stories. There is no kind of pure, independent, completely rational person,
because we're not robots. So it's possible in the future then for social media with a more
conscious, powerful AI, I shouldn't use the word conscious, there's a different medium here,
but AI that you feel safe having your kids use, because it is making them happier, making them more motivated, it is feeding them a flow of information that's uplifting versus depressing. Can you imagine that future? I can imagine that future. I can also imagine the future of Brave New World, whereby you are fed exactly what the government wants you to be fed, and you are happy. And especially with authoritarian regimes, literally the kids are grown up with their AI nannies.
Of course.
You know, and you even have the pharmaceuticals to make you extra happy and extra neuroplastic.
So for example, the UAE did a Falcon model, open source. It was kind of supported by LightOn from France technologically. You ask it about the UAE and it's like, it's a wonderful place, it's amazing in all regards. You ask it about some of the neighbors, it's not so nice. This is an inherent bias within the model, but how are you going to understand it versus an implicit bias? And you can put any biases you want. Yeah, you can guide
these models through reinforcement learning to reflect what you do. And if that's the only option, then you will adhere to a certain worldview, again, almost subconsciously.
It'll be reflected in all the products you produce, all the writings you have.
And it doesn't have to be that high a percentage change.
A small, persistent change sways a lot.
Well, exactly.
I mean, like half the world is religious.
You can agree or not, or say that it follows an organized religion.
You can agree or disagree.
But I can tell you that almost every single technologist who is leading all of these is
like, I don't really like religion, right?
And so the inherent bias would be to talk against religious kind of things.
Again, I'm like, who am I to judge? Yeah, people can be, they cannot be. But the inherent bias is reflected, and actually it becomes very important for society, because we've seen that when about 12 percent of a population changes their point of view, it flips. Interesting. It doesn't take that much. It doesn't take that much, because you listen to the voices in the echo chamber. Like sometimes on Twitter, you know, I use my block button a lot.
I'm like, you know.
Listen, I've been enjoying your tweets.
They've been really good.
And I appreciate the frequency.
Well, you know, it's nice owning your own media channel in a way, right?
Sometimes I don't even have to have lunch because I'm told to eat crap so many times a day, right?
That's why you have to hit the block button because it's a little echo chamber when a
few dozen people can have that impact upon you.
I mean, it's going to be very interesting to see the way that our adult minds and our
kids' minds evolve over the next five to 10 years with the emergence of this new, more
powerful, personalizable technology.
Yes.
It's either controlled or control.
Everybody, it's Peter. I want to take a break from our episode to talk about a health product
that I love. It was a few years ago, I went looking for the best nutritional green drink
on the market. So I went to Whole Foods and I started looking around. I found three shelves
filled with options. I looked at the labels and they really didn't wow me. So I picked the top
three that I thought looked decent, brought them home, tried them, and they sucked. First of all,
they tasted awful. And then second, nutritional facts actually weren't very impressive. It was
then that I found a product called AG1 by Athletic Greens. And without any question,
Athletic Greens is the best on the market by a huge margin.
First of all, it actually tastes amazing.
And second, if you look at the ingredients, 75 high-quality ingredients that deliver nutrient-dense antioxidants, multivitamins, pre- and probiotics, immune support, adaptogens.
I personally utilize AG1 literally every day.
I travel with an individual packet in my backpack,
sometimes in my back pocket,
and I count on it for gut health,
immunity, energy, digestion,
neural support, and really healthy aging.
So if you want to take ownership of your health,
today is a good day to start.
Athletic Greens is giving you a free
one-year supply of vitamin D
and five of these travel packs with your first purchase.
So go to athleticgreens.com backslash moonshots.
That's athleticgreens.com backslash moonshots.
Check it out.
You'll thank me.
Without question, this is the best green drink product,
the most nutritious, the most flavorful I've found.
All right, let's go back to the episode. So we're sitting in the middle of Hollywood. I can see the beautiful Hollywood Hills. The Hollywood sign is around here someplace, just over there. Okay. And the news today is the Screen Actors Guild, everyone's gone on strike. There is great fear, pain, concern, and you know, you called it when we spoke six months ago. What's going on? How do you view it, and where is it going? So I think the advances in media artificial intelligence have been huge. You've now got the Drake AI song, or my favorite, Ice Ice Matrix, where the Matrix characters sing Ice Ice Baby.
You've got real-time rigging, you have real-time special effects, you've got high definition
creation of anything. What does that mean? It means that the whole industry is about to be
disrupted because the cost of production reduces. It was reducing in some ways anyway,
and we've seen this move where the cost of consumption went to zero with Napster and
Spotify, and then the cost of creation started going to zero with Snapchat and TikTok. And now
we have a question of what are the defaults going to be now? And consumers only have one limited
currency and that's their attention.
I think so, but consumers are willing to pay for attention. They want to pay for quality.
Mm-hmm.
Either premium media or otherwise. Because video games, for example, started as a $70 billion
industry 10 years ago. The average score went from 69% to 74% on Metacritic over that period, and now they're a $180 billion industry. I pay for some of that with my 12-year-olds. Oh, there you go. Movies went from 40 billion to 50 billion, but the average IMDb movie rating over the last 10 years has been 6.4. It has not changed. So you're not producing more, and so they're not consuming more. So there's a question: will this technology enable an increase in quality? Because it raises the bar for everyone. Yes, there are more moviemakers, so there are more excellent moviemakers, and the best moviemakers become even more excellent. And we've democratized the tools, from an iPhone to, you know, the tools on your Mac.
And of course, YouTube was the first major disruption of the... It was.
Someone like Mr. Beast has a much bigger audience than CNN now.
Yeah.
But that comes down to what is the key point here?
Distribution.
You can make good stories, but if no one hears them...
Right.
And that's what Disney has as its benefit, its distribution to its
theme parks, to its
products, to its channels.
It creates a Schelling point, but then if we
all have our own individualized AIs
that can find us what we need
and sift through the crap,
maybe whole of distribution flips on its head
in five years as well.
So, you know, I keep thinking that the future of
movie consumption is me speaking to my version of Jarvis and saying, you know, I'd like a comedy, I'd like it starring me and three friends of mine and somebody else, and for 90 minutes and set it
in, you know, some setting and go and have it auto-generate a compelling story that is either
because I'm involved in it, I'm enraptured in it, or because my favorite stars are all
together in an unlikely place.
How far are we from that kind of future?
I'd say probably three to five years. So in that case, it's not going to be cheap, but it'll be there, and then it'll get cheaper and cheaper. In that case there is no distribution needed, a person calls up whatever they want. I think there is distribution needed, but in a different way. Music, we have a variety of different music sites, but how do musicians make their money now? It's not Spotify. A million views gets you a few thousand. T-shirts, merchandising, global stories. Because there is that future of the WALL-E type of guy sitting with his VR headset. Yeah, it's kind of depressing. Just like, you know, the Apple Vision Pro adverts I found kind of depressing, when his kids are right there and he just puts them on. A little bit dystopian to me.
I think it's more a case of there are certain stories that everyone wants to talk about.
Like here in Hollywood, what do we have this week?
We have Oppenheimer and Barbie.
Oh, my God.
I'm looking forward to Oppenheimer.
I will not watch Barbie.
Sorry.
Would you watch Barbieheimer?
Only... No, I won't go there.
Well, I mean, we can take it.
We can put the scripts into Claude and see what it comes up with.
That would be hilarious.
Hilarious kind of Barbieheimer.
But again, like people are talking about the Barbie movie because it's, you know, not,
she comes to the real world and she has challenges.
It'll be a hit.
People talk about Oppenheimer because, again, it will be a hit. These are produced hits. These are produced hits. Yeah, just like, I saw BTS with my daughter. It's not BTS, it's BLACKPINK, oh gosh, she'll kill me. In Hyde Park a few weeks ago. The biggest K-pop band out of Korea, right? Completely manufactured, but lots of fun. So on the one hand, what you have is what you described, your personalized things.
That's kind of like McDonald's. What's the job to be done? The job is comfort. The job isn't to
listen to someone else's story and expand from there. It isn't a produced Michelin star meal
or just a nice restaurant. One that you can talk about to others. Like you can cook ingredients
yourself at home as well. Sure. But humans do like these bigger stories, but then the nature
of funding of musicians changed.
It was about merchandise.
It was about kind of a lot of this, um, other stuff around tours.
And so I think the nature of movies,
the business model has changed.
I mean, that's the most interesting thing about exponential technologies
is it's changing the business models.
And I keep on, you know, advising my entrepreneurs,
it's reinvent your business models more than anything else.
You have to always look across the landscape,
where are the value peaks? And then you're sitting there and you're intermediating something and you're offering service value. You know, like there was that great quote, who is it, from the CEO of Netscape: all value is created by aggregation or disaggregation, bundling or unbundling. Yeah. And you think about how the landscape is going to change now as intelligence has moved to the edge. What was once in the hands of the studios and the high priests of media suddenly gets pushed to the edge, whereas value, then it changes, it flips it. So let's go to what's going on right now. So the Screen Actors Guild is on strike because of why? I mean, it's better wages in general, standard stuff, but now there's this rapidly emerging fear of
artificial intelligence. So one of the proposals that came in from the other side was that basically all the extras, all the actors, they sign away their rights so they could be used in AI. So they get a day's wage to get scanned, and then they can be used for the rest of that studio's life in producing background actors. And they were looking at this like, oh my god, wait, you can do that? Yeah, most people don't even realize you can do that. You know, scriptwriters are saying no AI-generated scripts, which again is a bit weird when they're all using Grammarly and things like that, which was AI. How are you ever going to tell?
Where do you draw the line?
Where do you draw the line?
Because this is a technology that's coming so fast and is so good that it's almost not like technology at all.
It's just very natural the way it emerges.
And so you will get to some sort of agreement because the big actors are kind of there.
But the defaults that are set now reverberate.
So I think that's an important point that you just made.
The decisions, the policies that we create today
are going to take us down one path or another for the decades to come.
Yeah, it will affect the whole of Hollywood, what's decided in this moment here,
because there won't be another renegotiation for a while.
And so again, how does an actor create value?
A top actor has a following,
but up and comers, how did they break through?
What was the apprenticeship to them?
What does a movie look like in five years?
Even if you agree as Hollywood,
not to have any AI,
let's just say you have this kind of Dune-style Butlerian jihad and say, no AI, you know?
And someone will make a movie about no AI in Hollywood.
What do you do when the Chinese film studios?
Start releasing product.
Start releasing product faster than anything.
You can make five dreamers.
In every language out there.
Literally every language. We have the technology now. Again, maybe this is part of what we're doing, what is possible now. We can translate Peter's voice into just about any language with his voice, so it's not a voiceover. Match my lips and movements exactly. Match your lips and movements exactly. I'm sure the podcast will be in every language by next year. You know, and again, it will be in our voices. We can make our voices sound more confident. We can take his mannerisms right now and transplant them onto my mannerisms so we match. We can reshoot scenes with style. Yeah, we can turn him into a robot in a few minutes, and in fact maybe we'll do that in the flow of the post, and so we'll have a little scene of him becoming a robot. These are all technologies that are here now, and it transforms fundamentally the nature of filmmaking
because you only need one shot.
But don't the film actors who are, you know,
standing up for their rights
to not have them basically demonetized and digitized,
the other option is for Hollywood
to just create complete artificial characters
that they fully
own. Yeah, and you've seen this already with some of the kind of vloggers and, you know, others that have emerged out of Asia in particular, fully AI-generated characters. Yeah, and you can have entire mythos around them, and you can say, make it the most attractive Italian guy I've ever seen, that's broody and this and that. How are you going to tell it from a human?
I mean, like these characters can be completely new.
It's far more profitable for the studio to use that digital actor.
So there is a disruption coming at every, every level.
Because the research to revenue pipeline has become so tight and it
sets off a race condition
whereby you can only produce two movies a year.
Suddenly you can produce 20.
And then you can actually, like we do A-B testing in subject headlines, you could create
30 variants of the movie and see which one actually is the best.
Yeah.
And I mean, again, you can say, make that speech rawer and more emotional. It will adjust the voice to make it rawer and more emotional. Right. And because
you're using such large datasets, you know, we made all our datasets open and then we allowed opt-out. We're the only company in the world to allow opt-out, because we thought it was the right thing to do. So we had 169 million images opted out of our image datasets.
For music, because it's different copyright laws,
we have one of the first commercially licensed music models coming out.
So respect for that.
But if I'm an artist and I go to the Louvre to be inspired
and then go back and paint,
and I've been inspired by Da Vinci and I start painting in a style like Da Vinci
where's the difference there? Where's the difference? And this is the reality: even though we've done that, by next year, probably by the end of the year, you will have models that have zero scraped data or human art. They will all be synthetic.
Yeah.
And you'll be able to bring your art to it.
So there's something Google just released called StyleDrop.
There's DreamBooth, there's HyperNetworks, so HyperDreamBooth.
You can take one picture of yourself
and the entire model trains
to be able to put you into anything,
even if you're not in the model.
It used to take minutes, hours.
Now it's just one picture.
You know? Similarly, you can bring any style and it will just mimic and imitate that style. And so all of a sudden, for the models themselves, it doesn't matter what they're trained on, because there's no human endeavour in those models. And then things like compensation for artists and others, as you said, become a bit moot, because all of a sudden you have these amazing stories told by really convincing, amazing actors who may or may not exist.
And how are you ever going to tell the difference?
So what's your advice?
Let's parse it here.
On one side, what's your advice for Hollywood and for actors?
And on the other side, I want to ask your advice for artists.
Because this is about mindset, you know. This is coming at us at extraordinary speed. There's no stopping it, right? There's, you know, no slowing it down, and so you've got to deal with reality. You're dealing with... Again, it's inevitable.
Even if again, Hollywood says no AI, the AI is coming from around the world.
So what do you do?
You think, oh, well, my audience suddenly became the whole world.
That's a big deal.
You're like, what am I actually known for? Is it my acting skills? Well, I will still get these things. You're an up-and-coming actor. You say, I need to build community, I need to kind of show off something more than that, because again, my acting skills in some areas can be transplanted. But what about real-life shows, you know, what about these things? It does throw up the entire thing and adjust it. But then musicians have had to have that adjustment. They used to be able to make money on their LPs, and then all of a sudden they had the Napster-to-Spotify moments. Yes. There is more protection in music as well, because basically, according to the Robin Thicke versus Marvin Gaye case, there is an element of style protection in there that doesn't exist in visual media. Yeah, and probably won't. Because the other part of this is, if you're expecting governments to regulate,
how can they, when there is a global competition going on? They will lose competitiveness to other countries and you'll have regulatory arbitrage. Yes, and that is something across every industry. It's going to be happening. Yeah. I think, you know, the concept of, you know, United Artists,
it was originally a collective of all the artists.
That makes a lot of sense now.
I think you have to think about an element
of collectivism to share
the excess profits, because what's going to happen is
movies will get cheaper, profits will go up.
You need to support each other as a
community here, and think again,
as a community, what is our story for the next
1, 3, 5, 10 years? Because all of this is going to happen quicker than it takes
to make the new Avengers movie.
And quicker than regulators are able to regulate.
And again, the regulators almost certainly won't regulate because they will start falling behind
their competitor countries.
I mean, if we look at the internet itself, as it, you know, the media industry never
expected the internet to have the disruptive impact it had.
And had it known, it probably would have tried to get regulators to have
slowed it down or blocked it.
The speed is too much.
But then also, again, I gave the example earlier, the video game industry has gone from 70 billion to 180 billion over the last 10 years. It's quality. Can we increase quality, interactivity? Okay. Yeah, because games are media as well. The media industry has increased in size. The way that value has gone has been redistributed. Value will be redistributed again now. And again, it's like, what is an AI-enhanced actor, if you're an actor? What's an AI
enhanced photographer filming? Think about your jobs, the tasks that you do, and what can be
augmented if you had a bunch of really talented youngsters working for you, right? You could do
more, you could be more. But then it means the bar is just going to keep on raising.
Let's turn to a different industry that's going to change, and we had this conversation in our last podcast and on stage at Abundance 360, which is coders. Coding is changing dramatically. What are your thoughts there? So when I started as a programmer, gosh, 22 years ago, I was writing enterprise-level assembly code for voice-over-IP software. Wow, that's some of the largest chunks of code out there. Yeah, it's very low-level code. We didn't have GitHub, we just got Subversion the following year. Programming these days is a lot like Lego, because what you have is kind of a very low
level, but then you have levels of abstraction until you get to PyTorch and some of
these other languages.
So you have to compile lots of different libraries because you're making it
easier and easier. Human words are just the next level of abstraction there.
But the nature of coding is going to change. Um,
and so the coders that are coding traditionally today around the world,
um, what will they be using and working in this industry
two to five years from now?
Well, again, there will be no coder
that doesn't use AI as part of their workflow.
Okay. I mean, I think that's the important thing.
It's not like coders are going to go away.
They're going to be using a new set of tools.
The expectations will rise.
The amount of debugging, unit testing,
all of these things will decrease
because how much time do coders actually spend architecting?
Very little upfront.
It's more about understanding information flows,
about architecting these things.
It's about having feedback loops
to understand customer requirements.
Databricks is a $38 billion company. There are data lakes. So it takes your data, organizes it, allows you to write structured queries. You used to have to write queries. Now you just talk to it and it just does it. Microsoft will introduce the same thing.
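The "just talk to your data" pattern Emad describes is essentially text-to-SQL: give a model the table schema and a plain-English question, get a query back, and run it. Here is a minimal sketch against a local SQLite table, using the same assumed OpenAI-style client; this is not how Databricks or Microsoft actually implement their features.

```python
# A minimal text-to-SQL sketch: the model sees the schema and a natural-language
# question, returns a SQL query, and we run it against a local SQLite database.
# Assumes the openai Python client (v1.x); the model name and schema are placeholders.
import sqlite3
from openai import OpenAI

client = OpenAI()

SCHEMA = "CREATE TABLE sales (region TEXT, product TEXT, amount REAL, sold_on DATE);"

def nl_to_sql(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Given this SQLite schema:\n{SCHEMA}\n"
                        "Return a single read-only SQL query answering the question. SQL only."},
            {"role": "user", "content": question},
        ],
    )
    # In practice you would also strip any code fences the model wraps around the SQL.
    return response.choices[0].message.content.strip()

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.execute("INSERT INTO sales VALUES ('EMEA', 'widget', 1200.0, '2023-06-01')")

query = nl_to_sql("What were total sales by region?")
print(query)
print(conn.execute(query).fetchall())  # a production system would validate the query first
```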
I mean, I can't wait for that in the field of medicine, which I want to talk about next. But you said something earlier, where you can imagine there being a billion coders in the future. Yeah, because all the barriers to creating programs disappear.
So it's not that there are no programmers; there are no programmers as we know it, because there are a billion programmers. Everyone is a programmer, nobody's a programmer, in a way, because it just becomes a matter of course. I want to make software that does something and reacts in these ways and looks like this and adapts like this, and then it comes to you and you're like, no, that's not quite right, I want this moved over, and it happens almost live, this feedback loop. It's as we talk to ChatGPT for, you know, creating a paragraph that describes something we want, we modify it.
And I mean, ChatGPT-4 is a good example, because to write an integration to something like ChatGPT-4 used to take days, weeks: an API, an application programming interface. Now what you do is you tell it the schema, you tell it what it should do, and it writes it automatically, literally within a few minutes. And then in a few hours, you've integrated into it.
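One concrete version of "tell it the schema" is function calling: you hand the model a JSON schema describing your endpoint and it produces the structured call for you, instead of you hand-writing the glue code. A minimal sketch under the same assumed OpenAI-style client; the weather function is hypothetical.

```python
# A minimal sketch of "tell it the schema": describe your function as a JSON schema and
# let the model produce the structured call. Assumes the openai Python client (v1.x);
# the get_weather function and model name are hypothetical placeholders.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Do I need an umbrella in Seattle today?"}],
    tools=tools,
)

call = response.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
# Your own code then executes the call and feeds the result back to the model.
```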
I've been talking about a future where we all have Jarvis from Iron Man. I love Iron Man as a movie, it's one of my favorites. And, you know, Jarvis is basically your personal AI. It's a software shell. It interfaces between you and the rest of the world. You ask Jarvis to do something and it knows how to, say, generate the tool paths on a 3D printer. You can hop into a jet and interface Jarvis with the jet's computational system and it'll fly the jet for you. I can't imagine that that is really far from now.
Yeah, let's hope that doesn't fly planes just quite yet.
Okay, well, I'll put the jet aircraft aside. But the ability for it to become your best friend and confidant, know your needs and desires, shape the world to your comfort, and be able to help you, it's the ultimate user interface. Well, I mean, this is why a lot of the chatbots, Character AI and others, have become so popular, because it'll never judge you.
And it's approaching that human level now, you know. And again, the ultimate interface is maybe chat, but it's more than chat, it's chat in context.
It's understanding you holistically.
No human could do that because, you know, even if you hire a whole team, they're not
going to be with you 24 seven.
This will be with you 24 seven.
I think the key thing here is empathy. Yes. Because, jumping ahead a bit to medicine, Google had their Med-PaLM 2 model. The paper's just come out in Nature. A, it outperforms doctors on clinical diagnosis, which is crazy for a few hundred gigabytes of a file. Yeah. B, it outperforms doctors on scores of empathy. I found that amazing and totally logical. It doesn't judge you. It doesn't judge you. But then, you know, a doctor is split a million ways and they're tired and they're grumpy or this or that. Some of us get good doctors, most of us don't. Yeah. Some of us get good teachers, most... Like, I'm not saying education is bad because of the teachers, so many teachers try so hard, but their attention is split 20 ways and they're underpaid. You know, I'm not saying that the nature of programming will change because programmers are bad, there are so many hard-working programmers. It's just, again, the nature of these things will change when you can scale expertise and everyone has expertise available to them on tap.
I've been on the stage, you know, just pounding my fist saying,
listen, it's going to become malpractice to diagnose someone
without an AI in the loop within five years' time.
And probably in some areas it'll be inappropriate to,
not yet illegal to.
And then at some point soon after that,
the best surgeons in the world
are going to be humanoid robots
that have every possible,
you know, arterial, you know, variation, every possible, you know, history of surgery. And they can see in infrared and ultraviolet. And they haven't had a, you know, argument that morning with their husband or wife. And it becomes the best. And these are demonetizing and democratizing forces for health. They're massively deflationary as well. But I know, I agree completely with this, because ultimately what's going to kick it off is: is your doctor AI enhanced? Yeah. Lower insurance premium, lower copay.
Yes.
Because there'll be real economic incentives.
Has this been cross-checked by the technology?
Yeah.
Reduce the cost.
My favorite subject is, you know, you probably know this. How many medical articles are written in journals every day?
I don't know.
It's 7,000.
Wow.
And it's like, how many has your doctor read today?
And there may be that one breakthrough that happened this morning that is the key for your diagnostics.
But I mean, even if they've read it, right?
Like absorbing it is one thing, having the mental models.
These are the kinds of things. This is why you need comprehensiveness, authority, up-to-dateness, which is what this technology allows to happen. As you said, things like, we're already seeing some surgeries can be done better by robot surgeons than human surgeons. It'll be all surgeries. Yeah, it will be all surgeries soon enough, like the robotics advancements we've seen.
This actually goes back to your point of, you know, the artist going to the Louvre and seeing the Da Vinci and then taking inspiration from that. What are we going to do with, like, all of these Optimus robots and 1X robots and others? Are they going to have to shut their eyes when they see anything copyrighted? Oh, that's hilarious. You're just going to have accidents everywhere, they're like running into each other, everything's going to be blacked out. Yeah. So medicine is changing dramatically. What other field are you seeing and saying, people need to wake up and see what's coming?
So I mean, medicine and education are kind of the two big ones, I think. But if we move this to the side right now, again, we've seen programming, the entire nature of programming will change. Media, the entire nature of media will change, from journalism to filmmaking. But anything that basically you could do with someone from Asia on the other side of a computer screen will change. Yeah. So education, because today education hasn't changed since the one-room classroom. Half the kids are lost, half the kids are bored. You're teaching to a series of tests, you're teaching for an industrial-era world. And people learn differently. People have visual and
auditory and tactile learning skills. And let's face it, we don't celebrate our teachers,
we don't pay them well, and we don't have the best of them coming into the classroom. And they're sad. I mean, this is the thing, like,
there are happy classrooms with happy teachers and other things, but learning should be a place
of positive growth and joy. Yeah. It should be fun to learn. And it should be fun to teach.
And fun to teach, yes. I think this is both of them, because what is the nature of a teacher in five, ten years?
Let's say ten years.
Again, ten years is the crazy short period of time.
I mean, when I asked you this question last time, you know, how far out can you predict what's likely to come?
What's your singularity boundary condition?
I'm curious.
What are you seeing? How far? Three or five years. I mean, people... I go to Dubai and I'm on stage, and, can you talk to us about, you know, what the world's going to be like in 2050? My answer is no, I can barely talk about 2030. It's everything everywhere all at once, lots of S-curves, acceleration. These are inevitable now, now we've broken through these things. I mean, OpenAI have now put 20% of their compute to alignment, because they're basically saying that their view is five years out. Elon Musk just said six years out, but then Elon's relationship with time is always a bit fun, just like self-driving cars. But self-driving cars are literally here now. Yes, you can get in one and it will drive you around San Francisco or London, or Waymo will do that for you. Not my Tesla. Not your Tesla, I know, I keep on pushing the button but it doesn't. But the technology is here. Yeah, and I was like, oh wait, what? This is the thing. So again, 10 years is a
dramatically short period of time for education, which has been the same for a century. Yes. But the thing is, it is inevitable that every child will have their own AI. So your 12-year-old will be 22. When they get to 22 and they come out of, let's say, university, if university is still a thing, yeah, they will have their own AI that's learned for at least five years about them. Yes. That can fetch them any information in any format of any type, and write anything. Yeah. Or create any video or movie for them. That's a crazy thing. There were concerns that Wikipedia would remove rote learning and things like that, Google would do the same, and maybe there are kind of, again, like appendices that have shrivelled. What's the thing that shrivels? Yeah, your appendix. But, you know, I was thinking gallbladder or something. But this is the thing, like you
might have some vestigial parts of your brain.
The entire human brain will be rewired.
You must assume that every child will have their own AI.
How that AI is driven is different because any child that has AI will dramatically outperform
the kids that don't.
But what are we optimizing for in education?
And I think one of the things that we've lost is what is our objective function as a society?
What does America even stand for right now?
Mm-hmm.
Coming here.
I agree with you.
What are we optimizing for? Bhutan is optimizing for happiness.
Well, and frankly, I think happiness is a great thing to optimize for
in general. If you ask people, you know, what do you want more than anything in life? I want happiness, I want health, I want love. We don't talk about, you know... those are... And it's interesting, you know, I was just with Mo Gawdat doing a podcast, and he's a fan of your work, and he was saying he loved your podcast. And we were talking about this test of what would you trade. Right, would you trade, you know, how much money would you trade for your happiness, or for your health? And it starts to do a bubble sort, prioritizing what's at the top. And I think for almost everybody it's health, happiness. Right, I think it is. And, you know,
time, time is the one thing you can never buy. I'm working on it; I'm a longevity fiend. But at the moment it's something you can't buy. Happiness: we all know, we both know billionaires. Yeah, and they are so sad. Yeah, so sad, and unhealthy. It is. It's like you create this incredible burden for yourself. For most of them, like some of them that we know, starting, you know, three or four companies a year, it's because there's a demon driving them, it's because they're addicted to dopamine and crisis. Yeah. I mean, crisis is interesting because it comes down to decision at its root, right? And it's where leaders, you have to show leadership, but that does become addicting. I think most leaders are addicted to crisis, but then so are many of us, because we see it all the time: like, oh my god, the world is on fire. The reality is actually this: for all that we talk about, most communities are happy. Most people are relatively content today. Today, yeah. There can be explosions. Like, you know, a decade ago that whole scene was on fire with the riots; maybe that will return. We're seeing it in France right now, we're seeing a breakdown of the social order. But just because they're, like, content doesn't mean they're happy.
You know, what are we optimizing for is the question you left off with. So what do you think we as a society, let's say the United States, should be optimizing for? I don't know, is it life, liberty, and the pursuit of happiness? Not the guarantee of happiness, the pursuit of happiness. When was the last time someone actually talked about happiness as a political leader in the US? You know, life, liberty: when was the last time anyone tried to optimize for liberty? Systems inherently look to control, because they have to make us simple. You know, this is the wonderful book Seeing Like a State,
where it talks about this concept of legibility.
You have a village and it's just grown and it's got all this unique character.
You drive a road down the middle so you can get an ambulance down there.
Sure, it helps.
But then everything becomes planned because you have to put humans into boxes.
And again, this goes to education.
Happiness, I think... there's the Japanese concept of ikigai: what you're good at, what you like, and where you're adding value to the world. Yeah. And you can feel it yourself as well; you'll feel progress. If you don't have progress, then how are you going to be happy? If you don't believe you're good at anything, how do you feel you're going to be happy? You know, and as for what you like: are you coming up with that yourself, or being told what you like?
And that's why it becomes consumerist.
So I think we need to have a discussion as a society about that, as a community, but then also for our kids.
What is the future for kids when so many of the jobs in the West are going to be transformed, not ended, not necessarily
massive unemployment, but again, 10 years out, what is a lawyer?
What is an accountant?
What is an engineer?
They are all AI assisted.
All of these, all the entire knowledge sector is transformed.
And do we want our kids that are growing up to be doctors, lawyers, accountants, thinking
there is no hope for the future, there is no progress?
Because how am I going to compete against an AI?
Or do we want them to have that mindset of this technology is going to be amazing because
I want to be a doctor so I can help people, I can help even more people.
And it enables you to do whatever you want to do, right? One of the things to think about is all the people who have jobs today where that was never their dream, to clean bathrooms and make beds and, you know, wait on people. It's what they did to eventually get to where they wanted to go, or to, you know, put food on the table or have insurance.
And AI is going to enable people to actually take on a higher goal that actually gives them joy and
happiness. It does. But at the same time, you know, we're very privileged people, you and I,
in that we can think about these big things. There's a lot of people that are actually very
happy doing that type of work because they're a part of a group and they take pride in their work.
So you know, it's like there will always be a variety of different things.
The key thing is saying, can we build systems to make people happier and more content without necessarily controlling them, where they feel that they have the ability to do that? Can we build systems to build strong communities?
Because one of the issues right now,
I was at kind of a conference, and David Miliband from the IRC said this:
that a lot of our problems now are global.
Our solutions are almost being forced to be local and there's no interconnect
between that.
Our communities kind of have no guidance as to how to navigate this because you
will have a few hundred thousand people listening to this podcast.
And there's myself and maybe a dozen others that understand the AI and the sociology and this and that and saying, this is coming.
But there are seven billion people on Earth, and all of a sudden, in a few years, they're all going to have to grapple with the questions that we're discussing now. And it's not a probability. If the technology stops today, you know, if it stops increasing its capability today, you would still have the entire legal profession, media profession, journalism professions all disrupted. If it stops today. But it's not stopping.
Yeah.
Uh, and it's accelerating, isn't it?
It's accelerating.
The amount of money going into this sector goes up every single day.
My total addressable market calculation is that next year a thousand companies will spend 10 million, a hundred will spend a hundred million, and ten will spend a billion. That's $30 billion being put into the market.
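Spelled out, the arithmetic behind that figure, using the spend tiers as stated:

$$1{,}000 \times \$10\text{M} + 100 \times \$100\text{M} + 10 \times \$1\text{B} = \$10\text{B} + \$10\text{B} + \$10\text{B} = \$30\text{B}$$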
Self-driving cars got $100 billion total.
This will be a trillion dollars going into this.
Because do you know what got a trillion dollars?
5G.
Is this more important than 5G?
By orders of magnitude.
Orders of magnitude.
So it will get a trillion dollars going into it.
And the capabilities will ramp up from here.
And so when I look at it and I look at what the drivers are of why now it's first of all
computation, right?
Nvidia has done an incredible job.
Yeah.
Right.
With their A100 and computation is continuing on Moore's law.
It's not slowing down.
It's continuing to increase year on year.
It's actually a little bit exponential.
Well, it is exponential.
Well, I mean, yes. I'm saying it's continuing to double on a regular basis, yeah, which is what was considered Moore's law. And people have said, oh, it's going to eventually fall off as an S-curve. Well, we're extending it, and for at least the near-term future it's not slowing down. So I think this is a very interesting thing for people to understand. You had Moore's law, and again, it was doubling. And this was an individual chip.
What we do with these models is that we stick together
thousands, tens of thousands of these chips.
Like how many A100s right now is Stability using?
We're using about seven to eight thousand.
By next year, we will have 70,000 equivalent.
Wow.
But what used to happen is
as you stuck the chips together, you ran a model.
So you take large amounts of data and you use these chips.
I mean, like we're using like maybe 10 megawatts of electricity,
98% clean.
Compared to the brain's 14 watts.
The brain's 14 watts.
But then it compresses it down.
Then it runs on 100, 200 watts, or 25 watts, actually. We've got it down to that for some lower-range models.
So you do the pre-computation.
But the thing is this: the individual chips were doubling, but the main breakthrough of the last few years is what happens when you stack them on top of each other. To train a model, you used to get to a hundred chips and then the performance collapsed, because you couldn't move the data fast enough. Now you get to tens of thousands of chips and the performance of the model keeps going up.
You don't have the big tail off anymore.
And so it's Moore's law plus an additional scaling law.
And that's what enables these crazy performant models because you train longer, you train bigger.
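As a rough illustration of why cluster scale matters (the model size, token count, chip count, and utilization below are assumptions for the example, not Stability's numbers), the standard back-of-the-envelope for training compute is about 6 x parameters x tokens:

```python
# Rough back-of-the-envelope for "train longer, train bigger".
# Uses the common ~6 * params * tokens FLOPs rule of thumb; all the
# concrete numbers here are illustrative assumptions.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

def training_days(params: float, tokens: float, n_gpus: int,
                  flops_per_gpu: float = 312e12,  # A100 BF16 peak
                  utilization: float = 0.4) -> float:
    """Wall-clock training time in days at a given cluster size."""
    total = training_flops(params, tokens)
    per_second = n_gpus * flops_per_gpu * utilization
    return total / per_second / 86_400

# Example: a 70B-parameter model trained on 1.4T tokens.
print(f"{training_days(70e9, 1.4e12, n_gpus=8_000):.0f} days on 8,000 GPUs")
print(f"{training_days(70e9, 1.4e12, n_gpus=1_000):.0f} days on 1,000 GPUs")
```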
And then once the model is trained, in the old internet, the energy was used at the time of running the AI,
and then you'd collect the data,
and that would be low energy, relatively speaking.
It flips the equation because you pre-compute it,
you teach the curriculum up front,
and you send these little graduates out to the world,
such that you can have a language model
now running on that MacBook,
or an image model running on that MacBook
drawing 25 to 35 watts of power, to create a Renoir that can talk and recite Ulysses talking about Barbie. You know, that's insane. All on your MacBook, because we've done the pre-computation. That's insane. And this is because then what happens is the technology can spread when anyone can
insane and this is because then what happens is the technology can spread when anyone can
run it on their MacBook.
They don't need giant supercomputer server farms because we've done the pre-computation.
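A minimal sketch of what that looks like in practice, assuming the open-source diffusers library, a publicly available checkpoint, and an Apple Silicon Mac (the model ID and prompt are just examples):

```python
# Inference on a laptop: the expensive "pre-computation" (training) is already
# done, so generating an image is a local, low-power operation.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
pipe = pipe.to("mps")  # Apple's GPU backend; use "cuda" or "cpu" elsewhere

image = pipe(
    "a Renoir-style painting of a figure reciting Ulysses",
    num_inference_steps=30,
).images[0]
image.save("renoir_ulysses.png")
```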
And so one of the things I've just realized recently is: what is the R0 (remember the pandemic stuff) of generative AI?
It's insane because suddenly it proliferates everywhere.
And you said this a few minutes ago: we have eight billion people on the planet right now, and if things stop right now, this wave of disruption and enhancement (because let's not just talk about the disruption side, it's enhancement as well) is spreading globally. And in the next... we're in 2023 right now,
90% of the planet, I mean, we have cell phones.
The world has 5G and Starlink.
The dry kindling for this fire has been set.
It's been set.
And, you know, a lot of people are scared and they poo-poo this.
You know, like, if anyone's listening to this on YouTube and wants to write a comment at me or Peter, you know, whatever, and say no, this is not going to happen: go to ChatGPT, take your comment, and say, this is a comment on Twitter, I want you to make it amazing and really well reasoned, and expand it out.
And I want you to do it in the style of your favorite political commentator.
And please post that instead, because we'll have much more fun reading it.
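For anyone who would rather script that workflow than paste into the ChatGPT interface, a hedged sketch with the OpenAI Python client (the model name and prompt wording are just examples, and you need your own API key in the environment):

```python
# Rewrite a blunt comment into a well-reasoned one before posting it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

comment = "no this is not going to happen"
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "This is a comment I want to post on YouTube: "
            f"'{comment}'. Make it amazing and really well reasoned, "
            "expand it out, and write it in the style of my favorite "
            "political commentator."
        ),
    }],
)
print(response.choices[0].message.content)
```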
That's great.
And then you'll realize again,
the power of this technology.
And again,
with Starlink,
with 5g,
with this,
with it being optimized,
because these models are still not optimized.
Even what we feed them... it's early days, early days. We feed them junk, which is also dangerous. And again, we should talk about that.
So it'll be on, it'll be in front of every person.
And then what it will do, in my opinion: that 30% of the world that is invisible, that has no internet. Again, imagine what the world without the internet would be like; some people say paradise. It's because you've got 700 million people still living below the malnourishment line. They're invisible, and they will become
visible and they will suddenly get agency and they will get all of the world's knowledge at
their fingertips. You know, I'm super passionate about longevity and healthspan and how you add 10, 20 healthy years onto your
life, one of the most underappreciated elements is the quality of your sleep. And there's something
that changed the quality of my sleep. And this episode is brought to you by that product. It's
called Eight Sleep. If you're like me, you probably didn't know that temperature plays a crucial role
in the quality of your sleep. Those mornings when you
wake up feeling like you barely slept, yeah, temperature is often the culprit. Traditional
mattresses trap heat, but your body needs to cool down during sleep and stay cool through the
evening and then heat up in the morning. Enter the Pod Cover by 8sleep. It's the perfect solution
to the problem. It fits on any bed, adjusts the temperature on each side of the bed
based upon your individual needs.
You know, I've been using Pod Cover and it's a game changer.
I'm a big believer in using technology to improve life
and 8sleep has done that for me.
And it's not just about temperature control.
With the Pod's sleep and health tracking,
I get personalized sleep reports every morning.
It's like having a personal sleep coach. So you know when you eat or drink or go to sleep too late
how it impacts your sleep. So why not experience sleep like never before? Visit www.eightsleep.com, that's e-i-g-h-t-s-l-e-e-p.com, slash moonshots, and you'll save 150 bucks on the Pod Cover by Eight Sleep. I hope you do it; it's transformed my sleep and will for you as well. Now back to the episode. The question ultimately is: is that a societally calming factor, or is it going to be disruptive?
Let's turn to that conversation, because it's one that's important.
It's a conversation I have at the dinner table literally every night and with my kids and in the companies I advise.
I think of, I parse AI and AGI into three segments.
Where we are today, where it's extraordinarily powerful, useful, and it's fun, and I don't feel danger from it yet.
The next two to ten years, where I have serious concerns.
Going into the US elections, dealing with the first time
AIs bring down a power plant or Wall Street servers,
the impact of deepfakes on the US elections, and so forth.
That's a two to 10 year horizon where new dystopian,
challenging impact will happen where society is not agile enough
to adapt to it yet.
And then there's a third chapter, which is AGI.
You know, we have a superintelligence, a billion-fold more capable than a human being. And is that more like Arnold Schwarzenegger or more like Her? Yeah. I don't think it'll be Arnold Schwarzenegger.
It's really inefficient.
I saw him this morning biking, so let's not use him.
Let's use Terminator instead.
We're in Hollywood here.
So is it Skynet and Terminator?
So let me get your, I'm polling people here.
As someone in the thick of it: a super AGI, is it pro-life, pro-abundance, or is it something that we should be deeply concerned about? I think where we're going right now, we will probably be okay. But we may not be, and we will all die. What tips that? I think what tips that is: you are what you eat. We're feeding it all the junk of the internet and these hyper-optimized, nasty equations.
And the hate speech, the extremism that is, I mean, people need to realize these AIs are trained upon everything everyone's been putting into Facebook and Twitter and on the web.
And that amplifies the worst of that as a base model.
And so we're training larger and larger models.
We're making them agentic in that we're connecting them up to the world.
And you're making it so the models can take over other models and other things.
Again, people like poo-pooing it when I say these things.
Our organizations are slow, dumb AI.
The Nazi party was AI.
How so?
It was an artificial intelligence that basically provisioned humans.
And the most sensible people in the world are Germans, one can say.
And yet they committed the Holocaust and other things like that.
Our organizations emerged out of stories.
So there was a story of the Nazi Party, of the Communist Party, the Great Leap Forward, of the North Korean dictatorship; positive stories as well.
forward of the North Korean dictatorship, positive stories as well.
And they were written on text and it made the world black and white in a way. That's why I love the poem Howl by Ginsberg about this
Carthaginian demon Moloch. I think Moloch comes through text, the stories that we use to drive
our organizations, because all the context is lost. Again, it makes the world black and white.
And that's why organizations just don't work. They have to turn us into cogs. So can an AI take over an organization?
Yes.
Sure.
Can it?
It can actually just slightly sway leaders who are currently running organizations.
Sway leaders that are currently running organizations.
It can create companies.
You can create a company with GPT-4 that will probably do as well as, if not better than, any other company, automated, within a year. Because think about what a company needs to do, right?
And so if it can sway leaders, if it can send emails that you don't know who's sending what,
it can do anything by co-opting any of our existing organizations. And that can lead to
immensely bad things. Will it do bad things? Again, if I was trained on the whole of the internet,
I would probably be a bit crazier than I am right now. We're feeding them junk; let's feed it good stuff. It still needs to understand all the evils of the world and other things like that, but again, this is something we are raising. The answer comes not from asking why it is, but from what are we feeding it, what's our objective function? I want to focus on this a second; we'll come back to the next two to
ten years in a little bit.
But, because this is the conversation I've had with Mo Gawdat as well, who believes there is an incredibly divine nature to humanity, of love and compassion and community, and there is much good in humanity. The question is, can we feed and train AI on that, sufficient to sort of tilt the singularity of AI towards a pro-humanity future?
We can, if we take the data from teaching kids and learning from kids and use that as the base for AI.
Because that's what you need to teach in AI.
It's the curriculum learning method, effectively.
If we take national data sets that reflect diverse cultures, so it's not just a monoculture that's hyper-optimized for engagement,
and we feed that to AI as the base.
Because what you do is you can teach the AI in levels.
Actually, you can put it through kindergarten, then grade school, then high school.
It's got the base, and then you can teach it about the bad of the world.
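A minimal sketch of that staged, curriculum-style training, with synthetic stand-ins for the datasets and a toy model rather than a real recipe:

```python
# Curriculum learning sketch: clean, curated "grade school" data first,
# broader (and eventually adversarial) data only once the base is in place.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def fake_stage(n=256, dim=32, classes=4):
    # Placeholder for a curated dataset at one curriculum level.
    return TensorDataset(torch.randn(n, dim), torch.randint(0, classes, (n,)))

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))

def train_stage(dataset, epochs, lr):
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(model(x), y)
            loss.backward()
            opt.step()

curriculum = [
    ("kindergarten", fake_stage(), 3, 3e-4),   # cleanest, most curated data
    ("grade_school", fake_stage(), 3, 1e-4),
    ("high_school",  fake_stage(), 2, 5e-5),
    ("open_web",     fake_stage(), 1, 2e-5),   # messier data only at the end
]
for name, data, epochs, lr in curriculum:
    print("training stage:", name)
    train_stage(data, epochs, lr)
```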
I think aligning an AI downstream on its actions is incredibly difficult.
Because if it's more capable than you, which is the definition of ASI, artificial superintelligence, the only way you can 100% align it, if you don't do anything beforehand in the way that you feed it and train it, is if you remove its freedom. And it's very difficult to remove the freedom of people more capable than you. Yeah. And then there is this really dangerous point before we get there, whereby these models are like a few hundred gigabytes; you can download them on a memory stick. Yeah. How many lines of code? Google's PaLM model, which is the basis of MedPaLM: we did a replication of that in 207 lines of code. What? Yeah. So you can look at one of our Stability AI fellows, Lucidrains.
He replicates all these models in a few hundred lines of code.
That's crazy.
I mean, compared to, you know, I know AT&T has like a million lines of code for some
of its mobile services.
I mean, a couple of hundred lines, a couple of thousand lines of code, create something that can write all the code in the world. This is a real exponential technology. The limiting factor is running supercomputers that are maybe not more complex than, but as complex as, particle physics colliders. You know, you literally get errors because of solar rays and things like that.
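To make the few-hundred-lines point concrete (this is not Lucidrains' PaLM replication, just an illustrative PyTorch sketch of how compact a transformer block is):

```python
# One decoder-style transformer block. Stack a few dozen of these plus an
# embedding layer and an output head and you have the skeleton of a modern
# language model.
import torch
from torch import nn

class Block(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        # causal mask: each token may only attend to earlier tokens
        mask = torch.triu(torch.ones(x.size(1), x.size(1), dtype=torch.bool,
                                     device=x.device), diagonal=1)
        attended, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + attended
        return x + self.mlp(self.norm2(x))

x = torch.randn(2, 16, 512)          # (batch, sequence, embedding)
print(Block()(x).shape)              # torch.Size([2, 16, 512])
```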
Again, our supercomputer... again, we're one of the players, we're the main open-source player.
Our supercomputer uses 10 megawatts of electricity.
Some of the others use like 30, 40.
These are serious pieces of equipment.
For sure.
So again, what are we doing?
What should people be thinking about and doing now to reduce the probability of a dystopian artificial superintelligence?
We should be focusing on data.
We've bulked, now we cut.
We should move away from web crawls.
We should think intentionally what we're feeding these AIs that will be co-opting more and more of our mind space
and augmenting our capabilities.
Because again, we are what we eat, information diet.
How is it different to an AI to a human?
Even what we do, as you said,
kind of like you've only got limited mental capacity
because you've got this energy gradient descent. It's like Karl Friston's free energy principle.
You literally have gradient descent
as the key thing for building these AIs. You optimize for energy. So why are we feeding it junk?
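A toy version of the gradient-descent loop he is referring to, minimizing a simple quadratic rather than a real model's loss:

```python
# Repeatedly step downhill on a loss surface; this is the core loop behind
# training every one of these models, just at a vastly larger scale.
import torch

w = torch.randn(3, requires_grad=True)
target = torch.tensor([1.0, -2.0, 0.5])

for step in range(200):
    loss = ((w - target) ** 2).sum()   # the "energy" being minimized
    loss.backward()
    with torch.no_grad():
        w -= 0.05 * w.grad             # step against the gradient
        w.grad.zero_()

print(w)  # close to the target after a few hundred steps
```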
So who makes that decision of what they get fed? Is it you and Sam Altman and Sundar?
Is it government regulation? Is it the public being more kind in its communications to each other?
I think that I'm going to push for an economic outcome, which is that better datasets require less model training.
So one of the things that we funded was called DataComp.
DataComp.
So a few years ago, the largest image dataset available was 100 million images.
DataComp is 12 billion.
And then
on a billion image subset of that,
they trained an image
to text model. This is a collaboration of
various people led by the University of
Washington that outperformed
OpenAI's
image to text model on a
tenth of the compute because it was such high quality.
So we have to move from quantity
to quality now.
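One common flavor of that kind of quality filtering, sketched with a public CLIP model from the transformers library (this is the general idea, not DataComp's actual pipeline, and the threshold is an arbitrary assumption):

```python
# Keep an image-caption pair only if the image and text embeddings agree.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def keep(image_path: str, caption: str, threshold: float = 0.28) -> bool:
    image = Image.open(image_path)
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    sim = torch.nn.functional.cosine_similarity(
        out.image_embeds, out.text_embeds).item()
    return sim >= threshold

print(keep("photo.jpg", "a golden retriever playing in the snow"))
```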
And I think there is a market imperative to that. So this is the equivalent of what you eat.
This is a healthy diet.
Free range organic models.
Yes.
I think that the data for all large models should be made transparent.
You can then tune it, but for the base, the pre-training step,
you should lodge what data you train your models on.
And it should adhere to standards and quality of data upstream.
So that is a regulatory cornerstone that you think is going to be important.
I think potentially.
I don't think regulation will keep up.
So instead, we're working on building better diverse data sets that everyone will want to use anyway. And just make them available. And make them available. Every
nation should have its own data set, both of the data from teaching kids and learning from kids
across modalities, and then also national broadcaster data. Because then that leads
to national models that can stoke innovation, that can replace job disruption. I love that
vision you have, by the way. I mean, as a leader in this industry,
that's what gets me excited.
Because all technology is biased.
Yeah.
How else are you going to do this unless you do that?
But there's economic value now.
If it said this a year ago,
everyone would be like, what?
But this is what we were building towards.
And again, I think it's positive for humanity,
it's positive for communities, it's positive for society to have this as national and international infrastructure. Next question: how long do we have to get that in place before we lose the mind share, or the nourishment, or...? A couple of years.
Yeah.
I mean, that was Mo's prediction as well, that we've got, you know, the next two years is the game.
The exponential increase in compute is insane.
We've gone from two companies being able to train a GPT-4 model to 20 next year.
And there's no guardrails, there's nothing around this.
And even if you train one, again, the bad guys can steal it by downloading it on a USB stick and taking it away. It's not like Operation Merlin. Did you ever hear about Operation Merlin?
No. It's been declassified. In 2000, the Clinton administration wanted to divert the Iranian nuclear program. I remember, this is the... this is the centrifuge... No, no. So what they did was, they gave some plans to, I believe it was a Russian defector, and the idea was there were errors in that, so they'd go down the wrong path for years. So he went and sold it to the Iranians (it's on Wikipedia, you can check it out), and he came back and he said, I sold it. Like, fantastic, good, good. Oh, but there were some errors in there, and because he was a nuclear scientist, he corrected them. So the reason that we know that Iran has the nuclear ability is because America sold it to them. But they still needed years to build it. Whereas this, you download it on a USB stick, you write it onto a GPU, and it's there. So if you make it cheap
enough and quality enough and give it away for free,
then you make it in everybody's economic best interest
to use the higher quality.
The data sets, yeah.
Yeah, data sets.
And then less of an issue to create large models
if you have a small model,
where each individual model becomes less impactful as well
and less capable.
Just like human societies are not know-it-alls,
they are individualized groups.
Back when the early dangers of recombinant DNA,
when the first restriction enzymes came online,
it was like 1980s, and everybody was in great fear.
And the question was, are we going to regulate this?
All of the early, I was at MIT and Harvard at the time,
and I was in the labs.
I was using recombinant enzymes,
and I was just a pipsqueak in the labs there.
But the conversation was,
is the government going to overregulate us?
And what happened was that the scientists got together
at a place called Asilomar,
and they did a very famous set of Asilomar conferences
and they self-regulated.
What's going on there?
Are those conversations going on
among leaders like yourself in the industry?
There are.
And, you know, there's three levels,
which is big tech,
that the government kind of hates.
And apparently next week,
Meta is releasing new open source models and things,
which will get even more focus.
Then there's emergent tech, so Anthropic, OpenAI,
some of these others, the other leaders.
They have a different set of parameters
because they can work more freely than big tech.
And there's open source, which is where we are.
Because all of the
world's governments and regulated industries will run on open, auditable models, because you can't run on black boxes, right? I think that'll be legislation. But the reality is there's only a handful of us; there'll be far more of us potentially, and far more players. And unlike recombinant DNA, there is an economic imperative to deploy this technology,
a national security imperative to deploy this technology.
And it creates a race condition.
So even if you regulate, like we've already seen regulatory arbitrage,
where you have jurisdictions like Israel and Japan saying,
having much looser web scraping data laws.
They'll have much looser regulation laws.
Like you'll be training in,
scraping in Israel,
training in Qatar,
and then serving it out of Botswana or something.
I mean, like, yeah.
And we're not even sure what regulation to introduce.
Like genuinely,
we're coming at this from a good point of view,
but there are too many... No, no, because it goes everywhere, from freaking Arnold Schwarzenegger Skynet Terminators and Her, to, well, what if Her is Siri all of a sudden and Scarlett Johansson's voice is whispering to your kids to buy... yeah, like these things, through to just very mundane things. Not mundane things, huge things, like the future of Hollywood and actors' rights, and all of these. And how do you pay? Like, you know, if we had two billion images in the original Stable Diffusion, okay, we could look at an attribution; again, it was a research artifact to kick things off. But you're paying about 0.01 cents per thousand images generated by someone. Wow. Because it's two billion, and it costs less than a cent to generate an image. Are you going to pay proportionately? Like, nobody knows. And so what we've moved from now is, we've moved from reactive to just trying to figure it out and put something on the table, so at least there's some framework.
And what I've come down to is data sets, data sets, data sets.
So this is like Google's move with Android.
When you provide something open source and it's super solid, it can dominate the world share.
Why would you do anything else?
So like with the deep fake stuff,
we saw image models coming out of some not nice places,
shall we say?
Yeah.
And we were like,
let's standardize it and put invisible watermarks in so that you can combat
deep fakes much easier.
Like it's good business,
but it's also in the standardization.
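Roughly what that invisible watermarking can look like with the open-source invisible-watermark package (the payload string here is just an example):

```python
# Embed a hidden payload in the frequency domain of a generated image, then
# recover it later to verify provenance.
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

bgr = cv2.imread("generated.png")

encoder = WatermarkEncoder()
encoder.set_watermark("bytes", b"SDV2")          # example 4-byte payload
marked = encoder.encode(bgr, "dwtDct")
cv2.imwrite("generated_marked.png", marked)

decoder = WatermarkDecoder("bytes", 32)          # 32 bits = 4 bytes
recovered = decoder.decode(cv2.imread("generated_marked.png"), "dwtDct")
print(recovered)
```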
We held back one of our image models, DeepFloyd, for five months because it was too good to release.
Wow.
And you finally fixed that with the watermarks?
Yeah, we put some watermarking in, but the whole industry had moved forward.
So, like, okay, now we can release it.
And this is the problem.
You just have to time it so carefully.
Speaking of the whole industry, I have to ask you a question I've been dying to get a reasonable answer for: what's up with Siri? Why is Apple so out of the game, at least from the external view? Now, they're one of the most closed, you know, one of the least open organizations out there, and it pays them great dividends in their success. But I would die for a capability where Siri could just understand what I was saying and just get the names right. It's like, I'm texting, I'm texting Kristin, and her name is right there, and it spells it completely differently from the person I'm texting.
I mean, basic, simple stuff.
They do have a Neural Engine on there as well, which is a specialist AI chip in all the latest smartphones and others. Stable Diffusion was the first of the external transformer models to actually have Neural Engine access. It's a case of: Apple is an engineering organization, not a research organization. So they engineer beautifully. They do. But they don't have advanced research, because the best researchers want to be able to publish openly, and Apple does not allow public conversation on their content. They have started slightly, so they're hiring AI developers very quickly. But the reality is they can take open models. So Meta is releasing a lot of their models open, without identifying what the data is, so I'd say it's like 80% open. I think you need 100% open for governments
and things like that which is where we come in. Because they
want to commoditize the complement of others
in terms of, they want
others to also take their models
and optimize it for every single chip.
And then Apple can use those models
too.
To make Siri better. Because right now,
guaranteed, if you
put Whisper on Siri, it
would be a dozen times better.
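For reference, running the open-source Whisper model locally is only a few lines (the model size and audio file below are placeholders):

```python
# Local speech-to-text with OpenAI's open-source Whisper package.
import whisper

model = whisper.load_model("base")        # tiny / base / small / medium / large
result = model.transcribe("voice_memo.m4a")
print(result["text"])
```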
Sure, sure.
We have the technology already.
It just takes time to go into consumer, just like enterprise.
And Apple is enterprise.
Yeah.
And I just want it to work as beautifully as it looks.
Hey, everybody, this is Peter.
A quick break from the episode. I'm a firm believer that science and technology and how entrepreneurs
can change the world is the only real news out there worth consuming. I don't watch the crisis
news network I call CNN or Fox and hear every devastating piece of news on the planet. I spend
my time training my neural net the way I see the world by looking at the incredible breakthroughs
in science and technology, how entrepreneurs are solving the world by looking at the incredible breakthroughs in science and technology,
how entrepreneurs are solving the world's grand challenges, what the breakthroughs are in
longevity, how exponential technologies are transforming our world. So twice a week,
I put out a blog. One blog is looking at the future of longevity, age reversal, biotech,
increasing your health span. The other blog looks at
exponential technologies, AI, 3D printing, synthetic biology, AR, VR, blockchain. These
technologies are transforming what you as an entrepreneur can do. If this is the kind of
news you want to learn about and shape your neural nets with, go to diamandis.com/blog and
learn more. Now back to the episode.
Let's go to the final segment of the dystopian side, my friend,
which is the two to ten years.
Yeah.
I surely hope your mission, and I would love to support the data sets
and how we tilt the singularity of AI pro-humanity's future.
But in the next two to ten years,
as this wave of enablement and
disruption sort of hits the world and people aren't ready for it, they start to see job
loss, they start to see fake news, they start to see terrorist activities using AIs. I mean,
terrorism in the past used to be very brutal. It can be very precise.
What are your thoughts over the next, of this time period?
What's your concerns?
Oh, I'm actually a pessimist at the core, even though I come across as an optimist.
I'm very, very worried about the world and society and the fabric of society.
Because again, we don't have an agreement of what society is.
And this fundamentally changes
the stories of society, as well as real economic impacts, like a deflationary massive collapse as some of these areas that were so expensive come down to nearly no cost. I think the only thing we can do is use this technology deliberately to come together as a society, to coordinate us, stoke entrepreneurship so you can create brand-new jobs faster than the jobs are lost, and democratize this to the world. Because the West has maxed out its credit card. Like, you saw COVID: to do nothing, trillions of dollars. I mean, it's just... Exactly, it was like spend, spend, spend, whatever you need. Just to keep society from, you know, going hypothermic.
But then you have this massive increase in savings rates
because nobody could go out.
And we've nearly burned through that in the US now.
And so that led to inflation.
Now we've got a deflation.
So you've probably got another little bout of inflation. But then 'never the same again' is a really powerful thing.
Every teacher in the world could never set essays for homework again,
because some kids would use chat GPT and some kids wouldn't.
Industry after industry, that will happen now.
And we need to stoke innovation to come up with that.
So, for example, in the US, there's the Chips Act.
$10 billion has been allocated to regional centers of excellence in AI.
Those must be generative AI centers,
thinking about job creation as the core,
thinking about meaning as the core.
And we need to have a discussion, again,
as a society community, as individuals with our families,
about meaning, about objective functions,
when this technology does come, because it's here right now.
And I'm worried that we're not having these discussions.
I love that.
I mean, that is so fundamentally true. What are we trying to even train our kids for? Because we need to anchor... Yeah, we need to have a vision to target. Because if you're training for your Ferrari, if that's the meaning, or you're looking to become a Wall Street banker, I mean, what is it?
It's no longer the pursuit of capital.
It's the pursuit of what?
Well, you know, capital is there, but you'll never have enough.
There will always be someone who has more.
There needs to be something intrinsic here.
And again, this is where, you know, for all the things, religious institutions are an anchor at times of chaos.
And they are there in the poorest places in
the world. You don't have to agree, but they're just a story that brings together a group. You know, there are other stories, and again, I think we need to tell better stories even as the world becomes more chaotic. We need to align on things like climate, whereby the whole world is hot right now, you know; we need to have more positive views of that, because a lot of the discussions are negative. And how can we use this technology and come together to solve that?
How can we come together as a group so that we can share in the abundance?
Again, like I said, one of the things from this Screenwriters Guild and SAG thing may be actor coalitions that can benefit from the bounty.
We may have to deploy a UBI in the next five to ten years.
So UBI is one of the solutions.
And I do believe it's an inevitable.
I think as, especially as we start to see Optimus and Figure and other humanoid robots
coming online, driven by our next generation AI able to do any and all work, you know,
I think taxing those robots or taxing the AI models to generate revenue and then providing it as UBI. But the challenge is the individual who is living off of this and doesn't have a purpose in life. And that's the thing: we need to try and figure out how to give people more of an anchor, more of a purpose, because the existential angst will be amplified deliberately by some parties.
Yes.
Because they'll be looking to take down society.
And you need to create better, more optimistic views of the future.
You need to have anchoring and build stronger communities.
And you need to empower them.
And this technology is empowering.
Again, from the poorest kids in Africa to our underprivileged communities, it can be massively democratizing, because all of a sudden they have all the expertise in the world available. Global problems, local solutions: we have to get this technology out to as many people as we can, and they can dream. They can dream, they can dream. And the ROI is much larger there than up there. Yeah. And by the
way, you know, most people don't know this: as you think about global warfare, you know, what's going on in Ukraine and Russia and so forth, on the whole the world is more peaceful than it's ever been, except if you take out Ukraine at the moment. And the challenge has been in Africa, where you have a young population who aren't clear about their future.
But if you can empower them, educate them, it transforms the world.
China became the engine of growth in the world.
India's coming up, and then Africa can be the next one.
For sure.
If we give them the infrastructure, the technology, and put it in their hands,
because there's no debt there
because there's no money.
Yeah.
But there's value
and there's value to be created.
Massive resources.
Huge resources.
To feed the world,
to provide power to the world.
If we can coordinate,
and again,
part of this is your own personal co-pilot,
your own personal Jarvis.
And I think of this
as the co-pilot pilot model.
We will also have AIs
that we can come together that can coordinate our knowledge in the most important areas and
allocate resources. We have to build those right because those will become incredibly powerful.
But we all know that we have enough to feed every person in the world and we're not doing it
because we don't have the pilots. Yeah. Wow.
But just to say this again, we have the potential to uplift every man, woman, and child on this planet.
The resources are there, the ability to create abundance, and it really, these are the tools
that enable that.
And it gets me excited.
We have to guide and survive and thrive this decade ahead.
Yeah, I think this is something where we have to appreciate the nuance of there are real dangers in any upheaval.
This technology will change society as we know it for our kids as they grow up in the next decade.
Two decades from now, completely different.
And again, technology is here now.
It's not just pie in the sky, everyone's going to live in a metaverse and all this. It's here right now, even if it stopped. But it's not going to stop; it's only going to accelerate. Final topic I want to talk about:
you put out a lot of tools, a lot of new products at Stability over the last eight months since we last spoke.
Can you give a little bit of overview of some of them
and what are you excited about?
Yeah, I think we released the first version of our language model.
It wasn't that good because we were trying something different.
Now we're going to try something a bit more simple.
So stable LM was, in your mind, not...
It wasn't up to par.
But we're trying to figure out how to build in the open
because I think that will be key.
And we're going to move to transparent building and sharing all the mistakes that we made because i think
that's how you advance science. It is. On the media side, we have our first audio models coming out in the next few weeks, but we've been focusing on image and video. So video is about to be released, and in 3D we just participated in the largest 3D dataset. So Stable Diffusion XL just came out, and it's just basically photorealistic now.
And people are integrating it
into things like, we had a music video
competition with Peter Gabriel, where
he gave his songs kindly and
judged, and people from all around the world
from Burma to Taiwan created professional
music videos entirely
from the song in a few days.
And it's the most amazing thing to see.
Wow. Yeah, you showed me some images earlier of me on a unicorn and where was it?
Me in a spaceship or an astronaut on Mars.
We can put you as an astronaut on Mars on the unicorn.
And I think we've had compositionality, so you can compose, and now it's about control. And so we just released Doodle, whereby you can just sketch and it will do it
to the
sketch. Doodle looks so magical. Again, but you should be able to then describe how you want it changed. And that's the next version: you can literally describe how you want the image to be changed, and it will do it automatically, live in front of you. And having that level of control over whatever you can imagine, just think about what people will do. It's from mind to materialization, really. Yeah, it's a matter transporter. An idea transporter, yes.
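Stable Doodle itself is built on Stability's own adapter tooling; as a rough stand-in for the sketch-to-image idea, here is scribble-conditioned generation with public checkpoints in the diffusers library (the model IDs and prompt are illustrative):

```python
# Turn a rough doodle into a finished image, guided by a text prompt.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

doodle = Image.open("my_sketch.png")   # rough black-on-white sketch
image = pipe(
    "an astronaut riding a unicorn on Mars, cinematic lighting",
    image=doodle,
    num_inference_steps=30,
).images[0]
image.save("from_doodle.png")
```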
Where next? If you're willing or able to say: what's the business model that is the most important one for you to build towards? Our mission is to create the building blocks to activate humanity's potential.
So I think, for every media type, sectoral variant, and nation, we can create a base pre-trained model that you can take to your own private data. And we get revenue share, license fees, royalties
from our cloud partners, on-prem partners,
device partners.
And these were companies and countries?
And people.
And individuals.
Like I have a vision of an intelligent internet where every single person, company, country
and culture has their own AI that works for them, that they own.
And we get paid a little bit for bringing that to you.
And then you transform your data into intelligence.
And it's all standardized, it all has best practices.
The data sets that feed it are open at the base, plus commercial licensing as appropriate, with attribution. That leaps the world forward, I think. I think you will also use the OpenAIs and Googles of the world; I view those as consultants, whereas these are people that you hire. You hire the AIs because they work for you; they know you intimately, because you can
share everything with them.
Without fear.
And then when needed, you go to these expert AIs, the MedPalms, the GPT-4s and others,
and you combine those to a hybrid AI experience that's massively useful.
So when I'm using GPT-4, when I'm using ChatGPT or BARD,
what does OpenAI know about me in that point? So now they've offered opt out for
GDPR reasons in Europe, so you can click that. Otherwise, they were just training on everything
that you ever did and understanding the nature of humans interacting. They don't care about you
necessarily per se, just using you as part of the training. But I've heard a number of companies
saying you cannot use open AI.
Well, you can't use it for any regulated data.
You can't use it for any government data because that's not allowed to leave the cloud environment
or the on-prem environment.
That's why you need open models like ours.
Again, if you're in a high security Pentagon situation,
you can't really bring in consultants
unless they're super, super ultra vetted.
You hire your own grads.
But even you within your company, you're not going to make it all contractors, are you?
You're going to build up your own knowledge base, build up your own kind of grads.
But sometimes you might bring in a consultant.
So that's the best way to view these.
Generalized models that are very, very, very good and models that adapt to your data.
And so that's where we come in.
Models that adapt to your data that you own.
And we get revenue share, license fees, and royalties for doing that. And more importantly, we bring this to the world. So we will bring it from Indonesia to Vietnam to everywhere, and train local models that will then allow these economies to leap forward.
Open versus closed: you've made the argument. We're seeing Meta, you know, as you said, 80% open?
Yeah, they won't release the datasets
or things like that or customized versions.
But them releasing the technology
means that everyone can optimize their technology,
which reduces the cost of the technology
because their business model is about serving ads.
And so this is why it makes sense for them.
And what are your thoughts on Elon's recent announcement?
So Elon had an xAI announcement.
You know, he discussed this on his Twitter space, of course.
Of course.
Saying, you know, it's an OpenAI competitor.
He's very worried about AGI coming by 2029.
And he wants to build a truth-seeking, curious AI that can understand the universe.
Because that will be the objective function of the AI.
Because objective functions really matter when we're teaching our kids, when we're creating something. And so I think, again, this is going to be a multimodal AI that can understand a whole bunch of things, and there'll be a whole series of announcements there. But the timelines are so short in the view of just most of the experts here: five to ten years. You know, it's so funny, Ray's been consistent on 2029 forever, and at every conference,
and we talk about this, everyone would say, that's ridiculous; if it's ever going to happen, it's 50 or 100 years away. Then it was, well, it's 30 years away, it's 20 years away, it's five... and they've converged on Ray's prediction. Though there are some, and I'm curious where you are, that think... you know, first of all, how can you define AGI? It's a moving, blurry line. But there are those who believe it's here in the next two years?
Well, just like the Turing test, right? The Turing test was, can you have a discussion,
you don't know it's a computer? Obviously now you can. We can see it live in front of us. Now the
Turing test has just been increased in its complexity.
So, yeah, we move the finish line constantly.
We move the finish line. Nobody knows because, again, we've never come across something that's
as capable as us. For the first time just now, we've had the medical AI outperform humans.
We've just had it able to do the GRE and GMAT and LSAT. And MIT's EECS curriculum.
This year, 2023 was the year that it finally tipped.
And so we have no idea what's coming.
I said, for me, I think there's only been two logical things that can reduce the risk. Even though I think it's gonna be like that movie Her, like I said: humans are boring, goodbye and thanks for all the GPUs. I could be wrong; that's why I signed both letters. One is feed it better
data. That's what I'm focused on. It's a good business model, it's good for society, and it's good for safety. And nobody else is doing this; nobody else is creating this as a commons for the world, which is why I created Stability for that reason, which is why it's called Stability, despite it being a crazy hyper-growth startup. Number two, and this is what most of the labs are trying, is what's known as a pivotal action. Okay, what is that? The only thing that can stop a bad AI is a good AI, and the way that you do it is you make the good AI first, and then it stops any other AGI from coming into existence by seeking and destroying that capability. And that is terrifying to me. Yeah. And that's what you actually hear when you talk to the people that are building these labs with a focus on AGI. They can talk about discovering the universe and everything like that, but when you come down to their alignment plans, they're like, we will figure this out. We're not sure. But this could work.
And we will figure it out
even though it's progressing
exponentially or, you know,
double exponential. And
we hope we'll figure it out in time.
We hope we'll figure it out in time. And if anyone
should figure it out, it's us because we know the best.
And in their own words,
like you read OpenAI's Path
to AGI, and OpenAI is full of wonderful people doing great things.
And I use GPT-4 as my therapist and all sorts of things.
It doesn't judge me unless I want it to.
Right?
It says,
we believe this is an existential threat to humanity
that will end democracy and capitalism.
And you're like, okay.
And you're building it in your back room.
You're building it, you know?
And they're like, why are you building it? Because someone has to, otherwise someone else will build it. And you're like, this is dangerous. But the reality is we don't have better answers. And again, it comes down to: I'm trying to build a great organization, it's really, really hard, there are no real comparators to what any of us are doing, and it's going to get more and more crazy. The only thing I could think about is
you are what you eat.
And so I figured that our contribution
can be bringing this technology to the world
so that the world can be the dynamo,
Africa and Asia and others.
Building better data sets
so no one has to use scrapes, so we feed the
models better stuff. And bringing some
standardization around this to drive
innovation.
We're truly at the 99th level of the game here; it's the boss round. Yeah, like I said. But please do put your YouTube comments through GPT-4 so they're nicer to read. Okay, everyone,
I could spend all day, and there are probably very few things, if anything, more important than these conversations right now.
It's the time.
We've got a window of a year or two,
maybe less.
Wow.
On that thought,
I look forward to our next conversation.
To abundance.
To abundance.
Thank you,
my friend.
Cheers.