All-In with Chamath, Jason, Sacks & Friedberg - In conversation with Sam Altman
Episode Date: May 10, 2024

(0:00) Welcoming Sam Altman to the show!
(2:28) What's next for OpenAI: GPT-5, open-source, reasoning, what an AI-powered iPhone competitor could look like, and more
(21:56) How advanced agents will change the way we interface with apps
(33:01) Fair use, creator rights, why OpenAI has stayed away from the music industry
(42:02) AI regulation, UBI in a post-AI world
(52:23) Sam breaks down how he was fired and re-hired, why he has no equity, dealmaking on behalf of OpenAI, and how he organizes the company
(1:05:33) Post-interview recap
(1:10:38) All-In Summit announcements, college protests
(1:19:06) Signs of innovation dying at Apple: iPad ad, Buffett sells 100M+ shares, what's next?
(1:29:41) Google unveils AlphaFold 3.0

Follow Sam: https://twitter.com/sama

Follow the besties:
https://twitter.com/chamath
https://twitter.com/Jason
https://twitter.com/DavidSacks
https://twitter.com/friedberg

Follow on X: https://twitter.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@all_in_tok
Follow on LinkedIn: https://www.linkedin.com/company/allinpod

Intro Music Credit: https://rb.gy/tppkzl https://twitter.com/yung_spielburg
Intro Video Credit: https://twitter.com/TheZachEffect

Referenced in the show:
https://twitter.com/EconomyApp/status/1622029832099082241
https://sacra.com/c/openai
https://twitter.com/tim_cook/status/1787864325258162239
https://openai.com/index/introducing-the-model-spec
https://twitter.com/SabriSun_Miller/status/1788298123434938738
https://www.archives.gov/founding-docs/bill-of-rights-transcript
https://twitter.com/ClayTravis/status/1788312545754825091
https://www.inc.com/bill-murphy-jr/warren-buffett-just-sold-more-than-100-million-shares-of-apple-reason-why-is-eye-opening.html
https://www.youtube.com/watch?v=snbTCWL6rxo
https://www.digitimes.com/news/a20240506PD216/apple-ev-startup-genai.html
https://www.theonion.com/fuck-everything-were-doing-five-blades-1819584036
https://blog.google/technology/ai/google-deepmind-isomorphic-alphafold-3-ai-model
Transcript
I first met our next guest, Sam Altman, almost 20 years ago when he was working on a local mobile app called Loopt. We were both backed by Sequoia Capital, and in fact, we were both in the first class of Sequoia Scouts. He did an investment in a little unknown fintech company called Stripe. I did Uber. And in that tiny experiment...
I've never heard that before.
Yeah, I think so. Starting already.
You should write a book, Jacob. Maybe.
Love you guys. I'm the Queen of Quinoa. I'm doing all in.
That tiny experimental fund that Sam and I were part of, the Scouts fund, is Sequoia's highest-multiple-returning fund.
A couple of low single-digit millions turned into over $200 million, I'm told.
Really?
Yeah, that's what I was told by Roelof, yeah.
And he did a stint at Y Combinator
where he was president from 2014 to 2019.
In 2015, he co-founded OpenAI with the goal
of ensuring that artificial general intelligence benefits
all of humanity. In 2019, he left YC to join OpenAI full time as CEO. Things got really interesting on November 30 of 2022. That's the day OpenAI launched ChatGPT. In January 2023, Microsoft invested $10 billion. In November 2023, over a crazy five-day span, Sam was fired from OpenAI.
Everybody was going to go work at Microsoft.
A bunch of heart emojis went viral on X slash Twitter, and people started speculating that
the team had reached artificial general intelligence.
The world was going to end and suddenly, a couple days later, he was back to being the
CEO of OpenAI.
In February, Sam was reportedly looking to raise $7 trillion for an AI chip project.
This after it was reported that Sam was looking to raise a billion from Masayoshi Son to create an iPhone killer with Jony Ive, the co-creator of the iPhone.
All of this while ChatGPT has become better and better and a household name.
It's having a massive impact on how we work and how work is getting done.
And it's reportedly the fastest product to hit a hundred million users in history in just two months.
And check out OpenAI's insane revenue ramp-up.
They reportedly hit two billion in ARR last year.
Welcome to the All-In Podcast, Sam Altman.
Thank you. Thank you, guys.
Sacks, you want to lead us off here?
Okay, sure. I mean, I think the whole industry is waiting with
bated breath for the release of GPT-5. I guess it's been
reported that it's launching sometime this summer, but that's
a pretty big window. Can you narrow that down? I guess, where are you in the release of GPT-5?
We take our time on releases of major new models.
I think it will be great when we do it.
And I think we'll be thoughtful about how we do it.
Like, we may release it in a different way
than we've released previous models.
Also, I don't even know if we'll call it GPT-5.
What I will say is, you know, a lot of people
have noticed how much better GPT-4 has gotten since we've released it, and particularly over
the last few months. I think that's a better hint of what the world looks like, where it's not the one, two, three, four, five, six, seven, but you just use an AI system, and the whole system just
gets better and better fairly continuously.
I think that's both a better technological direction.
I think that's easier for society to adapt to.
But I assume that's where we'll head.
Does that mean that there's not going to be long training
cycles and it's continuously retraining or training
submodels, Sam?
And maybe you could just speak to us
about what might change architecturally going forward
with respect to large models.
Well, I mean, one thing that you could imagine
is just that you keep training a model.
That would seem like a reasonable thing to me. Do you think that...
And we talked about releasing it differently this time.
Are you thinking maybe releasing it to the paid users first or a slower rollout to get
the red teams tight since now there's so much at stake?
You have so many customers actually paying and you've got everybody watching everything
you do.
You know, you have to be more thoughtful now.
Yeah.
GPT-4 is still only available to the paid users.
But one of the things that we really want to do is figure out how to make more
advanced technology available to free users too.
I think that's a super important part of our mission.
And this idea that we build AI tools and make them super widely available,
free or not that expensive, whatever it is, so that people can use them to go kind of invent the future,
rather than the magic AGI in the sky inventing the future and showering it down upon us.
That seems like a much better path, it seems like a more inspiring path, I also think it's where things are actually heading.
So it makes me sad that we have not figured out how to make GPT-4-level technology available to free users.
That's something we really want to do.
It's just very expensive, I take it.
It's very expensive.
Yeah.
Chamath, your thoughts?
I think maybe the two big vectors, Sam, that people always talk about are the underlying cost, and the latency, which have kind of rate-limited a killer app.
And then I think the second is sort
of the long-term ability for people
to build in an open source world versus a closed source world.
And I think the crazy thing about this space
is that the open source community is rabid.
So one example that I think is incredible is, we had these guys do a pretty crazy demo of Devin, remember, even five or six weeks ago, that looked incredible. And then some kid just published it under an open MIT license, OpenDevin. And it's incredibly good, almost as good as that other thing that was closed source.
So maybe we can just start with that, which is, tell me about the business decision to keep these models closed source,
and where do you see things going in the next couple years?
So on the first part of your question, speed and cost, those are hugely important to us.
And I don't want to give a timeline on when we can bring them down a lot, because research is hard, but I am confident we'll be able to. We want to cut the latency super dramatically. We want to cut the cost really, really dramatically.
And I believe that will happen. We're still so early in the development of the science
and understanding how this works. Plus, we have all the engineering tailwinds.
So, I don't know when we get to intelligence too cheap to meter
and so fast that it feels instantaneous to us and everything else,
but I do believe we can get there for a pretty high level of intelligence.
It's important to us. It's clearly important to users, and it'll unlock a lot of stuff.
On the sort of open source, closed source thing, I think there's great roles for both.
I think we've open sourced some stuff, we'll open source more stuff in the future.
But really, our mission is to build towards AGI and to figure out how to broadly distribute
its benefits. We have a strategy for that.
Seems to be resonating with a lot of people.
It obviously isn't for everyone and there's like a big ecosystem and there will also be
open source models and people who build that way.
One area that I'm personally particularly interested in for open source is: I want an open source model that is as good as it can be that runs on my phone. And the world doesn't quite have the technology for a good version of that yet, I think.
But that seems like a really important thing to go do at some point.
Will you do that? Will you release it?
I don't know if we will or someone will, but someone will.
What about Lama 3?
Lama 3 running on a phone?
Well, I guess maybe there's a 7-billion-parameter version. Yeah. I don't know if that will fit on a phone or not.
It should be fittable on a phone.
But I'm not sure if that one is... I haven't played with it. I don't know if it's good enough to do the thing I'm thinking about here.
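For a rough sense of scale on the fits-on-a-phone question, here is a hedged back-of-envelope sketch. The parameter count, quantization level, and RAM figure are illustrative assumptions, not anything stated in the conversation:

```python
# Back-of-envelope: could a ~7B-parameter model fit in a phone's memory?
# Assumptions (illustrative): 4-bit weight quantization, ~8 GB of phone RAM.
params = 7e9                 # ~7 billion parameters
bytes_per_param = 0.5        # 4-bit quantization = half a byte per weight
weights_gb = params * bytes_per_param / 1e9
print(f"Quantized weights: ~{weights_gb:.1f} GB")   # ~3.5 GB

phone_ram_gb = 8             # assumed flagship-phone RAM
# Leave headroom for the OS, the app, and the KV cache during inference.
print("Plausibly fits:", weights_gb < phone_ram_gb * 0.6)
```

Under those assumptions the weights alone come to roughly 3.5 GB, which is why whether it "fits" depends heavily on quantization and the rest of the device's memory budget.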
So when Llama 3 got released, I think the big takeaway for a lot of people was, oh wow, they've caught up to GPT-4. I don't think it's equal in all dimensions, but it's pretty close, pretty much in the ballpark. I guess the question is, you guys released 4 a while ago, you're working on 5, or more upgrades to 4. I mean, to Chamath's point about Devin, how do you stay ahead of open source? That's just a very hard thing to do in general, right? How do you think about that?
What we're trying to do is not make the sort of smartest set of weights that we can.
What we're trying to make is like this useful intelligence layer for people to use.
And a model is part of that.
I think we will stay pretty far ahead of, I hope, we'll stay pretty far ahead of the
rest of the world on that.
But there's a lot of other work around the whole system that's not just, you know, the model weights.
And we'll have to build up enduring value the old fashioned
way like any other business does.
We'll have to figure out a great product and reasons to stick with it and you know, deliver
it at a great price.
When you founded the organization, the stated goal, or part of what you discussed, was, hey, this is too important for any one company to own it, so therefore it needs to be open. Then there was the switch: hey, it's too dangerous for anybody to be able to see it, and we need to lock this down, because you had some fear about that. Is that accurate? Because the cynical side is, well, this is a capitalistic move. I'm curious what the decision was here, in terms of going from open, the world needs to see this, it's really important, to closed, only we can see it.
Well, how did you come to that conclusion?
What were the discussions like?
Part of the reason that we released ChatGPT
was we want the world to see this,
and we've been trying to tell people
that AI is really important.
And if you go back to like October of 2022,
not that many people thought AI was gonna be that important
or that it was really happening.
And a huge part of what we try to do
is put the technology in the hands of people.
Now again, there's different ways to do that.
And I think there really is an important role to just say, here's the weights, have at it.
But the fact that we have so many people using
a free version of ChatGPT that we don't run ads on, that we don't try to make money on, we just put it out there because we want people to have these tools.
I think it's done a lot to provide a lot of value and, you know, teach people how to fish, but also to get the world really thoughtful about what's happening here.
Now, we still don't have all the answers, and we're fumbling our way through this like everybody else,
and I assume we'll change strategy many more times
as we learn new things.
When we started OpenAI, we had really no idea
about how things were going to go,
that we'd make a language model,
that we'd ever make a product.
We started off just, I remember very clearly
that first day where we're like,
well, now we're all here.
It was difficult to get this set up, but what happens now? Maybe we should write some papers. Maybe we should stand around a whiteboard. And we've just been trying to put one foot in front of the other and figure out what's next, and what's next, and what's next. And I think we'll keep doing that.
Can I just replay something and make sure I heard it right? I think what you were saying on the open source, closed source thing is, if I heard it right: all these models, independent of the business decisions you make, are going to become asymptotically accurate towards some amount of accuracy.
Not all, but let's just say there's four or five that are well capitalized enough: you guys, Meta, Google, Microsoft, whomever, right? So let's just say four or five, maybe one startup, and on the open web. And then, quickly, the accuracy or the value of these models will probably shift to these proprietary sources of training data that you could get that others can't, or
others can get that you can't. Is that how you see this thing evolving,
where the open web gets everybody to a certain threshold
and then it's just an arms race for data beyond that?
So I definitely don't think it'll be an arms race for data,
because when the models get smart enough at some point,
it shouldn't be about more data, at least not for training.
Data may matter for making it useful, though.
Look, the one thing that I have learned most throughout all of this is that it's hard to make confident statements a couple of years into the future about where this is all going to go, and so I don't want to try now. I will say that I expect lots of very capable models in the world. It feels to me like we just stumbled on a new fact of nature or science, or whatever you want to call it. I don't believe this literally, it's more of a spiritual point: intelligence is just this emergent property of matter, and that's like a rule of physics or something. So people are going to figure that out.
But there will be all these different ways to design the systems.
People will make different choices, figure out new ideas.
And I'm sure like any other industry, I would expect there to be multiple approaches and
different people like different ones.
Some people like iPhones, some people like an Android phone. I think there'll be some effect like that.
Let's go back to that first section of just the cost and the speed. All of you guys are
sort of a little bit rate-limited on literally Nvidia's throughput, right? And I think that you, and most everybody else, have effectively announced how much capacity you can get, just because it's as much as they can spin out. What needs to happen at the substrate so that you can actually compute cheaper, compute faster, and get access to more energy? How are you helping to frame out the industry solving those problems?
Well, we'll make huge algorithmic gains for sure, and I don't want to discount that. Chips and energy are very interesting too. But if we can make a same-quality model twice as efficient, that's like we had twice as much compute.
And I think there's a gigantic amount of work to be done
there.
And I hope we'll start really seeing those results.
Other than that, the whole supply chain is very complicated.
There's logic fab capacity, there's how much HBM the world can make,
there's how quickly you can get permits and pour the concrete,
make the data centers and then have people in there wiring them all up.
There's finding the energy, which is a huge bottleneck.
But I think when there's this much value to people,
the world will do its thing. We'll try to help it happen faster. And there's probably, I don't know how to give it a number, but there's some percentage chance where there is, as you were saying, a huge substrate breakthrough, and we have a massively more efficient way to do computing. But I don't bank on that or spend too much time thinking about it.
What about the device side? You mentioned models that can fit on a phone. So obviously, whether that's an LLM or some SLM or something, I'm sure you're thinking about that. But then does the device itself change? I mean, does it need to be as expensive as an iPhone?
I'm super interested in this.
I love great new form factors of computing.
And it feels like with every major technological advance,
a new thing becomes possible.
Phones are unbelievably good.
So I think the threshold is very high here. I personally think the iPhone is the greatest piece of technology humanity has ever made.
It's really a wonderful product.
What comes after it?
Like, I don't know.
I mean, that was what I was going to say. It's so good that to get beyond it...
I think the bar is quite high.
Well, you've been working with Jony Ive on something, right?
We've been discussing ideas, but I don't like if I knew...
Is it that it has to be more complicated, or actually just much, much cheaper and simpler?
Well, almost everyone's willing to pay for a phone anyway. So if you could make a way cheaper device, I think the barrier to carrying a second thing, or using a second thing, is pretty high.
So I don't think, given that we're all willing to pay for phones, or most of us are, I don't
think cheaper is the answer.
So different is the answer, then?
Would there be a specialized chip that would run on the phone that was really good at powering
a phone-size AI model?
Probably, but the phone manufacturers
are going to do that for sure.
That doesn't necessitate a new device.
I think you'd have to find some really different interaction
paradigm that the technology enables.
And if I knew what it was, I would
be excited to be working on it right now.
But it's-
Oh, you have voice working right now in the app. In fact, I set my action button on my phone to go directly to ChatGPT's voice app, and I use it with my kids, and they love talking to it. It's got latency issues, but it's brilliant.
We'll get that better.
And I think voice is a hint to whatever the next thing is. If you can get voice interaction to be really good, it feels, I think, like a different way to use a computer.
But again, like, we already...
Let's talk about that, by the way. Why is it not responsive? You know, it feels like a CB radio, like, over, over. It's really annoying to use in that way. But it's also brilliant when it gives you the right answer.
We are working on that. It's so clunky right now. It's slow, it kind of doesn't feel very smooth or authentic or organic. We'll get all of that to be much better.
What about computer vision?
I mean, they have glasses or maybe you could wear a pendant.
I mean, you take the combination of visual or video data,
combine it with voice,
and now AI knows everything that's happening around you.
Super powerful. The multimodality of saying, hey, ChatGPT, what am I looking at? Or, what kind of plant is this?
I can't quite tell.
That's obvious.
That's another hint, I think. But whether people want to wear glasses or hold up something when they want that... there's a bunch of societal, interpersonal issues here that are all very complicated about wearing a computer on your face.
We saw that with Google Glass. People got punched in the face in the Mission.
I forgot about that. So what are the apps that could be unlocked if AI was sort of ubiquitous on people's phones? Do you have a sense of that, or what would you want to see built?
I think what I want is just this always-on, super low-friction thing, where I can, either by voice or by text, or ideally some other way where it just kind of knows what I want, have this constant thing helping me throughout my day that's got as much context as possible. It's like the world's greatest assistant, and it's just this thing working to make me better and better. And when you hear people talk about the AI future, they imagine two different approaches, and they don't sound that different, but I think they're very different for how we'll design the system in practice. There's the, I want an extension of myself, I want a ghost or an alter ego, this thing that really is me, is acting on my behalf, is responding to emails, not even telling me about it. It sort of becomes more me, and is me. And then there's this other thing, which is,
I want a great senior employee.
It may get to know me very well, I may delegate it,
you know, you can like have access to my email
and I'll tell you the constraints,
but I think of it as this like separate entity.
And I personally like the separate entity approach better
and think that's where we're gonna head.
And so in that sense, the thing is not you,
but it's like a always available,
always great, super capable assistant executive.
It's an agent, in a way. It's out there working on your behalf, and it understands what you want and anticipates what you want, is what I'm reading into what you're saying.
I think there'd be agent-like behavior, but there's a difference between a senior employee and an agent.
Yeah.
And, you know, one of the things that I like about a senior employee is they'll push back on me. They will sometimes not do something I ask, or they sometimes will say, I can do that thing if you want, but if I do it, here's what I think would happen, and then this, and then that, and are you really sure? I definitely want that kind of vibe, not just this thing that I give a task and it blindly does.
It can reason.
Yeah.
Yeah, and push back.
It has the kind of relationship with me that I would expect out of a really competent person that I worked with, which is different from a sycophant.
Yeah.
In that world, where you have this Jarvis-like thing that can reason: what do you think it does to products that you use today where the interface is very valuable?
So for example, if you look at an Instacart,
or if you look at an Uber, or if you look at a DoorDash,
these are not services that are meant
to be pipes that are just providing a set of APIs
to a smart set of agents that ubiquitously work on behalf
of 8 billion people.
What do you think has to change in how we think about how apps
need to work, of how this entire infrastructure of experiences
need to work in a world where you're agentically interfacing
to the world?
I'm actually very interested in designing a world that
is equally usable by humans and by AIs.
So I like the interpretability of that. I like the smoothness of the handoffs. I like the ability that we can provide feedback, or whatever. So, you know, DoorDash could just expose some API to my future AI assistant, and they could go put the order in, and whatever. Or I could be holding my phone and say, okay, AI assistant, you put in this order on DoorDash, please. And I could watch the app open and see the thing clicking around, and I could say, hey, no, not this. There's something about designing a world that is usable equally well by humans and AIs that I think is an interesting concept.
And it can handle the handoffs between us.
Same reason I'm more excited about humanoid robots than robots of other shapes.
The world is very much designed for humans and I think we should absolutely keep it that
way.
And a shared interface is nice.
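What Sam sketches here, an app exposing an API that an assistant can invoke while a human watches and can intervene, maps closely to the tool-calling pattern in current LLM APIs. Below is a minimal, hedged sketch using the OpenAI Python SDK; the `place_food_order` tool, its parameters, and the delivery-service integration are hypothetical illustrations, not a real DoorDash API:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical tool schema a delivery app might expose to an assistant.
tools = [{
    "type": "function",
    "function": {
        "name": "place_food_order",
        "description": "Place a delivery order from a restaurant.",
        "parameters": {
            "type": "object",
            "properties": {
                "restaurant": {"type": "string"},
                "items": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["restaurant", "items"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",  # any tool-calling-capable model
    messages=[{"role": "user", "content": "Order me the usual sushi, please."}],
    tools=tools,
)

# If the model chose to call the tool, it returns a structured call; the app
# (or the user watching the screen) can confirm or reject it before anything
# is actually ordered.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
```

The design point in the conversation, a world equally usable by humans and AIs, shows up in the last step: the structured call can be rendered in the normal app UI so a human can watch the assistant "clicking around" and say no before it executes.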
So you see voice chat, that modality, kind of getting rid of apps? You just ask it for sushi. It knows the sushi you liked before, it knows what you don't like, and it does its best shot at doing it.
It's hard for me to imagine that we just go to a world
totally, where you say, hey, ChatGPT, order me sushi.
And it says, okay, do you want it from this restaurant?
What kind, what time, whatever.
I think visual user interfaces are super good for a lot of things.
And it's hard for me to imagine a world where you never look at a screen and just use voice mode only.
But I can imagine that for a lot of things.
I mean, Apple tried with Siri. Supposedly you can order an Uber automatically with Siri. I don't think anybody's ever done it, because why would you take the risk of not putting it in your phone yourself?
Well, the quality, to your point, the quality is not good,
but when the quality is good enough,
you'll actually prefer it just because it's just lighter weight.
You don't have to take your phone out.
You don't have to search for your app and press it.
Oh, it automatically logs you out. Oh, hold on, log back in. Oh, 2FA. It's a whole pain in the ass.
You know, setting a timer with Siri, I do every time, because it works really well and it's great, and I don't need more information.
But ordering an Uber, like I wanna see the prices
for a few different options.
I wanna see how far away it is.
I wanna see like maybe even where they are on the map
because I might walk somewhere.
I get a lot more information, I think, in less time by looking at the order-the-Uber screen than I would if I had to do that all through the audio channel.
I like your idea of watching it happen.
That's kind of cool.
I think there will just be different interfaces
we use for different tasks.
And I think that'll keep going.
Of all the developers that are building apps and experiences
on OpenAI, are there a few that stand out for you
where you're like, OK, this is directionally going
in a super interesting area, even if it's like a toy app.
But are there things that you guys point to and say,
this is really important?
I met with a new company this morning,
or barely even a company, it's like two people
that are going to work on a summer project, trying to actually finally make the AI tutor.
And I've always been interested in this space.
A lot of people have done great stuff on our platform.
But if someone can deliver it the way that you actually... they used a phrase I love, which is, this is going to be like a Montessori-level reinvention for how people learn things.
Wow, yeah.
But if you can find this new way to let people explore and learn in new ways on their own, I'm personally super excited about that.
A lot of the coding-related stuff. You mentioned Devin earlier; I think that's a super cool vision of the future.
Healthcare, I believe, should be pretty transformed by this. But the thing I'm personally most excited about is doing faster and better scientific discovery. GPT-4 is clearly not there in a big way, although maybe it accelerates things a little bit by making scientists more productive.
But AlphaFold 3, yeah.
But Sam, that will be a triumph.
Those models are trained and built differently than the language models. Obviously there's a lot that's similar, but there's kind of a ground-up architecture to a lot of these models that are being applied to specific problem sets, specific applications, like chemistry interaction modeling, for example.
You'll need some of that for sure. But the thing that I think we're missing, across the board for many of these things we've been talking about, is models that can do reasoning. And once you have reasoning, you can connect it to chemistry simulators or whatever else.
So I guess, yeah, the important question I wanted to talk about today was this idea of networks of models. People talk a lot about agents as if there's this linear set of function calls that happen, but one of the things that arises in biology is networks of systems with cross-interactions, where the aggregation of the network produces an output, rather than one thing calling another, and that thing calling another. Do we see an emergence in this architecture of either specialized models or networked models that work together to address bigger problem sets and use reasoning, where there are computational models that do things like chemistry or arithmetic, and other models that do other things, rather than one purely generalized model to rule them all?
I don't know.
I don't know how much reasoning is going to turn out to be a super generalizable thing. I suspect it will, but that's more just an intuition and a hope, and it would be nice if it worked out that way. I don't know if that's...
But let's walk through the protein modeling example.
There's a bunch of training data, images of proteins,
and then sequence data, and they build a model,
predictive model, and they have a set of processes
and steps for doing that.
Do you envision that there's this artificial general intelligence, or this great reasoning model, that then figures out how to build that sub-model, which figures out how to solve that problem by acquiring the necessary data and then resolving those solutions?
There are so many ways that could go. Maybe it trains a literal model for it, or maybe the one big model can just go pick what other training data it needs, ask a question, and then update on that.
I guess the real question is, are all these startups gonna die? Because so many startups are working in that modality, which is: go get special data, then train a new model on that special data from the ground up, and then it only does that one sort of thing, and it works really well at that one thing, better than anything else at that one thing.
There's a version of this I think you can already see.
When you were talking about biology and these complicated networks of systems, the reason I was smiling is, I got super sick recently. I'm mostly better now, but my body got beat up one system at a time. You can really tell, okay, it's this cascading thing. And that reminded me of you talking about biology: you have no idea how much these systems interact with each other until things start going wrong. That was sort of interesting to see. But I was using ChatGPT to try to figure out what was happening, whatever, and it would say, well, I'm unsure of this one thing. And then I just pasted a paper into the context, without even reading the paper. And it says, oh, that was the thing I was unsure of. Now I think this instead. So that was a small version of what you're talking about, where you can say, I don't know this thing, and you can put more information in. You don't retrain the model; you're just adding it to the context.
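What Sam describes, pasting a paper into the conversation instead of retraining, is ordinary in-context learning: the weights never change, the new evidence just rides along in the prompt. A minimal, hedged sketch with the OpenAI Python SDK; the file name, prompts, and model choice are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

paper_text = open("paper.txt").read()  # hypothetical: the paper pasted in

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "What could explain these symptoms?"},
        {"role": "assistant", "content": "I'm unsure about one thing: ..."},
        # No retraining: the new evidence simply goes into the context window,
        # and the model updates its answer for this conversation only.
        {"role": "user", "content": "Here is a paper that may resolve that:\n\n" + paper_text},
    ],
)
print(response.choices[0].message.content)
```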
So these models that are predicting protein structure, let's say, that's the whole basis, and now other molecules with AlphaFold 3. Is it basically a world where the best generalized model goes in and gets that training data and then figures it out on its own? And maybe you could use an example for us. Can you tell us about Sora, your video model that generates amazing moving video? What's different about the architecture there, whatever you're willing to share, on how that is different?
Yeah. On the general thing first: you clearly will need specialized simulators, connectors, pieces of data, whatever. But my intuition, and again, I don't have this backed up with science, my intuition is that if we can figure out the core of generalized reasoning, connecting that to new problem domains, in the same way that humans are generalized reasoners, would, I think, be doable.
It's like a faster unlock.
A faster unlock than... I think so.
But yeah, Sora does not start with a language model. That's a model that is customized to do video. And so we're clearly not at that world yet.
Right. So just as an example, for you guys to build a good video model, you built it from scratch using, I'm assuming, some different architecture and different data. But in the future, the generalized reasoning system, the AGI, whatever system, theoretically could render that by figuring out how to do it.
Yeah.
I mean, one example of this is, as far as I know, all the best text models in the world are still autoregressive models, and the best image and video models are diffusion models. And that's sort of strange in some sense.
Yeah.
Yeah.
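For readers who want the one-line version of the two families Sam contrasts: an autoregressive model factorizes the probability of a sequence token by token,

$$p_\theta(x) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t}),$$

while a diffusion model is trained to reverse a gradual noising process, for example via the standard denoising objective

$$\mathcal{L} = \mathbb{E}_{x_0,\,\epsilon,\,t}\Big[\big\lVert \epsilon - \epsilon_\theta\big(\sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon,\ t\big)\big\rVert^2\Big].$$

These are the textbook formulations, not a description of how Sora or GPT-4 is built internally.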
So there's a big debate about training data. You guys have been, I think, the most thoughtful of any company. You've got licensing deals now, FT, et cetera. And we've got to be gentle here because you're involved in a New York Times lawsuit. You weren't able to reach, I guess, an arrangement with them for training data.
How do you think about fairness in fair use? We've had big debates here on the pod.
Obviously, your actions speak volumes that you're trying to be fair by doing licensing deals. So,
what's your personal position on the rights of artists who create beautiful music, lyrics,
books, and you taking that and making a derivative product out of it and then monetizing it? What's fair here? And how do we get to a world where artists can make content and then decide what they want other people to do with it?
Yeah. And I'm just curious about your personal belief, because I know you to be a thoughtful person on this. And I know a lot of other people in our industry
are not very thoughtful about how they
think about content creators.
So I think it's very different for different kinds of,
I mean, look, on unfair use, I think
we have a very reasonable position under the current law.
But I think AI is so different that for things like art,
we'll need to think about them in different ways.
But let's say if you go read a bunch of math AI is so different that for things like art, we'll need to think about them in different ways.
But let's say if you go read a bunch of math on the internet and learn how to do math,
that I think seems unobjectionable to most people.
And then there's another set of people who might have a different opinion. Well, what if you... actually, let me not get into that, just in the interest of not making this answer too long. So I think there's one category where people are like, okay, there's generalized human knowledge. If you learn that, it's like open domain or something, if you go learn about the Pythagorean theorem.
That's one end of the spectrum.
And I think the other extreme end of the spectrum is art.
And maybe even more specifically, I would say it's a system generating art in the style or the likeness of another artist; that would be kind of the furthest end of that.
And then there's many, many cases on the spectrum in between.
I think the conversation has been historically very caught up on training data, but it will increasingly become more about what happens at inference time: as training data becomes less valuable, what the system does accessing information in context, in real time, what happens at inference time, will become more debated, and what the new economic model is there. So if you say, create me a song in the style of Taylor Swift, even if the model were never trained on any Taylor Swift songs at all, you can still have a problem, which is it may have read about Taylor Swift, it may know about her themes. Taylor Swift means something. And then the question is, should that model, even if it were never trained on any Taylor Swift song whatsoever, be allowed to do that? And if so, how should Taylor get paid?
Right.
So I think there's an opt-in, opt-out in that case, first of all, and then there's an
economic model. Staying on the music example, there
is something interesting to look at
from the historical perspective here, which is sampling
and how the economics around that work.
This is not quite the same thing,
but it's an interesting place to start looking.
Sam, let me just challenge that. What's the difference between the example you're giving, of the model learning about things like song structure, tempo, melody, harmony relationships, discovering all the underlying structure that makes music successful, and then building new music using training data, and what a human does who listens to lots of music, whose brain is processing and building all those same sorts of predictive models, or those same sorts of discoveries or understandings? What's the difference here? And why are you making the case that perhaps artists should be uniquely paid?
This is not a sampling situation.
The AI is not outputting, and it's not storing in the model, the actual original song. It's learning structure, right?
I wasn't trying to make that point,
because I agree, like in the same way
that humans are inspired by other humans.
I was saying, if you say, generate me a song
in the style of Taylor Swift.
I see, right, okay.
Where the prompt leverages some artists.
I think personally, that's a different case.
Would you be comfortable asking,
or would you be comfortable letting the model train
itself, a music model being trained on the whole corpus of music that humans have created
without royalties being paid to the artists that that music is being fed in and then you're not
allowed to ask you know artist specific prompts you could just say hey pay me a play me a really
cool pop song that's fairly modern about heartbreak with a female voice.
We have currently made the decision not to do music, partly because of exactly these questions of where you draw the lines. I was meeting with several musicians I really admire recently, just to talk about some of these edge cases. Even the world in which we went and, let's say, paid 10,000 musicians to create a bunch of music just to make a great training set, where the music model could learn everything about song structure and what makes a good, catchy beat and everything else, and only trained on that. Let's say we could still make a great music model, which maybe we could. I was kind of posing that as a thought experiment to musicians, and they were like, well, I can't object to that on any principled basis at that point. And yet there's still something I don't like about it.
Now, that's not a reason not to do it, necessarily. But did you see that ad that Apple put out, maybe it was yesterday or something, of squishing all of human creativity down into one really thin iPad?
What was your take on it?
People got really emotional about it, yeah.
Yeah.
Stronger reaction than you would think.
There's something about...
I'm obviously hugely positive on AI,
but there is something that I think is beautiful
about human creativity and human artistic expression.
And, you know, for an AI that just does better science,
like, great, bring that on.
But an AI that is going to do this, like,
deeply beautiful human creative expression,
I think we should figure out... it's going to happen. It's going to be a tool that will lead us to greater creative heights. But I think we should figure out how to do it in a way that preserves the spirit of what we all care about here.
And I think your actions speak loudly. We were trying to do Star Wars characters in DALL-E, and if you ask for Darth Vader, it says, hey, we can't do that. So you've, I guess, red-teamed it, or whatever you call it internally.
We try.
Yeah, you're not allowing people to use other people's IP.
So you've taken that decision. Now, if you asked it to make a Jedi bulldog or a Sith
Lord bulldog, which I did, it made my
bulldogs as Sith bulldogs. So there's an interesting question about your spectrum, right?
Yeah, you know, we put out this thing yesterday called the Model Spec, where we're trying to say, here's how our model is supposed to behave. And it's very hard, it's a long document, it's very hard to specify exactly, in each case, where the limit should be. And I view this as a discussion that's going to need a lot more input. But these sorts of questions about, okay, maybe it shouldn't generate Darth Vader, but the idea of a Sith Lord, or a Sith-style thing, or a Jedi, at this point that's part of the culture. These are all hard decisions.
Yeah.
And I think you're right.
The music industry is going to consider this opportunity
to make Taylor Swift songs their opportunity.
It's part of the four-part fair use test, you know: who gets to capitalize on new innovations for existing art.
And Disney has an argument that, hey, you know,
if you're going to make Sora versions of Ahsoka or whatever, Obi-Wan Kenobi, that's Disney's opportunity, and that's a great partnership for you to pursue.
I think this section I would label as AI and the law. So let me ask maybe a higher-level question.
What does it mean when people say regulate AI?
Totally.
Sam, what does that even mean?
And comment on California's new proposed regulations as well if you're up for it.
I'm concerned.
I mean, there are so many proposed regulations, but most of the ones I've seen on the California state side I'm concerned about. I also have a general fear of the states all doing this themselves. When people say regulate AI, I don't think they mean one thing. Some people are like, ban the whole thing. Some people are like, don't allow it to be open source; or, require it to be open source.
The thing that I am personally most interested in is... I think there will come... look, I may be wrong about this. I will acknowledge that this is a forward-looking statement, and those are always dangerous to make. But I think there will come a time, in the not-super-distant future, like, you know, we're not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm.
And for those kinds of systems, in the same way we have global oversight of nuclear weapons
or synthetic bio or things that can really have a very negative impact way beyond the
realm of one country, I would like to see some sort of international agency that is
looking at the most powerful systems and ensuring like reasonable safety testing.
You know, these things are not going to escape and recursively self-improve or whatever.
The criticism of this is that you have the resources to cozy up, to lobby, to be involved, and you've been very involved with politicians. And then startups, which you're also passionate about, are not going to have the resources to deal with this, and that this is regulatory capture, as per our friend Bill Gurley, who did a great talk last year about it. So maybe you could address that head-on.
Do you feel like...
You know, if the line were, we're only going to look at models that are trained on computers that cost more than 10 billion, or more than 100 billion, or whatever dollars, I'd be fine with that. There'd be some line that'd be fine. And I don't think that puts any regulatory burden on startups.
So if you have, like, the nuclear raw material to make a nuclear bomb, there's a small subset of people who have that. Therefore, you use the analogy of a nuclear inspector in that situation.
Yeah, I think that's interesting.
Sacks, you have a question?
Chamath, go ahead. You had a follow up.
Can I say one more thing about that?
Of course.
I'd be super nervous about regulatory overreach here.
I think we can get this wrong by doing way too much or even a little too much.
I think we can get this wrong by doing not enough.
But I do think, I mean, we have seen regulatory overstepping, or capture, just get super bad in other areas. And, you know, also maybe nothing will happen. But I think it is part of our duty and our mission to talk about what we believe is likely to happen and what it takes to get that right.
The challenge, Sam, is that we have statute that is meant to protect people, protect society
at large.
What we're creating, however, is statute that gives the government rights to go in and audit
code, to audit business trade secrets.
We've never seen that to this degree before.
Basically, the California legislation that's proposed, and some of the federal legislation that's been proposed, requires the federal government to audit a model, to audit software, to audit and review the parameters and the weightings of the model. And then you need their checkmark in order to deploy it for commercial or public use. And for me, it just feels like they're trying to rein this in out of fear. Because folks have a hard time understanding this and are scared about the implications of it, they want to control it. And the only way to control it is to say, give me a right to audit before you can release it.
Yeah, and they're clueless. These people are clueless. I mean, the way that this stuff is written, you read it and you're going to pull your hair out, because, as you know better than anyone, in 12 months, none of this stuff is going to make sense anyway.
Totally.
Right.
Look, the reason I have pushed for an agency-based approach for kind of the big-picture stuff, and not a write-it-in-law approach, is that in 12 months it will all be written wrong. And I don't think, even if these people were true world experts, I don't think they could get it right looking out at 12 or 24 months. And these policies, which are like, we're gonna audit all of your source code and look at all of your weights one by one... yeah, I think there are a lot of crazy proposals out there.
By the way, especially if the models are always being retrained all the time, if they become more dynamic.
Again, this is why I think... yeah. But, like, before an airplane gets certified, there's a set of safety tests. We put the airplane through it.
And it's different than reading all of your code.
That's reviewing the output of the model, not reviewing the insides of the model.
And so what I was going to say is,
that is the kind of thing that I think, as safety testing,
makes sense.
How are we going to get that to happen, Sam? And I'm not just speaking for OpenAI. I speak for the industry, for humanity, because I am concerned that we draw ourselves into almost a Dark Ages type of era by restricting the growth of these incredible technologies that humanity can prosper from so significantly. How do we change the sentiment and get that to happen? Because this is all moving so quickly at the government levels, and folks seem to be getting it wrong. And I'm personally concerned.
Just to build on that, Sam: the architectural decision, for example, that Llama took is pretty interesting, in that it's like, we're going to let Llama grow and be as unfettered as possible, and we have this other thing that we call Llama Guard that's meant to be these protective guardrails. Is that how you see the problem being solved correctly?
At the current strength of models, definitely some things are going to go wrong, and I don't want to make light of those or not take those seriously. But I don't have any catastrophic risk worries with a GPT-4-level model, and I think there are many safe ways to choose to deploy this. Maybe we'd find more common ground if we talked about the specific example of models that are technically capable, even if they're not going to be used this way, of recursive self-improvement, or of autonomously designing and deploying a bioweapon, or something like that.
Or a new model.
That was the recursive self-improvement point. You know, we should have safety testing on the outputs, at an international level, for models that have a reasonable chance of posing a threat there. I don't think GPT-4, of course... well, I don't want to say "any sort," because we don't... yeah, I don't think GPT-4 poses a material threat on those kinds of things. And I think there are many safe ways to release a model like this. But, you know, when significant loss of human life is a serious possibility, like airplanes, or any number of other examples, I think we're happy to have some sort of testing framework.
Like, I don't think about an airplane when I get on it.
I just assume it's going to be safe.
Right.
There's a lot of hand-wringing right now, Sam, about jobs. And I think you did some sort of a test when you were at YC about UBI, and you've been...
Our results on that come out very soon. It was a five-year study that wrapped up, or started five years ago.
Well, there was a beta study first, and that was a long one that ran.
But what did you learn about that?
Yeah, why'd you start it?
Maybe just explain UBI and why you started it.
So we started thinking about this in 2016, kind of about the same time we started taking AI really seriously.
And the theory was that the magnitude of the change that may come to society and jobs and the economy,
and sort of in some deeper sense than that, like what the social contract looks like, meant that
we should have many studies to study many ideas about new ways to arrange that.
I also think that, you know, I'm not a super fan of how the government has handled most policies designed to help poor people. And I kind of believe that if you could just give people money, they would make good decisions and the market would do its thing. You know, I'm very much in favor of lifting up the floor and reducing, eliminating, poverty. But I'm interested in better ways to do that than what we have tried with the existing social safety net and the way things have been handled.
And I think giving people money is not going to go solve all problems. It's certainly not going to
make people happy. But it might solve some problems and it might give people a better horizon with
which to help themselves. And I'm interested in that. I think that now that we see some of the ways,
so 2016 was a very long time ago.
Now that we see some of the ways that AI is developing,
I wonder if there's better things
to do than the traditional conceptualization of UBI.
Like, I wonder if the future looks something more like universal basic compute than universal basic income, where everybody gets a slice of GPT-7's compute. They can use it, they can resell it, they can donate it to somebody to use for cancer research. But what you get is not dollars but this slice.
Yeah, you own part of the productivity.
Right. I would like to shift to the gossip part of this.
Okay. Gossip? What gossip?
Let's go back to November. What the flying...
You know, if you have specific questions, I'm happy to. Maybe I'll say, maybe I won't.
You said you were going to talk about it at some point. So here's the point. What happened?
You were fired. You came back. It was palace intrigue. Did
somebody stab you in the back? Did you find AGI? What's going
on? Tell us. This is a safe space.
I was fired. I talked about coming back. I kind of was a little bit unsure at the moment about what I wanted to do, because I was very upset. And I realized that I really loved OpenAI and the people, and that I would come back, and I knew it was going to be hard. It was even harder than I thought, but I kind of was like, all right, fine. I agreed to come back. The board took a while to figure things out. And then we were kind of trying to keep the team together and keep doing things for our customers, and started making other plans. Then the board decided to hire a different interim CEO.
And then everybody... there are many people...
Oh my gosh, what was that guy's name? He was there for, like, a Scaramucci, right?
I have nothing but good things to say about him.
And then where were you when you found out the news that you'd been fired?
Well, I was in Vegas. I was in a hotel room in Vegas for F1 weekend.
And that's how you found out, you got a text? And they're like, what did it say? You're fired, pick up?
I think that's happened to you before, J-Cal.
I'm trying to think if I ever got fired. I don't think I've gotten fired.
Yeah, I got a text.
It's just a weird thing. Like, a text from who?
Actually, no, I got a text the night before, and then I got on a phone call with the board. And then that was that.
And then, I mean, everything went crazy. My phone was unusable. It was just a nonstop vibrating thing of text messages and calls, basically.
You got fired by tweet.
That happened a few times during the Trump administration.
A few cabinet appointments.
They did call me first before tweeting.
Which was nice of them.
And then, you know, I did a few hours of just this absolute fugue state in the hotel room, trying to... I was just confused beyond belief, trying to figure out what to do.
So weird.
And then I flew home, maybe got on a plane at, I don't know, 3 p.m. or something like that, still just crazy, nonstop, phone blowing up. I met up with some people in person, and by that evening I was like, okay, you know, I'll just go do AGI research, and I was feeling pretty happy about the future.
Yeah, you have options.
And then the next morning, I had this call with a couple of board members about coming back, and that led to a few more days of craziness. And then I think it got resolved. Well, there was a lot of insanity in between.
What percent of it was because of these nonprofit board members?
Well, we only have a nonprofit board, so it was all the nonprofit board members. The board had gotten down to six people. They removed Greg from the board, and then fired me.
But I mean, was there a culture clash between the people on the board who had only nonprofit experience versus the people who had startup experience? And maybe you can share a little bit, if you're willing to, about the motivation behind the action. Anything you can.
I think there have always been culture clashes at... Look, obviously not all of those board members are my favorite people in the world, but I have serious respect for the gravity with which they treat AGI and the importance of getting AI safety right. And even if I stringently disagree with their decision-making and actions, which I do, I have never once doubted their integrity or commitment to the sort of shared mission of safe and beneficial AGI. Do I think they made good decisions in the process, or kind of know how to balance all of the things OpenAI has to get right? No. But I do think they were sincere about the magnitude of AGI and getting that right.
Actually, let me ask you about that. So the mission of OpenAI is explicitly to create AGI, which I think is really interesting. A lot of people would say that if we create AGI, that would be like an unintended consequence of something gone horribly wrong, and they're very afraid of that outcome. But OpenAI makes that the actual mission. Does that create more fear about what you're doing? I understand it can create motivation too, but how do you reconcile that? I guess, why is that the mission?
Well, first I'll say, I'll answer the first question, then the second one. I think it does create a great deal of fear. I think a lot of the world is understandably very afraid of AGI, or very afraid of even current AI, and very excited about it, and even more afraid and even more excited about where it's going. And we wrestle with that. But I think it is unavoidable that this is going to happen. I also think it's going to be tremendously beneficial. But we do have to navigate how to get there in a reasonable way. A lot of stuff is going to change, and change is pretty uncomfortable for people. So there's a lot of pieces that we've got to get right.
Can I ask a different question? You have created, I mean, it's the hottest company, and you are literally at the center of the center of the center. But it's so unique in the sense that you eschewed all of this value economically. Can you just walk us through...
Yeah, I wish I had taken equity, so I never had to answer this question. If I could go back in time...
Why doesn't the board just give you a big option grant now, like you deserve?
Yeah, give you five points.
What was the decision back then? Why was that so important?
The decision back then, the original reason, was just the structure of our nonprofit. There was something about, yeah, okay, this is nice from a motivations perspective, but mostly it was that our board needed to be a majority of disinterested directors. And I was like, that's fine, I don't need equity right now.
But in a weird way, now that you're running a company, it creates these weird questions of, well, what's your real motivation?
One thing I have noticed is that it is so deeply unimaginable to people to say, I don't really need more money. I think people assume it's a little bit of an ulterior motive.
Yeah, it assumes, what else is he doing on the side to make money?
If I were just trying to say, I'm going to make a trillion dollars with OpenAI, I think everybody would have an easier time, and it would save me a lot of conspiracy theories.
So this is totally the back channel. You are a great dealmaker. I've watched your whole career. I mean, just great at it. You've got all these connections. You're really good at raising money. You're fantastic at it. And you've got this Jony Ive thing going, you're in Humane, you're investing in companies, you've got the Orb, raising $7 trillion to build fabs, all this stuff.
J. Cal loves fake news. He loves fake news.
I'm being a little facetious here. Obviously it's not raising $7 trillion; maybe that's the market cap of something. Putting all that aside.
The tea was: you're doing all these deals, and they don't trust you. What's your motivation? Are you end-running them? Which opportunities belong inside of OpenAI, and which should be Sam's? And this group of nonprofit people didn't trust you. Is that what happened?
So things like device companies, or if we were doing some chip fab company, those are not Sam projects. Those would be OpenAI's. OpenAI would get that equity.
Okay, that's not the public's perception.
Well, that's not the perception of people like you who have to commentate on this stuff all day, which is fair, because we haven't announced it, because it's not done. I don't think most people in the world are thinking about this, but I agree it spins up a lot of conspiracy theories among tech commentators.
Yeah, in a vacuum.
And if I could go back, I would just say, let me take equity and make that super clear. I'd still be doing it, because I really care about AGI and think this is the most interesting work in the world, but it would at least type-check to everybody.
What's the chip project, the $7 trillion thing? Where did the $7 trillion number come from? It makes no sense.
I don't know where that came from. Actually, I genuinely don't. I think the world needs a lot more AI infrastructure, a lot more than it's currently planning to build, and with a different cost structure. The exact way for us to play there, we're still trying to figure out.
What's your preferred model of organizing OpenAI? Is it the move-fast, break-things, highly distributed small teams, or is it more of an organized effort where you need to plan, because you want to prevent some of these edge cases?
Oh, I have to go in a minute. It's not to prevent the edge cases that we need to be more organized; it's that these systems are so complicated, and concentrating bets is so important. At the time, before it was obvious to do this, you had DeepMind or whoever with all these different teams doing all these different things, spreading their bets out. And you had OpenAI say, we're going to basically put the whole company together to work on GPT-4. That was unimaginable as a way to run an AI research lab, but it is, I think, what works. At minimum, it's what works for us. So not because we're trying to prevent edge cases, but because we want to concentrate resources and do these big, hard, complicated things, we do have a lot of coordination on what we work on.
All right, Sam, I know you've got to go. You've been great for the hour. Come back any time.
Great talking to you guys.
It's wonderful. Thanks for being so open about it. We've been talking about having you on for a year-plus. I'm really happy it finally happened.
Yeah, it's awesome. I really appreciate it. I would love to come back on after our next major launch, and I'll be able to talk more directly about what's going on.
You've got the Zoom link. Same Zoom link, same time, every week. Just drop in anytime.
Just put it on your calendar.
Come back to the game.
Come back to the game.
Yeah, come back to the game.
I, you know, I would love to play poker.
It has been forever.
That would be a lot of fun.
Come on in.
Send me an invite.
Send me an invite.
That famous hand where, Sam, you and I were heads up, and you...
I don't remember this.
You and I were heads up, and you went all in. I had a set, but there was a straight and a flush possible on the board, and I'm in the tank trying to figure out if I want to lose this pot. We played small stakes; it might have been like a 5K pot or something. And Chamath can't stay out of the pot, and he starts taunting the two of us: you should call, you shouldn't call, he's bluffing. And I'm trying to figure out if I make the call here. I make the call. And you had a really good hand; I just happened to have a set. I think you had top pair, top kicker or something. But you made a great move, because the board was so textured it could almost have been bottom set.
Sam has a great style of playing, which I would call random jam.
Totally. You've got to just get out of the way. I don't know if you can say that about anybody else. I'm not going to.
You haven't seen Chamath play in the last 18 months. It's a lot different.
I've come back to the game. I'm so much fun now.
Have you played bomb pots before? Have you played bomb pots in this game?
I don't know what that is.
This game is nuts. It's PLO, double boards.
And congrats on everything, honestly.
Thank you, Chamath.
Thanks for coming on.
Thanks, guys.
And we'll have you back when the next big launch happens.
Sounds good.
Yeah, please do.
Cool, bye.
Gentlemen, some breaking news here. All those projects, he said, are part of OpenAI. That's something people didn't know before this, and there was a lot of confusion there. Chamath, what was your major takeaway from our hour with Sam?
I think that these guys are going to be one of the four major companies that matter in this whole space. I think that's clear. What's still unclear is where the economics are going to be. He said something subtle that I thought was important, which is, my interpretation is, these models will roughly all be the same, but there's going to be a lot of scaffolding around these models that actually allows you to build these apps. So in many ways it's like the open-source movement: even if the model itself is never open source, it doesn't much matter, because you have to pay for the infrastructure. There's a lot of open-source software that runs on Amazon; you still pay AWS something. So I think the right way to think about this now is the models will basically all be really good, and then it's all this other stuff that you'll have to pay for.
The interface. Whoever builds all this other stuff is going to be in a position to build a really good business.
Friedberg, he talked a lot about reasoning. It seemed like he kept going to reasoning and away from the language model. Did you note that, and anything else, in our hour with him?
Yeah, I mean, that's a longer conversation, because there is a lot of talk about language models eventually evolving to be so generalizable that they can resolve pretty much all intelligent function, and so the language model is the foundational model that yields AGI. But there are a lot of people with different schools of thought on this and how far it goes.
My other takeaway, I think, is what he also seemed to indicate: we're all so enraptured by LLMs, but there are so many things other than LLMs being baked and rolled out by him and by other groups. And I think we have to pay some amount of attention to all of those, because, and Friedberg, I think you tried to go there in your question, that's probably where reasoning will really come from: this mixture-of-experts approach. You're going to have to think multi-dimensionally to reason, right? We do that. Do I cross the street or not at this point in time? You reason based on all these multiple inputs. There are all these little systems that go into making that decision in your brain. And if you use that as a simple example, there's all this stuff that has to go into making some experience able to reason intelligently.
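Chamath's street-crossing example is essentially a gating problem: a router decides which specialized subsystems to consult, then blends their outputs. Purely as an illustration of that idea, and not a claim about how any OpenAI or Google model is actually built, here is a minimal mixture-of-experts sketch; every name and dimension in it is made up:

```python
# Minimal mixture-of-experts sketch (illustrative only): a router scores
# each expert for a given input, and the answer is the weighted blend of
# the top experts' outputs. Experts here are random linear maps standing
# in for specialized subnetworks (vision, language, planning, etc.).
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, DIM = 4, 8
experts = [rng.normal(size=(DIM, DIM)) for _ in range(N_EXPERTS)]
router = rng.normal(size=(DIM, N_EXPERTS))  # gating network weights

def moe_forward(x: np.ndarray, top_k: int = 2) -> np.ndarray:
    scores = x @ router                   # router score per expert
    top = np.argsort(scores)[-top_k:]     # route to the top-k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()              # softmax over the chosen experts
    # Blend the chosen experts' outputs by their router weights.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.normal(size=DIM))
print(y.shape)  # (8,)
```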
Sacks, you went right there with the corporate structure, the board, and he gave us a lot more information here. What are your thoughts on the, hey, the chip stuff and the other stuff I'm working on, that's all part of OpenAI, people just don't realize it? And then your questions to him about equity, your thoughts?
I'm not sure I was the main guy who asked that question, J. Cal, but...
Well, no, you did talk about the nonprofit, the difference between the nonprofit...
I had a follow-up question about that, that's what I'm talking about. There clearly was some sort of culture clash on the board between the people who originated from the nonprofit world and the people who came from the startup world.
And the tech side.
We don't really know more than that, but there clearly was some sort of culture clash.
I thought a couple of the other areas he drew attention to were kind of interesting. He clearly thinks there's a big opportunity on mobile that goes beyond just having a ChatGPT app on your phone, or maybe even having a Siri on your phone. There's clearly something bigger there. He doesn't know exactly what it is, but it's going to require more inputs. It's that personal assistant that's seen everything around you and is helping you.
I think that's a great insight, David, because he was talking about, hey, I'm looking for a senior team member who can push back on me and understands all context. I thought that was a very interesting way to think about it.
Yeah, he's talking about an executive assistant, or an assistant that has executive function, as opposed to being just an alter ego for you, or what he called a sycophant. That's kind of interesting.
I thought that was interesting, yeah.
And clearly he thinks there's a big opportunity in biology and scientific discovery.
After the break, I think we should talk about AlphaFold 3.
It was just announced today.
Yeah, let's do that.
And we can talk about the Apple ad in depth. I just want to make sure people understand: when people come on the pod, we don't show them questions, they don't edit the transcript, and nothing is out of bounds. If you were wondering why we didn't ask about the Elon lawsuit, he's just not going to be able to comment on that, so it would be a no comment. That's why I didn't go to it. Our time was limited, and there are a lot of questions we could have asked him that would have just been a waste of time, and frankly, he's already been asked them. So I just want to make sure people understand.
Yeah, of course he's going to no-comment on any lawsuit, and he's already been asked about that 500 times.
All right. Should we take a quick break before we come back?
Yeah, take a bio break, and then we'll come back with some news for you and some more banter with your favorite besties on the number one podcast in the world.
All right, welcome back, everybody. Second half of the show. Great guest, Sam Altman, coming on the pod. We've got a bunch of news on the docket, so let's get started. Friedberg, you told me I could give some names of the guests that we booked for the All-In Summit.
I did not.
You did. You've said each week that I get to announce some names.
I did not. I appreciate your interest in the All-In Summit's lineup, but we do not yet have enough critical mass to feel like we should go out there.
Well, I am a loose cannon, and I created the summit and you took it from me, so I've done a great job. I will announce my two guests. I don't care what your opinion is. I have booked two guests for the summit, and it's going to be sold out. Look at these two guests I booked. For the third time, coming back to the summit, our guy Elon Musk will be there, hopefully in person; if not, from 40,000 feet on a Starlink connection, wherever he is in the world. And for the first time, our friend Mark Cuban will be coming. So two great guests for you to look forward to. But Friedberg's got like a thousand guests coming. He'll tell you when it's like 48 hours before the conference. But yeah, two great guests coming.
Wait, speaking of billionaires who are coming, isn't *** coming too?
Yes, *** is coming. Yes, he's booked.
So we have three billionaires.
Three billionaires, yes.
*** hasn't fully confirmed, so don't...
Okay, well, we're going to say it anyway. *** has penciled in.
Don't say it.
We'll say penciled in, and that's it.
Yeah, don't back out.
This is going to be catnip for all these protest organizers. Like, if you had to pick one place...
Oh God, do not poke the bear.
Well, by the way, speaking of updates, what did you guys think of the bottle for the All-In Tequila?
Oh, beautiful.
Honestly, I will just say, I think you are doing a marvelous job. I was shocked at the design. Shocked meaning it is so unique and high quality. I think it's amazing. It would make me drink tequila.
You're going to want to.
Gotcha.
It is stunning. Just congratulations.
And yeah, when we went through the deck at the monthly meeting, it was like, oh, that's nice, oh, that's nice, as we went through the concept bottles. And then that bottle came up and everybody went crazy. It was like Steph Curry hitting a half-court shot. It was like, oh my God. It was just so clear that you've made an iconic bottle, and if we can produce it, oh lord, it is going to be a game changer.
Looks like we can.
Oh, is it going to be good?
Yeah. It's going to be amazing.
I'm excited. I'm excited for it.
I mean, the bottle design is so complicated that we had to do a feasibility analysis on whether it was actually manufacturable, but it is. So at least the early reports are good. Hopefully we'll have some made in time for the All-In Summit.
I mean, why not? It's great. When we get barricaded in by all these protesters, we can drink the tequila.
Did you guys see Peter Thiel? Peter Thiel got barricaded in by these ding-dongs at Cambridge? My God.
My God.
Listen, people have the right to protest.
I think it's great people are protesting,
but surrounding people and threatening them
is a little bit over the top and dangerous.
I think you're exaggerating what happened.
Well, I don't know exactly what happened. All we see is these videos.
Look, they're not threatening anybody. And I don't even think they tried to barricade him in. They were just outside the building, and because they were blocking the driveway, his car couldn't leave. But he wasn't physically locked in the building or something.
That's what the headlines say, but that could be fake news, fake social.
This was not on my bingo card. This pro-protest support by Sacks was not on the bingo card, I've got to say.
The Constitution of the United States, in the First Amendment, provides for the right of assembly, which includes protests, as long as they're peaceable. Now, obviously, if they go too far and they vandalize or break into buildings or use violence, then that's not peaceable. However, expressing sentiments with which you disagree does not make it violent. And there's all these people out there now making the argument that if you hear something from a protester that you don't like, and you subjectively experience that as a threat to your safety, then that somehow should be treated as valid, like it's basically violence. Well, that's not what the Constitution says.
And these people understood well, just a few months ago, that that was basically snowflakery. You know what I'm saying? We're getting all these great words. We have the rise of the woke right now, where they're buying...
The woke right.
Yeah, the woke right. They're buying into this idea of safetyism, which is that being exposed to ideas you don't like, to protests you don't like, is a threat to your safety. No, it's not. So now we have snowflakes on the other side.
We absolutely have snowflakery on both sides now. It's ridiculous.
The only thing I will say, from what I've seen, is this surrounding of individuals who you don't want there, locking them in a circle, and then moving them out of the protest area.
That's not cool.
Yeah, obviously you can't do that.
But look, I think that most of the protests
on most of the campuses have not crossed the line.
They've just occupied the lawns of these campuses.
And look, I've seen some troublemakers
try to barge through the encampments
and claim that because they can't go through there,
that somehow they're being prevented from going to class.
Look, you just walk around the lawn and you can get to class.
Okay?
And you know, some of these videos are showing that these are effectively right-wing provocateurs
who are engaging in left-wing tactics.
And I don't support it either way.
Okay, by the way, some of these camps are some of the funniest things you've ever seen. There's like one tent that's dedicated to being a reading room, and you go in there and there's these...
Mindfulness center.
Oh my God, it's unbelievably hilarious.
Look, there's no question that because the protests are originating on the left, there are some goofy views. You're dealing with a left-wing idea complex, right? And it's easy to make fun of them doing different things. But the fact of the matter is that most of the protests on most of these campuses, even though they can be annoying because they're occupying part of the lawn, are not violent. And the way they're being cracked down on, sending the police in at 5 a.m. to clear these encampments with batons and riot gear, I find that part to be completely excessive.
Well, it's also dangerous, because things can escalate when you have mobs of people. I just want to make sure people understand that with large groups of people who are passionate about things, there's a diffusion of responsibility that occurs, and people can get hurt. People have gotten killed at these things. So keep it calm, everybody.
I agree with you. What's the harm of these folks protesting on a lawn? It's not a big deal. When they break into buildings, of course, that crosses the line, obviously.
Yeah. But let them sit out there, and then they'll run down their campus food cards, and they'll run out of waffles.
Did you guys see the clip? I think it was on the University of Washington campus, where one kid challenged this Antifa guy to a push-up contest.
Oh, fantastic.
I mean, it is some of the funniest stuff. Some great content is coming out. My favorite was the woman who came out and said that the Columbia students needed humanitarian aid. Oh my god, the overdubs on her were hilarious. Humanitarian aid! It's like, we need our DoorDash right now. We need somebody to double-dash some boba and we can't get it through the police. We need our boba! The low-sugar boba with the popping boba bubbles wasn't getting in. But, you know, people have the right to protest.
And peaceable, by the way, there's a word I've never heard. Very good, Sacks. Peaceable: inclined to avoid argument or violent conflict. Very nice.
Well, it's in the Constitution. It's in the First Amendment.
Is it really? I haven't heard the word peaceable before. I mean, you and I are simpatico on this.
We used to have the ACLU backing up the KKK marching down Main Street, really fighting for the right to assemble.
Yeah, they were really fighting for it. And I have to say, the Overton window has opened back up, and I think it's great.
All right, we've got some things on the docket here. I don't know if you guys saw Apple's new iPad ad. It's getting a bunch of criticism. They use a giant hydraulic press to crush a bunch of creative tools: a DJ turntable, a trumpet, a piano. People really care about Apple's ads and what they represent. We talked about that Mother Earth vignette they created. What do you think, Friedberg? Did you see the ad? What was your reaction to it?
It made me sad, and it did not make me want to buy an iPad.
So it did make you sad. It actually elicited an emotion. It's very rare that commercials can actually do that; most people just zone out.
Yeah, they took all this beautiful stuff and hurt it. It didn't feel good. It just didn't seem like a good ad. I don't know why they did that. I don't get it.
I think maybe what they're trying to do is, the selling point of this new iPad is that it's the thinnest one. I mean, there's no innovation left, so they're just making the devices thinner. So I think the idea was to use this hydraulic press to represent how ridiculously thin the new iPad is. Now, I don't know if the point was to smush all of that good stuff into the iPad; I don't know if that's what they were trying to convey. But by destroying all those creative tools that Apple is supposed to represent, it definitely seemed very off-brand for them. And I think people were reacting to the fact that it was so different from what they would have done in the past. And of course, everyone was saying, well, Steve would never have done this.
I do think it landed wrong. I didn't care that much, but I was kind of asking the question: why are they destroying all these creator tools that they're renowned for creating, or for turning into the digital version? Yeah, it just didn't land.
I mean, Chamath, how are you doing emotionally after seeing that? Are you okay, buddy?
Yeah. Did you guys see that at the Berkshire annual meeting last weekend, Tim Cook was in the audience, and Buffett was very laudatory: this is an incredible company. But he's so clever with words. He's like, you know, this is an incredible business that we will hold forever, most likely. And it turns out that he sold $20 billion worth of Apple shares. Which, by the way, if you guys remember, we put that little chart up showing that when he doesn't mention it in the annual letter, it's basically foreshadowing the fact that he is just pounding the sell button. And he sold $20 billion.
Well, also, holding it forever could mean one share.
Yeah, exactly. We kind of need to know how much we're talking about. I mean, it's an incredible business that has so much money with nothing to do. They're probably just going to buy back the stock, just a total waste.
There were rumors floating of buying Rivian after they shut down the Titan project, their internal project to make a car. It seems like a car is the only thing people can think of that would move the needle in terms of earnings.
I think the problem is, J. Cal, you kind of become afraid of your own shadow. The folks that are really good at M&A, look at Benioff: the thing with Benioff's M&A strategy is that he's been doing it for 20 years. He cut his teeth on small acquisitions, and the market learned to give him trust, so that when he proposes the $27 billion Slack acquisition, he's allowed to do that. Another guy, Nikesh Arora at Palo Alto Networks: these last five years, people were very skeptical that he could actually roll up security, because it was a super fragmented market. He's gotten permission. Then there are companies like Danaher that buy hundreds of companies. All these folks are examples of: you start small and you earn the right to do more. Apple hasn't bought anything bigger than $50 or $100 million. So the idea that all of a sudden they come out of the blue and buy a $10 or $20 billion company just doesn't stand up to logic. It's just not possible for them, because they'll be so afraid of their own shadow. That's the big problem. It's themselves.
Well, if you're running out of in-house innovation and you can't do M&A, then your options are kind of limited. I do think that the fact that the big news out of Apple is the iPad getting thinner does represent kind of the end of the road in terms of innovation. It's kind of like when they added the third camera to the iPhone.
Yeah. It reminds me of, remember when the Gillette Mach 3 came out? And then the five. The Onion did the best thing: F everything, we're doing five blades. But then Gillette actually came out with five blades. So the parody became the reality.
What are they going to do, add two more cameras to the iPhone, so you have five cameras on it? No, it makes no sense.
And then, I don't know, remember the Apple Vision was going to...
Plus, why are they body-shaming the fat iPads?
That's a fair point.
Actually, you know what? This didn't come out yet, but it turns out the iPad is on Ozempic. It's actually dropped a lot of weight.
That would have been a funnier ad.
Yeah, exactly. O-O-Ozempic.
We can just workshop that right here.
But there was another funny one, which was making the iPhone smaller and smaller and smaller, and the iPod smaller and smaller and smaller, to the point it was like a thumb-sized iPhone.
Like the Ben Stiller phone in Zoolander.
Correct. That was a great scene.
Is there a category you can think of where you would love to see an Apple product, a product in your life that you would love to have Apple's version of?
Yes, and they killed it. I think a lot of people would be very open-minded to an Apple car. They just would. A car is increasingly a connected internet device. And they managed to flub it. They had a chance to buy Tesla, and they managed to flub it.
Yeah.
Right?
There are just too many examples here
where these guys have so much money
and not enough ideas.
That's a shame.
It's a bummer, yeah.
The one I always wanted to see them do, Sacks, was...
TV.
The one I always wanted to see them do was the TV. And they were supposedly working on it, the actual TV, not the little Apple TV box in the back. That would have been extraordinary, to actually have a gorgeous, big television.
What about a gaming console?
They could have done that. There's just all these things that they could have done. It's not a lack of imagination, because these aren't exactly world-beating ideas; they're sitting right in front of your face. It's just the will to do it.
Yeah, the all-in-one TV would have been good.
If you think back on Apple's product lineup over the years, where they've really created value is in how unique the products are. They almost create new categories. Sure, there may have been a quote-unquote tablet computer prior to the iPad, but the iPad really defined the tablet computer era. Sure, there was a smartphone or two before the iPhone came along, but it really defined the smartphone. And sure, there was a computer before the Apple II, but then it came along and defined the personal computer. In all these cases, Apple strives to define the category. And it's very hard to define a television, if you think about it, or a gaming console, in a way where you take a step up and say, this is the new thing, this is the new platform. So that's the lens I would look through if I'm Apple: can I redefine a car? We're all trying to fit them into an existing product bucket, but what they've always been so good at is identifying consumer needs and then creating an entirely new way of addressing that need, in a real step-change function. The iPod was so different from any MP3 player ever.
I think the reason why the car could have been completely reimagined by Apple is that they have a level of credibility and trust that probably no other company has, and absolutely no other tech company has. And we talked about this, but I think this was the third Steve Jobs story, the one that I left out. In 2000 or, I don't know, was it 2001, I launched a 99-cent download store. I think I've told you this story, when I was at Winamp. And Steve Jobs just ran total circles around us, but the reason he was able to is that he had all the credibility to go to the labels and get deals done for licensing music that nobody could get done before. I think that's an example of what Apple is able to do, which is to use their political capital to change the rules. So if the thing that we would all want is safer roads and autonomous vehicles, there are regions in every town and city that could be completely converted to Level 5 autonomous zones. If I had to pick one company that had the credibility to go and change those rules, it's them, because they could demonstrate that there was a methodical, safe approach to doing something. So the point is that even in these categories that could be totally reimagined, it's not for a lack of imagination. Again, it just goes back to a complete lack of will. And I understand, because if you had $200 billion of capital on your balance sheet, it's probably pretty easy to get fat and lazy.
Yeah, it is. And they want to have everything built there. People don't remember, but they actually built one of the first digital cameras. You must have owned this, right, Friedberg? You're like, I have to have this.
Yeah, totally. It was beautiful.
What did they call it? Was it the iCamera or something?
The QuickTake.
QuickTake, yeah.
The thing I would like to see Apple build, and I'm surprised they didn't, was a smart home system, the way Google has Nest, a Dropcam, a door lock, an AV system. Go after Crestron or whatever, and just have your whole home automated. All of that would be brilliant by Apple. And right now I'm an Apple family that has all of our home automation through Google, so it just kind of sucks. I would like that all to be integrated.
Actually, that would be pretty amazing. If they did a Crestron or Savant, then when you go to your Apple TV, all your cameras just work. You don't need to...
Yes. I mean, everybody has a home, and everybody automates their home.
Everyone has an Apple TV at this point, so you just make the Apple TV the brain for the home system.
Right. That would be your home. And you can connect your phone to it. Yes, that would be very nice.
Can you imagine the Ring cameras, all that stuff, being integrated? I don't know why they didn't go after that. That seems like the easy layup.
Hey, everybody's been talking, Friedberg, about this AlphaFold, this protein folding, and there's a new version out from Google. And Google, reportedly, we talked about this before, is also advancing talks to acquire HubSpot. So that rumor about HubSpot, with its roughly $30 billion market cap, is out there as well. Friedberg, as our resident sultan of science and a Google alumnus, pick either story and let's go for it.
Yeah, I mean, I'm not sure there's much more to add on the HubSpot acquisition rumors. They are still just rumors, and I think we covered the topic a couple of weeks ago. But I will say that AlphaFold 3, which was just announced today and demonstrated by Google, is a real, I would say breathtaking, moment for biology, for bioengineering, for human health, for medicine. Maybe I'll just take 30 seconds to explain it. You remember when they introduced AlphaFold and AlphaFold 2, we talked about how DNA codes for proteins. Every three letters of DNA codes for an amino acid, so a string of DNA codes for a string of amino acids; that's called a gene, and it produces a protein. That protein is basically a long chain. Think about beads: there are 20 different types of beads, 20 different amino acids, that can be strung together. And what happens is that necklace, that bead necklace, basically collapses on itself, and all those little beads stick together with each other in some complicated way that we can't deterministically model. That creates a three-dimensional structure, which is the protein, and that molecule does something interesting: it can break apart other molecules, it can bind molecules, it can move molecules around. It's basically the machinery of chemistry, of biochemistry. So proteins are what is encoded in our DNA, and the proteins do all the work of making living organisms. Google's AlphaFold project took three-dimensional images of proteins and the DNA sequences that code for those proteins, and built a predictive model that predicts the three-dimensional structure of a protein from the DNA that codes for it. That was a huge breakthrough years ago.
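To make the "every three letters codes for an amino acid" point concrete, here is a tiny sketch. The codon table is deliberately abbreviated for illustration (a real table maps all 64 codons, including stop codons), so this is a toy, not a biologically complete translator:

```python
# Illustrative sketch of "every three letters of DNA codes for an amino
# acid." The codon table below is abbreviated; a real one has 64 entries.
CODON_TABLE = {
    "ATG": "M",  # methionine, the usual start codon
    "TTT": "F", "GCA": "A", "GGC": "G", "TGG": "W",
}

def translate(dna: str) -> str:
    """Read DNA three letters at a time and emit the amino-acid chain."""
    return "".join(
        CODON_TABLE.get(dna[i:i + 3], "?")  # "?" marks codons missing from our toy table
        for i in range(0, len(dna) - len(dna) % 3, 3)
    )

print(translate("ATGTTTGCAGGCTGG"))  # -> MFAGW, a five-bead "necklace"
# AlphaFold-style models take over where this mapping ends: predicting how
# that chain of beads collapses into a 3D structure is the hard part.
```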
What they just announced with AlphaFold 3 today is that they're now including all small molecules: all the other little molecules that go into chemistry and biology that drive the function of everything we see around us. The way all those molecules actually bind and fit together is now part of the predictive model. Why is that important? Well, let's say you're designing a new drug, and it's a protein-based drug, a biologic, which most drugs are today. You could find a biologic drug that binds to a cancer cell, and then you'll spend 10 years going through clinical trials, and billions of dollars later, you find out that that protein accidentally binds to other stuff and hurts other things in the body. That's an off-target effect, or a side effect, and that drug is pulled from the clinical trials and never goes to market. Most drugs go through that process: they are actually tested in animals and then in humans, and we find all these side effects that arise because we don't know how those drugs are going to bind or interact with other things in our biochemistry. We only discover it after we put the drug in. But now we can actually model that with software. We can take that drug, create a three-dimensional representation of it, and model how it might interact with all the other cells, all the other proteins, all the other small molecules in the body, to find the off-target effects that may arise and decide whether or not it presents a good drug candidate. That is one example of how this capability can be used, and there are many, many others, including creating new proteins that could be designed to bind molecules or stick molecules together, or new proteins that could be designed to rip molecules apart. We can now predict the function of three-dimensional molecules using this capability, which opens up all of the software-based design of chemistry, of biology, of drugs. It really is an incredible breakthrough moment.
The interesting thing that happened, though, is that Alphabet has a subsidiary called Isomorphic Labs. It is a drug development subsidiary of Alphabet, and they've basically kept all the IP for AlphaFold 3 in Isomorphic. So Google is going to monetize the heck out of this capability. What they made available was not open-source code, but a web-based viewer that scientists, for quote, non-commercial purposes, can use to do some fundamental research, run some experiments, and try out how interactions might occur. But no one can use it for commercial use. Only Google's Isomorphic Labs can.
So, number one, it's an incredible demonstration of what AI can do outside of LLMs, which we just talked about with Sam today, LLMs being kind of this consumer text-predictive model capability. Outside of that, there's this capability in things like chemistry, with these new AI models that can be trained and built to predict things like three-dimensional chemical interactions, that is going to open up an entirely new era for human progress.
I think the other side of this is that Google is hugely advantaged. They just showed the world a little bit of the jewels that they have in the treasure chest, like, look at what we've got, we're going to make all these drugs. And they've got partnerships at Isomorphic Labs with all these pharma companies that they've talked about. It's going to usher in a new era of drug development and design for human health. So, all in all, I'd say it's a pretty astounding day. A lot of people are going crazy over the capability that they just demonstrated. And it raises all these really interesting questions around what Google is going to do with it and how much value is going to be created here. Anyway, I thought it was a great story. I just rambled on for a couple of minutes, but it's super interesting.
Is this AI capable of making a science corner that David Sacks pays attention to?
Well, it will predict the cure, I think, for the common cold and for herpes, so he should pay attention.
Absolutely. Yeah.
Folding Cells is the casual game Sacks just downloaded and is playing.
How many chess moves did you make during that segment, Sacks?
Sorry, let me just say one more thing. Do you guys remember we talked about Yamanaka factors, and how challenging it is? Basically, we can reverse aging if we can get the right proteins into cells to tune the expression of certain genes to make those cells youthful. Right now, it's a shotgun approach, trying millions of compounds and combinations of compounds. There are a lot of companies actually trying to do this right now, to come up with a fountain-of-youth-type product. We can now simulate that. One of the things AlphaFold 3 can do is predict what molecules will bind and promote certain sequences of DNA, which is exactly what we try to do with the Yamanaka-factor-based expression systems, and find ones that won't trigger off-target expression. Meaning we can now go through the search space, in software, of creating a combination of molecules that theoretically could unlock this fountain of youth, de-age all the cells in the body, and introduce an extraordinary kind of health benefit. And that's just, again, one example of the many things that are possible with this sort of platform. And I've got to be honest, I'm really just skimming the surface here of what this can do. The capabilities and the impact, I know I say this sort of stuff a lot, but it's going to be pretty profound.
On the blog post, they have this incredible video they show of the coronavirus that causes a common cold, I think the 7PNM spike protein. Not only did they predict its structure accurately, they also predicted how it interacts with an antibody and with a sugar. It's nuts. So you could see a world where you just get a vaccine for the cold, and you never have colds again. Simple stuff, but so powerful.
And you can filter out stuff that has off-target effects. So much of drug discovery, and all the side-effect stuff, can start to be solved for in silico. You could think about using a model like this to run extraordinarily large simulations over a search space of chemistry, to find stuff that does things in the body that can unlock all these benefits: destroy cancer, destroy viruses, repair cells, de-age cells.
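As a rough illustration of what that in-silico filtering loop could look like, here is a toy sketch. The predict_binding function is a stand-in for a structure model like AlphaFold 3 (not its actual API), and every name and threshold is hypothetical:

```python
# Toy in-silico screening loop (hypothetical names and thresholds
# throughout). In a real pipeline, predict_binding would be a learned
# structure model like AlphaFold 3; here it's a fake stand-in so the
# filtering logic is runnable.
from typing import Callable

def predict_binding(drug: str, protein: str) -> float:
    """Stand-in for a structure model: returns a 0-1 binding score."""
    return (hash((drug, protein)) % 100) / 100.0  # fake score, not physics

def screen(candidates: list[str], target: str, off_targets: list[str],
           bind: Callable[[str, str], float] = predict_binding) -> list[str]:
    """Keep drugs that bind the target strongly and every off-target weakly."""
    survivors = []
    for drug in candidates:
        if bind(drug, target) < 0.8:                  # must hit the target
            continue
        if any(bind(drug, p) > 0.3 for p in off_targets):
            continue                                  # off-target effect: reject
        survivors.append(drug)
    return survivors

hits = screen([f"mol-{i}" for i in range(10_000)],
              target="cancer-antigen", off_targets=["kinase-A", "receptor-B"])
print(len(hits), "candidates survive the off-target filter")
```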
And this is a $100 billion business, they say.
Oh my god. This alone. I feel like, and I said this before, Google's got this portfolio of quiet jewels, like we said the other day. And the fact that they didn't open-source everything in this says a lot about their intentions.
Yeah. Open source when you're behind; lock it up when you're ahead. But show it off.
Yamanaka, interestingly, is also the Japanese whiskey that Sacks serves on his plane. It's delicious. I love that Hokkaido.
Jason, I feel like if you didn't find your way to Silicon Valley, you could be like a Vegas lounge comedy guy.
Absolutely. Yeah, for sure. Somebody said I should do one of those 1950s talk shows, where the guys would do the stage show. Somebody told me I should do Spalding Gray, Eric Bogosian-style stuff. I don't know if you guys remember the monologuists from the eighties in New York.
Oh, that's interesting. Maybe.
All right, everybody.
Thanks for tuning in to the world's number one podcast. Can you believe we did it, Chamath? The number one podcast in the world, and the All-In Summit, the TED killer. If you are going to TED, congratulations for genuflecting. If you want to talk about real issues, come to the All-In Summit. And if you are protesting at the All-In Summit, let us know what mock meat you would like to have. Friedberg is setting up mock meat stations for all of our protesters.
And what milk you'd like. Yeah, all vegan. If you want oat milk, soy milk, nut milk. Just please, when you come to protest.
We have five different kinds of xanthan gum you can choose from, right, Chamath? We have all of the nut milks you could want, and then there'll be mindful yoga with the goats.
Can we have some Soylent or something?
On the south lawn, we'll have the goat yoga going on. So just please, look at the goat yoga and move on, all of you.
It's very thoughtful of you to make sure that our protesters are going to be well fed and well taken care of.
Yes, Friedberg is working on the protester gift bags.
The protester gift bags. They're made of Yakima.
Folding Proteins. So you're good.
Folding Proteins? I think I saw them open for the Smashing Pumpkins in 2003.
On fire. On fire.
Enough. I'll be here for three more nights.
Love you boys. Bye-bye.
Love you besties.
Is this the All-In Pod open mic night? What's going on?
That's basically it.
I'm going all in.
And instead we open sourced it to the fans, and they've just gone crazy with it.
Love you, Westy.
The queen of quinoa.
I'm going all in.
Let your winners ride.
Besties are gone.
That is my dog taking a notice in your driveway, Sacks.
Oh man.
We should all just get a room and just have one big huge orgy, because they're all just, like, this sexual tension that they just need to release somehow. We need to get merch. Episode 178.
And now the plugs. The All-In Summit is taking place in Los Angeles on September 8th through the 10th. You can apply for a ticket at summit.allinpodcast.co. Scholarships will be coming soon. If you want to see the four of us interview Sam Altman, you can actually see the video of this podcast on YouTube at youtube.com/@allin, or just search "All-In Podcast" and hit the alert bell, and you'll get updates when we post. We're doing a live Q&A episode when the YouTube channel hits 500,000 subscribers, and we're going to do a party in Vegas, my understanding, when we hit a million subscribers, so look for that as well. You can follow us on X at x.com/theallinpod. TikTok is all_in_tok, Instagram is theallinpod, and on LinkedIn, just search for the All-In Podcast. You can follow Chamath at x.com/chamath, and you can sign up for his Substack at chamath.substack.com. Friedberg can be followed at x.com/friedberg, and Ohalo is hiring; click on the careers page at ohalogenetics.com. You can follow Sacks at x.com/DavidSacks. Sacks recently spoke at the American Moment conference, and people are going crazy for it; it's pinned to the top of his X profile. I'm Jason Calacanis. I am x.com/Jason, and if you want to see pictures of my bulldogs and the food I'm eating, go to instagram.com/jason, in the first-name club. You can listen to my other podcast, This Week in Startups; just search for it on YouTube or your favorite podcast player. We are hiring a researcher: apply to be a researcher doing primary research, working with me and producer Nick, working in data and science, and being able to do great research, finance, etc., at allinpodcast.co/research. It's a full-time job working with us, the besties. We'll see you all next time on the All-In Podcast.