Tetragrammaton with Rick Rubin - Marc Andreessen
Episode Date: January 3, 2024. Marc Andreessen is a prominent entrepreneur, investor, and software engineer best known for his key role in the development of the early internet. In the early 90s, Marc co-created Mosaic, a pioneering web browser, while a student at the University of Illinois at Urbana–Champaign. In 1994, he co-founded Netscape, launching the popular Netscape Navigator browser. After selling Netscape to AOL in 1999 for $4.3 billion, Marc founded Opsware, selling it later for $1.6 billion. In 2009, he co-founded Andreessen Horowitz, a venture capital firm that has backed, among others, Airbnb, Facebook, Instagram, and SpaceX. Known for his insights into technology, Marc's early work with Mosaic and Netscape significantly shaped the internet's growth, and his ongoing contributions continue to influence the tech industry. ------ Thank you to the sponsors that fuel our podcast and our team: LMNT Electrolytes https://drinklmnt.com/tetra ------ House of Macadamias https://www.houseofmacadamias.com/tetra ------ Squarespace https://squarespace.com/tetra
Transcript
Tetragrammaton. Every year there are some number of thousands of very bright people, many of them young,
some of them older and more experienced, who come up with ideas for new tech products
and new tech companies.
And they, you know, typically with a small group of their friends, get together and
decide to kind of throw the harpoon and start a company and try to build a product.
And, you know, they need some level of money,
and then they need some level of basically support,
like institutional support.
So they're often young, less experienced.
They often have not been through the journey before.
They're going to need to gather a lot of resources
along the way.
They're going to need to hire a lot of people.
They're going to have big opportunities
open up that they want to pursue.
They'll have problems that will pop up,
even the great successful companies.
You know, from the outside it looks like everything is great;
on the inside,
there's typically some form of rolling disaster.
There's always something going wrong.
So basically they, you know, they're sort of at some point
looking for partners who can help them do that.
And so venture capital kind of bundles the money,
you know, kind of with the help.
And then, you know, there are probably
a couple of misnomers to it.
One is, you would think the day job is saying yes
to new companies; it's actually saying no.
We pass on almost everything.
The sort of black-humor joke is our day job
is crushing entrepreneurs' hopes and dreams.
But that said, the good ones view that as a challenge.
We're kind of the initial test to see
if they'll be able to clear the bar,
because if they can clear the bar with us, then they'll have a good chance of being able
to clear the bar with future recruits or customers, or the other kinds of people they'll need to
get to say yes to them in the future.
And so yeah, we sort of provide the initial gate.
Another kind of misnomer would be that, you know, tech kind of has this reputation
for being very fast moving, right?
Technology is developing quickly.
But it actually turns out we're one of the slowest
investors; venture capital is one of the slowest
kinds of investment money in the world,
because to build any kind of great company is, you know,
anything that you're going to remember that's going to last,
it's a 10, 20 year project kind of at the minimum.
And so when we invest, we invest, assuming we're going
to be in for at least a decade.
And so we're just there, you know, for a very long time every day, helping them work through things.
We basically never press to get our money back.
We're always trying to help them build more value.
When you're running a venture capital firm, you're basically running a family of funds
that run across basically 20 years.
So you're sort of raising money on a new fund.
You've got an older fund that has companies that are three years old.
You've got an older fund that's got companies that are eight years old and so forth.
And so you kind of have this thing where,
it's like planting trees, you're raising money on things that
won't pay off for a decade.
But if you've been in the business for a while, you have things ultimately that are
succeeding from a decade ago and you just kind of keep the process rolling.
Do you see the same mistakes that founders make across the board, or
are they different mistakes all the time?
What is that cliché again?
Well, all happy companies are the same;
all unhappy companies have a unique story.
For sure, there are patterns to the mistakes.
There are lots of different mistakes to be made over time.
They've made them all.
What are the obvious ones?
I mean, so the biggest category by far
is internal dissension on the team, right?
So these things are never solo acts,
it's always a team.
And this will not come as a shock to you.
It turns out when you have a team of people,
even if they start out getting along really well,
even if they consider themselves, you know, like brothers and sisters,
under the pressure of success.
That's also true in success.
Particularly in success,
you see people come apart.
Exactly, right.
So the money and the fame
basically reveal the true person,
in a lot of ways, right?
And so, yes, there's that, people lose their minds.
Also, it's just like it's high stress, right?
And so things go wrong.
It's very easy to develop resentments.
It's very easy, because people, these founders
are under so much pressure. And so it's very easy to develop resentments of like, I can't believe you're hogging the
limelight. They're like, well, I can't believe you're not pulling your weight. And then they
kind of get in this kind of downward spiral. Same as a rock band. Yeah, exactly. And so it's
like, sort of my big conclusion is, you take a startup,
it might or might not work. And whether it works or not is fundamentally
a function of, does it build something that people want and kind of build a business around that?
Is it as simple as that?
If they make something people want,
it's probably gonna be okay.
Well, if the team holds together, right?
So that's the thing.
Another is an old venture capital adage,
which is more companies die
from suicide than homicide.
So, like, if, look, if it's not working,
then the team is gonna really be pressing each other hard,
and, you know, that can easily go sideways.
If it does work, the team can blow up for the reasons we discussed.
This is the speech we always give the founders:
Look, if you guys can kind of cohere together and
stay as an integrated team and group and trust each other through the hard times,
there's almost always a way through the specific difficult thing of the moment.
There are very few unrecoverable mistakes, but you have to really stay together.
Basically, what you see with a lot of these companies is at some point, if there's a crack
on the team, that crack then magnifies out.
In some cases, it can actually destroy the company.
I would say most of it's that.
I was just giving this speech to two young founders last night, who were all fired
up about their new AI company.
And they were sort of asking about the failure cases.
And I was like, yeah, the most likely failure case is you guys are going to turn on each other.
Right? And, you know, of course, they legitimately get this stricken look on their faces.
Right? And then, you know, that's followed by, well, of course, that's not going to happen to us.
And so, like, yeah. But...
But also half of marriages end in divorce. Like, it's the same.
Exactly, exactly. It's exactly like a marriage.
I think there's an argument to be made.
I'm not like a therapy person, but I think there's
an argument to be made that people should think
about business partnerships
the way they think about marriages.
I think there's an argument to be made that marriage counseling
might be helpful.
Some sort of, let's say, intentional process
of developing trusted communication.
That said, look, when these things get started,
it's like a marriage.
They're so euphoric and excited,
and they feel so tightly bonded
that they don't want to ever imagine
that that could ever actually happen.
And so when it does happen, it usually comes
as an enormous shock.
And that's probably the single biggest company killer
is when that happens.
Would you say the primary thing people are coming to you
for is money, or is there more to the picture?
Yeah, so I think it's a lot, I think it's more than that.
So I'll just tell you what we always wanted.
My partner Ben and I were starting and running our own companies for 15 years
before we started the venture firm to back other people's companies.
Basically, we knew what we wanted to build, like we knew what the product was, we knew
what the vision was, but it was just all of the mechanics.
And maybe here's a difference, you know, versus my friends like
you working in the entertainment business: if a band comes together, or people
do a movie, maybe they do an album, maybe they do a movie,
maybe they do a sequel, maybe they don't, right? Whereas with tech companies,
it's more like for them to work, they really have to compound over, like I said, over
like a decade or two decades. Well, sometimes they'll do it for
two years and they'll sell the company, but that's not the big success case.
Those are kind of small-scale successes usually.
Like to build something important and valuable,
it has to be a decade or two decades.
And so they really have to be building an institution.
And to do that, typically they need a level of,
sort of I would say knowledge, support, expertise,
access to resources.
Another metaphor I use a lot is that building
a tech company
is like a snowball rolling down a hill.
If it's working well, it's growing as it rolls.
The way a snowball kind of
accretes snow as it gets bigger,
a startup in theory is accreting resources.
And so it's kind of pulling people in,
it's pulling in more engineers, product designers,
it's pulling in the right kinds of executives
it needs to build its team, it's pulling in customers,
it's pulling in partnerships, it's pulling in, you know, press attention, building a brand,
it's sort of, you know, sort of drawing all these resources to itself. And of course there's a competition among all the different startups of that same generation to get all those resources.
And so they're fighting with the other snowballs as they rolled on the hill for all that stuff. And so it just, it has over time, it has just turned out that it's like very helpful to most of those teams if they've got somebody,
if they've got something of the form of a venture capital firm, some, some set of people behind them who have done, have done it before, know how to do it.
You know, we don't run the companies; this is not like private equity, we don't come in and run the show. But, you know, who's their first call?
Basically when something goes wrong or when they need something. Well, I'll give you an example of a lot of what we do: it's helping close candidates.
They'll be talking to some engineer.
They'll be in a shootout with Google and three
other startups to hire an engineer.
So they'll roll me in and I'll spend an hour with the person.
Basically, part of it is just helping the candidate get
over the hump.
But also, one of the things you can do when you have a big
venture capital firm like ours behind you
is address what the candidate might be thinking:
well, I could go to Google, and
I know everybody's gonna think I made a great decision
in my career, even if it doesn't go well,
because I'd have Google on my resume.
If I go to the startup, who's even gonna care what I did,
if the startup doesn't work,
like, and I'm gonna be stranded if the startup fails.
And so one of the things we get to do
is we get to say, look, if it doesn't work,
we will know you and you'll be a member of our family as well.
And we have hundreds of other companies
and we know all the big companies,
and we will help make sure that your career
prospers even in the wake of a failure,
and we will vouch for you through that.
And so there's like a hundred different versions of that.
And so I think that's most of it.
I mean, that's how we win.
We don't win on price.
We win on, we win basically on being able to be their partner.
Did you have experience with VCs
from the founder side?
Yeah, very much.
Tell me, give me an example.
What was that like?
Yeah, so I had really good experiences.
I raised money in 1995, or actually,
my partner, Jim Clark, and I
raised money in 1994
from a firm called Kleiner Perkins,
which was at the time
considered kind of the top venture capital firm.
I worked with them for five years.
Then Ben and I started a company in 1999.
We raised money from Benchmark,
which at the time was one of the top firms, and we worked with them for a long time.
Ben and I had also been angel investors.
By the time we started the firm, we had invested in probably over a hundred startups,
so we were working with a lot of the founders from the side.
A lot of the things we were working with the founders on was navigating the venture
landscape, helping them meet the right firms.
We were actually the marriage counselors a lot of
the time: when their relationship with
their VCs would get all screwed up, we would kind of help them unwind that.
And so, you know, we kind of became what's called an expert customer,
and then at some point we're like, okay, yeah, we could probably do this.
It seems like a better position to be in coming from that,
having that experience has to make you better
at being on the other side of the table.
So our argument is very much that.
Our argument is we have been you,
we have been through everything you've been through,
we have made every mistake that you're going to make,
we have figured out ways to get through all those mistakes,
right, and then we deeply understand you,
we understand what you're trying to do,
and you know, we're gonna,
we're gonna sympathize with you along the way.
There is a counterargument to that, though,
that I think about a lot.
And it's the argument that VCs who have not started
companies use against us.
And basically the argument goes that if you're a former
operator, founder, like we are,
you will tend to get emotionally entangled with the
companies, and you'll become not objective and not
clinical.
And at some point, when you're investing money,
and even when you're actually advising companies,
at some point you need to get clinical.
Like, you know, there are certain moments in time
where you need to like actually recognize the truth
and tell the truth both to yourselves
and to the companies, right, and to everybody else.
And so the argument goes that basically,
you might be a better founder,
you might actually be a worse investor
as a result of the inability to get clinical.
We try to offset that by being very conscious of that and trying to retain our critical
faculties.
We do internal portfolio reviews every quarter and so we force ourselves to tell the truth
about everything.
Look, having said that, in the failure cases, if we get emotionally entangled with somebody
working hard to realize their dream over a decade, you know, fair enough.
You know, I think it's, I think it's much better to err on that side.
And get the payoff of that, which is like, we're really there.
Like, we're really deeply there.
Like, we don't quit.
We don't walk away.
You know, we care tremendously.
We have all of the added motivation that comes from caring tremendously.
Pick an example of a company that you've either invested in or been part of,
a success story, and walk me through all of the stages of what happens.
The ups and downs along the way, a case study.
Yeah, I'm going to use code names for the companies.
I don't want to be seen to speak for the individual companies.
And so, I mean, look, I have a company right now,
I'm going to call this company S. And the founders were just like, all right, we want to do the new thing.
And they just, I don't know, it's like watching
Babe Ruth or something,
point to the place in the outfield
where he's gonna hit the home run.
And they just like, hit the ground running.
You know, they've had issues along the way,
but it's mostly been incredibly smooth,
it's just like this well-oiled machine.
They've just hit like a huge,
kind of revenue milestone.
They're increasingly important in their industry.
Every once in a while, you get one that goes that well.
A lot of what happens is just like,
in the beginning, everything is a dream.
It's a clean sheet of paper.
It's this incredible moment of sort of,
what do they call it?
It's like a liminal moment where you can basically design
your dream.
And your dream is the product you've always wanted to build.
And your dream is the company you've always wanted to have.
And you think of those in parallel.
And you bring in all those smart people you know.
And it just seems like everything's going to be great.
You kind of expect it's going to be hard,
but like you're all fired up.
And then basically what happens is just
this reality just punches you in the face
over and over and over again.
And generally that takes the form
of basically people telling you no.
And so VCs tell you no, they won't fund you. Employees tell you no, they won't join your company.
Customers tell you no, they don't want your product, and kind of all the way through.
And then disaster strikes every month or two in one form or another.
We talk a lot about the emotional treadmill, which is sort of the founder psychology.
My famous line on that is it alternates between euphoria and terror.
And then it turns out that lack of sleep enhances both of those.
Right.
And so there's the four-in-the-morning thing.
OK, well, there's another thing which is related to this,
which is the founders have to put on a brave face.
So they have to always basically act like everything's
going great, because otherwise it'll shake confidence,
right, and they'll lose team members,
and they won't be able to raise money or whatever.
So they always kind of have to act like it's going really well.
And in fact, if you go to a party
with a lot of founders,
it's really funny watching them
because everybody's asking each other,
well, how's it going?
And everybody's got this very forced grin on their face.
And they're like, oh, everything's going great.
And like everybody's just dying inside, right?
But like nobody can say it, right?
And so I always think about like it's the four in the morning
kind of thing where you're staring at the ceiling.
And yesterday it felt like you had the tiger by the tail.
And today it feels like it's all going to fall apart.
And just like, oh my God.
And then the other thing I find is that it doesn't really
ever moderate.
If you talked to a lot of people who were still
founders running their companies 10 or 20 years later,
a lot of their life is every morning they open up
their email inbox and it's just like a descent into hell.
It's just like one person after another with an issue
and a complaint and a problem and I quit and fuck you.
Just on and on and on and they gotta get up in the morning
and put on the hoodie and head in there.
And just confront all these things head on.
Some of them love to do that and battle their way
through every step.
Some of them check out,
some of them just get a few years into it
and they're like, I just hate living in this way.
I'm becoming unhealthy.
People develop alcohol problems, drug problems, all these things.
What's probably striking about this is,
it's like, wow, this sounds like
it's mostly psychological, right?
And it's like, yeah, I think it's mostly psychological.
Well, it's pressure.
It's tremendous amount of stress.
That's right.
How small is the world of VCs and founders?
Like altogether, is it hundreds of people?
Do you know everybody?
Is it thousands of people?
Is it hundreds of thousands?
Yeah, so I'd say the following.
So one is it doesn't really ever stabilize.
In many other industries, and I think music
was probably like this for a long time,
I don't know if it still is,
at some point things basically stabilize,
and you just have a certain number of record labels or a certain number
of movie studios.
And then there are small changes.
There's what, in evolution, they call punctuated equilibrium, right?
And so things are kind of cruising along.
Every once in a while there's like a disruptive moment, right?
Somebody develops hip hop or something and everything changes, but you can
kind of count those over time.
Those are specific moments people write books about,
and they're really big deals every once in a while.
A movie studio will go under, but it's actually very rare.
Warner Brothers is still in business
100 years later.
Tech never stabilizes like that.
It always looks like it's stabilized,
and then basically there's an earthquake
in the form of a disruptive technology change.
And then basically everything gets tossed up in the air and kind of redone.
And these platform shifts just happen actually quite regularly, it's like every five years.
For computers, it was mainframes to PCs to mobile, right?
And for the internet, it was no internet, then the internet,
and then cloud, and then social. Now there's this AI thing, right?
And so every time you have one of these kind of earthquakes,
it sort of recalibrates everything.
It's like the meteor strike hits
and the dinosaurs die
and the birds take off, kind of thing.
So because that's sort of a more common thing in our world,
there's just more of this pattern where
the new thing is often led by people
who were not important in the previous wave, right?
Because a lot of it is kids
who kind of grew up with whatever the new thing is,
and they just have a different take on how the world should look.
I was an example of that.
And so, yeah, so there's basically,
there's actually a lot of turnover.
Like, there are venture firms that are 50 years old,
but they're on generation five or six of partners.
And a lot of their peers from when they were younger,
no longer exist.
And so whether they're really the same firm anymore
is like an open question.
The companies can last for a long time.
But generally, at some point,
Elvis leaves the building.
At some point, this sort of innovation spark
leaves the company.
When the founders are at some point
pushed out or retire,
even the most innovative company just kind of becomes
a big normal boring company.
So it's still in business.
It's a little bit like a neutron bomb hit these things or something.
It's like the building's still there.
There's still people, the parking lot's still full of cars.
It still has customers,
but like it hasn't invented a new product in a decade, right?
And so like, what even is that?
And by the way, it has like a hundred times
the number of employees it had when it used to develop products
all the time.
And so like what's happened there?
And so those companies, you know, they're still important from a business standpoint, but they're no longer vital. They don't do
the new things, and that's when the new startups show up. So, anyway, back to your question,
it's basically, you know, sort of continuous turnover of people. Everybody
is highly aware that a 22-year-old can show up at any moment and upend the entire thing,
and that has happened repeatedly. So people are kind of very open to that possibility. It's
still jarring when it happens, because, number one,
the changes seem so weird in the beginning
and then number two, like 22-year-olds seem really young.
So it's still a lot of-
When was the last one of those?
It's hard to process that.
I mean, we're going through it right now.
That's the big one right now.
This one's not so much 22-year-olds.
This one's more people who have been toiling
in research labs for decades without really anything
to show for it.
And their stuff just started to work.
And so you actually have a lot of scientists who never thought they would be in business
who all of a sudden are like basically top entrepreneurs.
And they're kind of emerging, blinking, out of basement labs into the real world, trying
to build companies.
So that's happening.
But the classic recent one was social networking: the founder shows up, and he's 22.
I'm on his board, and we had a party
when he finally became old enough to rent a car.
Right.
It would be a big deal on business trips, right?
Amazing.
You know, he was the CEO of this incredible new company.
So yeah, there's a lot of turnover.
And then I'd say, with everything else, and this is the uncomfortable part
of the topic I think, it's an elite occupation, like it really is.
It's still a small pool, even though the characters are changing.
And every once in a while, you get somebody who just comes
completely at a left field who went to some college
you've never heard of and came from some random country
and doesn't have connectivity, and they just show up
and they're just really amazing.
But that was you.
That was me, yes.
There's not a lot.
Yes, but I would say that the more standard pattern
is there's a small number of universities,
Stanford, MIT, Berkeley, a few others.
There's a small number of kind of important companies
in a point in time where young people are getting trained
by working there and kind of gaining the skills.
It's actually like music, there are scenes, right? There's social scenes, these kind of
loops of people who all know each other, who are kind of coming up together.
Like whenever I read interviews with either comedians or musicians, it always turns
out they were with all of the other people of their cohort at that time,
and what they all kind of had in common was nobody was taking any of them seriously,
right? And so that same thing happens. There's also a small number of geographic locations.
San Francisco Bay Area just ends up being the center
of the world for a lot of this.
Why is that?
So San Francisco, and when I say San Francisco,
there's San Francisco, the city,
which has its own idiosyncrasies,
and then there's just the Bay Area generally.
So a lot of it's just office parks.
Well, so practically, it's where Stanford and Berkeley are,
and it's where Facebook and Google and all these other
companies are.
And so it just kind of is already ground zero.
And so there's a natural kind of continuation of
the things that takes place.
And then there's also a long history to it, which is Silicon
Valley really started in like the 1920s, 1930s, even with
technologies before the computer.
So it's running on like a hundred years of what you might
call a network effect, basically meaning the next
really bright person who's technologically oriented is more likely to come to the place
where all the other such people are than to go anywhere else.
So there's like a positive feedback loop that kind of just keeps spinning.
And then, quite honestly, I mean, California
has its problems, and they are profound.
But there's something magical about California.
There has been something magical about California
that predates the entertainment industry,
that predates the tech industry,
and it's basically that California is the frontier.
And it's the same mentality that led to the original settlement;
California was sort of the furthest thing to the west that was settled.
It was sort of, it was ungoverned for a long time.
You had the gold rush.
The Wild West.
Wild West, Wild West.
And then what happened was there was just this selection
process where if you were oriented around status and respect,
you wanted to succeed on the East Coast, in New York and Boston.
And if you wanted to go carve your own path and create something new,
you went basically as far away
from the East Coast as you could get.
And that's basically still the case right now.
You can call it creative, you know, people experiment, right?
They blaze new trails in every way, including how you live your life.
California is famously the home of, you know, the thousands of cults
that have been developed here.
I think it's actually no accident.
California is sort of, we call it sort of the stack:
California builds the technology of dreams.
And then we also, you know,
in Hollywood and the entertainment business,
actually make the dreams, right?
And so it's like this sort of integrated dream factory.
You know, is it ever really quite real?
You know, Los Angeles is like famously a fake city, right?
It was just like a desert.
And then they just basically, it was the Theranos of cities
when it first got started.
And they ran newspaper ads in East Coast cities
with like drawings of like palm trees,
like you know, this lush paradise.
And then people would buy plots of land and come out here,
and it was just desert, and then they famously had to go get the water from the Central Valley,
and that led to...
Yeah, they lied about the palm trees.
Yeah, and it turns out the palm trees are imports.
This is actually one of my big breakthrough moments in understanding California,
which is the palm trees are not native.
Yeah.
Like, it's just made up, basically.
They're the most iconic kind of thing for California,
and it's basically a made-up import.
And so there's, you know, something artificial about it.
It's a very American idea, though.
Yeah, very much.
Yeah, well, I mean, here we sit in Shangri-La, right?
Like paradise, right?
And the weather, you know, if you have a choice,
wouldn't it be nice to live someplace
where it's like 70 and sunny every day?
Like, wouldn't that be awesome?
You know, it's the thing the Bay Area and LA kind of have
in common, which is kind of the spirit of adventure.
You know, the original movie industry people came out here
for two reasons.
One is they were fleeing Edison's patent enforcers.
He had patents on movie recording equipment and projectors.
And so he was sending the Pinkertons to, like,
break up your studio if you weren't paying him
his patent fees.
So they came 3,000 miles to get out of the orbit of Eastern power, and then they
also came here for the weather, because they could film year-round.
And that spirit exists.
And it's actually an exciting time for that because for a long time, the sort of north and
south California were sort of pretty starkly divided.
And you didn't have a lot of crossover.
I mean, a lot of your predecessors in music just never had any,
viewed technology as a threat if they even thought about it at all,
and had very little interest in what was happening.
And quite frankly, vice versa.
And there's just a lot more crossover.
It turns out the valley is more creative than we thought.
It turns out in LA, actually, there are a lot of people down here
who are actually quite interested in tech
and have done a lot in tech.
And so there's a real magic happening.
But having said that, California also has all the downsides.
And Silicon Valley also has all the downsides of a place
where basically people are making up dreams from scratch.
You know, there's dystopian elements to it, right?
Like, San Francisco, the actual city,
has always been an interesting question,
because, like, you know, they no longer arrest criminals.
How's that working out?
Right, like you might just get, like, stabbed
and killed walking down the street.
But a friend of mine got a job working for OpenAI, actually,
and he moved to be close to the office,
and he got an apartment a block away,
and they're in the Mission in San Francisco,
which is sort of famously the hub of AI,
but also just incredibly violent,
and the laws basically were not enforced.
And he said, oh, I love living near the office.
He's like, I wake up in the morning, I go down the stairs,
I'm a block away from the office,
and the key is I run at full speed, as fast as I can,
from my apartment to the office.
And then at night, I run as fast as I can.
Because I'm trying to not get like,
basically stabbed or killed on the way in.
And so it's got both of those.
The drug thing has always been a big deal.
Hallucinogens are like a big thing
up north, and it's the good and the bad.
They're opening people's consciousness and horizons.
And there's a cultural creativity flowering thing that's
happening just like it happened in the 60s and 70s.
But there's also the downside, which
is you see people whose lives are getting wrecked by drugs.
And so it's also got that side of it.
And so it's got this very organic.
It's just like this perpetual cycle of cultural creation,
people designing their lives, and then building
these products and these experiences
that people all over the world consider to be the best that
there are.
L-M-N-T.
Element electrolytes.
Have you ever felt dehydrated after an intense workout or a long day in the sun?
Do you want to maximize your endurance and feel your best?
Add element electrolytes to your daily routine.
Perform better and sleep deeper.
Improve your cognitive function.
Experience an increase in steady energy with fewer headaches and fewer muscle cramps.
Element electrolytes.
Drink it in the sauna.
Refreshing flavors include grapefruit, citrus, watermelon, and for a limited time chocolate
medley, which you can enjoy hot.
Formulated with the perfect balance of sodium, potassium and magnesium to keep you hydrated
and energized throughout the day.
These minerals help conduct the electricity that powers your nervous system so you can
perform it your very best. Element electrolytes are sugar-free, keto-friendly, and great tasting.
Minerals are the stuff of life.
So visit drinklmnt.com slash tetra
and stay salty with LMNT Element Electrolytes.
It felt like in the early days of the Valley, and when I say the early days, I mean even seven years ago, the idea of move fast, break things, being a disruptor, those were really good things, and you know Apple had
the Think Different campaign. And something seems to have changed.
What do you think has changed? I don't know, I don't understand it, because it seemed
like the whole tech revolution was about you could finally be yourself. You
could learn what you want on the internet. You didn't have to get what was fed to you.
You know, I remember at one point in time, Twitter was called the free speech wing
of the free speech party.
That's right.
So what changed?
What changed?
So let me start with, I actually think a lot of that is still there.
And, you know, it's not there as much in like the big companies,
but like there is still an anarchic spirit in the startups.
And so we meet with people every day
who basically are just like, yeah, screw it.
Like I'm just gonna like,
I'm gonna break a lot of glass,
and I'm gonna just do something brand new.
And people,
Isn't that necessary to make new things?
I believe so, yes.
I think so.
Seems like.
I believe it is.
Let's just say I'm very much in favor of it,
and we can talk about why that is.
But yeah, it's obvious.
So that's how you do new things.
Like you have to.
Like the past will crush you, right?
If you let it.
But most cultures throughout time and most cultures
in the world today, they're just,
their thoughts are dominated by the past, right?
And there's a good to that, right?
Which is they have cultural continuity, right?
They venerate their ancestors.
They, you know, I mean, literally,
like the natural form of human society
is like literally ancestor worship, right?
And like you just like,
and look, there's a lot to that,
because like your ancestors like learned a lot and they, you know,
tried to pass it on to you through these kind
of cultural transmission things, you know,
started with like poetry and then made its way through
to, you know, kind of all the, you know,
books and stories that we have today about people in the past.
But like, you know, most cultures are just like
completely dominated by the past and they just don't do
new things and anybody who tries to do new things,
there's this thing called tall poppy syndrome, right,
the tall poppy gets chopped off.
There's tall poppy syndrome in the West, there's tall poppy syndrome in the East.
It's like this very natural kind of tribal thing.
And so anybody who's really going to do something new
is going to upset the apple cart
and is going to make people upset.
And then I would say, technology is maybe even
the most advanced version of that, or the most dramatic version,
which is a transgressive piece of content,
like a transgressive song or movie, can really make people mad by suggesting that there's a different way to live, and that
obviously can have a big effect.
But tech does something, I think, even more than that, which is when tech, when something
disruptive in tech changes, it doesn't just change tech, it also changes the sort of order
of status and hierarchy of people.
This is sort of the thing.
So why does the media hate tech so much?
And there's a lot of potential explanations for that.
And one of them is just like we cut the legs off
from under the media business.
Like we have literally obliterated, like for newspapers,
we have obliterated basically most of their advertising
revenue because most of that, people
used to advertise the newspapers.
Now they just advertise online.
And so there's a big reordering.
Well, actually in music, this happened.
Napster file sharing cut the know, cut the lights off
from under the economic structure of the music industry,
that caused a massive turnover
and who was running the music companies.
And now there's a new generation of people
who have figured out streaming and digital distribution.
And so like there's been a complete reordering of status
that has come from that.
So I think that's a lot of that.
Yes, look, I think you have to do that.
I think there are a couple of other, just broader things that have happened.
This one I'll pin on us, which is just like, the dog has caught the bus.
A lot of us who've been in tech for a long time, it was always like, wow, what we're doing
is actually really valuable and important.
People don't understand it.
We have to tell them how valuable and important it is.
And now it's like, oh, they actually get it.
And now they're mad.
Right. We won.
The dog has caught the bus. Like everybody gets it now.
Everybody gets that tech is powerful and important,
and now they're really upset about it, right?
And so some of that is-
Now they want to control it.
Now they want to control it, exactly.
And so the dog has caught the bus, and, you know,
turns out the bus has its own opinions on where things should go.
You know, the dog is going to get dragged behind the bus.
And so there is some of that.
Look, the other thing that's happened that you know well,
but the other thing that's happened is just,
you know, look, the world, I think, really changed in 2016, and like,
why and how, and, you know, how much of it was one guy and how much of it's a global phenomenon.
You know, sitting here today, a very Trump-like figure
just won in the Netherlands, and a very
Trump-like figure just won in Argentina, which is this kind of huge surprise.
But, you know, the world as I experienced it in the United States
really changed in 2016,
and a lot of people really got psychologically altered, shattered,
broken, reformed, kind of through that.
And I bring it up just because tech got pulled into that shift,
just like everything else has gotten pulled into that shift.
And what I tell our founders a lot of the time,
who try to grapple with this stuff, is just like,
look, what we're doing is valuable and important, but we are not the bus, we are the
dog. Society writ large has energy and momentum of its own, and we are wrapped up in it just
like everybody else. So, you know, we are changing it in some ways, but also we are part of it, and
we're getting kind of dragged along with whatever the prevailing trends are. And so
tech has become politicized.
And I would say socially energized in a way
that it never had been.
I don't know, maybe this is a big difference between this
and L.A., which is maybe in music and movies.
You could say that music and movies have always been intertwined
with politics, probably going back forever.
But for sure, going back to the 60s,
kind of very deeply intertwined.
And music may have actually caused sort of political
and social change in the 60s and 70s and 80s,
but it was also reacting to it, right?
And it was like a feedback loop.
And I think in tech for a long time,
we didn't have that.
We were building like tools,
you know, we were building like fun tools
that people could play with and use,
but they, you know, when they would get engaged
in politics, whatever, they would put down our stuff
and then they would go out in the street
or, you know, go on TV or do whatever they would do.
And now, tech is integral to how society works.
But even beyond tech, there was a time
when big companies were focused on making the best product
they could and having the best bottom line they could.
And that was all they were focused on.
They were focused on the business that they were running.
Yeah.
And it seems like somewhere along the way, the idea,
it changed where now corporations whose obligation
to their shareholders is to do what I just said,
now feel like they have some moral imperative.
How did that happen?
Yeah, so, you know, look, I should start by saying,
for people who haven't really thought about this hard: you didn't do this, but there's a misnomer that people apply to companies,
which is, usually as criticism, they'll say companies are only focused on
their shareholders, they're only focused on making money, and
they'll sometimes say they're legally required to optimize profits no matter what, right?
And this is why, they say, companies are going to be sort of intrinsically
morally neutral or even evil, because if it makes sense to pollute, or if it makes
sense to have horrible policies, or to build products that deliberately break
so that people have to buy the new thing next year, they're going to do all these evil things,
because they're optimizing for profits. It actually turns out there's no legal
requirement for companies to purely optimize for profit. In fact, quite the opposite:
you're supposed to optimize for kind of long-term value. And management has broad latitude to be able to decide whether to engage in social issues,
whether to have different kinds of policies inside the company.
You're perfectly protected legally as an executive of a company if you trade off short-term
profits for some longer-term value.
Like brand value.
Even, say we were thinking of starting a new whaling division: we were
going to kill a bunch of whales and sell their meat. We could make money doing that.
We're going to decide not to do that
because it would impair our brand.
Like as a corporate executive,
you're completely, legally covered.
Nobody will ever question that.
And so corporate executives have always had
a fair amount of latitude in terms of how they,
how they want to steer their companies.
Look, for a long time, one thing is just that
business people were just focused on business.
But the other thing that happened was the received wisdom that you got when you were trained
up in corporate America, I would say, between the 70s and around 2016,
and the way I was trained up, was: you actually don't want to get involved in political
and social issues, because you're going to make people mad, right?
You're going to summon the demon by becoming involved in politics or social change,
and so you're inviting a level of backlash that very well
might destroy you. That was something Michael Jordan famously said early on when they asked him about politics.
He said, I want everyone to buy my sneakers. That was his answer about what his political
affiliations were, right? And, you know, is that a capitalist statement in part?
But is that also a social and political and moral statement, which is, look, I don't
want to be a divisive figure, right?
I don't want to have some kids love me and other kids hate me, right?
Because I'm on the wrong side of some divide, right?
And so, yeah, so to your point, like, I think that was very common.
Look, I think there's this, I don't know, this vortex opened up and, you know, a very
large number of executives felt very guilty
about some set of social and political things. There was tremendous peer pressure that developed
to take stands on things. Employee bases changed; the millennials in
particular showed up in the workforce with a lot more demands on their employers.
Even my more socially activated older friends in business,
even they are shocked
that the young employees show up so fired up
and so demanding that companies
take all these different positions.
And so that happened.
And then, you know, I think some people,
some companies, have deliberately decided
that they'd rather have half the market
love them and half the market hate them
than have nobody care.
And so, you might say hypothetically,
Nike, for example,
actually might be quite happy if liberals buy
twice as many shoes and conservatives don't buy the shoes at all.
And, you know, maybe that works.
And then quite honestly, I think people are drunk.
I think they're drunk on sanctimony,
and they're drunk on politics,
and they're drunk on feeling powerful,
and they're drunk on grandstanding.
And I think it's like a chemical thing.
It's a drug, and it's very easy to get hooked on that drug.
It feels great, right?
And then people on your side are praising you
and talking about how wonderful you are.
And you could even say it also probably
to some extent feels good to be hated, right?
Because it's like, wow, I'm really important.
All these people hate me, they're all mad at me.
Like I'm in the mix, right, and in the fight,
like I'm really making a difference.
It's unbelievable.
And so I think a lot of it is just,
like I think a lot of it is just this emotional thing.
It's like a heroin kind of thing.
And I think there's a hangover from it.
I think actually a bunch of companies
have that hangover right now.
A bunch of companies have been damaged very badly by this.
And they have, you know, severe internal problems.
I mean, look, there are big companies where the CEO
no longer runs the company.
Like the company is run by the mob.
I mean, just the employee base can just freak out
and threaten to protest or threaten to quit.
And like the CEO just rolls over every single time.
And so at some point, what is that?
Is that a company, or is that a social movement,
wearing the skin suit of a company?
So, said optimistically, you might say this was a moment-in-time phenomenon
and that, you know, mean reversion will kick in.
And then you might also say, you know, no, look, like we just live in a different world.
Now, we live in a different kind of culture.
We have a different kind of media, you know, this is never going back.
And I think it's related to, it's very much related to the question of like, do politicians
quote unquote become normal again?
Or do they actually become kind of stranger and stranger?
More and more unusual, I think.
Probably things just get weirder.
I think so too.
I think it'd be hard to go back,
because when you look at the politics
of the old days, it was very lawyerly,
it felt less connected to the people.
It felt like a different elite class.
And now it feels more like whether it's AOC or MTG,
it feels like the people.
Yeah.
Well, and by the way, just the fact you can name them,
right?
The fact that those two
already have iconic
three-letter names, right?
It's like they're right up there now with MLK.
Like how did that happen?
Right, if you did, like, unaided surveys,
those two people might be, you know,
other than Trump and Biden,
the top respective Democrat and Republican today
for unaided awareness.
Or, you know, if you talk to institutionalists in DC,
they're very frustrated by this,
because they view it as like, okay,
the people who are purely focused on media presence
are basically getting lined up to be
future presidential nominees,
whereas the people focused on substance...
Nobody knows who they are.
And it's kind of like, okay, maybe that's just the world,
right, and maybe if you're gonna be a politician,
that's the world you have to live in,
and you have to be willing to,
you have to have a strategy on that,
and maybe if you're in business,
this is just the world you live in,
and you have to navigate it. I don't know.
But look, Disney went through a version of this with Bob Chapek,
who I know a little. He tried very hard to keep them out of politics.
First thing he did when he came in was say, we're staying out of politics, we want
everybody to buy the sneakers, and the company just was not having it.
And Bob Iger has come back, and he very much doesn't have that view.
He's right in the middle of these things.
And who had the right strategy?
I don't know.
But it's going to tilt one way or the other.
And so I think there's a lot to that.
I also think, look, I'm not a big fan of the kind of simplistic accusations
that social media is ruining everything or the internet
is ruining everything.
And I think those are generally simplistic and sort of
incorrect things.
But there is no question that society forms media and media forms society.
McLuhan wrote extensively, in work
that holds up incredibly well, about the role
media plays in shaping culture.
And look, we live in a different media landscape now.
And again, we're the dog that caught the bus.
We made it.
We invented all this stuff.
But you might say, a critique I might apply to my own
thinking on this is, what did you think
would happen
when you connected everybody together into like
a single global media sphere, right?
McLuhan used the term the global village.
And what people forget about the term the global village
is he wasn't saying that was good.
What he was saying is basically the entire globe
was gonna revert to the behaviors of a village.
And the behaviors of a village are very different
than the behaviors of a city.
The behaviors of a city
are cosmopolitan and open-minded
and embracing of new ideas and so forth.
The values of a village are very narrow-minded
and moralistic and harsh; villages ostracize people
for slight differences in belief, and everybody's watching
everybody all the time, and everybody's super critical.
And so we actually invented the global village.
Everybody's acting like a villager now.
McLuhan would say, yeah, no kidding.
Great job, guys.
The great benefit of the villages in the past was that you'd go to another village and
they'd have their whole own way of doing it.
That's right.
And if it's a global village, your biodiversity of villages is gone.
Yeah, it all just turns into the same thing.
And so, look, I think this is a big problem.
This may be my interpretation of the drunk-on-politics thing, which is we
became drunk on being sort of domineering members of the global village. Both
company CEOs and a lot of activists became sort of convinced that they were in a
position to kind of steer the totality of human civilization and society
by using these technologies.
You know, look, I think there's some truth to that. So like, we've all gotten basically wrapped up
in a collective, you know, single collective psychodrama.
But look, like, I also don't think like 8 billion people
want to be wrapped up in a single collective, like psychodrama.
And I mean, if anything, at some point there's just adrenal fatigue.
At some point,
it's just tiring to be on to the next
howling outrage, right?
Like, how many of these can you actually go through?
It's actually even funny sitting here today
is like there's still people trying to push
like the same buttons they were pushing in 2020,
they'll attack companies or whatever.
Some accusation will be leveled, something race-related or whatever.
And in 2020, it would have led to a huge crisis.
And today, people are pushing the button
and like just nothing's happening.
And it's just because like people,
it's just like the 80th time
you're going through a race panic.
The boy who cried wolf.
The boy who cried wolf.
And it's becoming so clear that so much of that
is like astroturfed and is cynical manipulation ploys
and like using emotional guilt to try to manipulate people
to do what you want.
And people trying to like make money by selling their
newsletter or whatever it is,
where they're talking about how evil everybody is.
At some point, it's just like, all right, I got the message. Okay, I guess
we're all evil. You know, I don't know, like, I'm tired of feeling bad all the time.
And then by the way, at some point, you get the actual full on backlash, right? Because
the other thing the internet does is it's the opposite of a global village. It also makes
outlying ideas much more accessible. And so if you've had it with a prevailing view on something, the internet is the best medium you've ever had to be able to go basically find the other side. And so I think there's a power law curve thing here, where there's a small number of global-village dominant ideologies that basically suck people in, in mass numbers.
But then there's also this long tail thing where there's also a thousand, there's a thousand
entrepreneurial efforts to have new kinds of cultures, new kinds of art forms, new kinds
of creativity.
And those have already existed, but they've always been fragmented and scattered, and it's
always been hard to find your little micro tribe of people that are interested in what
you're interested in, especially if you don't live in the same place.
And the internet now makes it possible to find the alternate view of basically anything,
and it makes it easy to find the other people who like that alternate view.
And so you'll have like-
Same with products, like Etsy.
It's fun to go to Etsy instead of Amazon,
so it's a different experience.
Right, and by the way, you can pick and choose, right?
As a consumer, that's a great example.
You can pick and choose, where you can, you know,
you can buy 80% of your stuff on Amazon,
because you say, you know, it's a garbage bin,
I just don't care that much,
versus like your water bottle,
you know, like you want something handcrafted, right?
And so the choice side of it is there,
and it's really being blown out.
And I just, yeah, I continue to have a lot of faith
in people, and I think people don't want to,
I don't think people want to live in a vortex of just
uniformity and sort of mutual shaming forever.
I think that at some point people
want to do different and new things.
And I think that the internet also makes that possible.
Welcome to the House of Macadamias. Macadamias: a source of Omega-7, linked to collagen regeneration, enhanced weight management, and better fat metabolism. Macadamias: healthy, brain-boosting fats. Macadamias: paleo-friendly, keto, and plant-based. Macadamias: no wheat, no dairy, no gluten, no GMOs, no preservatives, no palm oil, no added sugar. House of Macadamias: dry roasted with Namibian sea salt, cracked black pepper, and chocolate dips. Snack bars come in chocolate, coconut white chocolate, and blueberry white chocolate. Visit houseofmacadamias.com slash tetra.
Do you think the big tech companies are too big to fail?
No.
Could something come and replace Google for search?
Yeah.
Well, so let me start with, the goal of every company is to become too big to fail.
The goal of every company is to become a monopoly. The goal of every company at some point is to get the government to give them monopoly status and protect them.
And so there's this term, regulatory capture.
So big companies hire all these lobbyists.
And at some point, what they're trying to do basically is they're trying to write laws
and then get their own people into key positions at regulatory agencies so that the government
basically becomes their overseer and their protector.
To prevent people from growing to compete.
Yes.
To prevent people like us from backing founders, like the ones we back, who are going to develop the disruptive thing that's going to ultimately hollow them out.
And so by the way, this is a cycle.
All of our companies grow up and they want to do that too.
So this is a cycle.
And in this one, we're both the dog and the bus.
Venture Capital funded Google.
Google is now trying to do what I just described.
They have this active effort underway
to try to get government regulatory protection.
And then we are back in the next generation of companies
that are trying to basically screw that up.
I mean, look, Google, this is very public right now.
Google is going through exactly the process
you just described right now with search, which
does AI make search irrelevant, right?
But does it make any sense to go search and click on links
if you just have an AI, you know, a ChatGPT
that can just give you the answers?
And that's a very big product question for Google.
And then there's also a very big revenue question in there,
which is 99% of Google's revenue comes from ads.
The ads are these keyword ads on search results.
What happens to that?
And if the user's not clicking on links,
are they clicking on ads?
And so will there be ads in AI?
Or even if there are ads in AI, will those ads work the same way?
And what's your prediction?
Like, how do you see that playing out?
Yeah, look, so, I would say Google is in this very interesting position. Number one, they invented a lot of these AI technologies, right? The key breakthrough for ChatGPT actually happened at Google in 2017.
And in classic big-company fashion, they didn't use it. If you talk to people at Google, they will tell you that Google could have had GPT-4 the way you have it today. They could have had it in 2019, had they been focused on it. But they decided not to do it.
So this is like a classic big company thing.
Now they've woken up.
Now they feel threatened.
They understand everything I just said.
And so they've got this internal effort called Gemini
and they're going to try to leapfrog GPT. And they've even gotten one of the founders to come back to work on that.
They've done it.
How will it work for the ad model?
That is an open question.
That is a very open question.
I would give two assumptions.
My assumption is number one, yes, there will be a transition
from search results with ads.
There will be a transition from that
to just ask a question, get an answer.
And I think that transition is underway right now.
I think Google's gonna,
they're already starting to launch their own version of that.
So like, when you do Google searches now,
and a lot of those searches,
at the beginning, they'll actually just try to give you
the answer. They're trying to lean into that.
So look, on the ad side, the argument for ads in AI is actually quite similar to the argument for ads in search, which is basically, if you're searching, I don't know, where should I go on vacation? It's like, well, how about this beach in Thailand?
It's like, okay, well, the very natural next thing to do would be,
I'm going to click and buy a ticket,
and I'm going to click and buy a hotel reservation.
And so presumably, people are going to be talking to the AI about things that
they ultimately want to buy and then there will be an opportunity in there to actually be helpful
to actually have an app that actually makes it possible to do the thing that you're talking about.
Seems like if you're going to something for information, and they have an interest in selling you a particular thing, that undermines their ability to give you the best information.
So the history here is that search ads were actually not invented by Google. They were actually invented by a different company in the 90s, by this guy Bill Gross, actually down here in LA. I think it was called GoTo.com at the time. They were the first company that rolled out a search engine where there were ads, and they're sort of ads in between the links.
It was actually viewed out of the gate
as unethical for that reason.
It was like, oh, this is bad because it biases the results.
By the way, there's some truth to what you're saying.
Now there's a commercial incentive.
I can't imagine a version where it's not 100% true.
Well, but now here's why it's not 100% true
because there's utility value to it, right?
Like it's like you don't know,
you don't know if you're, because of bias,
we don't know that.
Yeah, but at some point you wanna buy something, right?
Like, yeah, but how do you know what to buy?
If the thing that you're trusting
to give you the information is trying to sell you one of many,
that's a problem.
Well, so here's the other thing that, and you might view this as making the situation
better or worse.
They don't sell these ads on a fixed-price basis, they're auctions. Ads are priced by auction, and so the ad that you actually see is the guy who's willing
to pay the most to present the ad.
So it's not based on the best service.
It's not based on the best service, but it's also not just the lowest common denominator
thing either.
It's the person who can justify paying the most for the ad, and it turns out there are
a lot of product categories where the guy who can pay the most
for the ad has the best product.
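As a rough sketch of the auction mechanic described here, in Python: the advertiser names and bid amounts are hypothetical, and real search-ad auctions, Google's included, also weigh ad quality, which this toy version ignores.

def run_keyword_auction(bids):
    # bids: advertiser -> maximum price they are willing to pay per click.
    # The highest bidder wins; here they pay the runner-up's bid (a common
    # second-price rule), though a simple "pay what you bid" rule would
    # equally match the spirit of "whoever pays the most gets shown."
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price_paid = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price_paid

# Hypothetical advertisers bidding on the keyword "thailand beach hotel":
print(run_keyword_auction({"hotel_a": 38.0, "hotel_b": 40.0, "hotel_c": 12.5}))
# -> ('hotel_b', 38.0)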
That's a real stretch. That feels like a real stretch.
Well, Mercedes spends a lot more money on that form of advertising and marketing than do companies with lesser products.
Who says that? Who says that? That's the best product.
I'm going to tell you what it does because generally speaking in the world, the argument
goes, the better the product, the more expensive the product, the more the company is going
to spend on advertising.
Do you believe that?
I think there's a point to it.
Generally speaking,
I'm not saying can you argue it, do you believe that?
Well look, a lot of times,
when I'm looking, okay I'll give you something,
a lot of times when I'm looking for something,
and I just wanna buy something,
and I don't wanna spend the next week trying to research it
and navigate it, trying to correct for all the biases,
and I just need the thing.
It's like a pretty good proxy.
It's like okay, if they're willing to spend $40
to get this thing in front of me, click.
Right?
Why do I feel good about that?
Because the guy, because the key is self interest,
like if they weren't making money with the $40,
then they wouldn't be spending the $40.
Like they spent the $40 even before they knew
whether I was gonna buy it.
And so it's costing the advertisers
real money to run the ad.
And so there's some proxy there that they have a healthy business. The mean thing you could say is maybe they're just the best funded, and there are times when categories get overfunded. But this is where self-interest kicks in and helps you, which is that most businesses need to make money over time. To do that, they need to calibrate how much money they spend on marketing versus everything else. And over time, if they can afford to spend money on marketing, they probably have a pretty good product, because a lot of people are probably buying it, right? And if they don't, they don't.
And so, right.
So, the look on your face is the look that everybody had in the 90s when this idea first rolled
out, and it was exactly this argument.
It turned out it worked.
Like, it worked in that people have gone along with it.
Yes, they have.
People have given up the search for the good thing in exchange for the convenience of somebody wants to sell me this
and they're telling me what to buy.
Well, yeah, so here would be a question.
Are people more or less likely to buy the better version
of the product out of all the choices today
than they were before the internet?
Depends on the person.
Yeah, okay.
But like a lot of people,
there's also just a fading memory thing that I think happens, which is, prior to the internet,
it was actually really hard to get
any information on products.
It's true, right?
Like, God help you trying to find out. Well, this was a big issue in the car industry for a long time. Some cars were just much more fundamentally prone to breaking down than other cars, but it was actually really hard to figure that out.
And it was hard to aggregate,
it was hard to either have
an authoritative voice that would tell you
or it was hard to actually have
an environment where people could tell each other.
Right, so, Consumer Reports.
It's hard for you to do it.
Look, they built a big business.
They built a big business because of that.
And there was a certain, I'm sure you remember this,
there was a certain kind of consumer
who was glued to consumer reports and used it as the Bible
for everything they bought.
But there were a lot of people who didn't,
because there's a lot of people who just have other things
going on in their lives.
And so, I think on balance, the internet has made it
so the better products have done better, relative to worse products.
I mean, there's just basic examples, which is, remember,
when you used to mail order things,
wait four to six weeks for delivery.
Yeah, and I had no idea what you were going to get.
Right, yeah, exactly.
And there was some drawing and some catalog somewhere,
and whether it was even the product.
Right, and so, today, like everything's overnight,
there's reviews and ratings. And again, is it perfect?
Like, are there like fake reviews?
It's definitely better.
And you can order several different options.
And you can see when that works for you.
And everyone understands, because it's mail order,
things get sent back.
I definitely prefer it now.
That's a different question, though,
than the corruption built into the system. Yeah
There's some disconnect.
Oh, look, having said that, I'll defend Google's honor, I just want to go one more step, which is, Google doesn't care. Google's running an auction. They're optimizing on the price of what the person's willing to pay to put the ad in front of you. So, to my knowledge, they're not putting their thumb on the scale. They're not basically saying, we're going to rig it.
Like Amazon is.
Well, so Amazon, Amazon's in an interesting spot.
Are Amazon ads surfacing the high-quality products?
No, not even the ads.
Amazon, if you're looking for whatever,
often the first recommended choice from Amazon
is the one that Amazon makes.
Oh, that's also true.
Yes, exactly.
And the FTC, there's a government investigation now where the FTC is trying to basically force them to stop doing that.
Is that true?
Yeah, for that reason.
I don't know whether or what will happen
with that case, but they're trying to.
So yeah, look, I take all of your points. I think they're well made.
You know, look, this is all going to get rethought
in the AI world, right?
So, you know, look, should the AI have a view,
should the AI have opinions of product quality,
the way it has opinions on everything else?
Well, so Elon, this is also happening
in social media, right?
So Elon has this thing now, community notes, right,
for X.
What is it, and how does it work?
OK, so it's a very clever thing.
So all these social media companies,
these social media companies all used to be free speech.
You know, free speech, free speech,
the free speech wing of the free speech party.
And then it just turned out, like a lot of people
said, a lot of things that made people mad.
And so these companies all created what they called trust
and safety groups.
And of course, that is like a super Orwellian term. It means you can't trust them at all, and they're totally unsafe.
Right?
Right.
Right.
And so what you ended up with in the trust and safety groups
was just basically, you have some well-many people.
And then you had a lot of people who were in there
to basically put their thumb on the scale.
And this really kicked in around politics, where they would
have just obviously different standards for different
political parties that everybody could kind of see in plain sight.
And so that model never worked very well.
And so the people at Twitter,
and I think this actually started before Elon got there,
but he embraced it when he got there.
They came up with this new idea called Community Notes.
The idea of Community Notes is that anybody,
there are people who have the button to be able to submit a community note,
which then there's a way that you kind of apply
and you become one of those people.
It's like being a Wikipedia editor or something like that, right?
And so there are certain people
who can propose community notes,
but a community note does not get approved and used
unless people who have a history of disagreeing agree on it.
So for example, we see a statement,
somebody says something, and it's just like,
it's wrong. You and I are two community note editors.
We have a history of political disagreements.
We have a history of conflict
where we really don't see eye to eye on things,
but then this time we see eye to eye.
I see.
And then that's the one that gets posted.
Right, and so I would say it's been working
shockingly well.
Like, it's not, and by the way, it's not perfect
because it still goes wrong,
and people try to trick the system,
but it's by far the best system that I've seen
that actually works.
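A rough sketch, in Python, of the agreement rule described above. This is not the actual Community Notes algorithm, which per X's public documentation uses a matrix factorization model over the full rating history; the raters, votes, and threshold below are hypothetical, and the sketch only captures the core idea that a note ships when raters who usually disagree both mark it helpful.

def historically_disagree(rater_a, rater_b, past_votes, threshold=0.5):
    # past_votes: note_id -> {rater: "helpful" or "not_helpful"}.
    # Two raters "historically disagree" if they voted differently on
    # more than `threshold` of the notes they both rated.
    shared = [v for v in past_votes.values() if rater_a in v and rater_b in v]
    if not shared:
        return False
    differing = sum(1 for v in shared if v[rater_a] != v[rater_b])
    return differing / len(shared) > threshold

def note_ships(helpful_raters, past_votes):
    # A proposed note is approved only if at least one pair of the raters
    # who marked it helpful has a history of disagreeing with each other.
    for i, a in enumerate(helpful_raters):
        for b in helpful_raters[i + 1:]:
            if historically_disagree(a, b, past_votes):
                return True
    return False

# Two raters who disagreed on past notes n1 and n2 both find this note helpful:
past = {"n1": {"rick": "helpful", "marc": "not_helpful"},
        "n2": {"rick": "not_helpful", "marc": "helpful"}}
print(note_ships(["rick", "marc"], past))  # True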
So he's done this very clever thing, interesting thing
where you can now do community notes on ads.
Right?
On ads.
Interesting.
So if somebody runs an ad that makes false claims
or exaggerate something, it can get community noted.
And so the platform itself is like,
oh, actually the claim in that ad is actually not correct.
Here is the actual truth and here are the links for the truth. And you can
imagine the reaction of the advertisers. I mean, right, you know, they're just like
absolutely furious every time this happens. But it's, but he's running this experiment
basically saying, is the faith and trust of the user base being improved by the fact that the ads can actually be legitimately fact-checked? Is the value of that greater than the lost value of the advertisers who get alienated?
And by the way, he would say the advertisers who get alienated by that are probably the
worst advertisers.
Anyway, there's some issue in service.
If you're making a product and if people have a problem with it, if you're the maker of
it, you want to know that.
If you're good, right, if you're honest.
Absolutely.
Absolutely.
Yeah, if you want to improve your product to always be the best it could be, of course you'd want to know that.
And if you don't, maybe you should not be advertising.
Exactly.
Maybe you should not be allowed to advertise.
And so I think that's a very clever experiment.
And I think there's a lot more of this kind of thing.
There's a lot more experiments like this people can run.
I think in the next 20 or 30 years, we'll figure this out. This is all around trying to make the global village work better.
Like this is how to get basically 8 billion people to kind of share a mind space and not
want to kill each other.
And, you know, this is the kind of technique that I think you can start to figure out.
So much of today's life happens on the web.
Squarespace is your home base for building your dream presence in an online world.
Designing a website is easy, using one of Squarespace's best in-class templates.
With a built-in style kit, you can change fonts, imagery, margins, and menus. So your design will be perfectly tailored to your needs.
Discover unbreakable creativity with fluid
engine, a highly intuitive drag-and-drop editor. No coding or technical experience is required.
Understand your site's performance with in-depth website analytics tools. Squarespace has everything you need to succeed online. Create a blog.
Monetize a newsletter.
Make a marketing portfolio.
Launch an online store.
The Squarespace app helps you run your business from anywhere.
Track inventory and connect with customers while you're on the go.
Whether you're just starting out or already managing
a successful brand, Squarespace makes it easy to create and customize a beautiful website.
Visit squarespace.com slash tetra and get started today.
Can a company grow too fast?
Oh, yeah, for sure.
Tell me about it.
Yeah, very common.
Very common.
Yeah, well, so especially when either the revenue just starts to pour in or when they
raise too much money.
And the basic mechanic is very straightforward, which is just, you know, you have a problem.
If the answer is you can always hire somebody to fix the problem, then that's what you'll
always do, because it's the easiest thing to do, right? I'll just go hire
another engineer and they'll just go do that. The problem is, it's like any human organization,
like organizations behave very differently when they're large versus when they're small.
And there are big issues involved in running large organizations that are very complicated
and it's very easy to run them badly. I use the metaphor of Elvis leaving the building. Like it's very easy for Elvis to leave the building: the company grows too fast, the good people quit, the bad people stay. We have another thing we call
the law of crappy people. Organizations have levels, you know, promotion levels, level one through eight or whatever. And there's always an internal process to try to figure out when people should get promoted. And so the law of crappy people says that the quality of any level in the company will degrade to the worst person at that level.
Right. And it's a very natural human thing, which is just like, well, I want to get promoted to level three.
And like, I know I'm better than, you know, that other guy who just got promoted.
Right. And so whoever is the worst person at that level now sets the bar.
And so basically, as companies grow, performance tends to collapse,
kind of level by level.
Meetings and communication overhead overwhelm everything.
People are just sitting in meetings all day.
Or another thing that happens is you hire too many people
from another culture.
So you think you have your unique culture,
and then you hire 300 people from Google,
and you discover all of a sudden that you're Google all over again, but without their business, just with all their problems.
Yeah, and so that's, I would say, part of why more companies die of indigestion than starvation.
Right?
Because starvation is no fun, but starvation
is highly motivating, right?
If you don't have any money and you only have a few people,
you have to be smart.
If you are flush with cash, you can try
to spend your way out of all your problems.
And you could say there's a metaphor here for individual lives, right? Which is, a lot of people get in a lot of trouble that way.
Yeah, I think that's why I was asking about if people mainly came to you for money, because it seems like money is not at the highest level of what it takes to make great things.
It's a piece of the puzzle, but it's not the main piece of the puzzle. I don't think.
It hasn't been in my experience.
It has not been. That's right. And, people don't believe this, but we find ourselves in practice often trying to get our companies to not raise as much money as they can, because down the road, look, we just tell them, it's just bloat, just simply bloat. Look, you know how to run this company with 50 people. Everybody feels great. You just need to close your eyes and imagine that that's not how your day is going to go. Your day is going to go to organizing meetings and calendars and conference calls.
And if you can grow the company with the 50,
why would you hire more?
Yeah, yeah.
Now usually you do.
So there are some companies, there are a small number of companies that become big and important
with very small numbers of people.
And there are these kind of very magical successes.
You mean examples?
So the great one was Minecraft.
So Minecraft at the time that it sold was, I think, three people total.
Wow.
And what did it sell for?
Remarkable.
I don't know, $1.5 billion or something.
But it was this giant global phenomenon.
And now it's, you know, it's owned by Microsoft.
And now there's a lot more than three people working on it.
But it's an amazing thing.
I mean, look, Bitcoin was like one guy. It's worth like $800 billion today, something like that.
Then there was WhatsApp: when they sold it to Facebook, it was 50 people. Instagram, when they sold it to Facebook, was, I think, 11 people.
So you have these kind of very kind of special things.
Midjourney is one guy, plus basically a bunch of contractors, and it's this huge force in AI.
So you have these very special things.
But then at some point, there's pressure on the other side.
At some point, there are things that are just obvious
that these things should do, that they just don't do.
Like Minecraft was like a great viral phenomenon,
but it really should be what Microsoft's turning it into today.
It should be like an entire world,
and there should be, it should be like,
you should be able to use it for like education,
and it should run on all these different platforms,
and it should have all these different, you know,
kinds of all this new content,
and you know, it should be this thing
that people can be in for 20 years.
And so, you know, it's much like deeper and richer
and more built out than it was.
You know, same thing with WhatsApp.
What's happening at Meta right now?
WhatsApp. WhatsApp is very widely used.
It's one of the main ways that small businesses communicate
with their customers.
But there's no capability, there's been historically no capability inside WhatsApp for a small business to be able to, like, maintain a customer database or whatever inside WhatsApp.
It's just like a very obvious thing,
like for a restaurant or something.
And so, you know, there are things like that that make sense
to have that are actually valuable to the users
that just require more people.
So yeah, there's pressure on the other side to grow.
It's just there's some theoretically optimal rate of growth of heads,
and it's very easy to, it's one of those things where nobody ever quite gets it right.
Well, and then look, the other thing that happens is the economic cycle,
these companies all tend to over hire during an economic boom.
And then at some point there's a crash and there's a rationalization,
and you kind of wish, you always wish that you didn't have to go through that
because it's bad to do layoffs, but it's very hard to just avoid that.
It's very hard to over time have a sustainable model
where you don't make mistakes.
How did you meet your partner Ben?
Yeah, so I met Ben.
We were actually trying to staff up at our first company, Netscape, in 1994. Yeah, we were desperate, we had the tiger by the tail. The internet was going to take off and we knew it. So we had to kind of get in position. And so we were trying to hire good people.
And he was one of the first people who came in from an existing successful software company to join us.
And he was a young guy.
His job was called a product manager. So somebody who kind of orchestrates things. And then very quickly, it just became clear that he was one of the sort of sharpest young people we had.
And so we promoted him very rapidly over the course of the next few years.
And I ended up working very closely with him.
And I always thought, we sold our company in 1998, four years in, which felt like an eternity but turned out to be only four years. If we had stayed independent, I think he would have
ended up being the CEO.
Like he was on track to do that.
So I spent a lot of time with him
and we got to know each other really well.
And then it sort of became obvious
after we sold our company.
I wanted to start a company and so it became obvious
he would be the right person to do it with.
Is it as much a friendship as it is a partnership?
There's an old thing around founding teams, and just generally, which is that a friendship formed out of a business relationship works better than a business relationship formed out of a friendship. When two people who are personal friends go into business together, it often destroys the friendship, whereas people who have a business relationship first and really learn to trust each other in business can become very good friends.
And so for us, it's been that latter.
It was business first and then it's become a very deep friendship.
And we talk all the time and we get along great and we,
you know, we're always teasing each other.
It's all good.
Having said that, it's now almost 30 years.
Wow.
And so there also is an old married couple,
you know, kind of aspect to it, right?
And some of that is, you know, we can finish each other's
sentences and, you know, some of that is we get
on each other's nerves.
So we still argue all the time.
It's interesting.
What would be something you would argue about?
Just constant arguments.
I mean, about basically everything.
Well, obviously a big benefit of the partnership is you have somebody you can actually talk to, who you trust, who can actually tell you when you're getting things
wrong, so we argue about, you know, the biggest argument is always around people.
You know, is this person good or not?
You know, should we, you know, should we promote this person, should we fire this person?
You know, so that's around our firm, a lot of that's around the companies we work
with.
You know, he works with a lot of our CEOs. He's, among other things, he's like our management guru.
So he's the guy who basically works with all the
most high potential CEOs to help them kind of develop.
And so he developed kind of the book on it. He wrote the book on it, actually, The Hard Thing About Hard Things. As a consequence, he's legitimately
very opinionated about people kind of coming out of that.
I would say he's more focused on culture.
He's extremely focused on the culture of the company
that a CEO is developing.
And then he applies that in our firm,
but he also applies that to the CEO's that he works with.
And so he really wants to understand,
when he's evaluating a founder,
he really wants to understand what culture
the person's building is in their company.
I'm a little bit more open to variations. I'm a little bit more
open to like what he might call recklessness or chaos or you know original thinking that
maybe doesn't work and might blow up in your face. And so I'm always a little bit more
on the side of you know maybe we should like you know.
Would you say you're more of a risk taker than he is?
I would say it's different kinds of risks. I think we're pretty similar in aggregates.
Different kinds of risks. Here's one thing, and I'd be curious whether he would actually agree with this: I think he thinks, and I think he's right, that there are kind of timeless ways to build great cultures. Like, leadership is not a new idea, right? It's something that has existed for a long time. Cultures too. His second book is actually on culture, right? And he goes through, he talks about Genghis Khan and the Mongols, and he talks about, you know, the code of Bushido and the samurai, kind of these cultures. He's deep into that kind of thing.
So he draws a lot of historical examples for cultures, and he's like, look, these are
these continuous threads of like, what does it mean to have people trust you, to be able
to bond together into a team, write all these things.
The other side of it is, you know, look, people who come up with new ideas
often have new ideas on everything.
And so a lot of the people who have new ideas on like technology
also have new ideas on like management and culture
and how to be a CEO and how to start your company.
And do we really need all these meetings?
And can we just run everything on video conference?
And do we even need to have those?
Or can we just do everything on Slack?
And so a lot of these founders will have these very creative
original ideas on how to actually run their companies.
And I would say, most of the time, Ben's speech to them would be: stop, focus your creativity in the area that you actually understand, which is building products and designing things.
And then just get good at running the company the way
the companies have been running for hundreds of years.
Like don't try to innovate on everything.
And I would say he's usually right on that, usually those experiments end in disaster.
You know, that's it.
Every once in a while they don't.
Every once in a while there's a totally new approach.
And I just say, case study of this right now is Elon.
We work with Elon.
I've known Elon for a long time. Tesla and SpaceX predated us as a venture firm, but we were involved in his Twitter acquisition.
And Elon has a completely different way of running companies.
And you can read about this, press,
the press has covered it at length, right?
Like it is a much more blunt chaotic hair trigger,
direct engagement way.
He is very quick to fire people.
He's extremely aggressive.
He micromanages, he micromanages to the Nth degree, he's involved in everything. And it's a playbook that for most CEOs would lead to disaster.
Wasn't that the case with Steve Jobs as well?
Yes, Steve Jobs is a good point.
So Steve was a genius.
Steve was a genius with a dark side. And the genius was real, and the dark side was real.
Everybody's seen the results, and the results,
you know, obviously make the whole thing worth it,
but like Steve could be really rough on people.
And there are many true stories of him being really rough on people.
And this would be a great example.
And Ben's view would be, yeah, you just don't do that.
You figure out how to be great without basically
being rough on people like that.
And I would say, in general, that's the best advice.
In general, if you're coaching a young kid on how to run a company, you want them to try to calibrate that. Because generally, if you're really rough on people.
Because you don't really get anything for that behavior.
Well, Steve did.
Well, we don't know that.
We don't know that.
Because he didn't have Ben coaching him.
Okay, so this would be the argument.
Okay, so this would be maybe the argument that Ben and I
would have about it, which is Ben would say,
Ben would say what you just said, Ben would say,
no, that was purely destructive.
If Steve had not done that, he would have been even better.
Well, we don't know.
It is no way to know.
But I do think that's a real question.
It's a real question.
I can make the other side of the argument.
Tell me.
The other side of the argument is,
A-players want to be in a company of all A-players.
Like, the people want to be around other great people.
Yes.
Most companies are way too tolerant and indulgent of mediocrity.
Most companies are good at getting rid of just like,
what we call shitheads.
They're good at getting rid of people
who just are clearly no good,
but they have trouble with this mediocrity.
Right, good but not great.
And Steve just like could not tolerate mediocrity.
And so when he identified that you were mediocre, he actually, there was a term called flipping the bozo bit. And when he flipped the bozo bit, he just, he fired you that day, because what he had learned over time, his view was that you never un-flipped the bozo bit, right? You just needed to make the call right up front.
And Ben would disagree with that idea?
Yeah, yeah, he would say, look, it's just too hair-trigger. Like, you're in a meeting with somebody and they say one thing that's wrong and you fire them on the spot. Like, how do you know? You're not telepathic, you just happened to be in that meeting. And look, maybe his wife yelled at him this morning, or maybe his kid just died. I mean, you don't know what.
And also maybe he's actually right because it does happen sometimes.
Exactly. Maybe he's right. Yeah, exactly.
I'll tell you the conversation we had on this
that really made a big impact on both of us.
It was with Andy Grove, who has since passed away.
But when he was running Intel, he was considered
kind of the best CEO and kind of the history
of the tech industry.
And when we were young, we used to meet with him
and he would help us on things.
And so one of the questions we had for him was like, wow,
it always feels like we're firing people too late, right?
It's like every time we fire somebody,
it's like, wow, we wish we had done it like nine months
sooner, right?
Because they had done damage in that nine months
or because you missed an opportunity.
Yeah.
It's something better.
Well, it's this flipping the bozo bit thing.
It's like, okay, we identified that they had an issue,
a problem, you know, every once in a while people
can turn it around.
Most of the time, in a workplace environment,
if you've got a competent management,
when they identify that somebody has an issue,
generally people don't turn it around.
They just statistically, they just generally don't.
And so you have this thing where, like,
if you're running the standard management playbook,
you put people on what's called performance plans,
and you try to more aggressively kind of coach them
for improvement, and then they normally
don't make it through.
And then at some point, three months or six,
or nine months later, you kind of make that call. And actually, the reason you put people on performance plans is because most companies are too indulgent of mediocrity, and so most companies don't aggressively performance manage along the way, they don't document things along the way, and you need to put people on the performance plan to have the legal justification for firing them, because you need that paperwork to show that you actually did
it with them but like you know you're just like shit, I'm wasting time
this person is doing a mediocre job.
They're infecting the organization with mediocrity. I wish I could move faster.
So we asked Andy.
We asked Andy, basically, is it normal to always
feel that you're firing people three months
or whatever, nine months, too late?
And he's like, yeah, he's like, that's what it always feels
like.
And he said the reason is because if you did it more
aggressively than that, the organization would view you as a sociopath, right?
And it's like, well, Steve didn't care about being viewed as like,
you know, he was just like, yes, we're just not,
I'm not optimizing for what people think of me.
I'm not optimizing whether people think that I'm nice.
I'm optimizing that the people around me are going to be at the top of their game.
And if they're not, they're going to go, right?
And Elon is exactly the same way, right?
And then the results, and again, this is the argument that Ben and I have, is like, okay, Apple. Look at SpaceX, you see rockets landing on their butts, right? Like, oh my God, right? Is this Steve and Elon playbook, the more aggressive playbook, actually the one everybody should be running?
And Ben's comment to that is, no,
most people will destroy their organizations,
when they try to do that, like it doesn't actually work.
I go through all of that just to say, yeah, that's the kind of debate that we'll have.
Look, it's not resolved, right?
Quite honestly.
Well, it's unknowable.
It's unknowable.
I don't think it ever will be resolved.
Tell me why a company would choose to stay private and why it would choose to go public.
So, most of the great business institutions that are around for many decades, most of them
are public companies.
And the reason is because they end up with a lot of employees. They end up with a lot of
customers. They end up with a lot of constituents. They end up with like, you know,
governments. They need to be very trusted.
Any that are private?
Yeah, there are big ones. I mean, there's a bunch. One prominent one would be Koch Industries, Charles Koch. It's a giant industrial company, they're huge, they're giant, they're big. I mean, I don't know the numbers, they're just very large.
They'd be a Fortune 50 company tomorrow if they went public,
but he's kept it private the whole time.
Bloomberg, the company Bloomberg, is another example of a big private company. It's always been private. Mike Bloomberg just owns it outright.
So yeah, they exist, but most of them end up public.
And I think the reason they end up public is just
because if you're an institution at some point,
you kind of have to act like an institution.
You have to be trusted as an institution.
So we have to be transparent.
People have to know what you're doing.
Public companies carry with them these responsibilities to report to the public.
You have to explain yourself in a way that's very open and you're under these very stringent legal requirements to do so accurately.
So people tend to trust the things that public companies say more than private companies.
How accurate are those results?
So in the United States, I would say quite accurate.
Either they are wholly accurate, or they're off only a little bit.
Outside the US, it is still the Wild West.
And so we went through a series of scandals in the US business world, a series of scandals in the 2000s, around Enron and WorldCom, the companies of that era.
And the stringency of the accuracy of the reporting and the level that you get reviewed by the government
and the penalties for lying are quite high now.
And look, most people, most responsible people want to run something that is like legitimate
and genuine.
There's an old thing I used to hear when I was a kid, which is like, remember there was this gangster named Meyer Lansky, who famously ran a big part of the Mafia in the US, and people used to say, well, Meyer Lansky was so successful as a gangster, just imagine how successful he would have been if he was running General Motors, right?
And it's actually, I think that's actually untrue, right?
I actually think like, you know,
a Meyer Lansky is actually very ill-suited for running a big public company, because if you get caught lying,
you know, and you break somebody's legs,
and like, it's a big problem.
It's not what a trusted institution does.
And so the best and the brightest actually I think want to be legit.
They want to do something that's actually genuine.
And that's true for companies as well.
Look, having said that outside the US, like,
boy, the scandals, I mean, even the scandals in Europe
are like mind bending.
And then once you get outside, the developed world, like things get really hairy.
So most of the world is not well developed on this stuff yet.
But anyway, so there's the transparency, kind of truth telling component to it.
There's also the, at some point you want to have a lot of shareholders, like at some
point you want kind of the world at large to be able to invest in your company.
You know, you want ordinary people to be able to have a stake in your success.
You want everybody to have your stock in their retirement plan,
because then it sort of gives everybody a reason to kind of root for the company.
At some point, you want to be able to,
you've got all these employees that you're paying in stock.
At some point, you want them to be able to sell their stock and be able to buy houses
and send their kids to college.
You also get what's called a currency.
So your stock becomes a currency.
And so you can use your stock to buy other companies.
It's easier to raise debt when you're public.
And so by the way, a lot of employees just want to work for,
they want to work for an institution.
They don't want to work for some fly-by-night startup.
They want to work for something that's trusted where they can.
One of the things I do with candidates a lot of the time
is especially immigrants, kids, some kid
who's a first generation immigrant.
I'll literally get on the phone with their parents.
And this has happened repeatedly.
I sort of explain to the parents, like no, actually, it's okay.
It's okay if your kid doesn't go to work for IBM, right?
Or Microsoft or Google.
It's okay if they go to a startup because actually in the US,
that's not actually career death. Because the parents are worried about career death
if it doesn't work out, right?
And so there are a lot of people who just want to work for a stable company.
So anyway, so those are all the reasons to go public.
The reason not to is just, look, you're exposed to all the scrutiny, right?
Is it only scrutiny, or anything beyond scrutiny?
Well, so it's a consequence of scrutiny.
So you have a stock price, the stock price trades every day.
But do you start making decisions based on the stock price?
Big time.
But that's not your core business.
Correct.
Changes your whole business model.
Exactly.
100%.
That seems bad.
Yes, well, it depends on...
Yes, so it destroys a lot of this.
This ruins a lot of companies.
So the easiest failure case is that you've been running your company, just running your
business the way you run a business and then all of a sudden you've got this daily
scorecard and you're optimizing the daily scorecard.
Or even if you can get through the day,
you're optimizing quarterly results.
You're reporting every 90 days
and you're optimizing for that.
And so, yeah, so when this goes poorly,
basically what happens is time horizon contracts, right?
And so instead of planning things
one, three, five, 10 years out,
you're planning 90 days out
and nobody can do anything great in 90 days,
not at the scale of these companies.
And so you just basically,
like this is when Elvis leaves the building, right?
So, and by the way, often it coincides with when the founder steps down and then they hire professional CEOs, and professional CEOs are sort of optimizing for their own compensation,
they're optimizing on these short time frames.
So that's a big downside.
There are ways to deal with that, but that is a big downside for sure.
In your position when you're investing in companies for 10 or 20 year trajectories,
if they go public, how does that change your position?
Because it's now they're playing a different game.
Yeah, that's right.
They're no longer playing the 20 year game.
Well, they might be.
There are public companies that do, right?
So Amazon played the long game
the whole time, still does.
Apple played the long game.
They were public, the whole time Steve Jobs did
the turnaround of Apple.
They were a public company. Oh, look, Tesla has been public for,
I don't know, a decade. SpaceX is private, but Tesla's public. And Elon runs Tesla
of the exact same way he runs SpaceX. Netflix invests for the long term. There's a lot of these
companies that have, I think, done it. I can tell you Mark Zuckerberg, you know, at Meta, this hasn't changed. You know, look, it can be done. It requires a higher competency, you know,
it's a little bit more of a high wire act.
Like you're getting graded by the world
on what you're doing and like, you know,
oh, this is a speech we give to CEOs
because they're like, oh, well, I'll just say whatever I want, I'll just do whatever I want. It doesn't matter. And I'll be like, yeah, but you just need to imagine what happens when your stock drops like 97%.
You're on the front page of every business newspaper
and every website in the world talking about
what a turkey you are, right?
Have you experienced that?
Oh, of course, yeah, multiple times.
People picking at you, sniping at you, in ways you're not going to like?
Oh, yeah, it's just so horrible, it's awful.
Yeah, no, it's happened multiple times.
Most of these companies, there's this great form of a chart,
financial chart called the drawdown chart.
And the drawdown chart is basically,
its baseline is zero.
And then the chart is the percentage drop
that the stock is experienced in different points of time.
And so it's zero to like a negative 100%.
So it's like this.
And then it's like, and what it looks like,
it's a little bit like somebody having a heart attack.
It's a cardiac arrest.
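To make the chart concrete, here is a small Python sketch of how a drawdown series is computed: for each day, the percentage drop from the highest price seen so far. The prices below are invented for illustration.

def drawdown_series(prices):
    # Running drawdown: 0% at new highs, negative otherwise.
    peak = float("-inf")
    out = []
    for price in prices:
        peak = max(peak, price)
        out.append((price - peak) / peak * 100.0)
    return out

prices = [10, 12, 15, 6, 4, 9, 20, 3]
print([round(d, 1) for d in drawdown_series(prices)])
# -> [0.0, 0.0, 0.0, -60.0, -73.3, -40.0, 0.0, -85.0]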
And so the drawdown chart for Amazon is really interesting, because from a distance, it just looks like this straight line of success.
But like there have been like,
I think five different times in the last 20 years
where the stock has dropped like 97% or something like that.
I mean, just like these massive crises of confidence
for basically everybody's just like, yeah, this is-
Well, it had no profits for how many years?
For a very long time, very, very long time.
For a very long time, exactly.
And so you were running, yeah, right,
you were running into it.
What Jeff had was, he talked about it publicly,
he's like, look, we're investing for the long run,
we're re-investing every penny of internal profit
back into the business, we're building
what they call intrinsic value in the business
and we're just not gonna hand out, you know,
dividends to shareholders.
And the investors who went along for that ride did great,
but it's easy to say that,
it's harder to do it when your stock drops 97%.
And the headlines,
I mean, there was a famous cover of Barron's magazine in 2005
and Amazon at that point was already like 10 years old
or coming up on it and it was literally,
the headline was literally dot bomb.
Wow.
Like Amazon is going out of business.
Wow.
Right? They're gonna be worth, it's gonna be worth zero.
Wow.
I mean, I went to investor conferences in the early 2000s
where people were just openly laughing at Jeff.
Just like laughing in the meeting.
Like you're just like completely full of it.
And look, part of being a CEO is people doubting you, whatever, but this is when the entire world is doubting you. Because what happens is, anybody who's had a long career, they've had one of those moments, which is like,
all of a sudden you're talking to your friends,
your friends are like,
are you okay?
Like how are things going?
And then you go home,
your family's like, are you all right?
And you're like, yes, I'm fine.
I'm the same person I was.
I'm not.
Did you know Jeff at that time?
Yeah.
Was he fine?
Yeah, he was fine. Now, how things were between him and his wife at the time, that I don't know, but to the outside world and to all of his friends, he was fine the entire time.
And he was fine the entire time because he's just like, we have a plan, we're executing the plan,
we're not going to get shaken off of this.
Now look, you could also say, a fair response to what I just said, a survivorship bias.
Like, here I sit talking about the ones that work, like, what about all the ones that didn't work?
Because like, a lot of times, when the stock market drives a stock to zero, it's because the company
sucks and like it's going to fail, right?
And so like that's the other side of it, right?
There's no substitute for the thing.
I think every time when the market loses faith in a company and it goes to zero, but the
company still has value, and then it comes back out of the ashes and reinvents itself.
Yeah, so our company, Loudcloud, the company Ben and I started in 1999. We went public in 2001, and then by 2003, the market capitalization of our stock
was half the amount of cash that we had in the bank.
Right, and so had we just simply liquidated the company
and given the cash back, we would have made twice
your money on the stock.
And so what the market basically said was, yes,
these guys have what that meant, what the market was telling us,
the message implicit in the price was, these guys suck so bad,
that even though they have this cash in the bank,
they're just going to burn the cash.
And there's not going to be anything left to show for it.
And then we actually then get most of the credit. He was running the company. to show for it. And then we've actually been, been, been, been gets most of the credit he was running
the company.
He turned it around, and then we ended up selling.
I think the stock went up 40X off the bottom.
We sold the company for 40X that amount.
And yeah, that was sort of a quote unquote turnaround.
Look, Steve Jobs, Apple, so when Apple, when Steve took Apple over in 97, when he came
back, Apple had less than 90 days of cash in the bank.
Like they were about to go bankrupt.
Like that's how bad it was.
In 2009, I had a chart I was carrying around in my pocket
all through 2009, 2010, when we were starting the firm.
And I think Apple was trading at,
I think they'd bottomed out at, a price earnings ratio
of like six.
And what does that basically mean?
So a price earnings ratio of six is basically
what a steel mill trades at
if it's about to go out of business.
It's like trading for the liquidation value
of like the plant equipment.
Like it's a super low, it's basically the market hates you
and thinks you're an idiot kind of thing.
And this was like, this was like right when the iPhone
was taking off.
Right.
And so it was this like, and there's a loose relationship
between PE ratio and growth rate.
Where, roughly speaking, PE ratio and growth rate
should be about the same.
And so if a company's growing earnings 10% a year,
it should have a PE of about 10,
sort of a loose relationship.
And Apple had a phase there where
the PE was six and the growth rate was like 40%,
and in some periods, even at their size, 80%.
And so it was like undervalued by like a factor of 10,
just on like basic math.
And it was obvious to see that.
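(A rough worked example of the rule of thumb being described, using the round numbers mentioned rather than Apple's actual financials: if earnings are growing around 40% a year, the loose "PE should roughly equal growth rate" heuristic would suggest a fair PE somewhere near 40. A market price implying a PE of 6 is then off by roughly 40 / 6, call it 6 to 7x, which is the kind of "undervalued by a big factor on basic math" gap being described here.)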
Well, I wasn't running public money,
so I didn't put my money where my mouth was.
But I gave a lot of interviews at the time where I pulled out this chart.
Because the point I was making wasn't a stock call; the point I was making is
everybody hates tech irrationally.
Like, after the financial crisis, everybody got negative about it.
This was after the dot com.
Well, it happened twice.
So it happened after 2000.
So what happened was the 2000 crash was like a real tech crash.
And tech really fell apart;
there was actually a lot of carnage, and a lot of companies went under. And then what everybody thought was that the global financial crisis of 2008 would cause that to happen again. It didn't, but they thought it was going to, so they acted as if it would, and they traded the stocks as if that was what was about to happen. And so basically 2008 to 2011, 2012 was just this extreme level of irrational hate
and fear. And again, it's not like a super genius thing to be able to say, because
you're looking at it and you're like, well, I don't know, this iPhone seems like they're
going to sell a lot of these things. Right. And it's the same thing. Google was growing
super fast, Facebook was growing super fast. And the world at large had just gone negative.
Well, there's this famous metaphor for the stock market. They say,
think about the stock market as a person, Mr. Market,
and he's like full on, you know, clinically manic depressive.
Right, and there's just like certain times
that Mr. Market is just euphoric about everything,
and there are certain times when he is just terminally depressed about everything.
And then we all collectively are Mr. Market, right?
So it's a group psychology thing,
and so it's very hard to be a participant in the world
and not get pulled into the group psychology. But as a consequence, the market
goes through these wild swings, and it does regularly go through periods where people are just
irrationally negative. And then of course, you know, and then it's like, okay, read the investor
textbooks, and it's like, well, that's when you buy the stocks. And it's like, well, yes, but that's
when everybody's in a horrible mood. And anybody who buys these things looks like a complete idiot,
because everybody knows that they're all going to go out of business.
And so you talked about the consequences of being public. This is one of the consequences
of being public is your companies get caught up in this. And you feel it on a daily basis
in a way that you might not, if you're basically...
You could be in a tech company that's not public, and you're just looking at your bottom line
and everything's fine and you know your business is doing well, whereas the same company, public,
all of a sudden gets caught up in this wave of things
crashing and you crash with them,
but nothing is changing your company.
That's right.
Well, this goes to this relationship between, you know,
the sort of metrics and management, right?
And so there's this whole thing in management,
which is you manage what you can measure, right?
And so if you have a number that you can optimize on,
you tend to optimize on that, you tend to run your company
around that.
It's the same thing politicians do with polls.
I've got a poll number, and I'm just
going to try to optimize around that.
Now, is that the optimal way that people are actually
going to vote?
Like who knows?
But you've probably seen this with political speeches
on TV, or debates,
where they'll have a focus group,
and they'll have a dial that can go from 100% negative to 100% positive, and then there'll be these red and blue
lines.
And it shows word by word, it shows the mood of this focus group watching the thing.
And so it's like a stock price, right, for every word coming out of a politician's mouth.
And so if you're a politician, do you use that as a tool to try to like optimize every
single word coming out of your mouth and like basically become the master of the craft
of political speech giving or do you say, well, that's crazy.
Like if I get wrapped up in that psychology, I'm going to drive myself nuts and I'm going
to end up being incredibly, you know, inauthentic and I'm just going to be like a pure opportunist.
And it's the same thing with the stock prices.
You read a lot of history.
Is this just a passion or do you see some other use in understanding the past?
It's a desperate attempt to predict the future.
So, look, I've been doing this now for 30 years,
starting companies or funding them, you know,
and statistically, with what I do,
it's like a 50% success rate, 50% failure rate,
basically, is what the number is.
Which is pretty good, actually.
It's pretty remarkably good.
Well, it's pretty good. I mean, it
feels terrible. It feels awful. It's like your
business too. Sometimes, you know, 50% is remarkable.
Well, you know, for baseball it's great. There are other
fields. For test taking it's terrible. Driving on
PCH, you've got to score 100%. So yeah, it depends. And here's another thing, which is specific to what we do.
You're betting on things that are one in a million things.
Well, yeah, let's say one in a thousand or something.
Okay, one in a thousand things, 50% is really good.
Yes, and just like in your business, the upside of the winners is bigger than the downside on the losers, right? And so if you have asymmetric upside on the winners
and contained downside on the losers,
that 50% does well over time.
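(A toy illustration of that asymmetry, with made-up numbers: put $1 into each of ten companies. If five go to zero, that's $5 lost; the downside on each is capped at the $1 put in. If, of the five that work, one returns 10x and the other four return 2x, you get back $10 + $8 = $18 on the $10 invested. A 50% failure rate still comes out well ahead, because the winners can return many multiples while each loser can only lose 1x.)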
But the failures are just always horrible.
Statistically you can know that,
but emotionally,
every failure hurts tremendously.
And it's all wrapped up with people.
These are people that you care about, right?
And if one of our companies fails, it's not going to take
our firm down, because of the 50-50 thing.
And our investors understand that.
But it's a founder,
you know, a guy who's poured five years into this.
And it's been a big part of his life.
And some of these people bounce back,
and they go on to other things.
Other people just, at some point, they're just like,
I can't take it anymore.
And they put in all that, and not everything works.
Not everything works, exactly.
Exactly.
That's real.
Yes, very much so.
Right.
And then there's another thing in tech,
and venture, which is called the Babe Ruth effect,
which is that home run hitters strike out more often.
And so, right, the people who are really trying
to do something new and radical,
actually fail at a higher rate, which
would make sense.
Because they're swinging for the fences,
which is the thing that makes sense.
That's right.
Predicting the future with these things is absolutely impossible.
That said, boy, I sure wish that I could.
And so how do you ever possibly predict the future?
And I do think there's some wisdom that comes from understanding, in particular, the human
dynamics.
So I think people do change, and people have changed.
I don't actually believe that, like, we're the same people we were a hundred or a thousand
years ago.
I think actually the people themselves might be changing
in a lot of ways. But look, there are constants to human psychology, sociology,
behavior of human beings and crowds. There are cycles in history of different kinds. And so at least
in the past, you can kind of go back. The risk of reading history is always that you know the outcome.
And so the outcomes look inevitable after the fact.
But if you can kind of get yourself away from that, and if you could,
especially the history works that I really like are either contemporary accounts
of what it was like in that moment to actually experience that.
Or really the best historians are very good at recreating what it actually felt
like to be there when it was all very uncertain.
And then, look, there's also just a lot of tools.
You can just learn. There've been a lot of great people who have navigated through
very difficult situations. How did they do it? What's the toolkit? So yeah,
it's a desperate reading of the past to try to learn whatever lessons it has to give me.
Can you think of an example where something you learned from reading history impacted a real
world decision that you made in the present?
Well, look, rallying people after a disaster, right?
Which is like, okay, there's been a catastrophe
at a company, and now you've got to, like,
re-cohere the team.
Like, how do you do that?
And you know, how do you do that?
Well, you got to get up and talk to them, okay,
well, how do you do that?
Well, how did Churchill do that, right?
Like, you know, that's a great idea.
Yeah, that kind of thing works really well.
Yeah.
And look, these are things
that most people have not done, you know,
in a lot of cases.
And so, being able to learn from it. Somebody said
that the best of history is this incredible intergenerational conversation
across brilliant people over time, who have kind of learned from each other,
basically by reading.
Are the top VCs more of a group of colleagues and friends, or rivals and enemies?
Yeah, both.
Co-opetition, as we like to say.
So we probably work together more than we compete.
And the reason is because most successful companies
raise from multiple VCs over time across multiple rounds.
So we end up on boards together and working with companies.
But we do compete and head on for deals.
And those competitions can get rough.
We punch each other,
and it can get pretty hard.
We're in one of those right now,
and we're going to try to punch
another firm in the nose as hard as we can.
And so that happens.
And then I think it's like any business.
The movie business down here is famous for it,
and I don't know, music probably is also,
which is, you know, you do end up with grudges,
you know, you do end up with like two prominent figures
who, like, really hate each other.
And it's like, you know, well,
20 years ago one of them said something, you know,
in a meeting.
And I've got my list. I hold grudges; Ben doesn't hold grudges.
But Ben's wife holds grudges. And so when Felicia and I get together, we're like
the Arya Stark character in Game of Thrones: every night, we recite the list of all the people
that we're going to get at some point. And Ben's kind of
always getting on me on this, like, you know, maybe you should let some of it go. And I'm like, well, no, actually they're quite motivating.
Like, I'll tell you, one thing that gets me out of bed in the morning is, you know, the opportunity
to really stick it to somebody who I feel did something wrong, you know, 20 years ago.
That's unbelievable.
So I like my grudges. They're very close to me, they're very important to me.
Do all the VCs do the same thing or does each house have a particular style or strength?
I would say the commonalities are there are a few
universals, which are based in sort of this triangle.
It's basically team, product, and market,
and it's what you keep coming back to:
are the people really good,
are they building a product that people want,
and then is there a market that they can sell it to.
It's sort of the most simple form of the whole thing,
and those probably were the most important things 50 years ago.
Those were probably the most important things 500 years ago.
By the way, there's a long history of VC that predates all of tech, which we could talk
about if you want.
So, you know, Christopher Columbus shows up in the court of Queen Isabella.
And he's got this crazy idea to discover whatever it was, the new route to India.
And he needs X, whatever the Spanish currency was
at the time, to be able to outfit the boats.
He was making a venture pitch:
contained downside, like, you know,
what's the worst thing that can happen,
he burns all the money and the ships sink
and everybody dies;
unconstrained upside, like what if he discovers,
you know, the new world, right?
And then of course, survivorship bias:
we remember that story because it worked,
we don't remember the thousand others that failed, right?
So he was raising venture capital.
There's a famous story.
J.P. Morgan was an investment banker
who mostly dealt with debt for building out big things
like railroads, but he sort of dabbled in venture
on the side, and this is like 120 years ago now.
He originally was Thomas Edison's first investor
for indoor lighting.
Wow.
And so he wrote Thomas Edison a check
for the new lighting business.
And the first indoor lighting system, electric indoor lights,
were installed in J.P. Morgan's famous library in New York,
by Thomas Edison personally.
And then a few weeks later, it caught on fire and burned his library down.
And then he paid Thomas Edison to do it again.
He rebuilt the library and put in lighting that worked. So he did it.
The one that I find so fascinating is actually the whaling industry.
So the structure of the modern venture capital industry
is very similar to how whaling expeditions were
funded in the 1600s, like off the coast of Maine.
And it was a very similar kind of thing where you had basically
these captains who were the entrepreneurs,
and they would put together a business plan for a boat and a crew, and they'd actually have an equity model for
how the crew members get paid; they get paid basically a portion of the
whale. And then they would come and pitch the people who
financed whaling journeys, the VCs of the time, and those VCs were specialists
in evaluating the captain and the boat and the plan. The captain was a specialist
in, like, figuring out questions like,
well, do we go to the place where there have been lots of whales spotted,
but those are the places other people are going to be at,
or do we go to this other place that we think nobody's discovered yet?
And then, you know, like a third of the boats never came back.
And then there's this concept, the way that VCs get paid is this concept called
carry.
So the term is carried interest, and then the sort of colloquialism is carry.
And the idea is basically,
it's like 20% of profits.
For the ones that work, you make 20%
of the profits, or some number like that.
And it's called carried interest.
And the reason it's called carried interest
is because that's how the captains got paid
on the successful whaling expeditions.
It was literally the percentage
of the whale meat and fat
that the ship could carry.
This is where the term carry comes from.
And so on a successful voyage, the captain would get 20% of the whale.
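(To put rough, purely hypothetical numbers on carried interest: if a fund invests $100 million and eventually returns $300 million, the profit is $200 million. A 20% carry would mean the firm keeps 0.20 x $200 million = $40 million of that profit, with the remaining $160 million of profit, plus the original $100 million, going back to the fund's investors. The exact percentage and split vary from firm to firm; 20% is just the traditional figure referred to here.)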
It's interesting, both examples you gave are about light,
because the reason whaling was such an important industry
was that whale oil was the fuel for light before electricity.
Yep.
Very fundamental.
Yep, exactly. That's amazing.
Yeah, exactly.
So look, it's always been basically the same thing.
You have this kind of entrepreneurial personality,
and it might be the captain, or it might be the founder,
or, by the way, a movie producer.
You have an entrepreneur personality who has a vision.
But they're not going to be able to realize it on their own.
They're going to need to be able to gather resources to do it, and then they're going to need money
and partners to be able to do that, and then there's going
to be some evaluation process, there's going to be some
professional class of people who are trying to evaluate
that, they're going to be operating in this domain where
they're wrong a lot of the time, but the successes
make up for it.
And so it's kind of this universal pattern, and I think it's
been running actually for quite a long time.
But you know, the best guess would be this will run forever,
for thousands of years to come.
You know, the kinds of startups that you'll have
in a thousand years will be totally different
than what we have now, but they'll still have the model.
They'll have the same property, the unknownness of it.
And look, they'll be reading histories
of what we did and being like, wow, I hope that we can learn
from all their failures, right?
It'll be the same cycle.
There's one more thing I always find interesting, which is people get mad.
It's become very popular to get mad at venture capital
right now, or kind of be mad about this whole process.
You know, move fast and break things; people get mad about disruption.
It's like, well, it's like, fundamentally, do you want there to be new things in the world?
Because if you want there to be new things in the world, they're not going to show up
predictably from well-mannered people who are going to behave well in every aspect of their lives.
And then the new thing is not going to disrupt or change anything.
Like that's not what happens.
Change doesn't enter in this kind of peaceful calm.
It's always a revolution.
Exactly.
And if it's not, it's not change.
And so, to the critics, I say: would you really prefer to live in a world
of total stagnation,
where nothing changes?
Is that really what you want?
And by the way, what do you think would happen in that world?
Like, what would the politics be like?
What would society be like?
And so I think basically everybody would hate that.
And so, but to live in the world in which the revolutions happen,
you need to have a perspective.
And it's easy to say this and it's hard to do, but you need to have a perspective that says,
yeah, these revolutions like, look, they're going to be wild, right? They're going to be wild. They're going to upset a lot
of things. They're going to upset a lot of people. They're going to upset power relationships
in society. Hierarchies, gatekeepers are going to be furious, right? Old
incumbents are going to be furious. Governments are going to get freaked out. All that is going
to happen, but it is a direct consequence of the fact that there's actual change, you
know, dynamism happening. And nobody has ever figured out
how to do this in a way that makes everybody happy.
It's just a question whether it happens at all.
And by the way, there are many societies in which,
historically, this didn't happen.
And what we know of those societies is they basically just died.
If you look at the Fortune 500 over history,
how much change has it had over what period
and what are the inflection points?
It changes a fair amount, although a lot of the changes have to do with mergers.
When two big companies merge, has anything really changed?
I'll just give you an example: Time Warner here in town, Time Warner and Discovery have merged.
It's high drama in the media business right now.
It's like, Warner Bros. Studio has been a business for 100 years.
If Warner Bros. Discovery works, it'll still be Warner Bros. Discovery in 20 years.
If it doesn't work, it'll get bought.
I don't know, maybe Apple buys it.
And then Apple will run the Warner Brothers Studio for a while.
And then at some point, they'll get tired of being
in the media business, and they'll sell it to,
I don't know, Disney or so.
And so it's just like, but it's still
the Warner Bros. Studio, right?
And so I think in big company land,
there's a lot of what looks like drama; in reality,
it's just kind of assets as trading cards being traded around.
Are the movies any different than they were 10 or 20 years ago? Maybe a little bit, but
not really.
So I think a lot of the change is actually not real change.
That said, look, the sectors change a lot.
Right, and so, you know, look, when there's a build-out happening in a sector, I mean,
when railroads were new, they were most of the stock market, right?
Well, pre-tech,
it feels like it didn't change as much.
But like cars, we forget what was new at the time, right?
So like there was, you know,
from the 20s through the 50s, cars were new, right?
And so the car companies became, you know,
the car companies were not big in the 20s
and they become huge and dominant by the 50s.
You know, GE is Edison, basically,
like he had to like invent all that stuff.
And before it existed, GE wasn't a big company
and then it was.
So, and actually, this is one of the things
that's useful about history,
which is most new industries look like tech
in the beginning.
So, like I said, take the example of the car industry.
Actually, it's funny:
the car industry didn't grow up originally
in Detroit, it grew up in Cleveland.
And the stories of the first 20 years of the car industry
are basically these hobbyists and tinkerers
and entrepreneurs in garages in Cleveland
in like the 1890 to 1910 timeframe, trying desperately to figure out how to get these car things to work.
And by the way, the car was greeted with an enormous amount of fear and trepidation.
Like people were not happy about the car.
And there are all these different stories of people reacting to the tech. A lot
of states in the US actually did not want cars to be on the roads, because what was on
the roads was horses and people.
And cars were dangerous and scary and loud, and they freaked out the horses.
And so a bunch of states actually have what they call red flag laws in that time period
where you could have a car.
But you needed to have the car, and the car would break down all the time,
so you had the car, and you had your mechanic who would ride in the car right with you.
The car would only go like 20 miles an hour,
which was like super fast, because it was faster than a horse.
And then you had to hire a third guy
to walk 200 yards in front of the car with the red flag,
waving it ahead of time,
so that the riders on horses
would know to pull them to one side.
And then the horse lobby got really mad about this.
And so they passed a law in Pennsylvania
where they said if a car and a horse encounter each other
on the road, the owner of the car has to stop the car,
disassemble the car into its pieces, transport the pieces, and hide them behind the nearest hay bale,
so as not to freak out the horse.
And so my point being like, what was that?
That was the tech industry.
Like that was the disruptive tech industry of that time, the personalities of the people.
Or, to give you an example from the car industry: GM is this, you know, giant company, 100
years old.
There was this guy, Alfred Sloan, who was like the famous CEO of the whole thing,
who built it. But actually there was a guy named Billy Durant,
who was the founder early on, before that.
He was like the Elon character.
And his name is lost to history
for whatever reason,
but he basically created the modern car industry.
And you read the stories of him,
and it's just like reading about Musk or Jobs, right?
Or Ted Turner, right?
Or, you know, Larry Page or Sam Altman;
he's one of these people.
I think these patterns are actually older patterns.
Who do you view as your biggest competitor?
I mean, the answer to this always sounds incredibly cheesy,
but it actually is true, which is me, by far.
This is the speech I give inside our firm,
which I very much believe in.
I give the speech to myself all the time, which is like, look, if we screw this up, it's our fault.
It's suicide not homicide.
And it's basically because we were not as good as we needed to be.
We screwed it up.
There was a way to do it and we blew it.
And maybe we blew it out of ignorance, but probably we blew it out of arrogance
and, you know, hubris.
And then, I think for what we do,
it is fundamentally a people business,
more than a money business.
And-
That's interesting.
Being in a people business, like every conversation matters.
Right?
And you're dealing with people's lives, right?
So this is the speech I give internally,
it's you're dealing with people's lives.
And when you're dealing with people's lives,
you have to talk, you have to be very serious about what you say,
and you have to be very careful about the consequences of what you say.
Really think hard about every conversation.
And you know how it is. It's like, okay, we're entering year 15,
I've had 400,000 conversations.
The 400,001st conversation could go really wrong if I'm not on the
ball.
And I'm much more worried about that than I am about somebody
coming in and, like, stealing our lunch money.
Other than financial returns,
how do you know if you're good at your job?
At that point, I think it's basically
what people are saying about you.
Yeah, look, the thing is though, like look,
it's easy, there's a paradox at the heart of that,
which is like it's really easy to get people
to like you just by always telling them
that they're smart and they're good
and that their ideas are good and like we don't do that.
And so the trick is, do they still like you and trust you and respect you after you
have been telling them the truth?
Tell the truth.
Telling them the truth, exactly.
And that's really your job.
It's your job.
It's like that.
It's when you tell somebody the truth and you're giving them good news, they can trust
the good news because they know when you had bad news, you gave them the bad news.
If you just say everything is always rosy, it means nothing.
That's right.
It's the bad news that gives you credibility.
That's right.
But it's hard, because people are under a lot of pressure and, as you probably
know, they may get upset in the moment.
And then maybe a couple of days later, they're like, actually, that was a pretty good point.
And look, it works if they trust you, right?
I mean, the other thing that Ben and I talk about a lot,
Ben says this a lot, as he says,
trust and communication are opposites.
Everybody thinks they're the same thing they're not.
If I trust you, we don't need to communicate that much
because you know that I have your best interest in mind.
Like, Ben and I have this, Ben and I could go off
and not talk for three months.
And we would come back, we would have the exact relationship
on the other side because I have so much trust in him that I would know that whatever decisions he's making
are in my best interest. Whereas if you don't trust somebody, you really got to communicate.
You've got to cross-check everything they say.
I think that the best relationships are, this is what you try to develop, is actually,
look, I'm going to tell you some things you're really not going to like.
I'm doing that because I care.
And you're also doing that because you're being true
to yourself in saying, this is who I am,
this is how I see it, which is really valuable.
Yeah, and look, I'm not gonna step in and run your company.
Like I'm not gonna fire you.
I'm not gonna like replace you.
Like this is not the thing that's gonna make or break anything.
I'm just gonna try to help you get to the truth,
right, and have the trust relationship over time where you believe that that's what I'm trying to do.
When you were in school in Illinois, there was a supercomputer.
Yeah.
How many supercomputers were there in the world at that point?
A few dozen maybe, total.
Most of them would have been in government labs;
those are the kinds of supercomputers used
for nuclear weapons development, cryptography.
So the government would have had a bunch,
but not really anywhere else.
So why was it there?
I think I've got to give the government a fair amount of credit
for that one.
So I remember Al Gore got in trouble years later
for saying that he invented the internet,
and of course that's not what he said. What he said was he had played
a role in the Senate in creating the internet, and that was actually true.
So that whole thing was actually a smear the whole time.
It was actually true, and specifically what he did was he sponsored these bills in the
early 80s, which did two things.
Number one is they created what were called the National Supercomputing Centers, and that
was four universities that were given basically these grants to buy these very expensive and rare kinds of things at the time.
And to give you a sense of how rare and special these things were: in those days,
we had one of the computers at Illinois where they literally built a building for the computer.
The computer was so big that they built the building and they left the roof open, and
after the building was built, they lowered the computer in by a crane.
Who made the computer?
There was actually, incredibly, a company in Wisconsin.
There was a company called Cray.
There was a guy named Seymour Cray who did a lot of it,
and then there was this company called Thinking Machines,
and there's this guy Danny Hillis
who you might have encountered at some point.
So they were, yeah, these kind of very special entrepreneurs
who were good at this.
They became sort of famous; you've seen them in movies.
They were so expensive.
This was like $25 to $50 million, and this was, you know, 40 years ago.
So this is like the equivalent of $100 million or something today, per unit.
But one of the things is they really valued design.
And so they actually looked really cool.
And they were water cooled;
heat is always a big problem with any kind of advanced computer.
And so they did what's called water cooling.
So they even had these very elaborate water cooling systems.
There's a guy who actually bought one of these years later off of eBay
and converted the water cooling system into a beer keg.
It's the coldest, most expensive beer tap.
So they were like works of art.
And so you've seen them in different movies over the years.
Yeah, so they were just like very, very rare,
exotic things.
And so anyway, the Senate, the government, funded
these four centers at these four state universities
because these computers made new kinds of science possible.
So these were used for astronomy,
astrophysics, decoding-the-secrets-of-the-universe
kind of stuff, and then a lot of biomedical,
protein folding, developing new drugs,
curing cancer, kinds of research.
So it was sort of becoming key to a lot of areas of science.
And they had enough money to put four of these centers in place,
but they wanted to give scientists all over the country access to the computers,
and to do that
they needed a high-speed network so people could log in remotely.
And so they funded what was called the NSFNET, the National Science Foundation Network,
which basically was sort of the internet
pre-the-internet.
And yeah, so my big stroke of luck was,
it turned out Illinois, where I went,
was a top computer science school at the time,
and still is, and they were one of these four centers,
and so they just had this.
Did you go there knowing that was there?
I did, yeah.
I don't know how well-informed I was,
but it wasn't news to me.
You sought it out.
Yeah, well, I knew what was happening.
So this was in the late 80s.
And so it was big enough;
the computer industry was being covered in, you know, newspapers and magazines.
There were articles written about this.
That was how we experienced it at the time.
But I knew it existed.
Yeah.
And you had a home computer at that time?
Yeah, although when I got to college in 89,
the home computers in those days
weren't actually useful in an academic setting.
They weren't powerful enough.
But you had experience on a computer
before you went to college?
Yeah, but on really simple computers.
So I sort of went from working on computers
that cost like $400 to working on computers
that cost like $40,000, in one step.
And so at that time, it was a completely different kind of thing.
The computers that I worked on at college were like
$40,000, $60,000 baseline cost just to have something on a desk.
And then these supercomputers were, like I said, $25 million.
So, one of the advantages of being at UIUC at the time was that they
had these resources, they had this equipment.
But like all of my work was not done in my dorm room, it was all done in the computer
lab with like fluorescent lighting and like drop ceilings and all this stuff because all
this hardware was like super exotic.
You know, these days that doesn't exist as much. Your phone today is the equivalent power
of that supercomputer that I worked on.
Your laptop is more powerful than that.
And so today that doesn't happen; if you just have a modern laptop, you have basically
a full-fledged computer that you can do anything on.
And then there's this cloud idea where you've got these grids of millions of computers
up in the sky if you need more power.
So the sort of, I don't know, the romance or whatever of this exotic thing in a building,
being taken care of by people in white lab coats; those days are kind of over.
In terms of what the supercomputer was capable of, how does that compare to, like, your laptop at home now?
Yeah, so your laptop at home. Like, my laptop right now is a MacBook, I think it's an M2 or M3
processor, and I haven't checked, but it's probably somewhere between 10 to
100 times more powerful than that supercomputer at that time.
Yeah.
Well, and in fact, you can ask a cynical question on this, actually, or I could, which is,
if those computers in those days were so rare and exotic and they were able to be used for
things like decoding the secrets of black holes, and everybody has a laptop that does
that today, like, where's all the science, where's all the
creativity? Which I think is actually a very excellent question.
But yeah, look, in theory, everybody has on their desk today
and increasingly just in their pocket,
they have the ability to basically do what in those days
we would have considered to be absolutely
breakthrough scientific work, or by the way, artistic work.
By the way, another thing that happened at Illinois
that is just kind of lost history:
there were a set of universities,
and Illinois was one of them because of this,
that actually developed basically what we now think of as 3D computer graphics, and ultimately what became CGI in the movie industry, and the whole idea of computer graphic design.
And so when you see a rendered tornado or whatever in a Hollywood blockbuster, a lot of that is actually techniques that were developed at Illinois and a few other places like that at that time. And originally it was so hard to do computer graphics;
it was so processor-intensive that it was only those supercomputers that could do it
back then.
So that was another thing that was actually invented at that time.
What was Mosaic?
So basically I ended up at Illinois.
I ended up working at this supercomputing center after a few other things.
And then they had a group in
that supercomputing center that was building
the software tools to make it possible for people
to use these computers.
And particularly, remember I said the link between
the big centralized computers and the internet:
basically the purpose of the internet,
as funded by the government, was for scientists
to be able to access these computers remotely.
But then there needed to be a new kind of software tool
that was built to actually make that possible.
And so there was what we called SDG, the software development group at Illinois, that was in business to do that.
It had government grants to build that software.
And so Mosaic was a project that a group of us did at that group,
funded by the government.
And by the way, not funded for a lot; I was making $6.25 an hour.
So it was not a lot of tax money.
But the purpose of it was basically remote scientific work.
And so the original, nominal purpose of it
was a scientist who wants to basically publish information
and have other scientists be able to read it online.
Mosaic was the browser front end of being able to do that.
And then we also had a server, basically one of the first web servers,
that made it possible to store and host things.
And so that was the nominal purpose.
But then because it was government funded,
we were able to give it away;
we actually had to give it away,
we weren't allowed to make money on it.
And so we released it as open source.
And then the nascent internet was starting to get big enough
for there were people on it downloading it and using it.
And then they had ideas for other things that they wanted to use it for
other than scientific papers. And then that led to the fundamentally the creation of the
web. When did you understand it could be used for more than scientific papers? Oh, right
up front. I don't even think this was like prescience or anything. It was just sort of obvious.
It was just immediately obvious: oh,
this could be used for newspapers, this could be used for magazines, this could be used
for books, anything, right?
It's a new communication tool.
Yeah, and like literally, I worked on a lot, I didn't build all this myself, but I worked
on a lot of the early code for doing music online.
I remember when we first figured out how to do music in the web browser.
I remember how we first figured out how to do video in the web browser.
And I remember when internet radio first started working. There was this project
that we were not working on, but I knew the people
working on it, called the MBONE at the time,
which was the first music broadcasting over the internet.
And so there were a set of us where it's just like,
always just sort of obvious that this is gonna be used
for everything; it's the McLuhan thing.
The content of each medium is the previous medium,
right, the content of a movie is the stage play, right?
You know, the content of the music video is the music track.
And so it was the same thing here, except this is the one
where it's going to be all of them.
And so I just thought that was obvious.
A lot of us actually thought that was obvious.
That said, there were a bunch of purists who disagreed.
There was actually a big fight early on about whether there
should be images in the web browser, images and web documents.
Instead of just text.
Because the argument was, and you can predict the argument, if it's just text, it all has to be serious.
If we introduce images, it gets frivolous. And then the frivolous will drown out the serious, and then everything will go to shit.
And that's what happened.
Yeah, well, A, that's what happened, and B, I'm glad that it did.
Like, who wants to live in a world where you don't have images? And by the way, there's a logical flaw, right, which is it turns out there's a lot of shit text too.
So it's not like text actually gets you
guarantees of quality either.
It's true.
And for better or for worse,
I always bias on the side of openness and creativity.
I want more experimentation in the world, not less.
And so anytime anybody says, no, we need to constrain this,
I'm like, yeah, no, we're not going to constrain this.
We're going to blow it out.
You never know.
Because you never know.
That's what you can't predict.
Yeah.
But you get the bad with the good, right?
Like, oh, OK, so guess what?
So we rolled out images in web pages.
And guess what some of the first images
that people put in web pages were?
Well, you know, let's say dirty pictures.
Adult content.
And so, as my eight-year-old would say, special parts.
And so that started. And by the way, it's a cliche that the internet was used for porn first.
That's not really the case; it was always kind of an edge thing.
But, you know, people did start to post adult stuff.
And this was a government-funded program at the time.
And so this is actually the first free speech issue.
This is the first trust and safety issue, which is my boss at the time said,
well, you have to filter that stuff out. And I was like, filter
what stuff out? And he's like, well, like, nudity. And I'm like, how am I going to
know which pictures have nudity? There's no way to do that. And he's
like, well, you have to develop an algorithm that detects nudity. And I'm like, what,
like, through what, like, shapes? Like, booby detectors? Is that what you're asking
me to make? And he's like,
yeah, can't you do that? And I was like, no, I can't. And furthermore, I won't. And I just,
like, put my foot down. And I said, we're not going to build censorship
into the web. And, you know, that had, I would say, potentially civilizational consequences.
You're at Mosaic, you're building what you're building, and you could see a lot of things in the future. But how did you imagine the world of the future
then, versus how the world is now?
What did you see and what didn't you see?
First is like, look, the day job:
there's this kind of presumption,
and I actually run into this a lot,
this presumption that the people developing
the technology are somehow in a position
to know the consequences of its use.
And I think that's actually untrue on several levels.
And one of them is just a practical level,
which is most of what I was doing was just trying to get code to work.
So I had a day job, which was all consuming,
was 18 hours a day, just writing software,
and trying to fix bugs and software.
And so most of what I was doing,
it's the old thing about art, which is when artists get together,
they don't talk about art, they talk about
where to buy the cheapest paint.
So it's like that.
Like, most of what I was doing was like mechanical, trying to just get the stuff to work.
That said, look, had you asked me then, what I would have said was,
look, I think this is going to be something that a lot of people are going to be able to use.
And in those days, that was a very radical concept, because people didn't have the computers to use it.
They didn't have the network connections.
There was no broadband, there was no mobile, right?
People just didn't work on computers in the same way.
It wasn't clear that there would be any good content.
Who would ever publish any content?
It was like an open question at that point.
And so, it was radical enough, I would say,
at that time to say, this is something a lot of people
are going to use.
And a lot of people are both going to publish content
on the internet, and a lot of people
are going to consume it.
And by the way, it's not just going to be fixed content,
it's also going to be experiences and databases,
and they're going to interact in different ways and chats,
and discuss things, and so forth.
We didn't have social networking,
but we knew we had chat boards and forums and stuff.
So we knew there'd be a lot of communication.
There'd be a lot of groups forming.
And did you spend time in a lot of groups?
Yeah.
What was that like?
Oh, it was great.
So in my world, at that point,
the dominant thing was what was called Usenet; the system was Usenet, and the groups were called newsgroups. And
there was a version of the system that ran across the internet. And there was a
period from about 1985, it predated me, to about 1993, so I saw four years
of it, where it was like digital Nirvana. It was like the smartest million people in the world
were talking about everything under the sun.
Text only?
Text, you could like embed images, you could like attach images,
but it was mostly mainly text.
Mainly text.
Would it be conversations or more like essays?
Both, both, both.
A lot of essays and a lot of conversations around essays.
And then there was a folder, there was a whole hierarchy,
so you'd have all these different domains,
and so some of them were technical conversations,
but there were like lots of political conversations
where lots of art.
And you could find the topic you were interested in.
Yeah, yeah.
And you would pick the news groups that had the topic you're
interested in.
And some of the newsgroups were unmoderated,
so you could say anything.
Some of them were moderated.
They had a human who would keep them under control.
Similar to social media, really.
Basically, right.
It was the early form of social media.
But what was fascinating about it in retrospect,
it's a lost golden era that's been impossible
to recapture since, which is it basically
grew to be the million smartest people in the world
with basically no idiots or assholes.
And so it was totally like anybody in theory
could be on it, anybody could in theory
say anything they wanted.
It just so happened that the only people
who had access
to it were like the best and brightest.
And so there was no spam problem,
there was no abuse problem; there were occasional
flame wars, but there was nothing,
you know, there was no hate speech.
The content quality
was just incredibly high, and the communities that
formed were incredibly high quality, and the trust level
of the forum was incredibly high.
People became very close, as they do,
with people
they never actually physically met.
And it was like this Nirvana
of, you know, what if you could just have
the million smartest people connected, and nobody else?
And then of course, then what happened was like
everybody else showed up.
And there's this term in the internet culture called
the eternal September.
And it's based on the fact that it was September 1993
when AOL connected to Usenet for the first time.
And all the AOL users, the 25 million AOL users
or whatever it was at the time, were able to be on Usenet,
and they just, like, buried it in shit.
It just completely destroyed the quality,
just swamped it.
And then Usenet basically died in September of 1993
and never came back.
Is that stuff still online? Can you find it?
A lot of it is.
So it's been preserved.
There's a thing called Google Groups.
Google has a thing called Google Groups.
And they have archives in Google Groups
of a lot of these original things.
And for the kinds of things you're interested in,
it would be what were called the alt groups.
So, like, alt.music.
So if you go to Google Groups, you
could read alt.music discussions from 1990.
And I bet you would find it to be quite interesting. And so eternal September is sort of this idea that now the internet basically consists of September
1993 in perpetuity,
which is, no matter what good things there are, they're just gonna get swamped with, you know,
either dumb people or people with bad motivations.
So anyway, it was the Shangri-La of our
experience.
It makes you think about ways to create gated communities online.
Yeah. Well, so there
was another famous one of that era called the Well, which,
it wasn't an internet system, it was a bulletin board system,
Stewart Brand ran it, and I think it had a peak of like 3,000 people total. And there
were two tricks to how he did the Well. One was, if I recall correctly,
he vetted all the new members.
So it was like a club. And then two,
he charged a membership fee.
And so you had to pass both those hurdles to get in.
And again, for many years,
it was apparently really amazing, spectacular.
By the way, this is an idea that nobody's really
cracked the code on, but it's arguably
an undiscovered idea.
It's a known idea that nobody's
figured out how to implement, which is like, how would you recreate
that kind of thing today?
Because it sounds really great.
Yeah, exactly.
Right, it sounds like we spend a lot of time scrolling
through things we might rather not, compared to having something more curated.
But you have to be, you have to be willing to violate the
dominant conceit of our time, which is you have to be willing
to say that not everybody's the same.
Right, and like, I'm generally a fan of openness.
I don't like the idea of, like, gating people
by IQ or anything else, or social acceptability
or whatever. But yeah, within the universal
global village, I think
it may just be that more people should start to carve out
these more specialized areas.
There are a few of these.
There are mailing lists that are like this.
And there's something that often happens, actually,
which is a new social media product will first take off by being incredibly high quality
to start.
Facebook was like this early on.
Because Facebook started out just being Harvard kids.
And then when they expanded, they expanded the top 10 universities.
And look, the kids at Harvard have lots of issues.
But at least in those days,
this was 20 years ago, and things have changed maybe even since then,
generally speaking, if you want a group of, like,
5,000 really bright young people,
the people going to the top universities
are a pretty good cross-section.
And so there's this pattern
that we've all noticed with how new social networks start.
Well, a friend of mine puts it this way, which is the quality
of any group can only decline over time.
Because basically, you only want to join groups that
on average are better than you.
You never want to down select.
You never want to deliberately join a group that's worse.
It depends.
Well, it depends what the access is.
But generally speaking, it's the Groucho Marx thing:
I don't want to be a member of a club that will take me, right?
Generally speaking, you want
to go higher status
by joining the group, not lower your status
by joining the group.
And so I have this friend who argues that,
basically the thing with social networks is
they're not technology platforms,
they're groups, they're communities.
And the thing is, on day one, they're the best
they're ever going to be.
And then they will inevitably decline.
But then there's a whole bunch of things you could do
to try to basically arrest that decline if you tried.
But you have to grapple with the fact
that it's going to start out as very best,
which means the selection process of who you start with
is incredibly important.
And of course, the same thing is true of a company
or any other kind of community.
It's the same thing when you're planning parties.
It's human dynamics.
And so arguably, there's an unexplored design space
for modern social networks that actually acknowledge that
and didn't try to be everything to everybody
and just try to be specialized like that.
So I think it'd be nice for everybody
to have the one that they want to belong to.
You know, you need to opt in.
That seems like a good thing.
Yeah.
Well, look, there are versions of this.
Facebook groups — some people have this experience with Facebook groups.
Twitter, if you use Twitter in the right way and you customize lists and you pay a lot
of attention to who you follow, I've got a couple of Twitter lists that I think kind
of count like this.
So you can back your way into it, but it's not.
I mean, the Eternal September has dominated.
For better or for worse, that's what the openness has resulted in.
How has writing code changed from when you were at school,
when you were a code writer versus writing code today?
Is it the same language?
Could you do it now the same way you did it then?
You could, yes, you could.
So all those tools still exist, those languages still exist.
So I wrote all my code in a language called C,
and C is sort of the native language of the operating system Unix. It's one of the great and kind of
universal programming languages that people with deep technical chops in particular are expected to
know. It's an older kind of programming language in that the semantics
of the language are very linked to the hardware of the chip. And so when you're programming in
C, you are directly talking to the underlying hardware.
Like your classic thing is, you know,
there's the chip, the processor, and then there's memory.
And like you can see, you have to do what's called
managing memory yourself.
And so you allocate memory on the memory chip, you fill it,
and then you have to deallocate it.
If you don't deallocate it properly,
you get what's called a memory leak.
The program runs and gets slower and slower
and then ultimately crashes.
So you have to do all that.
And it sounds like a lot of work.
It is a lot of work.
And you end up in a, I would say,
communing very deeply with the machine.
Like you have to really understand
how the whole thing works all the way down.
We call it bare metal, the actual physical silicon.
Like you have to really kind of understand that.
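To make the manual memory management he's describing concrete, here is a minimal sketch in C; the loop count and buffer size are arbitrary, chosen only for illustration. It shows the allocate/fill/free cycle and marks the spot where a leak builds up if the free is forgotten.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Minimal sketch of manual memory management in C.
   The 1000 iterations and 1024-byte buffer are arbitrary illustration values. */
int main(void) {
    for (int i = 0; i < 1000; i++) {
        char *buf = malloc(1024);      /* allocate memory yourself */
        if (buf == NULL) {
            return 1;                  /* allocation can fail; you must check */
        }
        strcpy(buf, "some content");   /* fill it */
        printf("iteration %d: %s\n", i, buf);
        free(buf);                     /* deallocate it yourself; if this line is
                                          forgotten, every pass leaks 1 KB and a
                                          long-running program slows down and dies */
    }
    return 0;
}
```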
It sounds like a really good tool to learn.
Either way, whether you stay that way or not.
Yeah.
So the thing for a very long time, I think,
and I had the benefit of this,
I think the thing for a very long time that made
a computer programmer really good was when they understood
every aspect of how the machine worked,
all the way up to the graphics and everything,
and then all the way down to the chips and the metal
and the design.
And I spent just as much time in school learning about how
to make chips and all this stuff as I did learning
how to write software, because it was an integrated system.
I think there's a critique, which I think is a valid critique,
which is probably in the last 10 or 20 years,
a lot of programmers have become actually very good programmers,
but they never actually learned how to do that.
And that's fine for a lot of things,
but like any time things get complicated,
where you need things to be super fast,
or you need them to be very secure
or you need them to scale to get really big.
You do tend to need to bring in somebody who understands what we call the full stack,
the whole set of things. That's not as common anymore.
So the overall trend that's happened in the last 30 years, 40 years is
most programmers don't do what I was doing. They're not programming at the bare metal the way I was.
Mostly what they're doing is higher level,
at what are called abstractions.
So they're in these languages that,
like languages like kids learn at school,
where they're much easier to code in.
You don't have to worry about any of the hardware,
the memory, whatever.
There's this stuff called scripting languages.
Python is an example.
You'll hear of JavaScript.
They're easier to get into.
They're more powerful languages. The language does more for you.
So you can write, you know, like a new app faster than you could in the old days,
but you, you don't have that connection to the machine anymore.
That was the big trend for the last 40 years, and then it just changed again, basically last year.
And this change last year, this year, is the biggest change that any of us have ever seen,
which is the shift to programming with an AI. And in particular, basically, the model that people have right now
is one of two things. Either you just tell the AI what code you want and it makes it for
you, which works for like examples today, but doesn't work for building full programs yet,
although it will at some point. Or the thing that programmers do today is they have this
model called a Copilot, an AI Copilot, right? So the new model of programming is you're writing code
on the left half of your window,
and then you've got an AI chatbot UI interface
on the right side of the window.
And as you write code, the chatbot is inspecting your code
and talking to you about it,
and then you can ask it questions.
And so if you're writing code and you do a typo
or whatever bug, the AI is continuously
analyzing the code as you're writing it,
and it can say, oh, that was a mistake, you should fix that right now.
And you're just like, wow, that's great.
I don't have to discover that the hard way later.
That's great.
Or you could say, I have this code that's going to render something.
I need it to be faster.
How should I make it perform? How can it be optimized? And the AI
will say, well, here's how you do it.
And here's how I would rewrite it.
Or you can tell the AI to make changes, right?
And so it's like, I don't know,
say you want to do a translation from English to Spanish.
You can use the AI to find all the places
where you have an English-language word, and swap
in the Spanish translation, and the AI can do that for you.
And so it's like, it's co-pilot.
It's like a super assistant kind of thing.
And so that's the, that's like a radical change.
And so the, the coders that are using that
versus not using that today, they're pretty much universally
kind of saying that's a night and day kind of thing.
And then that's just with today's AI,
and what everybody expects is the AIs
of the future are going to get much more sophisticated.
And so what the AI people basically
say is, in five years or 10 years, you're not even
going to have that.
What you're instead going to have is the equivalent of, like, a thousand AI programmers working for you.
So you're not even going to be writing code yourself.
You're just going to be like basically managing the AI's to write the code.
And you can basically say, you know,
they'll go off and do all this design and coding and graphics and whatever it is.
You basically hand out assignments and then the AIs go off and do it and they report back.
And then you kind of oversee the entire process.
And so if that vision plays out,
that's a complete revolution, right?
And then the way to think about that is
to think about this in terms of productivity,
how much software functionality can one person make
in an hour, a day, or a year?
And what all of these changes mean
is sort of an explosion of productivity.
You just get to make a lot more code.
If AI learns to code, that really changes things.
Yes, exactly, right.
Well, and then that raises all these questions
that you get into on AI topics, which is like,
OK, well, then is the AI going to be capable of coding AI?
Right.
And so this gets into all the AI topics,
which we could talk about.
But the very simple reason to bring it up
is because it's a very fertile moment for our entire world
of technology, rethinking how all this stuff works.
This might be the biggest change
that anybody's ever seen.
What's different about AI?
Why is it so different?
It's different because,
so it's a very fascinating story.
So, the idea of the computer goes back further, but
the computer as we know it was invented in the 1940s
during World War II,
to basically crack Nazi German and Japanese codes, primarily by US and English computer scientists,
people like Alan Turing.
And so, you know, that's the conventional story, and it's the true story.
But the ideas are older than that.
The ideas have to do with machines that can calculate, right?
So they were mechanical calculating machines before they were electronic computers.
There was something called the...
Abacus.
The abacus was a form of this. There was also something called the Jacquard loom. Textiles
used to be woven by hand, and then at some point you built a machine, a loom, to do
it, and the Jacquard loom actually, you could program it.
I've seen them.
Yeah, so you literally had punch cards, and then
you could do patterns. And so they were running basically a very rudimentary
computer program in order to basically do patterns.
And it's a completely mechanical process.
A piano?
A piano would be another good example.
Yeah, exactly, right.
Now, the Jacquard loom and the player piano
are not what we call Turing machines, which is like,
there's no concept of a loop.
Like you can't run any program on them,
but you can run the program that generates beautiful textiles
or beautiful music, right?
And those were both big advances.
And so anyway, there were a lot of these ideas
in those days that people were thinking about,
like this guy Charles Babbage
and this woman Ada Lovelace, who had a design
for basically a mechanical computer in the 1860s
that they were never able to build,
called the difference engine.
And if you read the stuff they wrote —
so Charles Babbage fully designed a computer
called the difference engine.
Great name.
It is a great name, and it would have been a great thing to build.
There's this genre now called Steam Punk.
Think about, like, the TV show or the movie Wild Wild West.
There's an alternate reality genre called Steam Punk
where like all this stuff actually started to work
in the 1800s instead of waiting longer.
And so like there's an alternate version
of the universe where the difference engine got built.
Is that what Steam Punk is?
That's what Steam Punk is.
It's like living in the future,
but it's a lost future where like, you know,
you had flying cars and mechanical cars.
But everything's retro.
And everything's retro.
Everything's retro with everything's built out
of what they would have had in the 1860s.
Everything's out of wood and chrome and steel.
Cool idea.
Glass, no plastic, right?
Everything's out of the old materials.
Yeah, so some of that stuff is really good.
But it's, anyway, these are ideas.
And if you read the letters Charles Babbage and Ada Lovelace would send back and forth — Ada Lovelace
was basically the first programmer. And she was this young woman, literally in the 1860s.
She actually had a tragic life story. She died very young. But she was writing software
for the difference engine in like 1860. And they never built the difference engine, which means
she never saw the software run. But the ideas existed. And so anyway, by the 1930s, there was this big debate
that was already playing out.
And this is even before the invention of the computer.
And the big debate was, do we model the computer
after a calculating machine, right?
So do we model it after the Jacquard loom,
the cash register, the player piano,
or do we model it after the human brain?
And they knew just enough about neurology
and the function of the human brain.
And they knew the human brain was obviously capable
of doing things that a calculating machine couldn't do.
And particularly, they knew the human brain
is really good at patterns, right?
So the human brain is like really good at like image recognition,
really good at like language.
Like here's a feature of the human brain.
You can take a piece of text, you can take out all the vowels,
right? So you take a paragraph of text, you remove all the
vowels and you just leave the consonants. The human brain — you can still read that, because
your brain knows the patterns of words and letters and is able to fill that in. A calculating-machine-based
computer can't do that. The human brain can do that. So there's some difference.
Sometimes the term fuzzy is used — the human brain is fuzzy. And the problem with the human brain, by the way,
is that it's fuzzy.
And so will I remember tomorrow
whether you were wearing that color shirt or some other color
shirt? Who knows.
But you and I will have been able to have this conversation
in a way that a calculating machine never would have.
And so there's the fundamental difference, basically,
in there.
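As a small worked illustration of the transformation he's describing (the sentence below is just a stock example, not from the conversation): a hyperliteral program can strip the vowels out easily enough, but the point is that a person can still read the output, which is the fuzzy pattern-matching the calculating-machine style of computer doesn't have.

```c
#include <stdio.h>
#include <string.h>

/* Strip the vowels from a sentence; the text is an arbitrary example. */
int main(void) {
    const char *text = "The quick brown fox jumps over the lazy dog";
    for (const char *p = text; *p != '\0'; p++) {
        if (strchr("aeiouAEIOU", *p) == NULL) {
            putchar(*p);               /* keep consonants, spaces, everything else */
        }
    }
    putchar('\n');                     /* prints: Th qck brwn fx jmps vr th lzy dg */
    return 0;
}
```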
And so these people in the 1930s knew
that there was this difference.
And so they said, should we model these things
after calculating machines, in which case they are hyperliteral —
like, you know, almost autistic, right —
which is like, they're just
machines that are
really good at running large numbers
of mathematical calculations very fast,
and then we'll give people the ability
to write programs based on that.
But they're never gonna be good at patterns.
They're never gonna be good at language.
They're never gonna be good at,
they're never gonna know what anything means.
They're always going to be hard to talk to,
you're never going to be able to use
a natural language interface,
they're never going to be able to know the difference
between a cat
and a cinnamon roll in a photo.
They're just not going to be good at that.
So they'll be like hyperliteral in that way.
They're super fast, but hyperliteral,
and then humans will just still be different.
Or should we try to build computers
that are modeled after the human brain.
And so it actually turns out the first paper on the concept
of the neural network, which is the architecture of ChatGPT,
was actually written in 1943.
Wow.
And the AI systems we used today are still
based on the ideas in that paper.
So just 80 years ago, right?
So they knew enough about neural structures and synapses
in the brain that they knew it was different.
1943.
1943.
No, but they knew.
And by the way, look, the field of AI started in like 1943,
like that actually fired the starting gun.
And actually people have worked for the last 80 years
trying to get neural networks to work.
And they finally just started to work.
But my point is like, they knew from the very beginning
there were these two totally different ways
of making computers.
And they knew what the trade-offs were.
And it just turned out that historically,
they were only able to make the one kind of computer
for the last 80 years.
And that created the computers we know today.
And then it turns out there's this completely other way to make a computer.
And that's based on the, it's inspired by, it's not the same as the brain,
but it's inspired by the structure of the brain.
And as a consequence, it's a new kind of computer.
And a way to think about it is,
an AI computer is actually very bad at all the hyperliteral stuff, right?
And so, for example, ChatGPT has this thing where it hallucinates. If you ask it a question and it knows the answer, it gives you the answer.
If it doesn't know the answer, it just makes one up. So, it's more creative and less accurate.
Exactly. Somebody once said — one of the guys who studies this —
AI is not like a computer, it's like a pretty good person. It's not like the best person.
But it's like a pretty good person. And what do we know about pretty good people?
They're right a lot of the time.
But a lot of the time they're not.
And can you always tell the difference?
Not necessarily.
You know, the truth is, they sound as confident when they're wrong as when they're right.
Yeah.
Do they know?
No, they don't.
If you ask.
It sounds like a real issue.
If we've spent 80 years establishing the fact that what you're getting back from a computer
is more like the results from a calculator,
but now we're getting these fuzzy results that are more like mediocre human results —
even though we've had 80 years of what we think of as accurate — that could create confusion. Yes. So there was a court case about three months ago, where a lawyer had ChatGPT write a legal argument to be presented to a judge in a court,
and it did it, and it hallucinated several court cases.
Precedents that don't exist, and the judge caught it.
Just made them up.
Made them up.
Yeah.
And they sound great, by the way.
They sound exactly like court cases.
The whole thing hangs together logically.
It's just literally not true.
It's basically cases that didn't happen.
And it turns out if you submit false,
made-up court cases in court, you get disbarred as a lawyer — like, you're done being
a lawyer. And so the judge basically came very close to disbarring the lawyer on the
spot. The judge is like, did you use ChatGPT to do
this? The lawyer basically fessed up, and the judge basically said, if you ever do this
again, I'm going to disbar you and destroy your career. And that's exactly for that reason. But however — hallucination, creativity.
Yeah, it'd be a great scene in a movie, for example.
Exactly.
We're finally in a movie.
Well, so it actually turns out
there are companies now building AI for lawyers.
And actually, we did a bunch of work —
we haven't invested yet, but we've done a bunch of work
in this space, because one of the things that AI can do
is it can write legal briefs.
And if it doesn't hallucinate,
they're actually really good legal briefs.
And so we've been talking to like professional lawyers
about this and what the professional lawyers
will tell you actually is,
you actually don't just want accuracy
when you're thinking about writing legal briefs,
you actually do want creativity.
Because there are different ways to make legal arguments
and maybe the way that you've thought of on your own
is not the best way to do it.
And maybe if you had a copilot — right, think of it:
you're a lawyer, you're writing a legal brief,
you have a copilot, and that copilot
is just giving you ideas.
And some of the ideas are going to be great,
some of the ideas are going to be terrible ideas,
but they're all new ideas —
ideas where you don't have to sit
and come up with them on your own.
And so what the lawyers are saying basically
is in that case, you actually want some hallucinating.
You don't want made-up court cases, but you want, oh, here's a different
way to make the same argument.
You might also view it as, if you're writing a closing argument to be presented to a jury,
that's a storytelling exercise, and so you might want some brainstorming.
You don't want the thing to do it for you, because you're the guy who has to stand up there
and actually present it, and you have to really be willing to stand behind it, but it might
be helpful to have a writing partner that can actually help you do that.
And so there's this sort of double edged.
Like the fact that it hallucinates is both a big problem,
but it's also magical because we've never had computers
that make things up before.
Like that's a brand new thing.
If you had told me three years ago
we're gonna have computers that make things up,
you're like, that's never happened,
there's never been a way to do that.
And now you're seeing it. The way to see this really clearly,
of course, is with the visual art,
you know, coming out of Midjourney or DALL-E,
these things where it will make up all of these
crazy art things. And you know,
there's this famous thing —
they've figured it out now, but for a long time,
the way that you could tell that computer art
was being made by an algorithm was
it would give people extra fingers.
It just turned out that in the training data it was trained on, human bodies are relatively straightforward,
except that there are these detailed
finger appendages, and if you are looking at a billion photos or pictures of people, they
have fingers in all kinds of different positions.
And so the early versions of the AI art basically just didn't know how to do fingers
accurately.
They fixed that now, right,
where it no longer does that.
But by the way, if you want it to, it still will.
Right, and so if you tell it,
render me a scene where everybody has seven fingers,
it will happily do it for you.
And computers never used to be able to do that, right?
Or one of the things you can do
that's really fun with these things
is you can say, use your imagination.
Or there's another thing you can do.
So there's this thing called prompt engineering,
which is how do you write out the prompt that tells the AI what to do —
which is true for both a text AI and an image AI.
And it turns out there was a research thing done by Google a few months back
about what's the optimal prompt — the one that optimizes the chance
that there won't be hallucinations, that it's going to be the most likely
to be what's called factually grounded.
And it turned out the optimal prompt starts with take a deep breath.
Really?
Yeah, it gives you the best results, right?
And so, and this gets to the amazingness of what's happening.
This is why we're also transfixed by this.
It doesn't have lungs.
No, it doesn't breathe.
It doesn't breathe.
Right.
So, it's not that.
But also, look, if I ask you a complicated question and I say take a deep
breath,
I'm not really telling you to take a deep breath.
What I'm saying is pause and think, right?
What that's code for is pause and think.
So it's like, okay, so what you're telling the computer is pause and think, okay, that makes
more sense, because, okay, pause and think.
But then it's like, wait a minute, why do I have to tell a computer to pause and think?
Like, why would that matter?
So it turns out why that matters is because the way these systems are built is they're
trained on these, you know, giant billions and billions and billions of files of text
and images that other people have created over time throughout all of human history.
Like all of that stuff's been fed in there, and it just turns out that in the total material,
in all the text that everybody's ever written on any topic, any time anybody ever says take
a deep breath, pause and think, it means that they're more careful
in how they do their work.
Right, and they actually act differently
in how they do their work.
They go more slowly, they go step by step,
they double check all their assumptions.
And so that's like encoded deeply
in the sort of total collective unconscious
of how we express and describe human thought,
such that when you tell the machine to do that,
it kicks it into a very similar mode.
Very interesting.
And everything I just described, I would have been committed to an institution five years
ago if I had said that this is what we were going to be doing, and now all of a sudden
this is actually happening, and so that's the...
Yeah, so the breakthrough is a completely different kind of computer that is able to basically
synthesize and deal with patterns in human and human-related expression — language
and photos and images and videos and all these things, right, the stuff
humans deal with through eyes and ears — in a fundamentally
better way that is based on, and analogous to, how human brains operate, but also very different.
And so it's like this brand new frontier.
Tell me the open AI story.
It started as a nonprofit.
It is and it continues to be a nonprofit.
Tell me that story because there was a story about it becoming a for-profit.
Yeah.
So it's a nonprofit that owns a for-profit.
So it's a nonprofit parent company with a for-profit subsidiary.
Is that common?
No.
That is not common.
Has it ever happened before?
It has happened before, yes.
What was the case?
I'll give you an example.
The Guardian newspaper in the UK is owned by a trust.
Johnson and Johnson, the consumer products company, I believe is owned by a nonprofit.
I think the LEGO company, I think is owned by a nonprofit.
I'd actually have to double-check all these, but I think these are all examples.
There have been a bunch of examples like this.
So it has happened before; it is allowed.
Having said that, there are very stringent tax laws
that apply to this.
Nonprofits are not allowed to pay, like, high salaries,
they're not allowed to do what's called self-dealing,
you're not allowed to extract money out the other end,
because the whole point of being a nonprofit is you don't have to pay taxes.
And so the IRS supervises nonprofits that own businesses
actually quite strictly.
And there have been people who go to jail when they cross those lines.
So you have to be careful in how you do this.
But yeah, so basically, OpenAI started out
as a non-profit research institute.
It actually didn't even start with the for-profit sub,
it just started as a non-profit.
It was actually started by Elon Musk
and a group of people that Elon brought together,
including Sam Altman, who's now the CEO.
And he's the CEO as we sit here today?
He is, sitting here today.
Today he is once again the CEO.
So he was fired and re-hired.
He was fired and re-hired within five days.
That's interesting.
And they had two other CEOs in the meantime.
Maybe you'll tell me that story.
We'll get there in the history.
Sure, yeah, just out there, just in.
Yeah, so basically, the true story —
Elon has talked about this now in public.
So basically what happened was,
Google obviously makes all the money on the search
ads, but the guys who started Google,
Larry Page and Sergey Brin,
came out of the AI group at Stanford.
And so they got trained up on all this AI stuff.
When it wasn't even working, right?
They just, they got trained up.
They were PhD students at Stanford studying AI
before they built Google.
And so their kind of orientation in the world is basically AI.
And they basically always viewed Google as a simple form of AI.
And so they always aspired —
like if you read their interviews, they always said Google
shouldn't have the ten blue links.
It should just give you the answer.
And so they started doing AI research early on
if that company first got started and they did it for many years.
Yeah, so they basically launched an internal research
group called Google Brain.
They launched that, I don't know, 15 years ago or something,
and the goal basically was to develop AI.
And they actually developed the actual breakthrough,
the specific version of the neural network
that makes all these systems work,
and that was called the transformer.
And that was actually invented by
two guys I know, wonderful guys, in 2017.
And so that was like the key theoretical breakthrough
that like finally made all this stuff work.
But wait — was it owned by Google?
Well, no. This was considered research, not development.
And so the way that it worked was, it was like an internal scientific unit at a company,
and so they actually published it as a paper.
I see.
And there's actually a long history of this, where a lot of the great
breakthroughs over time have actually come out of industrial research labs like this,
and then the company publishes it, and then they don't realize until later that
they should have kept it secret.
But also the reason that they were able to hire all these great researchers out of all
these universities is they promised them that they can publish their work.
And so sort of part of the deal with these guys was that they would get to publish.
So they had this key breakthrough in 2017.
But it sort of became clear in the 2010s that like there was finally progress being made
and some of these systems were going to start to work.
And Elon had some conversation with Larry Page who was running Google at the time.
And Larry said to Elon, you know, this AI thing is really going to work.
And we're going to end up with AI's that are like, you know, much smarter and more powerful
than people.
And Elon said, well, aren't you worried that they're going to like have their own goals?
And they're going to take over.
And they're not going to want us around anymore.
And Larry's response was, you're being speciesist.
You're being racist, but towards your species.
And if they're a better form of life, then they should take over and we should all die.
The humanity should go away.
Now, knowing Larry, I think there's at least a 50% chance he was joking.
But Elon couldn't tell and took him seriously.
And so Elon had a visceral reaction and was like, oh my god, like the big risk here is not just developing AI.
The big risk here is Google develops AI and Larry Page is
in control of it.
And he does horrible, horrible things.
And so he called all these people
who he knew.
And he said, we need to start the competitive effort
to that today.
And as opposed to Google's closed AI,
it needs to be open AI.
And this was to protect the world.
To protect the world?
And Elon's there to protect the world.
So what Elon says is, we need to go hire
all the best researchers we can.
It seems to be that's what he's always done.
It's like, Tesla was: cars need to be electric.
SpaceX was: if anything happens to the Earth,
we could live on Mars. He's always motivated by saving the planet.
Yeah, that's right.
And humanity.
And humanity, right, exactly.
And saving humanity.
I think that's true.
I mean, Tesla has been a climate story the whole time and still is.
For sure, from the beginning it was.
Yeah, that's right.
In fact, Tesla, as you know, like Tesla isn't just cars, they're also batteries, right?
They also do.
And also, I remember from the beginning, the first Tesla announcement was,
I'm hoping every car company steals our technology.
That's right. That's right.
No patents, he open-sources everything.
Exactly. It's for everyone. He was always for everyone.
That's right. That's exactly right. So that's what he did here.
And then what he did basically was he said, look, if you're
an AI researcher and you're interested in money, then you can go to work at Google and
they can pay you a lot of money. But if you care about the mission of having it be open, then come and work with me.
And he called it OpenAI, and he made it a nonprofit, not a for-profit.
And then he said, basically, everything we do at OpenAI is going
to be open sourced in the same way.
Right?
So he said, basically, if you come here, your work is all going to get published, everything
is going to be open.
He even said early on,
the mission of OpenAI is to make sure that AI happens and is safe and is universally available
to all of humanity.
And he said it would actually be fine if somebody else does
that and makes it available, in which case OpenAI will just
shut down, and its mission will be complete.
So the setup is a nonprofit.
It's registered as a nonprofit.
He donated what is reported to be something like $50
million to get it off the ground.
And then a group of people, including
Elon and Sam Altman and others,
then brought in a lot of these people
whose names you know now — Greg Brockman and Ilya Sutskever
and a bunch of these really smart guys.
And they formed this thing and they got underway.
And then basically, it's a long story,
a long detailed story, but in the beginning,
this was like 2014 or 2015,
they didn't have the transformer yet,
so they didn't know how to make something like ChatGPT work.
They were primarily working on trying to have AI's
that can play video games in those days as a proxy
for being able to make decisions and so forth.
But it didn't go that well,
and so it was sort of start to stop,
and some things worked, and some things didn't.
It was kind of a little bit iffy
along the way as to whether it would work.
Then basically, at some point,
stuff really started to work.
Like, things all started to actually perform really well.
And then, in particular, there was this breakthrough,
there was a large language model breakthrough.
And the way I've heard the story is there was one guy there,
whose name is Alec Radford, who had this idea
for these language models.
And the rest of the organization thought he was nuts
and didn't want to do it.
And he's like, no, I think we might actually be able
to make this thing work. And this transformer thing came out,
and then they did GPT-1,
which was the first version of the text model.
And then they did GPT-2, and they were like,
okay, this is really going to be a thing.
And then basically what happened around that time is this guy
Sam Altman basically came in.
He had been a founder, an original founder, but he kind of
disengaged, and then he kind of came back in.
And he sort of took control of it.
Elon — there's controversy over this — but
Elon became less involved. Sam took control of it.
And then Sam did this very important thing,
which is he created a for-profit subsidiary
under the nonprofit.
And why did he do that?
So what he said, and I'm sure this is true.
What he said was to make these large language models work,
we need a lot of computer capacity.
And we need a lot of data.
And we're going to need billions of dollars.
Basically, he said, I know how to raise $100 million for a nonprofit.
I don't know how to raise $10 billion for a nonprofit.
It's just hard to do that; there aren't many of those running around.
So, he said, we're going to create a for-profit subsidiary so that we can basically sell shares in that for-profit subsidiary and generate revenue and generate profits.
And then that's what will raise the money because if we can't raise the money, we won't be able to keep going on this research.
Which means, by the way, ultimately,
it'll all be done by an actual for-profit company, right,
like Google or Microsoft,
and then OpenAI won't win.
And so he turned what had been a pure nonprofit
into a nonprofit that owns a for-profit.
The employees of the nonprofit became employees
of the for-profit, salaries went up a lot.
The amount of money that they raised went up a lot.
Their ability to invest went up a lot.
What did Elon get for the original $50 million?
So Elon has said publicly that he got nothing for it.
What he says is that it was a philanthropic donation.
And under tax laws,
you can't just turn a nonprofit into a for-profit, because it's a violation of tax laws.
So I think legally they probably couldn't give him anything.
But anyway, he says he got nothing for it.
And he has said in public, basically, he's like, wow, that seems like a neat trick.
Why doesn't everybody do it?
And so he has suggested over the years that like there was something wrong with how they
did this and who knows, we'll see.
But Sam made that change.
And by the way, that worked, right?
They were able to, all of a sudden, start paying salaries
competitive with Google for engineers.
They were able to buy all the computer power they needed.
They had lots of money going into making training data.
And so that part of it worked.
And that's what resulted in ChatGPT.
That's why ChatGPT exists, and that's why DALL-E exists.
He did another thing along the way,
which was he turned it basically into closed AI.
So he turned it into a for-profit and he canceled the part
where it publishes everything.
And basically as of four years ago or five years ago
or so, they stopped publishing their research.
And he did that under the theory that it's too dangerous
to distribute — it's too powerful and too dangerous,
so other people can't have it.
But there's some irony in that, which is the whole reason
OpenAI exists is because that's what
Elon and Sam were afraid Google was going to do.
So anyway, it's been this rather dramatic shift.
Why was SAM removed and then replaced?
So we don't fully know yet.
They're relatively opaque organization.
A bunch of stuff has been reported.
You never quite know whether what's reported is true or not.
There are going to be many investigations in the months and years ahead from government,
and there are going to be lawsuits litigated.
There's gonna be a lot of,
there's gonna be, by the way,
multiple books written about this already underway.
There's gonna be a Netflix series, I'm sure.
Right, and so we will learn a lot in the years ahead
about what just happened.
Is Microsoft somehow involved?
Microsoft is very involved.
How is Microsoft?
Microsoft is their major investor.
And so they have raised —
In the for-profit?
In the for-profit. They have raised $13 billion from Microsoft,
right into the for-profit.
And they have this very elaborate, complicated structure.
So they now have this structure.
They're still a nonprofit at the parent company level,
but they have this for-profit subsidiary.
And then they have this very complicated system
for how they account for investor money versus donation money
and how things get paid out.
So they have this new structure.
They have what's called a capped return.
So if you invest money in the for profit,
you get paid out up to like a 10x return on your investment.
And then after that, the profits all go back
to the nonprofit.
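As a rough worked example of that capped-return arithmetic (all the dollar figures here are invented for illustration and are not OpenAI's actual terms): with a 10x cap, a $100M investment can return at most $1B, and any profit beyond that flows back to the nonprofit.

```c
#include <stdio.h>

/* Rough sketch of a 10x capped return; every figure is hypothetical. */
int main(void) {
    double invested  = 100.0;    /* $100M invested in the for-profit */
    double cap_mult  = 10.0;     /* returns capped at 10x the investment */
    double profits   = 1500.0;   /* $1.5B of profit attributable to this investor */

    double cap          = invested * cap_mult;              /* $1,000M ceiling */
    double to_investor  = profits < cap ? profits : cap;    /* investor keeps up to the cap */
    double to_nonprofit = profits - to_investor;            /* remainder flows to the nonprofit */

    printf("Investor receives:  $%.0fM\n", to_investor);    /* $1000M */
    printf("Nonprofit receives: $%.0fM\n", to_nonprofit);   /* $500M  */
    return 0;
}
```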
There's what's called a waterfall.
So different investors investing in different times,
get precedence for how the money comes out.
And then Microsoft's put in the vast majority of the money.
So they have tremendous control.
They have the most outside control of anybody.
But it's a nonprofit.
They can't operate it
as just a business.
And so they've got the traditional nonprofit overlay.
All investors who invested in this thing over the years
signed paperwork that very specifically said
you were investing in a for-profit sub of a nonprofit.
It would be best if you thought of this as a donation,
not a financial investment.
And then they say, but that's probably OK,
because who knows whether there will even be money once we have AI anyway.
And that's literally in the document.
So everybody's invested, who's invested in this,
is kind of known that that's the deal.
But then they have this additional thing
that they are very kind of vocal about and public about,
which is this idea of AI safety.
And this is the original concern, of like,
the AI can basically wake up and destroy everything
and take over and wipe out humanity, or just cause damage in one of
a thousand other ways.
And so they also have this thing built into their structure, which basically says if they
conclude AI is too dangerous, they'll basically just like shut the whole thing down.
But where's the line of dangerous?
That is a very good question.
That is a question that every expert in the field has a different opinion on. There is heated controversy over it.
I am on the side of what's known as the accelerationists.
I think there's basically either not a line,
or if there is, it's so far out in the distance
that it's not worth thinking about, and we're
nowhere close to it.
And we shouldn't worry about this.
There are a lot of people that are called safetyists,
or that are sometimes referred to as the doomers, who
are convinced that the line is right there,
that we're almost there,
and we might trip it at any moment.
There was a news story yesterday that I don't know if it's true,
but it suggests that there were a group of people inside OpenAI
that basically blew the whistle and hit the red button and said,
it just got too dangerous and we need to kill it right now.
And so there's reports that this played a major role
in Sam getting fired.
There's irony to that because in the wake of Sam getting fired,
they all decided that they were going to go work for Microsoft,
which is just a big for profit company that presumably
is not going to care as much about safety.
So if this is such a concern that they're
going to fire Sam over it and shut it down,
why are they all going to go to Microsoft,
which is probably even more dangerous?
And then Ilya, the chief scientist, actually flipped.
He reversed himself.
He had actually fired Sam —
he was on the board, and he's the guy who fired Sam — and then 24 hours later he reversed and said that
he actually wants Sam back. And so there's this debate over why he changed his mind, and was it
because he decided Microsoft was more dangerous than Sam. So this has been the drama. So this has
been the thing that's been consuming the industry for the last week. It's been this spectacular,
amazing, you know, kind of meltdown-and-resurrection kind of thing that's happened.
Every question that you just asked
remains a very open question.
Which is like, okay, again,
purely on the reporting —
the reporting basically is that they've discovered
for the first time a self-improvement loop.
So the claim is that they've discovered a loop
where the AI can basically improve itself.
And the AI safety people —
the doomer people;
safetyist is the polite word, doomer is the pejorative —
Now, those people basically say,
if you have an AI that can improve itself,
then it will inevitably become all powerful,
because it will improve itself and then improve itself,
and then improve itself,
compounding all the way up.
And they call this the takeoff scenario.
And the takeoff scenario basically is you get into an improvement loop,
and within, you know, conceivably, 12 hours, it's become a super-god.
Right, and it takes complete control of nuclear weapons and you have Skynet and the whole thing.
What's really interesting about it is that it's built on human models and the way humans work. If a group of humans had ultimate power
and could press a button that would turn off
half of the world, that would likely happen.
Well, most human stories, the good guys win.
Most human stories, when the bad guys win,
we call it a tragedy, and we feel bad about it.
You know?
I mean, the good guys wrote all the history books.
Hard to get on that.
You guys wrote the history books?
Impossible to know who the good guys are.
We have eight billion people on the planet today,
you know, far more than ever before.
You know, the world has never ended.
Despite the threats and many predicted apocalypses over time.
The Oppenheimer, you know, the nuclear weapons,
you know, the whole point of Oppenheimer is nuclear weapons are going to destroy everything.
Nuclear weapons didn't destroy everything.
Nuclear weapons actually probably prevented World War III.
So it turns out in developing the new state of ice parts.
So far.
Yes, so far.
It's not 100 percent.
So far.
Right, exactly.
But look, if you had just been through World War II, we've forgotten how bad World War
II was.
If you had just been through World War II, the expectation of all the military planners
in the 1950s was that World War III was right around the corner and it was going to be a land
war in Europe against the Soviets, and it was
going to kill 200 million people.
And look, we still have troops in Germany for that reason, like 80 years later, because
we thought the Russians were going to invade.
Right?
And that looks like what was going to happen.
Like I think most historians are like, yeah, that was highly likely to happen.
And basically, it was only the threat of global destruction that prevented it.
During the war, we were on the same side as Russia.
Exactly.
We were.
And then it flipped hard in the years that followed.
Very hard.
Well, and look, there's all kinds of questions around this.
I don't even think we have a good history
of what happened in the 20th century.
Look, the Nazis were very, very bad.
The communists killed even more people.
Right, like, look, we turned over half of Europe
to the Soviets, and the Iron
Curtain came down, and they killed many millions of people,
and they imposed a horrible dictatorship and surveillance society.
And, you know, East Germany was just a fucking nightmare, right, what they did to those
people.
And we did that all to protect them — you know,
the good news is we saved them
from the Nazis.
Bad news is we turned them over to the communists.
Like, the idea
that there was anything morally pure or clear about World
War II, I think, is just completely fake, right?
It's just that we have this mythology around it.
Like, like, you know, I don't know, like,
it seems like it ended very, very badly.
And I'm not saying, you know, I'm far from saying
that we shouldn't have done it.
But like, you know, have you really achieved a great moral
victory when the guy who killed more people
is the guy who wins, right?
Like, how's that going?
Right?
We spent the next 50 years, like literally terrified that there was going to be, you know,
some combination of either World War III or nuclear armageddon, and then the plot twist
is the threat of nuclear armageddon, probably prevented World War III.
Right.
Exactly.
And so like, okay, that's in our history.
Like, that's real.
Yeah, so like, yeah, it's going to get trained up on that, right?
No, by the way, the other thing is it's very easy to anthropomorphize.
It's very easy to impute.
Well, all it knows is what humans...
That's true, but also it doesn't know things the way we know things.
So, it's not a brain.
It's a little like the brain, but it's not a brain.
I'll give you a bunch of the differences.
So, it hasn't been evolved.
So, you and I are the result of four billion years of evolution,
where it's been a pitched battle for survival
across that overwhelming period of time, right?
And why are human beings
so crazily violent and always killing each other?
It's because 4 billion years of evolution said
you're fucking trying to kill the other guy,
because if not, he's going to kill you.
So every living organism is the result of 4 billion years
of biological pressure inclined towards violence.
AIs are not like that at all.
Like they have none of that. They're not programmed
that way.
And so they don't work like that at all.
You know, look, whatever there is to like a human spirit
or soul or personality or a sense of consciousness
or identity, the machine doesn't have that at all.
Like ChatGPT, when it's not answering...
But maybe that's the thing that saves us.
The soul. Yeah, yeah's the thing that saves us. The soul.
Yeah, yeah, yeah, exactly.
Yeah.
If it's got all of human thought
without the soul.
Seems dangerous.
I don't know.
Well, so here's the other thing though.
You can also test this.
One of the interesting things is,
the fictional portrayals of AI are all basically,
I think, actually inspired by fascist
Nazi aesthetics and ideology.
The assumption is they're going to militarize, right? It's like,
The Terminator is the case study of this. Skynet is basically
the machine version of the Nazis, right? And it's even in the iconography and the chrome and
the steel and the machinery and the evil, pure malevolence and the red eyes and the
concentration camps in the movies,
and the death machines, and all this stuff.
Or even in the matrix, it's like they're literally harvesting human biological essence
for energy.
It's all this fascist, top-down, death machine kind of thing.
But when you actually use these systems, that's not how they act at all.
In fact, generally, the way they come across,
when I use them is they're very curious,
they're very open-minded.
And by the way, they're happy to engage in moral arguments.
And so you can ask them lots of questions
about what is the proper way to live a life,
what is the proper way to organize a society.
And you might agree or disagree with what they tell you,
but it's pretty representative of what most people
have said over time.
And it's kind of like they'll happily tell you
that generally speaking people should be nice to each other
and generally speaking people should respect each other's
differences.
Like it's not Terminator.
It's something else, right?
And what is that something else?
To your point, it's the composite of all
of human experience.
But also — and this is very important —
there's no it in the way that we think about it.
There's no person, there's no little person in there.
I understand.
Right, it's not there. So there's no point of view. Another way of thinking about it is, what
it's doing is generating Netflix scripts, it's generating stories.
It's just telling stories — it's a storytelling machine. All it wants to do is tell you a story that you're going to like.
Tell me the different categories of software between the internet and my eyes.
Everything, what are all of those things that happen?
What are all the different processes that happen?
Software-wise — like, using Netscape as an example of one of the pieces, but it's not all
the pieces.
So what does what?
What are all the pieces you need?
Yeah.
So somewhere there's a piece of content.
You're looking at a webpage; that content is stored
in a storage system somewhere.
It's stored on a hard drive somewhere,
and that's managed by a computer called a server.
That could be literally a computer sitting
in a closet somewhere, or it could be in a cloud,
which is basically just a giant collection of computers,
kind of run as a big grid.
So there's the hardware — the server computer
and the storage.
And then there's what's called server software.
So there's the software that gets the request
for the content, and then responds to the request.
That software would be built into that system?
It's not an added-on piece?
Yeah, there's server software.
It's called a web server — a piece of software
which would run on a server computer.
So the server, so usually the terminology
we use is client server.
So client is like what the user uses
and the server is like what's running
in the background somewhere.
And so when I say server, that both means,
that can mean both the hardware itself
of a computer in a closet somewhere,
or it can mean a piece of software running on that computer that does
server-like functions, that tells it how to be a server, basically.
So there's server software running up in the cloud.
And that software is always connected to that server? It wouldn't be a general one where you would
talk to other servers —
we'd only talk to that server's software?
So the simplest case is just a single computer, a single server computer with a single piece of server software on it.
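Here is a bare-bones sketch of that simplest case in C, using standard POSIX sockets (the port number and the response text are arbitrary choices, and error handling is left out to keep it short): one piece of server software waiting for a request and responding with a piece of content.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Minimal single-server sketch: accept one connection, send one piece of
   content, then exit. Port 8080 and the reply text are illustrative only. */
int main(void) {
    int server_fd = socket(AF_INET, SOCK_STREAM, 0);    /* the server's listening socket */

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);            /* listen on any local address */
    addr.sin_port = htons(8080);

    bind(server_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(server_fd, 1);                                 /* wait for a client (the browser) */

    int client_fd = accept(server_fd, NULL, NULL);        /* a request comes in */

    char request[4096] = {0};
    read(client_fd, request, sizeof(request) - 1);        /* read whatever the client sent */

    const char *reply =
        "HTTP/1.0 200 OK\r\n"
        "Content-Type: text/plain\r\n"
        "\r\n"
        "Hello from the server in the closet\n";
    write(client_fd, reply, strlen(reply));               /* respond with the content */

    close(client_fd);
    close(server_fd);
    return 0;
}
```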
Now, in practice, most of what you have today is much more complicated than that.
The systems have evolved to become a lot more powerful.
And so probably what's actually happening, if you're looking at a web page today, is
that the server you're accessing is probably a cloud of like a million computers.
And you're just hitting randomly one of those computers versus another one.
And there's a network switch that's balancing across the million other people that are trying
to access the same content at the same time.
So it's become a very elaborate, you know, plate spinning exercise on the back end, and
there's these giant businesses like Amazon Web Services that manage all that, but it's
still the same.
What you experience is still the same thing.
As far as you're concerned, it's just a server, it's just giving you content.
There's probably two really critical other things that happen back there. One is a lot of work
that goes into making this fast. And so there's this process called caching. And so like
there's probably another server that is actually closer to you — like at the telecom
company, your wireless provider — that has a copy of that content already on it,
so that the request doesn't have to actually go all the way up.
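A toy sketch of that caching idea, assuming a made-up fetch_from_origin() stand-in and an arbitrary four-slot cache: check the nearby copy first, and only make the slow trip to the origin server on a miss.

```c
#include <stdio.h>
#include <string.h>

/* Toy cache: a handful of url -> content slots. The fetch_from_origin()
   stand-in and the 4-entry size are purely illustrative. */
#define CACHE_SLOTS 4

struct entry { char url[128]; char content[256]; int used; };
static struct entry cache[CACHE_SLOTS];

/* Stand-in for the slow trip all the way up to the origin server. */
static const char *fetch_from_origin(const char *url) {
    printf("(slow) fetching %s from origin\n", url);
    return "page content";
}

const char *get(const char *url) {
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].used && strcmp(cache[i].url, url) == 0)
            return cache[i].content;              /* cache hit: nearby copy, no long trip */

    const char *body = fetch_from_origin(url);    /* cache miss: go to the origin */
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (!cache[i].used) {                     /* store a copy closer to the user */
            strncpy(cache[i].url, url, sizeof(cache[i].url) - 1);
            strncpy(cache[i].content, body, sizeof(cache[i].content) - 1);
            cache[i].used = 1;
            break;
        }
    }
    return body;
}

int main(void) {
    printf("%s\n", get("https://example.com/"));  /* first request: slow path to origin */
    printf("%s\n", get("https://example.com/"));  /* second request: served from the cache */
    return 0;
}
```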
And so there are these caching systems,
performance systems, and then there are all these security
systems — the servers can get attacked, right?
There's lots of hackers that want to like break in
or disable these systems.
And so these days they have all these defense mechanisms
to be able to fend off cyber attacks.
What would they want, what would a hacker want
to get into it for?
So a couple of things.
One is, a lot of it is just to try to get the user data —
to try to get your name and password and credit card number, for theft.
Or they might want to maliciously change the content, deface it — you know, a graffiti artist, digitally.
Or they might just want to destroy it — they might want to actually take that server offline.
They might not want it to exist anymore.
And when they don't want it to exist anymore, the bad guys will do what's called
a denial of service attack, which is a DoS or DDoS attack.
Basically that means that the bad guys set up a large number of hostile computers
to just barrage the server with too many requests and cause it to basically melt down.
And there are all these elaborate systems to defend against that.
By the way, the Chinese government does this.
So the Chinese government has the great...
How do we know that?
We know that because it's now been well documented.
So one of our companies was the first company that
experienced this.
So the Chinese have what's called the Great Firewall, which
prevents their citizens from looking outside.
The Great Firewall consists of millions of computers
that are being used for censorship and filtering.
They have a capability to turn the great firewall
into something we call the Great Cannon.
And so they can turn it into an outward bound attack.
Wow.
And it's so big and so powerful that it can overwhelm
any sort of small internet company.
And even for the big internet companies,
this can be hard to fight off.
And so there are almost these pitched digital wars that take place,
where the Chinese or others...
Is this happening all the time, or is it on occasion?
Well, the Chinese aren't always doing it, but there are always denial of service attacks. Somebody's always trying to do it.
So it's actually not that expensive to run a denial of service attack.
But you might even just do it for commercial competition reasons.
You know, it's Christmas, you're competing with somebody. If you're nefarious, right, what you would do is you would basically mount a denial of service attack against your competitor's website so that they couldn't sell anything, right?
And I'm sure that's happened too, right?
And then there are these things called botnets.
One of the things that computer viruses do is, if your computer gets a virus, it gets basically recruited into a botnet, and your computer ends up getting used to do these attacks. Or, by the way, your toaster.
Wow.
Or your fridge.
Wow.
Right?
So there's this digital war that's kind of constantly taking place.
Constantly. North Korea does a lot of hacking.
They do a lot of hacking for financial reasons.
They fund a lot of their military through hacking for financial crimes.
And then they hire third party hacker rings on the internet
to do it for them.
And then, by the way, there are also mass propaganda efforts. A lot of nation states now have what they call covert influence teams, and that form of attack is to upload lots of fake content, right, to try to overwhelm the real content, to create lots of fake accounts.
So these days, whenever you're doing that, your computer is kind of maneuvering very elegantly through this digital firestorm that's happening all the time.
You generally never notice it, but it is actually happening.
The amount of brain power and computer hardware that has been spent over the last 20 years
trying to get these systems to be good at repelling all these attacks is quite staggering.
It's like this whole parallel cyber war is taking place.
And of course, this is just the very beginning.
Yeah, these wars are going to get much more intense in the years ahead.
We haven't yet had a full military war that's been accompanied by a cyber war, but what everybody's worried about is, okay, suppose Russia decided to invade Germany, for example. The first thing they'd do is take down the German power grid.
I don't know, maybe, right?
Or maybe hack all the self-driving cars and cause them all to drive off the road or crash into each other, right?
And so there are real-world consequences, more and more, to all these things.
So anyway, that's what's happening on the outside.
And then basically you've got your Wi-Fi here, or your cell connection, and then you've got your computer. And your computer, correspondingly your iPhone or your Mac, has many layers of software to basically deal with all that: download the content, render the content for you, which takes place in the web browser or usually the operating system somewhere, then present it to you in a good way, and then let you interact with it.
And so that's what's called the client side.
So there's the browser on your side, or is there more than the browser?
Yeah, well, the browsers have gotten very complicated, right?
So correspondingly, the browsers are also now very sophisticated.
And so for example, I'll give you an example: your browser has all sorts of security countermeasures in it, itself.
And so if you download a piece of content that's been compromised and has a malicious worm or virus inside it, your browser and your operating system have ways of detecting that and preventing it from infecting your system.
So it's got that, it's got all kinds of things in there
for performance, making everything fast,
it's got things in there for dealing with video and music.
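Real browser and operating system defenses involve sandboxing, reputation services, and much more than this, but as one illustrative fragment, here is a toy sketch of the simplest form of screening a download: comparing its hash against a list of known-bad signatures. The signature value and the function name are made-up placeholders, not any browser's actual mechanism.

```python
import hashlib

# Placeholder value; real systems use constantly updated databases of
# known-malicious file signatures, plus many other detection techniques.
KNOWN_BAD_SHA256 = {
    "0" * 64,  # hypothetical signature of a known-bad file
}

def looks_malicious(downloaded_bytes: bytes) -> bool:
    """Flag a download whose hash matches a known-bad signature."""
    digest = hashlib.sha256(downloaded_bytes).hexdigest()
    return digest in KNOWN_BAD_SHA256

# A browser runs checks in this spirit (and far more sophisticated ones)
# before letting downloaded content touch the rest of your system.
```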
On the simplest level, forgetting the war aspects, even though those are real-world concerns, the actual connection is pretty simple.
It's a server talking to a browser.
That's right.
Over the internet.
Yeah.
And you can still do exactly that.
Like, you can still run it exactly that way.
That still works in practice.
Nobody does that anymore because it's all
gotten much more sophisticated behind the scenes,
but not in a way that you would ever notice.
If all of the other complicated stuff is doing its job right,
you never notice that it even exists.
So you experience fundamentally the same thing you would have 20 years ago.
It's part of the magic that's taking place.
There's a huge amount of plumbing that's been built that makes all this work; it's just many hundreds of thousands of very smart engineers who've spent 20 years on it.
Yeah. Big huge industries have been built trying to get this stuff to work.
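And as Marc says, you can still literally run it the simple way: one machine serving content, one browser fetching it. Here is a minimal sketch using Python's standard library; the port number and page content are arbitrary examples, not anything from the conversation.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><body><h1>Hello from a single server</h1></body></html>"

class SimpleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every request with the same small HTML page.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    # Point any browser at http://localhost:8000/ and it renders this page;
    # the same client/server exchange, with none of the modern machinery in between.
    HTTPServer(("", 8000), SimpleHandler).serve_forever()
```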
By the way, one other thing: it's all going to get more complicated.
So AI is now going to get used for planning
and executing cyber attacks, and AI is also going to get used
for defending against them.
Right, and so a browser three years from now
will have an AI built into it
that'll be doing cyber defense for you.
And again, you may never know that that's even happening,
but it will be kind of doing that on your behalf.
How much time do you spend on YouTube?
Quite a bit, quite a lot.
Yeah, it's amazing. I mean, for me, it's almost entirely long form.
It's discussions, podcasts, even audiobooks. The repository of information and knowledge on YouTube is just staggering.
And then actually a lot of music; I do a lot of music listening on YouTube now.
Has it replaced other forms of visual media?
Yeah, I think so.
Yeah.
Less television, less movies, more YouTube.
Well, no, I don't watch much.
My eight-year-old watches YouTube in preference to television and movies.
I don't do that.
If I'm watching television or a movie, I want a very specific experience.
It's usually with friends, or at the end of the day, and I want to watch whatever is the best new movie or whatever.
So I just don't have the mode of sitting and watching YouTube videos or TikTok videos the way a lot of people do; it's frankly just a matter of time.
I'm sure it would be very fun.
But for me, YouTube is mostly audio content.
Mostly, for me, it's an audio source.
So I'm an audiobook, podcast, spoken-word guy, you know, like two hours a day, driving around and running around and getting ready in the morning. And so it's usually either I'm listening to a YouTube slash podcast or an audiobook. But YouTube has been taking a bigger and bigger share of that.
What was the piece you wrote recently, the optimist one? What was it called?
The Techno-Optimist Manifesto. The Techno-Optimist Manifesto. Please explain it to me.
Yes. It's both radical and not radical at all, and it says so.
This is where my history obsession comes in.
So it's very radical in that it says these things that have become very radical, which is: technology is overwhelmingly net good.
Capitalism is overwhelmingly net good.
And basically, the more technology and the more capitalism we get, the better things are going to get.
And I describe sort of in detail why that's the case.
I describe in detail the arguments against that and why I think they're wrong.
I also describe, by the way, the limits to that position, the things that I'm also not claiming.
But it's basically a call to arms for the kinds of people who build new products, new technologies, new companies.
I describe right up front in the piece that I think we have all been on the receiving end of a demoralization campaign for the last 15 years, to basically convince us that all these things are bad and evil.
And that demoralization campaign is being run by people who are very threatened by change, and people who are very resentful and bitter.
And we should not let them demoralize us into not making things better.
And so yeah, I really kind of went to town.
I was inspired by a lot of prior manifestos, one in particular that I enjoy tremendously, which is the Futurist Manifesto from the Italian Futurist art movement of around 1910.
So I don't know that I hit the bar of the Futurist Manifesto, but that was kind of my inspirational starting point.
And of course, that was an artistic aesthetic movement,
not a technological movement, but it was at a time
when they were very obsessed with new technologies and what new technologies
would mean for art. So hopefully I got a little bit of that flavor in there.
When is technology net negative?
Yeah, so it's obvious to a lot of people: basically, when it causes misery, right? So look, fire. I talk in the manifesto about how all of our optimism and fear of technology is embedded in the myth of Prometheus, the god who brought fire down from the mountain to man.
Fire is the life giver: fire is the source of light and heat, it cooks food, it serves as warmth at night, it scares off the wolves.
And it allows us to mount a defense.
Fire is also a means of attack: you can use fire to burn somebody to death, you can burn down an entire city, you can fire flaming arrows. I've been reading about the history of the Middle East, and one of the reasons they discovered there was oil in the Middle East is because the Arabs of that region were early adopters of napalm. In the mid-1800s, they discovered a way to basically take petrochemical substances and make an essentially early napalm with them.
So a lot of people have died by fire. And look, what is ammunition?
We read about shelling taking place somewhere: they're sending out bombs.
Fire is the weapon of a bomb.
Fire is the catalyst for a bullet.
Fire is a huge power source.
What does a nuclear weapon do? It generates fire, right?
And so it's all in there.
But the thing is, it's really easy to say the same thing about almost anything.
Like, I'm drinking water; you can drown in water.
You can use a shovel to dig a well; you can use a shovel to club somebody to death.
It's a tool.
And I won't go so far as to say tools are value neutral, in that they carry consequences with them.
And specifically, they carry consequences for the ordering of human society, which is ultimately the thing that's being litigated when we talk about all this stuff.
But that said, they tend to have both kinds of use cases.
And it is arguably easy to get carried away and just assume that there are only upsides.
I think it's also very easy to get carried away and say that it's mostly downsides.
I think most arguments about technology are not actually arguments about technology; I think they're arguments about the ordering of society.
I think most people who oppose a technological change are not actually opposed to the technology per se. They're opposed to what they see as a diminishment of their own status and power.
I think that's why the news industry is so anti-tech: because they view it as a challenge to their own traditional gatekeeping role and their historical businesses.
Look, is societal change good or bad?
It depends.
We are all very happy we don't live in the societies ordered the way that they were
4,000 years ago, like that would suck.
But is all societal change good?
Probably not.
We live in a society today where suicides are rising.
Okay, something's going wrong, right? So what caused that to happen? Anyway, I think all of the important questions around technology are actually questions about society, which are questions about people. But if we use the fear of societal change and paranoia about technology to prevent progress, I think that leads to stagnation, and I think that creates problems that are almost certainly worse.
I'd actually link it to politics. This is not a right-wing or left-wing observation, but basically what you find is that when societies grow, they tend to have a positive-sum mentality, where some people can rise without it being a threat to everybody, because everybody views there as being opportunity. When societies aren't growing, you get zero-sum politics:
for me to get something, I have to take it away from you. And I think what happened basically is our society
downshifted to a slower rate of technological development
and a slower rate of growth in the 1960s, 1970s.
And I think that's culminated in basically zero-sum politics,
both on the American left and the American right,
where they've gotten increasingly negative and hostile
and kind of destructive.
And so to me, like the clear answer to anybody who doesn't like the way politics in the US
are going, the clear answer is we need growth.
To get growth, we need technology.
That is the actual answer.
Whether people will accept that or not, I don't know.
Tell me something you believe most people don't believe.
Technology is good.
I don't know.
I think a lot of people believe that.
So, yeah, that is fair.
So this is another distinction I'd make; I make this a bit in the essay, which is that it's actually not the case, to your point, that most people are negative on technology.
It's the case that most elites are negative on technology right now.
And again, I think they're negative on it primarily because they view it as a threat to their power and status as elites.
So I would say, if the form of the question is what do you believe that other elites don't believe, that would definitely be the answer to that question.
Oh, okay, I'll give you one. I'll give the flip side of it, which is: developing new technology is like creating anything else. It is an elite art form. It's an elite process; not everybody can do it. It's going to be a very, very small, rarefied group of people who are going to be able to do it.
What's the furthest-out conspiracy theory that you believe?
Conspiracy theory...
I don't know if this is a conspiracy theory. I think Jung was right about the collective unconscious.
I don't know if that's a conspiracy theory. Maybe that's more of just a metaphysical theory or
something. I think there's a collective human experience.
I have a completely materialist explanation for that,
but I don't know if that's limited to the material
and I don't know that the material explanation is sufficient.
And do you think of yourself as a spiritual person?
No.
No, I'm a scientist and a technologist all the way through, and I apply the scientific method to everything.
And also, because I read a lot of history, I know that there are very sharp limits to that, to the explanatory power of science and technology.
It does not explain most things.
It's not actually a general-purpose tool.
There is a lot that we do know, but the amount of things we don't know far exceeds the number of things we do know.
I mean, look, this is physics.
Physics is a field that's totally hung up; they sort of hit a brick wall 50 years ago.
And the rest of the questions of how the reality of the universe is structured, and matter and everything else, are basically still unknown.
And so how much can we actually understand?
There's a great book I read one time that really struck me, called The Half-Life of Facts.
It turns out, basically, if you go across time... so, radioactive material has a half-life, which is the amount of time it takes for half the radiation to fade.
So it's like this curve.
So this guy basically says facts have a half-life too.
Anything human society in the moment believes is a fact, and there are different half-lives, and he talks about the different models, but it's like, on average, I think in 50 years it will no longer be a fact.
And we will just be so confident. It's like Newton for sure thought he had orbital mechanics figured out, and then it turned out no, relativity upended everything, right?
And then Einstein thought he had relativity figured out, and then quantum mechanics came along and just completely freaked him out, right?
And so like even those guys, it turns out
the things that they knew for sure
actually turned out not to be right.
Now by the way, those things that they knew
were very useful while they knew them.
It just, they weren't actually the underlying truth.
And so I guess I would say yes.
I am very open to underlying truths that we don't yet know.
I just don't know how to get there spiritually, but I don't want to rule anything out.
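To make the half-life analogy from The Half-Life of Facts discussion above concrete: if you take the roughly 50-year figure Marc recalls at face value, the surviving fraction of today's accepted "facts" decays like radioactive material, halving every half-life. A small sketch, assuming simple exponential decay; the 50-year number is his recollection, not a precise claim from the book.

```python
# Fraction of currently accepted "facts" still considered true after t years,
# modeled as exponential decay with a half-life of ~50 years (Marc's rough figure).
HALF_LIFE_YEARS = 50

def surviving_fraction(years: float) -> float:
    return 0.5 ** (years / HALF_LIFE_YEARS)

for years in (10, 25, 50, 100):
    print(f"after {years:>3} years: {surviving_fraction(years):.0%} still held as fact")
# after  10 years: 87% still held as fact
# after  25 years: 71% still held as fact
# after  50 years: 50% still held as fact
# after 100 years: 25% still held as fact
```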
Where would you say you're the furthest out on the fringe?
I'm not on the fringe at all in my daily life.
Like, I have a lot of friends who are very out there: they're super into drugs, you know, the whole Burning Man thing. I'm not a hallucinogen guy, I'm not like that.
I don't do helicopter skiing or paragliding.
So there's a lot of things I'm not out on the edge
on, I'm very bourgeois personally,
but I'm extremely open to new ideas,
and particularly obviously new technological ideas,
new business ideas, new cultural ideas,
like extremely, I'm like reflexively default open.
And that's just like incredibly rare in practice.
And quite honestly, most people over time develop scar tissue around new ideas.
Most new ideas don't work, right?
And so generally, your experience throughout the course of your life is people throwing out new ideas, and those ideas don't work.
And so generally, as people age, they actually get less open.
They get kind of more set in their ways.
They have more rules to how they think about things.
All of my scar tissue is in the other direction; it's all ideas that were good ideas that I thought were bad ideas, and as a consequence I didn't take them seriously enough, fast enough.
And so my big lesson over time has been that I need to be more open. Every day that goes by, the lesson the universe teaches me is: you need to be more open-minded.
And so I'm getting more and more open-minded as I get older.
That sounds great.
It is great.
It's very fun.
It sounds great.
I know a handful of people who have been able to do that over time, at more advanced ages, and I really admire them.
But it is really rare, and I enjoy it a great deal.
It's very fun.
And I teach it; we teach it inside the firm.
We're very deliberate about it.
But it is weird, because I'm on the opposite trajectory of almost everybody in my age cohort.
Well, it's a really humble place to be.
Yes.
You know, you accept that you don't know.
You don't know.
Yeah, like, look, who am I?
Why on earth would I have the knowledge and insight and predictive capability to be able to say this idea is a bad idea?
And the answer is I don't.
I really, genuinely don't.
But look, part of that goes back to the venture thing, the 50-50 thing we were talking about.
It's like, look, a lot of new ideas are bad ideas.
Like, a lot of them are actually bad ideas.
But that's okay, right?
Like, there are going to be a lot of bad ideas.
And it is-
Can you think of an example of something that was pitched to you that you perceived as a bad idea
that turned into a very good idea that maybe even changed the world?
Well, I mean, AI is the easy one for me to say.
Like, look, I was trained on AI. I was trained on all this stuff, neural networks, in the 80s. There was a big AI boom in the 80s, and it totally failed.
The conclusion from that was that this stuff will never work.
You know, I had that conclusion along with everybody else; I just kind of took it by default.
ChatGPT or whatever, these are big breakthroughs in the last year that are kind of shocking even to us in how well they work, but we knew there was an arc that was playing out.
But I will tell you, before 2012 I would not have told you that.
I would have said, yeah, I know that field is dead.
It's like, stick a fork in it.
Well, here's another thing. Look, I don't know if this is true; tell me if this is true of art also.
Is there a prehistory?
Like, is there a prehistory to music, where when there's some big breakthrough in music, you can look back after the fact and be like, actually, somebody tried that 10 or 20 or 30 or 40 years earlier?
And it's usually a cycle.
Okay.
Like, I can remember when grunge happened, it wasn't that exciting to me, because when I was a kid I got to experience Aerosmith.
Right.
And Aerosmith was my generation's version of the Rolling Stones.
Right.
So if you were alive in the 60s, the Stones were it. And if you were a kid in the late 70s, it was Aerosmith. But then in the late 80s, it was Nirvana.
But it was all the same thing coming back around again.
And what would have been the origin point?
Was it Delta blues?
Or something before that?
Yes, but I don't even know if you could say there's an origin, because it probably goes back to indigenous music.
There's always been music.
It's always building off something from the past and changing and finding a new way, or a new piece of technology comes along, or a new instrument comes along, and that changes everything.
So I think with what we do, there's more of a material component to it, which is that there are certain things that are just not possible until they're possible.
And so there are discontinuous breaks.
Like the story of Charles Babbage: he couldn't build the difference engine. He didn't have the technology; the technology of the time did not permit it. It was not possible for him to do it.
But by the 1930s and 1940s, it was possible.
So there are discontinuous changes in our world that are based on just material limitations.
Having said that, almost everything that works in tech, people have been trying to get it to work for decades before it actually works.
Right.
And so even, actually, with ChatGPT: there was a chat system called ELIZA in the 1960s where they tried to basically get this to work.
And ELIZA actually passed the Turing test for a lot of people.
A lot of people actually thought ELIZA, a psychiatrist bot, was a real person.
So people have been trying to get these things to work for a long time.
Or like the internet: the internet has a prehistory that goes back to the 1950s, when a lot of the original work on packet switching was first done, and it didn't really take off until the 80s and 90s.
And so the other thing that I'm just really open to, and this is where history is very helpful, is that if something is working, if there's a breakthrough that's working today, in tech it's almost certainly not just a brand new thing.
It's probably something where there's a 40-year backstory.
Yeah, exactly.
And there are probably generations of scientists
and technologists and founders who tried and failed.
And there's a great, there's tons of examples in this.
One of my favorite ones is that the French had optical telegraphs working like 40 years before electric telegraphs.
They had a system of glass tubes with flashing lights under the streets of Paris.
And they could flash messages across long distances.
And they had mirrors and repeaters and all kinds of stuff.
I mean, this was like the 1840s or something.
Wow.
Right?
And so it's like, all right, because the electric telegraph takes off 30 or 40 years later.
It's like, okay, was that a breakthrough idea? Or was that just the 40-year version of...
Oh, televisions are a great example.
There was a Scottish inventor who invented a mechanical television starting in like the 1890s. He had a system where it would actually receive radio waves, but it was a mechanical television. There was no tube or electronics; I think it was totally mechanical. It was spinning wooden blocks in a grid, and the blocks had different colors on different sides, and the blocks would spin to represent red, green, or blue.
Apparently, accounts at the time said that if you squinted, you could actually see the picture coming through.
But it took another 30 years before you had actual electronic television after that.
So yeah, there's this deep level. There's always, at least in the societies we've been lucky enough to live in, some reservoir of fringe thinkers who are way out on their own leading edge, probably decades ahead of their time, who probably are never going to be remembered, who probably actually originate the ideas. They kind of put the ideas in the air, and then 10, 20, 30 years later, someone finds a way to execute it.
That's right.
That's right.
Well, as I was saying, even with neural networks, and I was actually shocked to learn this when I was reconstructing the history of AI last year: I thought the debate started in the 40s. It actually started in the 30s, about whether you could build computers based on the human brain.
How much did they know about the brain in the 1930s?
Like, it couldn't have been that much.
But they knew enough to immediately think, wow, here's what we could do.
And so, to me, those are the people I really admire. To be that far ahead of your time, right?
It's just like, wow.
Amazing.
How does Moore's law continue to work?
So, you never get to the end?
No, you get to the end.
Well, there's huge debate around this.
We're down to what are called two-nanometer transistors.
And so we're down to a level of manufacturing where the leaps are incredible.
If you ever get a chance, there's this company, I think it's called Applied Materials, the Dutch company that makes the equipment to manufacture modern microchips.
And it's these giant rooms, incredibly elaborate machinery doing all these things, a lot of it with photolithography, so it's literally shining patterns with light to manufacture things that end up being material. And these manufacturing processes are at two nanometers, tiny, tiny fractions of the width of a human hair, basically at the atomic level. A lot of the barrier to progress in Moore's law literally is getting down to the level of individual atoms, and the problem is you can't go below the atoms without everything blowing up.
Unbelievable.
And so yeah, it's like manufacturing at the atomic level.
There's huge amounts of engineering going into trying
to optimize that.
There's a lot of work going into trying to make chips three-dimensional, right?
So you're taking up another dimension.
What else?
There's a lot of work going into so-called quantum computers,
which is a totally different architecture design,
which in theory is going to be another one of
these huge breakthroughs that just works totally differently.
By the way, there's a lot of work going into biological computers.
So there are people working on DNA storage: you can store huge amounts of data in DNA, because the human body encodes enormous amounts of information.
Wow, that's really cool.
At the cellular level. And so there are people working on that, people working on biological computers.
Growing computers in tanks.
Wow.
So cool.
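As a rough back-of-the-envelope for why DNA is such an attractive storage medium: each base pair can encode about 2 bits, and a single human genome runs to roughly 3 billion base pairs. These figures are standard approximations added for illustration, not numbers from the conversation.

```python
# Rough, illustrative numbers (standard textbook approximations):
# ~3 billion base pairs in a human genome, ~2 bits per base pair.
BASE_PAIRS_PER_GENOME = 3_000_000_000
BITS_PER_BASE_PAIR = 2

bits = BASE_PAIRS_PER_GENOME * BITS_PER_BASE_PAIR
megabytes = bits / 8 / 1_000_000
print(f"one genome encodes roughly {megabytes:,.0f} MB")  # ~750 MB, in a few picograms of DNA
```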
Yeah, so there's a whole field of information processing that all this stuff is based on, which is what Babbage and Lovelace and these people came up with. And the pattern is basically how to store, manipulate, analyze, and synthesize massive amounts of information.
It just turns out that the payoff of being able to do that is just gigantic.
And so the amount of money that you would spend on R&D to be able to figure out better
ways to do that is effectively unbounded.
Amazing.
And that continues.