Modern Wisdom - #346 - Rob Reid - How To Avoid Destroying Humanity
Episode Date: July 15, 2021
Rob Reid is an entrepreneur, podcaster and an author. The last 15 months have been a terrifying taster of just what a global crisis is like, except it wasn't lethal enough to be a threat to our long-term survival - but just because this one wasn't, doesn't mean that more deadly existential risks aren't out there. Expect to learn how synthetic biology might be the biggest risk to our survival, what we should have learned from 2020, whether Artificial General Intelligence is an immediate threat, Rob's opinion on my solution for saving civilisation, whether we should totally stop all technological development and much more...
Sponsors: Get 20% discount on all pillows at https://thehybridpillow.com (use code: MW20) Get perfect teeth 70% cheaper than other invisible aligners from DW Aligners at http://dwaligners.co.uk/modernwisdom
Extra Stuff: Check out Rob's Podcast - https://after-on.com Follow Rob on Twitter - https://twitter.com/rob_reid?lang=en
Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/
To support me on Patreon (thank you): https://www.patreon.com/modernwisdom
Get in touch. Join the discussion with me and other like minded listeners in the episode comments on the MW YouTube Channel or message me... Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/ModernWisdomPodcast Email: https://www.chriswillx.com/contact
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
What's happening people, welcome back to the show. My guest today is Rob Reid. He's an entrepreneur,
podcaster and an author, and we are talking about how to avoid our own destruction.
The last 15 months have been a terrifying taster of just what a global crisis is like,
except it wasn't lethal enough to be a threat to our long-term survival.
But just because this one wasn't, doesn't mean that
there won't be more deadly existential risks lurking out there waiting to end us.
Today, expect to learn how synthetic biology might be the biggest risk to our survival,
what we should have learned from 2020, whether artificial general intelligence is an immediate
threat, Rob's opinion on my solution for saving civilization, whether
we should totally stop all technological development and much more.
On top of being terrifying across the board, this is now the new record holder for the
longest ever podcast episode that I've recorded on Modern Wisdom.
And rightly so, right, like this is a very important topic.
If we manage to survive the next few centuries, it's likely that our genetic progeny will get to colonize the galaxy and they'll enjoy experiences and levels of happiness
and fulfillment that we can only dream of. And if we don't, then we were just a brief illumination
in one dark corner of the universe where consciousness lit up and then died off like a poorly lit firework.
In case that wasn't enough bad news, I'm still in Ibiza, so no Saturday episode this week.
Hmm, I know, very sad.
But next week I will be back Monday, Thursday and Saturday will resume.
Make sure you hit the subscribe button, it's the only way you can ensure you will never
miss an episode when it's uploaded and it'll make me very happy.
But now it's time to avoid our own apocalypse with Rob Reid. Welcome to the show.
Thank you kindly, it's great to be here.
It really is, so I read something on Reddit the other day that I want to dictate to you
here.
The decision to use CFCs, chlorofluorocarbons, instead of BFCs, bromofluorocarbons, was pretty much arbitrary.
Had we decided to use BFCs, the ozone layer probably would have been totally destroyed before we even knew what was happening,
killing all life.
No way.
BFCs destroyed the ozone at over 100 times the rate of CFCs.
That's amazing.
I never heard that before.
Think about that. I mean, CFCs were scary, but obviously they moved slowly enough that we were more
or less able to fix the problem before we were all dead.
Someone replied and said, maybe that was the great filter that all the other civilizations
just chose the wrong coolant medium.
That's funny.
So today we're going to be talking about existential
risk. My favorite terrifying topic, and also one of your areas of expertise.
Definitely. And it's amazing how seductive the topic is to a lot of us. It's like we can't take
our eyes away from it. We get fascinated, like what you just
read to me. This probably says something bad about me psychologically, but my main
reaction was like, how cool. I mean, obviously we dodged the bullet, so that's pretty nice,
but like, wow, another existential risk that I didn't even know about.
What do you think it is about that? Because I have the same fascination.
I, you know, maybe it's something that was, you know,
drilled into us when our, you know,
distant ancestors were growing up on the savannah.
Maybe there's something about being fascinated
by things that can annihilate oneself
that conferred some kind of survival advantage. And I'm just
riffing here now. I'm just going to make this up. But you know, particularly the
head of the clan, the hunter-gatherer clan, whoever the boss was, the
chieftain, whatever you want to call that person, really needed to think about
what could kill us all. And the head of the clan probably was a man
and probably fathered far more children
than people who were not head of the clan.
And so we all have a lot of head-of-clan DNA in us.
I'm making this up as I'm going along,
but I like that theory.
So we probably do, as a statement of fact,
all have a lot of head-of-clan DNA,
because there were thousands of generations and the heads of the clans were the people who probably
had the most progeny. And the heads of the clans really did have to think about not just what
could kill me, say, a tiger on a hunt or whatever, but what could wipe us all out?
They really needed to think about that. And the successful ones continued to have progeny.
So that's my answer.
It's like a, we've got the anxiety bias, right?
That we're more scared of things than we are hopeful about things.
But this is like a macro level.
A macro level anxiety bias.
Exactly.
We've got it.
We've worked it out.
Okay, so.
All right.
Good.
Given the fact that many of us are obsessed with it, and a ton of people that are listening will be
as well, why do you think we're so blind to how close we can come to total civilizational destruction?
Generally, it's not at the forefront of what we're talking about every day as much as you might wish that it was.
Well, I think it's because it's so new.
There is really no plausible step that I can think of that humanity could have
taken before, let's say, the mid-1950s to wipe everybody out. And at that point, it was
one thing. So after Hiroshima and Nagasaki, there's one nuclear power, the United States,
it had precisely two bombs, it used them both. So there's no way to destroy the Earth,
right? Then along comes the hydrogen bomb, and there are very few of them and only the US has them, and then the Soviet Union gets them, and then all of a sudden
There's this insane push to put them on long range bombers and missiles and so forth
It's probably late 50s by the time H-bombs had proliferated enough and
the two sides had enough capability that truly wiping out society became a problem.
So as a quarter million year old species, we've been facing this for 60 years.
So, even though, what I said about the clan notwithstanding, to put it on
a global level, that's a pretty new development. And I would also say that the careful attention,
the academic attention, serious industry attention, governmental
attention, is far less than what it should be; but nonetheless, the amount of attention
that is given to existential risk today, to me, feels like it's 10 to 30 or even more times what we gave to it, let's
say, 15 years ago.
And 15 years ago, I don't think people like you and I even knew the term existential
risk.
So I think we're developing that muscle pretty rapidly at this point.
And that's a good thing.
And hopefully it's not too late.
That's your hopeful optimism, your unbeatable optimism coming through there.
Yeah, I'm pathologically optimist sometimes.
How much of it do you think could be hubris as well?
You know, by definition, we haven't destroyed ourselves yet.
Therefore, we're probably fine at surviving any future destruction potentials also.
Yeah, I mean, the response to that is like, attaboy, attagirl, it's been 60 years.
So you've dodged numerous bullets in 60 years, one or two maybe sort of by design and quite
a few more by accident.
And so do you want humanity to last another 60 years,
or do you want it to last another quarter of a million years?
And if the answer is you've dodged one bullet for 40 years,
and you've dodged more than one bullet for maybe 20 years,
is that the kind of track record that gives you confidence
that the civilization or the species
is gonna survive another quarter of a million years,
that's absurd.
That's like saying, and I wish we could do the proportions,
back-of-the-envelope math could probably reveal it,
but that's kind of like saying: one to two seconds
into what Americans call a soccer game
and the rest of the world calls a football game.
We haven't given up any goals, so we're fine.
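For what it's worth, the back-of-the-envelope math he wishes for is easy to run. A minimal sketch in Python, using the 60-year figure from above, the quarter-million-year target, and assuming a standard 90-minute match:

```python
# Back-of-the-envelope version of the football analogy (illustrative only).
TRACK_RECORD_YEARS = 60        # years of dodging nuclear bullets, as stated above
TARGET_YEARS = 250_000         # "another quarter of a million years"
MATCH_SECONDS = 90 * 60        # a standard 90-minute football match, in seconds

# Scale our 60-year track record onto the length of one match.
elapsed = TRACK_RECORD_YEARS / TARGET_YEARS * MATCH_SECONDS
print(f"We are about {elapsed:.1f} seconds into the match.")  # ~1.3 seconds
```

That lands at roughly 1.3 seconds in, which squares with the "one to two seconds" eyeballed in the analogy.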
Yeah, yeah, yeah.
Yeah.
Given the fact that existential risks generally don't have a ton of global attention paid to
them, why do you think climate change is given so much attention when there's more imminent
threats that aren't even really in the conversation?
Well, I think the attention to climate change, for example, developed in a compounding way
over a longer period of time.
And so the Whole Earth Catalog,
the picture of the Earth on it, the first Earth Day,
which I think was 1970, et cetera: that's a great deal more
time.
And I think these things grow like successful investments,
when a school of thought really, really plants
its roots and takes off.
It's like compounding returns, you know, it's like, so the number of people who were
environmentally aware in 1971 was probably pretty small, but it was, it was a, it was
a meme that the world was ready to hear, and it had a lot of committed people from the very beginning.
And that meme spread and it resonated.
And more work was done.
And that spread.
And it was like an investment that compounds at 20% per annum.
Like, wow, 20% per annum: 10 years in, you know, you think you're in fat city, but
holy cow, 50 years in, it's ginormous. And so I think a lot of it is the fact
that that compounding awareness has had, you know,
more years to grow exponentially.
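A quick sketch of what that compounding analogy looks like numerically (the 20% rate and the time spans are just the figures used in the analogy, not data):

```python
# Compound growth at 20% per annum (illustrative numbers from the analogy).
def compound(principal: float, rate: float, years: int) -> float:
    """Value of `principal` growing at `rate` per annum for `years` years."""
    return principal * (1 + rate) ** years

print(f"{compound(1.0, 0.20, 10):.1f}x")   # ~6.2x after 10 years: "fat city"
print(f"{compound(1.0, 0.20, 50):,.0f}x")  # ~9,100x after 50 years: "ginormous"
```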
And then the other thing is you get to a certain point
in any of these fields and you start developing
very significant industries and economic interests around them.
And so now there is a very, very large number of people
who are making their living off of
protecting against climate change,
whether they're making electric cars,
whether they're academics who specialize in climate models,
whether they're politicians who fired up their base
and got elected in part on that message,
there is a very, very large interest group that believes,
and I'm not saying it's cynical, but it believes in this,
and it is full-time committed to making this stuff work,
and the number of people who are currently full-time committed
to preventing existential risk is probably a minuscule handful
of academics, but that is a hell of a lot
more brain power, persuasion power, intellectual output, et cetera, than we had 15 years ago.
So it's just starting.
You think that's first-mover advantage for climate change? In that case, in 20 years' time,
we're going to have the Church of Nick Bostrom, and everyone's going to be praying to the
control problem and looking at nanotechnology and grey goo.
One hopes.
I don't know if Nick or I want there to be a church dedicated to him.
We can talk about that later in the conversation, if you wish.
But, yeah, I think spreading and compounding awareness of this is the only thing that will
protect us because the policymakers will be the last ones to the party.
But if this becomes something that a million people are really
interested and aware of and informed about, 10 million,
100 million, et cetera, and compounding out
and eventually seeping into government,
that's ultimately the only way we can dodge these bullets.
You mentioned that we'd had a couple of close calls over the last 50 years.
Can you take us through a few of those stories?
Yeah, I mean, I think, you know, starting with, well, first of all,
the BFC thing, that's a new one.
And if that redditor was correct in what they said,
and I'd love it if you could send me that because I want to dig into that,
it is intriguing and frightening.
There's one right there. I mean, the ones that are most chilling to me, because I grew up during the Cold War,
are the nuclear ones. And, you know, there are a couple of particularly famous incidents,
one during the Cuban Missile Crisis in which there were nuclear-armed subs, Russian subs, patrolling the area outside of Cuba
as the American blockade settled in.
And one of the American boats was sending,
it's funny, they knew the subs were down there,
they didn't want to escalate.
So instead of sending depth charges,
they were sending something like practice depth charges or something weird like that,
but they were dropping these depth charges and really menacing
these submarines.
And there were a lot of American boats; like half the fleet was in a pretty compact area at
this point.
And I think there were four Russian subs in the submarine fleet that were down there.
And these depth charges start coming down. And on one of the subs,
there was a decision to nuke the American fleet
because they had these nuclear torpedoes.
And if they sent them up,
it basically would have wiped out the American fleet.
And in most submarines,
three of the four I believe,
it required two people to say, yes, let's do it in order for it to be done.
You're underwater, you've got depth charges coming down, you're not on the phone to Khrushchev.
These are people who are fully empowered to start a nuclear war.
The stopgap measure to prevent that was not one person, but two people have to say we're
going to do it.
On this one particular sub where for whatever reason, and I wish I knew the details better,
the decision was being taken, there was a third man.
And I'm saying man because I'm sure it was all men in the Russian submarine crew in the
1960s.
There was a third man who got a vote because he was like the party cadre or something.
He was like the head of the Communist Party delegation, as opposed to merely the military
leadership of the submarine fleet, and he said no.
And had that third person not been on the submarine or had that third person said yes,
the next step would have been for the Russian submarines to fire a tactical nuke and basically
eradicate that part of the American fleet.
Now a nuclear weapon would have been used during the Cuban Missile Crisis.
And that, you know, it's very, it's almost difficult to imagine how that would not have
escalated to Doomsday.
So holy cow.
And then the other famous incident was the guy I've seen a movie about him, I've heard
interviews with him, so I should remember his name.
Do you remember his name, the guy on the Russian side
who saw... yeah, so another story. This dates to,
I wanna say, the 80s, probably early 80s:
there was basically the Russian equivalent of NORAD.
So basically the operation where they detected incoming American bombers and missiles and so forth, and they saw American missiles coming over, and they were like, oh my god, Armageddon has started.
There's only one missile or two or something weird like that. And so it basically went up to the person who was in charge of that
facility whose name was escaping me. And his instinct told him, this is not the start of
Doomsday; they wouldn't just send one missile. And then more missiles started showing up,
but it was not an all-out attack. And this guy was in contact with his superiors in Moscow,
and they said, launch, launch everything, and he said no.
His instinct was screaming at him that something's gone wrong with our systems.
They're not gonna send two or three or a handful of missiles over here and
trigger an all-out response; if they were taking a first strike,
they'd be sending everything they had. So he stood his ground and did not launch. And had there been somebody different,
almost probably anybody in that role,
because your job was to say yes to Moscow.
Your job wasn't to think.
So that was an unbelievably close call.
There was another less close call,
which I think a little bit more is made of than should be,
where at NORAD on the American side,
a first strike, a pretty full-blooded first strike, was detected, and
it turned out to be a test sequence. And there was actually a journalist in NORAD when that
happened. So there are, there are a lot of these things that we got through by the skin
of our teeth. And these things could still happen. I mean, there's still an
inordinate amount of ordnance,
sorry for the pun, on the Russian and American side
and increasingly on the Chinese side.
And one hopes that our safeguards and our software
and our protocols have improved since the 80s.
But I don't know that they have.
And that risk still sits out there.
Now I'm focusing on nuclear because those are,
I think those really are the bullets that we've dodged so far.
We're just getting into the area in which
synbio could take us out.
And particularly with the rogue actors,
the scenario that is, you know, the one I worry about particularly,
although bioterror or bio-error, both of them
are very, very, very dangerous.
But we're just getting to the point where synthetic biology could take us out. And super AI is not there yet.
And I feel like that's at least a handful of decades out. And nano is definitely not
there yet. But we are going to have an increasing number of these risks facing us. And the real danger is the proliferation of the ability
to hit, we'll just call it euphemistically, the flashing red button, a hypothetical,
probably non-existent flashing red button that we imagine, you know, Mr. Biden and Mr.
Putin and Mr. Xi have at all times available to them to destroy the world.
When you look at the Cold War, let's think of this.
We spent trillions of dollars preventing two people, obviously over simplifying, but preventing
two people from hitting that flashing red button.
What did we spend it on? We spent it on all those detection systems, but we also spent it
on enormous conventional armies to deter small acts that could snowball
into large, large conflagrations.
We spent money on regional wars to prod each other and test each other and to show resolve
and hold each other at bay, the diplomatic apparatus; all these things were in place to stop
two people from hitting that button.
And those were two people who were highly inclined
not to hit that button.
And obviously it was more than two
because we just talked about scenarios
in which people down in the chain of command
had that power.
But we spent a lot of money making sure
that a very small handful of people
didn't hit that button.
And so far we've succeeded, though we came terrifyingly close; but so far we've
succeeded. The danger with things like synbio and super AI is that that decision not to do something
unbelievably dangerous, or even something deliberately destructive, is suddenly going to be in the hands of
thousands of people, perhaps. In the case of synbio, I believe it will be thousands of people.
And probably pretty soon.
In the case of super AI,
it's probably gonna be a smaller group of people
who don't take the right precautions
about not letting the genie out of the bottle.
But with synbio, focus on that for a moment.
The tools and the methodologies
are improving so rapidly that the things that
only the most brilliant academic synthetic biologists, at the pinnacle of, you know, laboratory
budget, equipment, know-how, et cetera, things that would elude that person today, be impossible
for that person today, will be child's play in a high school biolab in quite a bit less than 20 years, because this is an exponential technology.
And that's the frightening thing. And all the wisdom and complexity and the Nobel-worthy work that is done by prior generations will start getting embodied in simpler and simpler and more and more common
and cheaper and cheaper tools.
And so all of a sudden, all that wisdom and genius that eludes most of us is embodied,
like, you know, we're talking on laptops.
How much Nobel-worthy work was done to create, you know, the computers that we're using
to speak to one another, over many, many generations?
You and I are smugly sitting here with computing power that the most brilliant computer engineer,
electrical engineer, could only dream of 25 years ago.
And it's all embodied in this simple tool. And as that goes into synbio tools in wet labs
at lower and lower levels of academia, and in a higher and higher number of lower and lower budget companies,
we are relying on an impossible number of people not to screw up
or not to do something deliberately evil.
And if one says, well, why would somebody ever do something deliberately evil
with synbio after what we've just been through, the answer is, I don't
know. I don't know what motivated the Columbine kids. I don't know what motivates the, you know,
more than one mass shooter per day that strikes at the United States. I don't know what motivates
those people, but they're motivated and they're killing everybody they can. They just don't
have tools to wipe us all out. So anyway, that was probably a very long-winded answer, but we are going to have to worry a great deal about the ability to
do something catastrophic, being in a lot of hands rather than just two that we can watch
very closely.
Is that what you call democratizing the apocalypse?
Yes, that is exactly what I call democratizing the apocalypse, or privatizing the apocalypse and democratizing
it. And privatizing just drives home the fact that saving the world from destruction or
destroying the world is no longer a public good. It's in private hands. And so it's a slightly playful and slightly perverse way of putting it.
But that game of chicken that the superpowers played with each other during the Cold War was, quote, unquote, a public good.
For all of the terror that anybody who grew up under a nuclear threat, and by the way, that's everybody alive right
now because that threat is still very present. We just don't feel it the way we did during
the Cold War. The terror that anybody felt growing up with a nuclear threat, and it was
billions of people who were to some degree, I'm sure, traumatized by that, is nothing compared
to the horror that would have been inflicted by more and more conventional wars.
So conventional wars between the superpowers, you know, more Vietnams, more Korean wars,
etc., eventually, you know, followed by an enormous World War III smackdown in Europe.
So imagine nuclear weapons were impossible.
It's probably likely, it's almost, I mean, it's highly likely in my mind, looking at the rhythm of geopolitics,
stretching from let's say 1840 to 1940,
if Nukes had never been developed,
it's hard to imagine we wouldn't have continued
to butcher ourselves in our tens of millions.
That's a really good point.
Imagine just how much massacre there would have been
if we didn't have this capacity to do wide-scale
preventative destruction.
Yeah, so I am sure that there would have been
an all-out war between the Soviet block
and the Western block in Europe,
probably in the 50s, probably in the 60s,
without that threat.
And there probably would have been
far more Vietnams
and Koreas, and stuff we can't even imagine.
And so that game of chicken was, quote unquote,
a public good, and it was owned and operated
by governments.
And that most terrifying of decisions was concentrated
in a tiny number of hands, and humanity did shoot those rapids. And
we probably would have had a far, far, far more gory and traumatizing
second half to the 20th century, and until this day;
I bet we'd still be clobbering each other with conventional weapons, and now they're getting to the point that they're terrifying, with automated weapons and so on.
So, you know, that was a public good. And all of a sudden,
when that is in private hands, things change in frightening ways. Let's pivot over
to super AI risk. The parties that will be in the position at some point in the future, if super AI
is indeed a possibility, which I personally absolutely think it is, the
parties who will be in the position to, step one, create the genie in the bottle, and step two,
screw up and let the genie out of the bottle.
Or let's just go with the creation step; maybe they don't know that they're inches
away from creating the genie.
Those people are almost certain to be
in some form of private company in my mind,
at least in the United States.
And China, we know less of what they're doing there.
And there are probably government labs
that are recruiting extraordinary talent
and proceeding headlong down paths
that we can't necessarily see into.
But today, the greatest talent in computing is not working for the United States government
or any government.
It's working for DeepMind, it's working for Google, it's working for startups that we're
not aware of.
And that means that the person or people who are in a position to say, ooh, that's really
risky, huh?
But that's kind of tempting.
They have huge economic incentives
to take what they might perceive to be a tiny risk,
and probably get away with a tiny risk.
And as a result of that, become gazillionaires.
That economic incentive did not exist for anybody who felt like they were
chancing it with nuclear armageddon.
We don't have to worry about Putin saying, ooh, it's kind of risky if I take this insane
step, like invading the rest of Ukraine from Eastern Ukraine, it could lead to nuclear
war, but I get an IPO and get rich if it doesn't.
You know, that incentive isn't there if it's a public good.
And so what I worry about is lots and lots of private actors taking what might be, like,
oh, that's a sliver of 1% risk that the world ends, but that's not going to happen.
Or it probably won't happen, and if it doesn't happen, and let's face it, it probably won't,
holy cow, here comes glory.
And suddenly, there's much, much more incentive for lots and lots of people to take tiny,
tiny risks that could kill us all.
And that's why the privatization really, really worries me.
It's because you've got privatized gains, but socialized losses.
Socialized losses, exactly.
Privatized gains; and that's what our economic crisis was, right?
The financial crisis: for years,
people in various positions on Wall Street,
on the buy side, on the sell side,
on the fund side, all kinds of things,
were taking odious risks with the world economy
and getting great returns, because, you know, higher risk leads to
higher returns in finance. And so they were inhaling money for themselves and putting it into
super yachts and so forth, and then when everything fell apart, the bill came due to us, all of us. The
financial crisis, that bailout, was a cost that was borne by taxpayers throughout the world.
And people will take tiny risks on their own account all the time.
I mean, you know, here's one thing about it:
we all take a tiny risk on our own account whenever we ride in a car.
You know, it's like, I really am hungry and I want to go and buy some popcorn.
And they've got that great microwave popcorn down at the Safeway.
I'm famished. I love popcorn. I'm going to go get it. You're not thinking, I am putting my life on the line for frickin' popcorn, but you are.
It's a tiny risk, and you take it. If you dial that up, there are people who get involved in extreme sports, who take very, very significant risks in order to prove to themselves that
they're great, in order to get public accolades; you know, in some extreme sports,
there are probably nice, tidy purses that can be made, you know, nothing like, you know,
in professional basketball or football, but, you know, people on their own account will
take tiny risks.
And particularly, I think, when you have somebody who doesn't have deep family ties, doesn't have children, you know, who is, you know, earlier in life, or
is more solitary in life, or whatever it is; when they're facing that risk of, like,
I could annihilate the world by mistake or make zillions of dollars, and the risk of annihilating
the world is minuscule, their psychology, because again, we weren't trained on the
savannah to think if I screw up all humans die,
their psychology is probably thinking very much in terms
of like, I'm taking a tiny risk here.
They're probably thinking about their own risk of annihilation
that probably is at least half of the calculus in their mind
because we're all individuals and we don't want to die.
They say, that's minuscule.
They might take the risk a daredevil
or an extreme sports person would take. Like, okay, I'm kind of putting it on the line here, but I think I can shoot these
rapids. And when lots and lots of people are in that position,
you start arithmetically adding up all those risks.
And at some point, it becomes untenable.
So that's why the privatization is really, really dangerous.
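The arithmetic behind "adding up all those risks" is worth making concrete. A minimal sketch, with a per-actor probability that is entirely hypothetical: if each of n independent actors carries a tiny chance of triggering catastrophe, the combined chance is 1 - (1 - p)^n.

```python
# If each of n independent actors runs a tiny catastrophe probability p,
# the chance that at least one of them triggers it is 1 - (1 - p)^n.
# The per-actor probability here is invented purely for illustration.

def p_catastrophe(p_per_actor: float, n_actors: int) -> float:
    """Probability that at least one of n independent actors causes disaster."""
    return 1 - (1 - p_per_actor) ** n_actors

for n in (2, 1_000, 100_000):
    print(f"{n:>7} actors: {p_catastrophe(0.0001, n):.2%}")
# 2 actors (the Cold War picture):  ~0.02%
# 1,000 actors:                     ~9.5%
# 100,000 actors:                   ~99.995%
```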
Because it democratizes the technology
to the stage where you include so many potential agents
that one of them or multiple of them
are going to be outside of whatever Overton window
of safety that we have,
and they're going to go ahead and decide.
And this is just talking about mistakes.
Mistakes, yeah, this isn't malignance.
Yeah. This is negligence, not malignance. And I do worry
more about malignance, particularly in synbio, because if we think about COVID,
let's think about COVID, we've all been thinking about COVID for a while. We're very
practiced in thinking about COVID. If we think about COVID, it is remarkable on a number of levels how benign this horrific
thing is.
It is not very lethal compared to a lot of things out there.
It's not lethal at all compared to SARS.
SARS is, depending on what numbers we run, 10 to 20 times more deadly.
It's also a coronavirus.
MERS, Middle East respiratory syndrome, kills at a rate of
about a 30% case fatality rate. H5N1 flu, which as you know, I'm quite grimly fascinated by, kills
about 60%, 60% of people. And with COVID, case fatality rate, according to the World Health Organization,
somewhere between half a percent and 1 percent. So, COVID could have been
far, far worse, merely on a lethality basis, and also on a transmissibility basis: how contagious it is in shared spaces.
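To put the quoted fatality rates side by side: the sketch below uses the rough figures as spoken in this conversation, not epidemiological data, and the infection count is an arbitrary round number.

```python
# Rough case fatality rates as quoted in this conversation (illustrative only).
cfr = {
    "COVID-19 (WHO range, low end)": 0.005,  # "half a percent and 1 percent"
    "SARS":                          0.10,   # "10 to 20 times more deadly"
    "MERS":                          0.30,   # "about a 30% case fatality rate"
    "H5N1 flu":                      0.60,   # "kills about 60% of people"
}

infections = 100_000_000  # a hypothetical round number of infections
for disease, rate in cfr.items():
    print(f"{disease}: {int(infections * rate):,} deaths")
```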
If somebody were malignant, if somebody were malicious and deliberately developing something
to be maximally destructive, it would be worse than COVID.
And, you know, there's an imaginable near future
where somebody is sophisticated enough,
or the tools that they're using
are sophisticated enough, to allow them to basically dial that up.
They're not gonna unleash something
that kills half a percent.
They're gonna release something that is so much more deadly and so much
more dangerous that it could have civilization-toppling potential.
Here's something that I've just thought of.
Have you considered the potential that the lab-leak hypothesis, or some variant of it, for COVID-19
could be true?
Yeah, and it's entirely plausible; it's undeniable that it's plausible at this point. But say the reason that it was released was some big-picture-thinking, fair-weather-saint
human who said: Bill Gates told you at the end of his TED talk, and we've been warning
you for years, about the dangers of engineered pandemics and natural pandemics as well.
So what I'm going to do is I'm going to give you
a moderately transmissible but not very lethal pandemic,
which is going to act like a global vaccine.
It's going to cause you to have a very benign dose,
a coordination-problem dose, of how to deal
with this sort of a pandemic,
and maybe this will make people wake up.
Have you thought about that?
Well, have you somehow...
have you hacked into my computer
and read the outline of a novel that I'm working on?
Oh, dear.
Because I've actually,
that's a story that I've fleshed out a great deal.
So I'm familiar with it.
Familiar with the science.
Yeah, I'm familiar with that scenario,
and it's a very, very interesting one.
And I don't think that that was COVID, and here's why.
I think if somebody wanted to unleash an engineered pandemic to freak everybody out and realize
how dangerous it was, they would make sure that the world knew it was engineered.
Because right now, the prevailing wisdom in science and policy-making circles is that this wasn't engineered.
And therefore, the response has been more about zoonotic.
Like how do we prevent more zoonotic transmission?
So if somebody wanted to create a mild engineered pandemic
and freak out the world,
they would absolutely make sure
that the world knew that this was engineered. So I don't think that's what happened. That doesn't rule out the lab-leak hypothesis.
They would have put, like,
'sort your shit out, world' into the RNA.
Yeah.
Something like that.
Where they would have released some message online or whatever it was.
Yeah.
And, you know, tried to do a pinprick assault. And of course,
the danger with a pinprick attack of some kind is that the thing mutates and gets out of control and annihilates us
anyway. Not that that's necessarily going to be a plot twist in the book that I may or may not
write. But we've ruined it now. All right. So if COVID was one of the more benign,
what was the most dangerous or lethal virus in history that you've come across?
Oh, I mean, you know, probably one of the influenza pandemics, probably 1918.
I'm not saying COVID is necessarily benign; I mean, the 1918 flu killed so many more people, so many more people, in a much smaller world population, that proportionately it's so much worse
than COVID. And we don't really know if that's because
we have better tools and better detection
and better public health practices today
than they had in 1918.
That's possible, or if 1918 was simply much more virulent.
My guess is it's a little bit of both
because it's not like we really implemented
amazing best practices in public health in most of the world.
You know, Australia, New Zealand did far better than most of us, but we kind of botched a lot of things.
So I'd say 1918 is probably worse, but I'm thinking more in terms of, you know,
again, SARS. If you had SARS-level lethality and COVID-level transmissibility, and there's no reason
that nature, when it spins the wheel, won't come
up with that.
Well, a nice big incubation period as well, probably where
you're still able to transmit. Yeah. So maybe seven days, 14
days without showing symptoms. A
longer asymptomatic period could be catastrophic. You know,
measles: think about transmissibility, measles is unbelievable.
And you've probably heard me say this before
in another conversation, but basically the comparison
is, first of all, I think we all know at this point
that if we got into an elevator that somebody with COVID
occupied 10 minutes before...
And that elevator's going up and down. The doors open and shut people come in and out.
We go into the elevator, even five minutes after somebody with COVID was in there.
And we're not wearing a mask.
The odds of us catching COVID in that elevator from that person who was there
five minutes ago are essentially zero.
That's the understanding of science today.
And there's no reason to doubt that. I mean, we learn more and more about COVID every day, but that's pretty solid. If you were
unvaccinated for measles, or you didn't have immunity, and somebody had been in an elevator hours
before, you could catch it. That's my understanding. I've read that in a couple of places,
and I'll just assume that Snopes wouldn't allow that to be on the internet if it weren't
true.
But measles is radically transmissible.
So the three dials to me are basically how long is a person asymptomatic, how transmissible
is it, how deadly is it.
And so there are a lot of dials that nature has actually hit without engineering: the high
transmissibility of measles, the towering lethality of H5N1 flu, the unbelievable, you know...
I don't know what disease has a super long incubation period. Oh, tuberculosis.
With tuberculosis, it usually is... now, this isn't necessarily incubation, but the time
from onset to detection is usually about two years. And this is a disease, mind you, that we have been fighting forever, right?
It's about two years between when somebody starts getting pulmonary respiratory tuberculosis
and when they get definitively diagnosed and start going on drugs.
So yeah, long incubation periods are out there too.
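Those three dials can be jotted down as a simple data structure. A sketch only; the example values are the extremes mentioned in this conversation, not a real risk model.

```python
# The "three dials" from above, as a data structure (illustrative, not a model).
from dataclasses import dataclass

@dataclass
class Pathogen:
    name: str
    asymptomatic_period: str  # how long carriers spread it without symptoms
    transmissibility: str     # how easily it spreads
    lethality: str            # how deadly it is

# Nature has already maxed out each dial somewhere, just never all three at once:
measles = Pathogen("measles", "days", "extreme: lingers in a room for hours", "low")
h5n1 = Pathogen("H5N1 flu", "n/a", "near zero between humans", "~60% case fatality")
tb = Pathogen("tuberculosis", "~2 years onset to detection", "moderate", "varies")
```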
That's dangerous, man.
And then when you think that someone can play with that... I mean, there's even an iPhone game, right, where you kind of design your own pandemic? It's been
around for ages, and you should see how popular it is. So maybe it's not just me and you that are obsessed with
this. Talking about some of the leaks from labs that we've had, this can go back to
negligence now. We've had some fairly close calls with that. We had anthrax just after 9-11 end up
in the Senate Majority Leader's office
or something like that in an envelope.
Yeah, an office that I had the great fortune
to pass through on the very day
that the envelope arrived, just by sheer chance.
You're kidding me.
Yeah, I was in DC.
I had business at Tom Daschle's office.
So yeah, I was at Tom Daschle's, the Senate Majority Leader.
This is a single-digit number of days after 9-11.
So let's talk about lab leaks, and let's talk about what happens when you've got a malicious insider
who is leaking things, and how hard that is to prevent.
For those who don't remember, this is now getting to be a long time ago:
immediately after 9-11, some envelopes containing anthrax were delivered
to a small number of places, including the office of Tom
Daschle, who was at the time the Senate Majority Leader.
Patrick Leahy, I think, another senior senator,
also a Democrat.
Certain media outlets, including the National Enquirer,
just a ghastly publication.
So it was weird.
Immediately the pattern was strange.
Why would Al Qaeda (and an attempt,
we'll talk about the attempt in a second,
was made to make this look like Al Qaeda),
why would Al Qaeda have a beef
with the National Enquirer?
You know, a lot of people
were annoyed with the National Enquirer, but they were generally
annoyed with it because it was kind of gutter celebrity journalism with lots of paparazzi,
and it seemed to violate the privacy of lots of people and seemed to dumb down the readers.
And there was a lot of sort of superior sniffing on the part of people who read the New York
Times about those who read the National Enquirer, or blah, blah, blah.
I doubt that that was much of a preoccupation of Osama bin Laden. But there was weird scrawling on the letters, something that
immediately told me that somebody was faking it. It wrote,
'Allah is great.' You know, we're doing this for Allah. And
Arabic speakers who are learning English, even ones with very, very, very
remedial English, very early in their path of learning English,
will quickly learn the word for Allah.
I studied Arabic for years and years and lived in the Middle East for a long time. For Allah, you say God.
So, 'God is great.' Arabic speakers who are learning English
and know enough English to write ransom notes
or threatening things like that
will have learned, probably on their first day of instruction,
that the word for Allah in English is God.
They don't write 'Allah'; it's just not done.
It was just a remedial mistake of a non-Arabic speaker
trying to pretend that they were an Arabic speaker writing English. It was so flamingly obvious.
So anyway, these envelopes go out, and
it eventually becomes clear that the anthrax spores, which were very deadly,
were milled in a way that the spores
got into suspension in the air,
so respiratory transmission was far more possible.
Natural anthrax is not like that.
It's hard for a dust of anthrax
to get into suspension in the air
in a way that could kill a lot of people.
It takes very careful military-grade processing,
or, you know, very sophisticated academics, to do that. So it turned out these anthrax
spores came from a US Army lab, almost certainly at Fort Detrick, Maryland, but it might
have been another Army lab. So let's think about that for a moment. You have the United States immediately after 9-11,
probably one of the most security-minded and security-capable nation states in the history
of geopolitics.
You have the United States military that has a lot of jobs, but among its jobs is to make
sure the anthrax spores in its own laboratories don't kill anybody, right?
And so, you know, we can say all we want about how inefficient the United States military
is at certain times and, you know, make jokes about the bureaucracy and so forth, but
it's a pretty security-capable organization compared to most places.
And you have this nation and military on absolute high alert, yet it can't keep the anthrax
spores in its own labs from getting into the office of the Senate Majority Leader,
who was one of the five most powerful and most important people in the United States
government. That was one malicious party who had access to that lab. And it's generally considered that we know who it was; the person committed suicide before they could be indicted and go on trial and so forth.
So it was never proven in a court of law, but it seems pretty clear who it was. It was a disgruntled senior person who had no
problem getting that stuff out of that lab and getting it to the Senate Majority Leader's office using the United States Postal Service.
So if that can leak, when you have a malicious actor, from the United States military, after
9-11, on high alert everywhere, to somebody who I believe is in the line to succeed the president,
you know, the Speaker of the House is; I'm not sure about the Majority Leader.
That tells us that any lab can leak, right?
You know, if we can't keep that from leaking out of a U.S. military lab after 9-11, what
chance do we have of keeping it from leaking out of a, you know, mid-grade state
university biolab if something equally lethal is there? So that's a
malicious leak. And what's dangerous about a malicious leak is that almost all the
biosafety practices that are out there that govern biolabs are designed to prevent accidents.
So there's four levels, biosafety level one, two, three, four. And as you go up the chain,
the precautions get greater and greater. To the point that in biosafety level 4 labs, the highest biosafety level that we have, a lot of money and training goes into those precautions, and there's negative pressure
suits and all kinds of things that look good on television that go into biosafety level 4.
I've heard biosafety level 2 is something on the level of a dentist's office. Not sure
if that's a precise analogy or if that's accurate or not, but you go biosafety
level one, two, three, four, and what we need to realize is that there have been leaks out
of biosafety level 4 labs.
So, accidental leaks.
Probably the one that most attention is drawn to is foot-and-mouth, sometimes called hoof-and-mouth disease.
There had been a devastating outbreak in the United Kingdom that
wiped out a very, very high percentage
of the cattle herd
throughout the UK; herds had to be culled, billions in damages.
So, much like the United States was
against any threat after 9-11,
one hopes that the life sciences infrastructure in the United
Kingdom was on very high alert when it comes to foot-and-mouth disease in the immediate
wake of this outbreak.
And not long after this outbreak, foot-and-mouth escaped from a BSL-4 lab in the
UK.
Like they got out.
Now they didn't start another, they were contained.
They didn't start another outbreak.
But that tells us BSL-4 labs can leak by accident.
So certainly BSL-1, -2, and -3 labs can leak.
And with a malicious actor,
I don't think there's really any protection against that.
And so again, like when we think about,
do we want to do experiments, for instance,
that could result in a civilization-toppling pathogen, you
know, let's say with the transmissibility of COVID and the deadliness of H5N1 flu; that
would probably topple civilization.
We can get into why in a second.
But do we want those spores or those viruses to exist anywhere, even if it's people with
the brightest of white hats on doing it for the best possible reasons,
even government people, so it's not been privatized.
Do we want those spores or those viruses to exist anywhere, at all, on planet Earth?
The answer has to be no, because if a malicious person ends up in that lab, decides to go
Columbine with that bio weapon, or de facto bio weapon, or if the lab has a boo-boo,
as labs have all the time, we're done.
So anyway, a little bit of a riff,
but yeah, all labs leak,
all labs at every biosafety level can absolutely leak.
And particularly if we get malicious actors in there,
then any lab can leak.
So do we just put a glass ceiling on how lethal
or how deadly a pathogen every person across the entire globe,
every synbio researcher, is able to get to?
Wouldn't it be nice if that were in any way possible; unfortunately, it's not.
So what you can do, you can do some pretty simple and basic things that significantly reduce
risk.
So I'll start with gain-of-function research.
Gain-of-function research is a branch of research.
It can mean a diversity of things,
and nitpickers will say, well, it's gain-of-function
when you create any GMO crop.
It's gain-of-function when you hone in antibiotic
to be more effective.
But generally speaking, when people in the field
use the term gain function,
they are referring to a narrow branch of research,
which is one in which generally viruses,
but let's say any deadly microorganism
is tweaked in a way to make it much more dangerous and deadly.
And I'll use an example that is a very powerful example,
and in many ways the most significant one that we have,
unless we find out something about coronavirus
being gain-of-function research,
which we'll get to in a second, I'm sure.
But in 2011, two independent teams,
one in Holland, one in Wisconsin,
did gain-of-function research on H5N1 flu,
which is the one that I mentioned before
that kills roughly 60% of the people
it infects.
And it's rational to hate and fear H5N1 flu, but if you look at that little critter, there's
one thing that we can all agree is kind of adorable about it, which is that it's almost zero
transmissibility. You need to really be in a very, very specific set of circumstances that are unbelievably
rare for any human being in order to catch it.
And there was a World Health Organization survey that was done, it was a few years back,
that documented each and every case of H5N1 flu that led to death, and over the course
of the decade, it was roughly 500 cases worldwide.
And it's transmitted rarely,
but very dangerously when it is,
generally from poultry to people who work in poultry farming.
And it doesn't transmit from one person to another.
But if it did, oh my God, 60% fatality rate.
So what these researchers did is to oversimplify things, but to cut to the point,
they made it transmissible through the air.
Now they were working with ferrets, not human subjects,
but ferrets are a good model for virus transmissibility in humans, which is why they're used in
virus research. And so they both, both of these teams created an H5N1
that could be transmissible through the air.
And why? Because science, you know, information
wants to be discovered. And
when these people came under criticism, and boy, did they come under criticism,
there was a great deal of smug fist-waving, like,
how dare you prevent science
from doing science?
And the answer is, because if this shit got out, we could all be dead.
And the reason to do gain-of-function research, if you talk to the virologists who do it,
and there are quite a few of them, not just these two teams, is like, well, we're getting
ahead of the game here.
You know, we're figuring out the worst thing that can happen so we can do ingenious things to
prevent it.
To which the response is: there have been tens of thousands of generations
of Homo sapiens, and influenza has probably been with us the entire time, and not once
has H5N1 become airborne-transmissible.
So you're not exactly creating something that is
inevitably going to be dealt from nature's deck. In fact, you're creating something that probably never
will be dealt from nature's deck. And, by the way, both of these projects were
done in biosafety level 3 labs, so not even BSL-4. So response number one is: you're creating
stuff that probably would never exist otherwise,
and you're putting it into a leaky vessel, which is a biosafety-level-anything lab, because science.
Okay? You don't get to do that. You don't get to take that
chance with yourself, you can get drunk and go skiing. And that's probably okay-ish. But if you get drunk and go driving, we're
going to lock your ass up. And you can't say, but driving! How can you interfere with my
ability to drive the open road, the American dream? Like, no, that's not cool. So that's
a really good reason not to do it. And the other thing is, if H5N1 ever does become transmissible, God forbid, naturally, there are countless
metabolic paths that it could take to get there.
There are countless peculiar sets of mutations that could happen.
It's not like there's only one possible genetic code that would create transmissible H5N1.
There are probably, you know, countless different paths that can
be taken. So it's not like this critter that these two research labs created was something
that we could then immediately go and create a vaccine for and say, we're done with H5N1.
No, the peculiar version of H5N1 that might arise naturally, we can't anticipate that. And so this research is unbelievably dangerous. To the extent that it's useful,
and I'm not going to say it's completely useless, its usefulness is unbelievably
limited, and the danger of a leak is profound.
So after this work was done, both of these labs had papers teed up, one to go into Science
and one to go into Nature,
which are the absolute pinnacle of research science. To get a paper in either: those are the two
publications out of thousands that one wants to be in. So, oh my god, they're going to get the superstar
treatment. And then the American government in particular flexed its muscle, and I believe
others did as well. I think maybe they did something in the UK and Holland as well.
But basically, Science and Nature were told: you're not publishing that. And a great deal
of, you know, concerned thought ensued. And there were very, very strident warnings
issued by, you know, different sort of consultative bodies that
think on behalf of the US government,
but are people outside of the US government, that look at biorisk.
They really sounded the alarm. And eventually, funding for gain-of-function
research was paused.
Now that word paused, I didn't say stopped.
So for a period of several years,
in the wake of all this, in the wake of a couple of bio
errors that the United States government made, I won't get into them, but there were like
a few blunders that happened that were like kind of, oh my god, how did that happen?
Those came to light after this research, and then after these sort of widely publicized
blunders, the pause lasted a handful of years.
That's it.
And the pause was in the United States government's funding of gain-of-function research.
There was never a ban on it.
A private actor could do it if they wanted to.
Other governments could fund it if they wanted to.
It was just a pause.
Both of these projects, the one in Holland and the one in Wisconsin, were getting NIH funding, National Institutes
of Health funding.
Both were getting funded by the US government.
So the government said, okay, we're not going to fund this for a few years. But eventually
those papers were released in Science and Nature, and eventually that pause was lifted,
so the US government is now funding gain-of-function research.
And those two projects got their funding switched back on.
Those two very projects got the funding switched back on, I think, something like 15 months ago,
maybe a little bit longer.
So the world that said, oh my god, this is scary, let's hit the pause button, has since said, it's cool,
and hit the play button. And so gain-of-function research is happening, and
it's not irrational at all to think that gain-of-function research is happening with corona-
viruses. In places where coronaviruses are researched, no entity, no national government,
no, you know, major body of scientists is saying, thou shalt not do gain-of-function research.
It's considered to be fine right now. It's considered to be completely fine.
Given the potential dangers, the expected value of this, benefit versus cost...
I don't see how any scientist that's able to do the complex level of
synbio that you need to, to be able to sequence these genomes and
mess around with the capability of microorganisms... if an idiot like me
can understand existential risk,
why can't geniuses like them understand
the risk in what they're doing?
Again, I think you've got a semi-privatization issue,
even though these people are getting public funding.
So let's try to project ourselves into the brain
of the scientist in Wisconsin,
who decided to go ahead with this research.
In his mind, I'm not saying he's a bad guy.
I am sure his motivations are pure.
I'm sure that he thinks what he's doing matters,
and is helpful, et cetera, et cetera.
But he's living in the bubble of his own life and his own career.
And he has the overconfidence that any expert has in their own expertise.
I am an expert driver, right? I've driven tens of thousands
of miles. I have unbelievable confidence in my driving ability, and it's probably misplaced
to some degree. He is an expert wet lab dude. He's been running a laboratory that has his name
on it, that gets all kinds of funding from different bodies and gets grants from competitive
sources, and does excellent work, and has never had a leak
of any kind from his lab.
So in his mind, the risk of a leak is negligible.
He probably would not say nonexistent.
He's gonna say it's so low it's silly.
And also in his mind, he has got a utility function.
He has got things that he's trying to maximize
in his life as all human beings do, and he wants his career to move forward, and he wants
to publish in Science and Nature. And he wants to do all these wonderful things. And so
his own personal risk-reward curve is out of whack with the rest of humanity's. If
he does this gain-of-function research and gets
published in Science and Nature, and does more gain-of-function research and gets more celebrity
and so forth, his career is going to be much, much more fun. And maybe not much more
remunerative because he's an academic scientist, he's probably capped out. But it's the things
that motivate him, accolades, papers, that kind of thing,
are going to come in greater, greater, in greater cadence.
And so his utility curve says, yes, let's go down this path.
And he faces the same risk that you and I face if the world ends. He dies. Now, he'll have a lot of guilt maybe, so it's slightly worse for him than you or I. But basically, he says: tiny risk I die, very high chance my career gets more awesome, and I'm convinced I'm doing the good thing, and I'm convinced I'd never let anything leak. Just like Rob Reid has been driving for decades and never had an accident, I've been running a lab for decades and I've never had a leak, so I'm never, ever going to have a leak. So he's got the misplaced confidence of any expert, and he's got strong incentives to do things that incur a tiny little risk. But that tiny little risk doesn't merely apply to him, it applies to all of us.
And so the expected value curve that we all run in our own brains, whenever we do anything, is generally about our own interests. If I do this, there's a 10% chance I lose this much money, a 1% chance I win this much money, an 89% chance of whatever. We're generally thinking for ourselves. He's thinking for himself, but he's got all of us in his risk curve, and he's not calculating the expected value of what happens if this goes wrong.
Well, there's a bit of a privatization thing there.
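To make that asymmetry concrete, here is a minimal back-of-envelope sketch in Python. All of the numbers are invented purely for illustration; the point is only that the same tiny probability flips the sign of the decision depending on whose losses you count.

```python
# Toy expected-value calculation for the asymmetry described above.
# Every figure here is made up; only the shape of the math matters.

P_CATASTROPHE = 1e-4     # hypothetical chance the research causes a leak-driven catastrophe
CAREER_PAYOFF = 1.0      # researcher's upside (papers, funding) in arbitrary utility units
PERSONAL_LOSS = 50.0     # what the researcher personally loses if it all goes wrong
COLLECTIVE_LOSS = 1e9    # what humanity collectively loses in the same event

# The researcher's private calculation: clearly positive.
researcher_ev = (1 - P_CATASTROPHE) * CAREER_PAYOFF - P_CATASTROPHE * PERSONAL_LOSS

# The same bet priced with everyone's losses included: deeply negative.
humanity_ev = (1 - P_CATASTROPHE) * CAREER_PAYOFF - P_CATASTROPHE * COLLECTIVE_LOSS

print(f"Researcher's expected value: {researcher_ev:+.4f}")
print(f"Humanity's expected value:   {humanity_ev:+,.1f}")
```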
Even to go back to the first atomic bomb test: they did run the numbers.
They did.
And even with the numbers in front of them, there was a non-zero risk that the entire atmosphere could be set alight, permanently curtailing not only all human life but everything, literally obliterating the atmosphere and turning the Earth into a sun, briefly. And I think the number was, what, one in 14 million?
I believe that one of the scientists put it actually a little bit lower, I think more like a one in three million chance. But they didn't know. And that's the important thing.
Non-zero.
This was non-zero. And here again we get to the public-private distinction, and this is really significant. So at that point, everybody, you know, Enrico Fermi, Teller, all of these people, were at the very beginning of the process. So in 1941, as the Manhattan Project is just getting going, the first atmospheric test is still years away. I think it was either Fermi or Teller or Oppenheimer, one of them suddenly realized: oh my God, we don't know if we'd set the atmosphere on fire or not when we do the first explosion, which turned out to be four years later.
And so it was immediately determined that the odds of that were minuscule, and I think a lot of the scientists really said it was zero, right? But they were running numbers up until the day before the very first test, the Trinity test. They were running numbers right up until then. And they did the test and, lo, the atmosphere didn't ignite. Now, did they do something irresponsible? Well, let's think about it. They did take what they thought was an incredibly low but real chance of igniting the atmosphere. But particularly back in 1941, when they first confronted that danger and decided to proceed down the path anyway, at that point the most formidable nuclear brainpower was concentrated in Germany. And the only heavy water plant in the world at that point, I think, was in Norway, and Germany had conquered Norway. It's 1941. You're looking at a one-in-three-million risk that you might ignite the Earth, and you're looking at a much higher risk that Hitler is going to develop a nuclear weapon before you do. And we can all imagine what the world would have looked like if Hitler had developed a nuclear weapon first.
So that's a big, real possibility. This other thing is also a real possibility, but minuscule. And ultimately, the decision was made at the highest level of a flawed but functioning democracy, by people who had been empowered by 100 million voters, or whatever the number of voters was back in the 1940s, probably less than that. But nonetheless, it was a very, very careful decision made for the public good. It wasn't like Oppenheimer was going to be like: oh my god, one-in-three-million chance the atmosphere ignites, the remainder of the chance, IPO.
Yeah, he could get published in Science and Nature, yeah.
Yeah. Not going to publish in Science, not going to go public and make a billion dollars. He was facing the same risk as all of us, and he was thinking on behalf of the planet. They were thinking very, very, very carefully about that risk, and they ended up weighing Hitler with a nuke against this minuscule chance. And, you know, looking back on it, people could argue both sides. But that's the point: people could argue both sides. I'm really glad Hitler didn't get a nuclear weapon before his enemies did. Was that risk worth taking? You could certainly argue that it was, and I think that ultimately the Manhattan Project people said: teeny-tiny risk, we need to win this war, let's go.
And again, I think that could be debated eternally, but the fact that it could be debated tells us it was not a crazy thing to do. Whereas gain of function research? I might end up in Science? Like, you getting on the cover of Science magazine is not as awesome as beating Hitler. It just isn't. Beating Hitler is really, really good. You getting on the cover of Science doesn't matter to anybody on the frickin' planet but you. And taking a similar one-in-three-million risk, let's say it's a one-in-three-million risk, is obscene when you're not taking that risk to do something as important for humanity as defeating Hitler.
And we are going to be handing a lot of people those one-in-three-million roulette wheels to spin.
To me, bringing a virus into existence that doesn't currently exist, in an effort to inoculate us against the chance that it might come into existence... it's staggering, that.
Thank you.
Why aren't we talking about putting BSL3 plus labs on the moon?
Yeah, no, okay, so this actually started with you asking: can we put a glass ceiling on it? And I was starting out by saying, well, there's a really simple thing we could do, but we're not doing it, which is: no gain of function research, period, at all. The world should agree on that, just like the world agreed on nuclear non-proliferation and other treaties that over a hundred nations have signed.
That's how we got rid of chlorofluorocarbons: an international agreement that we're not going to use this stuff anymore, which more or less stuck, although there are signs that they're being used in China now on the sly. But whatever.
So we've done this before. It shouldn't be that hard for all the nations of the world to agree: no freaking gain of function research. And that probably stops it, because there's no motivation for a private company to do it. It really is academic. It's generally government funded. It's generally done in relatively transparent areas. It's generally done because people want to get published, or because the government has an agenda. It would be an easy thing to say: let's stop all of that. And now you have taken one source of risk out of the whole life sciences equation, a bio-error with gain of function. So that should be a freaking easy thing for us to agree on.
But let's start there. We're certainly not there yet. But if we do that, then there's a whole other set of risks out there. Like: okay, we're going to get better and better and better at designing bugs in silico, not just designing bugs in the lab because we weren't smart enough to not do gain of function. And we're going to continue to just publish the genomes of things like the 1918 flu and smallpox and have that information out there. We've got recipes for unbelievably lethal things. And instead of something coming to be in a wet lab and escaping, we're not that far from a time when a lot of people in academic and private company settings will be able to de facto hit a print button (this is oversimplifying what they would do) and get the genome of whatever arbitrary critter they want out, and then have the tools to boot that up into a functioning virus and have it start replicating. And so gain of function is one lid that we need to place, but we also need to really, really harden the entire synbio infrastructure to make it very, very difficult for people to print or obtain dangerous DNA.
And there are quite respectable early efforts, generally originating with private industry, to limit the ability of any random person to get dangerous DNA. But they are not widespread enough, and they do not have the force of law. These are self-regulatory steps that the biotech and life sciences industries have taken, and they're not necessarily envisioning the day when printable, highly distributed DNA and RNA synthesis capabilities become widespread. So that's another lid that we have to put on, and it's a much trickier lid than merely not doing something stupid and dangerous.
It's kind of like, imagine that you're raising a kid who loves to get drunk and drive, and who also loves to go to school and breathe. Stopping the gain of function research is like stopping the kid from drunk driving. Okay, we got that off the table. But he also goes to school and breathes, so there's another risk, much more complex, that he's going to catch a deadly virus at school. Okay, we've gotten rid of the drunk driving. Nice. That's a good thing. Stop doing the self-destructive stuff. But now there's a much more diffuse, harder-to-define risk that we need to work on.
It's gonna be difficult to survive the next century, isn't it?
It is.
It really is.
Why aren't BSL3 plus labs on the moon?
On the moon?
Yeah.
Well, because most of the work that they do is not with apocalyptic microorganisms. These really, truly apocalyptic microorganisms are rare, and most of the work that they do isn't with things that deadly.
We don't have much stuff going on on the moon right now. So shipping that stuff up there, and maintaining it up there, all that payload, the tons and tons of matter that would need to be transported from here to the moon, is currently beyond our capability. And when you look at all the good things that are being done with therapeutics and academic research that has an unambiguously good agenda, you can rationally say that the work that happens in BSL3 and BSL4 labs is valuable to humanity, it's largely contained, and the danger of most of what's in there getting out is highly, highly local and, compared to what we're talking about, extremely minor. So that's probably why we don't. Now, once we get to the moon, once we get to Mars, and we're ferrying things back and forth quite easily and naturally, perhaps another conversation should happen at that point. But for now, I think the easiest answer is: let's not have any apocalyptic microbes anywhere. And when we find semi-apocalyptic microbes, like the 1918 flu, let's not publish their genomes to the internet. That's getting rid of the drunk driving. But yeah, it's risky. It's risky as this stuff proliferates, if we don't build really, really great safeguards into the tools before they proliferate.
So given the fact that we've had COVID, and that this has been a coordination inoculation, if we want to call it that: it's taught us we weren't able to shut down travel sufficiently quickly, we weren't able to produce PPE sufficiently quickly, and that culturally, given the technology of now, we didn't have any archetypes for how people should behave. People didn't understand what social distancing was, why you should wear masks, what staying at home and isolation and quarantine were, or the way that we get vaccines out, and stuff like that. Do you feel like we're, oddly, in a better position post-COVID? And if so, how much?
Yeah, in some ways we are hypothetically in a better position, if we take a set of actions in response to COVID to harden society against the next pandemic. If and only if we take those steps. We ignored the warning shots of SARS and MERS and Zika and a whole bunch of other things; we kept ignoring the warning shots. COVID is a very, very difficult warning shot to miss. The whole world has been traumatized by this. Trillions and trillions of dollars in economic damage, millions and millions of lives lost. There will be much greater seriousness applied to pandemic resistance in the future. The question is: will it be adequate attention, and will it be sustained attention, and will it be intelligent attention?
And so, as you know, I'll briefly plug another appearance that I did: Sam Harris and I did this four-hour piece in a very unusual podcast format, and about a hundred minutes of that was a monologue that I researched and wrote and recorded. For the research, I interviewed over 20 scientists and read thousands and thousands of pages. And in that episode, I proposed a set of steps that collectively are trivially inexpensive compared even to the annual cost of the flu, let alone a true pandemic. And I believe if we take those steps, and surely other steps that I wasn't smart enough to identify, we will really, really, really harden ourselves up.
I'll use one example of something there should be a global headlong effort on right now, and I've seen absolutely no sign of it. People who are deep in virology are quite convinced that there's a very high likelihood that, with the right amount of research and the right amount of dollars, we could create pan-familial vaccines. What do I mean by that? Well, coronavirus is a virus family; influenza is another. There are untold thousands of virus families, but only a few present lethal risks to humans, coronavirus and influenza being two of them. So let's say there's 20 of them; it's roughly 20.
We don't currently have what we could call a universal flu vaccine. What a universal flu vaccine would be, or will be, hopefully, if we develop one, is a vaccine that attacks the core infrastructure of the entire influenza family. What we have with the vaccines that get issued every year is this: there are lots and lots and lots of mutations in influenza. It mutates frenetically throughout the year, and when we develop the vaccine for the northern hemisphere, we're looking at what's brewing in the southern hemisphere. There's a lot of influenza surveillance going on throughout the world, and a panel of extraordinarily talented scientists make their best predictions of what strains of influenza are going to be predominant, let's say, in the United States. It's probably the whole northern hemisphere that gets the same vaccine, but let's just say the United States to simplify, so I can be parochial, because here I am. What strains are likely to be predominant in the United States in the coming flu season? Let's protect against those as best we can in this year's vaccine. Some percentage of Americans get the flu vaccine, maybe 50%, probably less than that, and some percentage of those people will be immunized. And in a good year, that vaccine will be about 60% effective. Now, a pan-influenza vaccine, a universal flu vaccine, would say: screw the strains, we're going for the jugular of the entire species, you know, of influenza as a family. I talked to one person who's very, very deep in the world of lobbying for, and doing initial work on, a universal flu vaccine, a guy named Harvey Fineberg. He used to run the Harvard School of Public Health, and has all kinds of titles and accolades I can't remember right now. He estimated to me that if we really went all in on this, it would probably cost about $200 million and take 10 years to get there, or not. He felt that there was a 75% chance that we would get there. It's not a hundred percent chance. I said: okay, Harvey, let's go crazy worst-case scenario, could it be 10x that? He's like, yeah, maybe it's two billion dollars over 10 years and there's a 50% chance of getting there.
Okay. The flu costs the United States $365 billion a year in lost productivity and medical bills. If you have a chance, taking Harvey's worst number, to invest $2 billion with a 50% chance of relieving yourself of an annual $365 billion burden, there should be no thought necessary at all. You take that chance, and hopefully Harvey's right and it's actually more like $200 million and a 75% chance of success.
What we should be doing right now is: let's take worst-case numbers. Let's say it's $2 billion per virus family and there's 20 of them. Let's invest that $40 billion over the next 10 years and get pan vaccines for every virus family that infects and kills humans. Now, let's throw in another 20: the zoonotic viruses that are out there that are most threatening. Let's get pan-familial vaccines, or do our very best and at least have a 50-50 shot with each of them. $40 billion over 10 years is $4 billion a year. That's chump change in the context of the American budget. That's chump change in the context of $365 billion lost to flu, and one credible economist estimated $14 trillion of damage to the United States economy, the US alone, from COVID. Like, you do that. But I don't see it happening anywhere. There are a couple of academic labs that are working on a pan-coronavirus vaccine, but they don't have a $2 billion budget. This is not happening.
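For what it's worth, the arithmetic Rob is running here fits in a few lines of Python. The figures are the ones quoted in the conversation; the framing as a simple expected-value comparison is a simplification, not an actuarial model.

```python
# Rob's pan-familial vaccine numbers, laid out as stated in the conversation.

ANNUAL_FLU_COST = 365e9   # claimed US annual flu cost: lost productivity + medical bills
WORST_CASE_COST = 2e9     # Harvey's worst-case R&D cost for one universal vaccine
WORST_CASE_ODDS = 0.50    # worst-case probability that the program succeeds
FAMILIES = 20             # rough count of virus families that present lethal risk to humans

# Even under worst-case odds, one year of expected flu savings dwarfs the spend:
expected_savings = WORST_CASE_ODDS * ANNUAL_FLU_COST
print(f"Expected annual flu savings: ${expected_savings / 1e9:.1f}B "
      f"against a one-off ${WORST_CASE_COST / 1e9:.0f}B investment")

# The whole-program version: every threatening family, spread over a decade.
total = WORST_CASE_COST * FAMILIES
print(f"Whole program: ${total / 1e9:.0f}B total, "
      f"~${total / 10 / 1e9:.0f}B/year over 10 years")
```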
And so, you asked me: are we better off for having had COVID? Theoretically, yes. Theoretically, we've gotten a wake-up call that's unmissable, and now we're going to take really smart preventative steps. But this shriekingly obvious step? You know, there may be some governments that are calling for it; I can't read every news story every day. But I haven't detected any concerted effort to say: let's just take every virus family off the table that we can. And if we're not doing that, and that's really cheap and really obvious, I could certainly see us not taking a lot of more expensive and slightly less obvious, but equally important, steps.
After the last 15, 18 months as well.
Yeah, the most obvious flare fired up into the air,
directly over our position, highlighting what the problem is,
highlighting all of our insufficiencies and poor coordination, and yet nothing.
Yeah.
A pan-coronavirus vaccine.
Hello, people.
Why is that not happening now?
That research should have started in April of 2020.
We didn't have to wait for the mRNA vaccines.
That research should have started as soon as we said: holy cow, SARS and now this; coronaviruses, big deal. But no. There is research, like I said, I've seen a couple of papers coming out of academic labs, but it doesn't have the public funding support that it needs. And that's, yeah, it's bonkers.
What would be the implications if the lab leak theory were proven to be true, for China, and for the political fallout, and for safety in the future as well?
Okay, so let's play this out. Let's assume for the sake of the thought experiment that this was a lab leak, and it gets proven definitively that it was. If that was the case, it would almost certainly turn out that COVID was a result of gain-of-function research, because it is a novel virus, and, you know, it came out of nowhere. If it was a lab leak, it almost by definition would not have been something that was circulating naturally and just didn't happen to cause a pandemic; it would be something novel that was created in that lab. So I think, if that got out and was proven beyond a reasonable doubt,
one hopes that there would be a global ban on gain of function to start with. This bizarre thing that we haven't done yet would get done. And I think a ban on gain of function maybe eliminates 2% to 5% of the total risk that we face from synbio run amok in evil hands or good hands, but that's a great step in the right direction. So I think that would happen.
I think, and we talked about the compounding spread of climate risk awareness, there would be an unbelievable jump in global awareness and concern about synbio run amok. And so I think there would be a much better regulatory apparatus. There would be much more self-knowledge within academic and private circles. Like, a lot of really, really good stuff would happen.
Is there a part of you that hopes it does get proven?
Well, if that's in fact what happened, all of me hopes that it gets proven. You know, if that's in fact what happened, yes, absolutely.
China would have a great deal to answer for to every single country in
the world.
China should be held accountable, and the Wuhan Institute of Virology should be held accountable, and the very practice of gain of function should be held accountable, and the notion that BSL4 labs are safe should be held accountable.
And so, yeah, if that's in fact what happened, absolutely, I would want that to come out. So that the world says: China, don't do that anymore. The world says: no more gain of function. The world says: BSL4 is only a best effort. And those three things right there would significantly reduce the world's risk. So yeah, if that's what happened, I'd want it to come out.
I think we don't know. We don't know that that's what happened. And we probably never will.
Well, there's a lot of opaqueness, right? A lot of opacity.
There's incredible opacity, which makes one suspicious. But then again, authoritarian governments are opaque by nature. It's their instinct.
It's almost like, if you could have picked a country that you didn't want it to start in, probably except for North Korea, it would have been China. In fact, maybe even more so that it's China, because they have more sophisticated resources, probably fewer bad actors who are prepared to turn mole and actually blow the whistle on stuff, better coordination, better surveillance.
Yeah. I mean, the opacity surrounding the investigation of where this came from is almost total, and it makes one wonder what they're hiding. And so, for those who... and again, I do not pretend to know whether it was a lab leak or not. I want to be very clear about that. A lot of very, very, very smart people think it was. A lot of very, very smart people think it wasn't. And I don't have the level of bio-sophistication to enter that debate, so I will plainly state: I don't have a theory on that. But a very strong argument in favor of it is: what in the world are you hiding? Because you're hiding something. Why in the world will you not allow a very, very serious outside investigation into the very early cases? It seems that a couple of WIV, Wuhan Institute of Virology, people were hospitalized with something that looked a lot like COVID in December. Why is that information not being explored? Why did the World Health Organization delegation that went there have zero access to anything that would shed light on the first five weeks? Why all the opacity? It's easy to imagine that something's being hidden, but we also have to acknowledge that authoritarian governments are opaque by nature, that's their instinct, and they could be opaque for reasons that are rational to them that don't have to do with this.
Yeah, yeah.
It could be that it's completely zoonotic, completely natural, and when they did their own investigation, they were like: oh my God, our safety protocols at the Wuhan Institute of Virology kind of suck. It didn't come out of there, thank God, we'd feel really guilty if it did. But oh my God, it could have. It didn't, but it could have. We don't want anybody seeing that. It could be something like that.
The fallibility all the way down. Just humans, humans and our biases the whole way.
So, let's zoom back out now from just synbio into some of the broader strategies that we have for x-risk. Looking at all of the different ways that we could potentially manifest our own extinction, on top of the natural risks that are just ambient, in the background, constantly going on, whether it be a supervolcano or a gamma-ray burst or an asteroid that's going to hit us: it seems like making the situation worse for ourselves is probably a bad idea. Would there be an equivalent of putting a glass ceiling on gain of function research, or putting a glass ceiling on the sort of research that we do entirely? Should we be looking at perhaps considering that across the board with regards to technology? Should we curtail our technological progress for a few thousand years until our wisdom can catch up with it?
I would say no, for a diversity of reasons. One is, I think it's thoroughly impossible. It's strictly in the realm of thought experiment, because, to take an entirely, equally impossible scenario: let's say the Western democracies, the Eastern democracies, all the democracies of the world agree to do that. China ain't going to stop. It just isn't. If China stops, Russia is not going to stop. If China and Russia stop, North Korea is not going to stop. If North Korea stops, then it's somebody that we never thought of. Maybe suddenly Egypt takes the lead and decides they're going to become a world power again. Who knows? If we all stopped it and Egypt didn't, and a lot of smart Egyptians live there, at some point they're going to be number one in it. You just, you know, there's a coordination problem.
And then I also think that there's a human flourishing problem as well. Synbio has so much extraordinary promise for us as humans. It has so much extraordinary promise when it comes to therapeutics. When cancer is eventually beaten, and if we don't destroy ourselves by some other mechanism, one day we will completely defeat cancer, it's going to be insights and wisdom that come out of synthetic biology and adjacent fields that do that. It's going to be insights and wisdom that come out of synthetic biology and adjacent fields that will allow us to eventually create clean meat that is so much less damaging to the planet, and so much less immoral in terms of the horrible suffering that's inflicted on the conscious systems that we call cows and chickens. There's so much human and non-human flourishing that's going to result from it. It's a path that we should go down, with lots of safeguards.
And, you know, if we wanted to cease technology: we couldn't solve the coordination problem, and even if we did, there's no guarantee that we would find the wisdom we're seeking in a few thousand years. We've been trying to seek wisdom for thousands of years already, and we haven't gotten there yet. The Greeks and Romans probably would have thought, if you'd asked them: oh, man, we're so great at ethics already, imagine where we'll be in 2,000 years.
You must have nearly finished. You must be at the back of the book soon.
It must be done. Yeah, we've got to be very close to it. Except, of course, almost all of those philosophers owned slaves, so we might question their ethical purity. But that's a whole other conversation.
So, thinking about that: if the global coordination problem is so tough that this is essentially untenable, it's not a strategy that we can look at doing. You've got this increasing democratizing, or privatizing, of the apocalypse. You have the Overton window of potential bad actors widening, with more and more people being brought inside it. We have technology that is further emancipating the ability to bring about our own destruction, right? On an individual, a local, a national, an international, a global level, all of this.
Yep.
The future doesn't look bright. What do we do?
When you take in this broad-spectrum view, as opposed to just looking at synbio, or just AGI, or nanotechnology, or natural risk: take the broader-scale perspective, and I often think of superintelligence and machine-extrapolated volition with this. Trying to pick apart individual strategies and tactics, to plug the holes in the bottom of the boat one cork at a time, is not tenable, because for every cork that you put in there are a hundred holes that will appear far more quickly than you can even notice they're there. Is this how you feel?
No. So what I think you need to do, and the approach that I take when I think about synbio, is you need to create a really, really agile, adaptive and multi-layered defense strategy. And I think great inspiration can be taken from our own immune systems. Our own immune systems are confronted daily with all kinds of pathogens, many of which they have never encountered before. And most people who are well, and not extremely old, don't get sick most of the time.
And that's because our immune system is multi-layered and adaptive, very, very adaptive.
And so, to take synbio as an example: first of all, you stop doing gain of function research. That's like: I want to live to be 90? Okay, stop drunk driving every night. Great, we've taken one really obvious, self-inflicted thing off the table, and like I said, I think that takes a few percentage points of risk out of the equation. What I think would be very, very big would be something that the industry has already tried to do on a self-regulatory basis, which is to create a set of standards for what counts as dangerous DNA, and then not let anybody get it unless they have a very good reason to have it.
So there's something called the IGSC, which is an industry body that a number of DNA makers are voluntary members of. DNA makers: what do I mean by that? Most of the complex DNA and RNA, the nucleic acids, that are ordered or used in both private and public settings, if it's a long and complex strand, will generally not be made in the lab that wants to do the experiment with it. It will be made by an expert body, a company. One example is Twist Bioscience, a publicly traded company; making long strands of error-corrected nucleic acid is all they do, or at least a lot of what they do. So they're very good at that, better at it than any particular academic or private lab is likely to be. And there are several companies, probably a number in the low dozens, who do this for a living. So those are the places where dangerous nucleic acid is going to originate, for now. A bunch of them are in this organization called the IGSC.
And the IGSC maintains, as its main product, if you can call it that, a database of dangerous sequences. Whenever any IGSC member (they've all agreed to do this) gets an order for something that is on that list, a series of internal safeguards goes into effect. Basically, every order is rated red, yellow, or green. The vast majority are green: nothing wrong with them, no problem. A yellow order comes in, and those companies have teams of bioinformaticians, generally PhDs, who look at that yellow case, and in most cases it's like: okay, this is fine. They're taking part of a dangerous gene, but it's just a regulatory part of the gene or whatever it is, so it gets passed through. When you get a red flag, a lot of hours are spent, the customer is probably contacted, and work is done to make sure that this is, again, benign usage. In the worst-case scenario, these companies will call the FBI.
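For readers who think in code, here is a deliberately crude sketch of that red/yellow/green triage. The real IGSC screening involves sequence alignment against a curated database and human bioinformatician review; every name and rule below is illustrative, not a real API.

```python
# A toy model of red/yellow/green order screening as described above.
# The dangerous-sequence list and the review rule are stand-ins: real screening
# matches ordered sequences against a curated database and routes anything
# suspicious to PhD bioinformaticians rather than a single boolean check.

DANGEROUS_SEQUENCES = {"SMALLPOX_FRAGMENT_A", "H1N1_1918_SEGMENT_4"}  # hypothetical IDs

def screen_order(sequence_ids: set, benign_use_established: bool) -> str:
    """Return 'green' (ship it), 'yellow' (human review), or 'red' (investigate)."""
    hits = sequence_ids & DANGEROUS_SEQUENCES
    if not hits:
        return "green"   # the vast majority of orders
    if benign_use_established:
        return "yellow"  # e.g. a regulatory fragment of a dangerous gene; usually passes
    return "red"         # hours of review, customer contacted, FBI in the worst case

print(screen_order({"GFP_REPORTER"}, benign_use_established=True))          # green
print(screen_order({"H1N1_1918_SEGMENT_4"}, benign_use_established=True))   # yellow
print(screen_order({"SMALLPOX_FRAGMENT_A"}, benign_use_established=False))  # red
```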
That's really good. The companies are investing significant money in this. My understanding from talking to a few people is that it's about $15 per order of work that goes into this. If you take the cost of all the bioinformaticians and the red-yellow-green system inside a company like Twist Bioscience, total up that full budget, and divide by the number of orders, it's about $15 an order. And it's real money; these are staffs of people. This is great self-regulation. But there are problems with it.
One problem is that as the ability to synthesize DNA and RNA gets better and better and cheaper and cheaper, which is what's happening, as happened with computing, you're getting more and more orders, and the order size is shrinking. So 15 bucks is becoming a bigger and bigger part of the order cost. The cost of that apparatus is significant enough that there are a lot of companies that aren't members of the IGSC for cost reasons.
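The cost pressure he's describing is just fixed-overhead arithmetic: a flat screening cost against shrinking order sizes. The order values below are hypothetical.

```python
# A flat ~$15 screening cost eats a growing share of each order as orders shrink.

SCREENING_COST = 15.0  # approximate per-order screening overhead cited above

for order_value in (1500, 300, 60):  # hypothetical shrinking order values, in dollars
    share = SCREENING_COST / order_value * 100
    print(f"${order_value:>5} order -> screening is {share:4.1f}% of the price")
```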
And so the IGSC says that they account for 80% of the DNA in the world, or at least in their industry of provision, but that number was kind of pulled out of the air a decade ago, when the IGSC was founded. Nobody really knows. There's only one Chinese member of the IGSC, and there's a lot of DNA synthesis going on in China, so there's no way it's 80%. So it's a hell of a start. It's a very good start. It's an admirable start. But we need to dial that game up dramatically. It needs to be a hundred percent, which means that it needs to be mandatory. Voluntary and eighty percent ain't gonna work. It's gotta be mandatory and a hundred percent, because if you're a malign actor, it's really easy to go to the twenty percent who aren't doing this. There is essentially no protection unless there's total protection.
This is a harder agreement to get passed than a global agreement against gain of function. But if we can get to the point where we as a species take this IGSC model, dial it up, and make it mandatory, that's a big step forward.
Now, the next step forward, which is very, very important: we are going to go to a distributed model of DNA creation. What I mean by that is, and the people at Twist Bioscience hate when I say this, I don't see there being much of a role for central service bureaus 10 to 15 years out.
Just as there isn't much of a role for telegraph companies. We all send a lot of text messages today; we send more text messages than at any time in human history. But despite our obsession with text messages, there aren't a lot of telegraph companies anymore, right? Because to send a text message in the 1920s, you had to use this very expensive centralized infrastructure. You went to that centralized infrastructure, you sent your text message, and that's the way it worked. Eventually that centralized infrastructure disappears, because the capability goes out to the edge. It happened with photography. It's happened with a lot of recording tech. It's happened countless times. It's just what happens.
And so benchtop DNA synthesizers are in their infancy right now, but there's a pretty good one out there now called the BioXP, which allows scientists to synthesize the DNA that they want in their own damn lab without being wizards. Because the scientists are using the DNA, they don't want to be great at synthesizing DNA. They're synthesizing it for a purpose, and they want to get that synthesis out of the way as quickly as possible to do their actual work. Synthesizing the DNA is like brushing and flossing their teeth. You've got to do it, but you don't wake up saying: today I'm going to brush my teeth, it's going to be the pinnacle of my day. They want as little hassle as possible. That's why they're using Twist Bioscience rather than developing these capabilities themselves. And if there's a little machine that they can have in their lab and just print out what they need, that's going to be even better than Twist Bioscience, much as the iPhone is better than the telegraph office. So the next thing that we need to do as a species, as a global society, is say: yes, those distributed printers are coming, and they need to be mini-IGSCs.
They need to have those protocols built into them so that dangerous DNA also goes through this screening process. The makers of the BioXP, which is the best printer on the market right now, are an IGSC member. When you are printing something on your BioXP, that print request goes back to the company that makes the printer, and it goes through the red-yellow-green process. And so that's a doable step.
Now let's imagine a world 20 years from now where these printers are everywhere, even in high schools, and they've got these very, very good protections and safeguards in place. Does that mean it's impossible to create a doomsday bug? Certainly not. It's still possible. But what we've done is make it so much harder that the number of people who can pull it off has gone from untold hundreds of thousands, even millions, to a tiny handful of experts. That's an unbelievably powerful protective step, because that tiny, narrow group of experts, which we'd prefer to be zero, have generally come through life following a path of continuous self-improvement and career development. They're not the type of people who go Columbine. The anthrax leak notwithstanding, they're generally not. So you've diminished a great deal of risk. And also, with the passage of more time, the ability to create a complex virus genome from whole cloth will even vanish from the high pinnacles of academic science, because it's just not necessary anymore.
There was an incident a few years ago where a Canadian researcher created the horsepox virus, that genome, from whole cloth. Horsepox is not a big deal to humans, but what's interesting and important about it is that it's a very close cousin of smallpox. So if that particular academic scientist in Canada was able to synthesize the horsepox virus using mail-order DNA, other techniques, and his own extraordinary skills, that means by definition he could have synthesized smallpox if he wanted to. He just elected to synthesize the equally complex, and very closely related, horsepox virus. And part of the reason he did that was to put the world on notice: hey, this is possible, and some idiot put the smallpox genome on the internet back before this was possible, so hello, world, wake up, right? Now, the skills that he has are not going to become more and more widespread as printers and things like Twist Bioscience get better and better. They're actually going to diminish, because people aren't going to develop those wet lab skills, creating things from whole cloth and so forth. They're just going to be able to hit print.
So if the academic world and the life science and synbio world become dependent on these distributed printers, which I believe they will, and the printers are really, really good at preventing bad stuff from happening, then that's a pretty big protective step.
to distract them with that? Well, I mean, the sequence that you went through with Sam,
that four-hour special is one of the,'s an absolute masterpiece and thank you the work that you guys did on that is
Phenomenal we linked in the show notes below for anyone that wants to do a deeper dive on this
I want to I want to change tack for a second. Mm-hmm. Have you read Seven Eves by Neil Stevenson?
Seven Eves is the one where the moon
Explodes
You explore the first line the first explodes. Yeah, yeah, yeah.
The moon explodes, okay. I'm really embarrassed, and I hope Neal's not listening, because we've hung out a few times and I think very, very highly of him, and I love Neal and his writing, but that was one I couldn't get through. And the reason was, sometimes I think Neal's work turns into mechanical engineering porn. And I am not very adept when it comes to mechanical engineering, and it's not a fascination of mine. I like bioengineering and other things, and I'm obviously not a life science expert either, but that I'll get through.
But there's a lot of: this device is this, and there's this gear that has this leverage ratio, and that's really cool because of this. And so I was like, I just can't. But it was a hell of a start. I'll tell you, I really enjoyed the first 50 pages.
Yeah.
And that's the only Neal Stephenson novel I have ever started and not gotten through.
That's crazy.
So yeah, I wouldn't say it's a slog, but there are certain things... I mean, you learn the finer points of orbital dynamics, right? You're learning about the nadir, and all of the different ways that the front and back of a three-dimensional boat, which is essentially what a spaceship is, moves, and how when you swing back around, you're in a different position. So yeah, hard sci-fi can be hard at times.
But for everyone that's listening, it's in the new reading list, which is about to get released, and which everyone can pick up for free quite soon. In it, this thing occurs and they need to find somewhere to get people off Earth, right? They need to put them up into space. It is an existential risk; the fact that it's the moon exploding is irrelevant, it's something. The difference is that they have almost exactly two years from when it happens to when what they call the White Sky, or the Hard Rain, begins. And what they actually do is, at the very, very beginning, the president says: we need to have a two-pronged strategy. We need to send people underground and we need to send people into the sky.
Mm-hmm.
Given the fact that we are constantly up against an increasing anthropogenic risk from ourselves, and an ambient background natural risk, and we don't have a second community on Mars, we don't have another planet, we do not have a backup: would it be advisable to create a siloed community somewhere which is totally self-sufficient, defended, and air-gapped from the rest of the world?
I have never thought of that, and shame on me, because that's such a cool idea that anybody who thinks about existential risk should be thinking about it. That is a really freaking interesting notion. Wow. Yeah, we should. We should have that.
The Long Now Foundation has this really cool library they've created, which has the 1,500 books that they believe would be necessary to reboot civilization. If you go to the Interval cafe in the Fort Mason area of San Francisco, and anybody who goes to San Francisco, I urge you to go to the Interval cafe, it's awesome, it's run by the Long Now: downstairs is the cafe and bar, and upstairs are all those books. They've got this library, but you can't go up and look at them unless somebody from Long Now takes you up there. So it's almost a symbolic gesture.
And yeah, creating a community like that. Maybe it's like national service, and you rotate through for two years. Nobody would want to live their entire life there, but maybe it's a whole bunch of young people, because you want a lot of young people who can procreate and all that other stuff. Maybe people go on a two-year rotation, which would probably be kind of fun and cool. Like, a thing you've done for two years.
It seems dumb to me that we're waiting for some sort of potential risk to occur before we save ourselves from it.
I know that the problem is that fully air-gapping yourself whilst you're attached to the same piece of rock is going to be difficult. If you've got a malignant AI with a control problem running around, it'll probably know where the community is; it might be able to manipulate it. But again, it's the same thing you're trying to achieve with the synbio solutions. It's not about perfect. It's about reducing the risk as much as you can, because that's all that you can do. Just because you can't get perfect doesn't mean that you shouldn't try. So for me: defended, you know, really heavily defended, walled off somewhere in a mountain, a bunker, with all of the seeds, all of the books, all of the everything that you need. If you've got it there... yeah, ethically, is there a question? Yeah, of course there is. It's the same as the generation ship. If you're going to fly to Alpha Centauri and decide to commit the next thousand generations of your progeny to living on the ship...
Tedium.
Yeah, exactly. To be born, live and die in this steel cage floating through space: is that ethical?
Well, if you decide to take the grandest view of humanity as a whole, if the choice is between some people suffering and humanity dying, I think...
So this is something I reckon more people should be considering. Because there are bunkers, aren't there? Have you looked into these rumors that billionaires are buying up, is it New Zealand or Australia, ranches or bunkers or something? Have you heard about that?
Yeah, well, the whole prepper movement is very real.
This is prepping on steroids.
Yeah, and some preppers are going to be billionaires. And that would be fair enough.
Okay, yeah, so I'm sure that there's some crazy prepping stuff going on. What's more relevant to this conversation: there was a big effort in the very early 90s called, I think, Biosphere 2. Have you heard about this?
No.
Some kind of eccentric billionaire, or near-billionaire, funded another eccentric's dream to create a completely independent biosphere. It was somewhere in the southwestern United States, like Arizona or New Mexico, something like that, and the bones of it still exist. It got a great deal of press at the time. So they created this huge bubbled community. The bubble was huge, the physical footprint was huge, the budget was huge, and it was privately funded. The number of people was not; it was something like nine or eleven people who went in there. And the objective was to completely decouple from the Earth's biosphere. It wasn't bunkered, it wasn't guarded, it was just decoupled...
It's like an ecological rather than a defensive experiment.
Yeah. And, you know, have agriculture going on in there, and try to create a biospheric balance that these people could live in and be completely self-sufficient in for two years.
And it failed. There's a really good documentary about it that I saw toward the beginning of COVID, and I'm happy to try to dig it up so you could put it in the show notes. It was well done. It failed for a bunch of reasons, but basically the agriculture started failing. Their ability to generate enough calories in there, and they had huge, huge greenhouses and so forth, started failing, and then the carbon in the internal atmosphere started getting out of whack. So at some point this small group of people were slowly starving and asphyxiating, and they eventually gave up. There was also this controversy where one of the people needed to have surgery, and so obviously left the biosphere to get the surgery she needed, and came back laden with a Santa Claus-like bag full of stuff that was missing. So people were like: oh, you guys are cheating. But, you know, this did not have the budget or scientific power of even a lightweight government effort. When you watch the documentary, you realize how absurd it kind of was. It was a Don Quixote type of thing: charming, but a little absurd. They just didn't have the firepower to pull this thing off.
But that's an interesting relevant example.
But I think what you're describing could be a national effort. The US has the resources and budget to do it, as do a couple of other places, but maybe ideally it would be international. And you wouldn't be sentencing people to thousands of generations of life there. I imagine people would rotate in and out. I imagine rotating in would be a pretty cool experience to have in life if you were there for a year or two. There'd probably be lots of other young single people in there. It'd probably be kind of fun.
There are people who spend the winter in Antarctica. If you're at one of the research stations in Antarctica, once winter settles in, there are nine months where you cannot get out. And that's a small group of people, and it's really intense, and a lot of people have horrifying experiences, but some people have awesome experiences with it. So it'd be kind of like that.
That's a really interesting idea.
I like it.
Cool.
I love it.
What do you think is a big x-risk that is rarely talked about within the community? You might not be able to predict the unknown unknowns, but what about the barely-known unknowns, or the recently-known unknowns?
Well, you know, there was an experiment run at CERN or Brookhaven back in the 90s. The best documentation I've seen of it is in Martin Rees's book Our Final Hour; that's what it's called in the United States, and it's got the much more intelligent title in the UK of Our Final Century. So I guess that sort of reflects the perceived attention spans of the American and British populations, according to the publishers who released that book. But it's an amazing book. It was one of the first really popularly accessible contemplations of x-risk. And it wasn't only about things that could wipe us out entirely; he was also very interested in things that could wipe out tens of millions of people. But there was this experiment run that had a tiny chance, again, let's say a Trinity-experiment-in-New-Mexico level of chance, of creating something called a strangelet, a hypothetical form of matter, which is very much like ice-nine in Cat's Cradle by Kurt Vonnegut.
So Cat's Cradle is a brilliant novel that I'd recommend to anybody who enjoys speculative, witty, crazy fiction. And in Cat's Cradle, there was this thing called ice-nine that somebody developed. It was a form of ice with a melting temperature of, I don't know, 90 degrees Fahrenheit, you know, 35, 40 degrees centigrade. And the danger with ice-nine was that if it touched normal water, the adjacent molecules would immediately turn into ice-nine, and the adjacent molecules to those would turn into ice-nine.
The Midas touch.
Yeah, it was like the Midas touch. Almost instantly, any water it touched would turn into ice-nine, which would only melt if you got it really hot. And so this is a ticking time bomb that runs throughout the novel, because obviously if somebody drops a shard of ice-nine into the ocean, or even into a minuscule tributary of a minuscule river, it is going to flow through all the water of the Earth with unbelievable rapidity, and game over. So, a strangelet is ice-nine. And if it exists, if it is possible for it to exist, and if it has certain properties, and if, if, if... and they if-if-if'd this down at the Brookhaven experiment to say: oh, it's just really unlikely. And they proceeded with the experiment. And if the strangelet thing had happened, it would have turned all of Earth into strange matter. And it's unlikely but possible that the ice-nine effect would have propagated further out and might have destroyed the entire universe.
That's not the reaction that humans need to have to existential risk in order to prevent it. If we think existential risk is just an excellent form of comedy, we're done, Chris.
No, but it's so interesting. It's so interesting. I'm so fascinated by it. Once again, we probably have a set of risks and payoffs in the minds of the scientists who were making this decision: well, it's just really unlikely, and if it doesn't happen, and it almost certainly won't, I might end up on the cover of Nature, might end up on the cover of Science. And so they went ahead with it.
And the danger with that is that only this cloistered elite is in a position to say no, because only this cloistered elite is in a position even to know the experiment is happening and to know its ramifications. If some outsider came in and said: no, don't do it, they'd be like: okay, nincompoop, where did you get your PhD?
Yeah, exactly.
Oh, you studied Arabic in college and now you're a government regulator. Well, you're too stupid to even understand the risk that we're taking, blah, blah, blah. And yeah, so it's things like that. Not that many people know about that.
That being said, and this is where I want to pivot into something that I'm quite passionate about with regards to x-risk. There's the Future of Humanity Institute, which is phenomenal, and the work that Nick and Toby and the guys do there is outstanding, it's world-changing. But what I see as one of the big holes in their worldview, with regards to trying to enact the most successful existential risk protection strategies, is a lack of understanding of the importance of public idols. If you were to look at someone like Greta Thunberg: not a climate scientist, not a specialist in her field, and yet because she was right place, right time, right delivery, she was able to garner support behind a particular movement. And what I think is missing, and what you've just highlighted there, is
the fact that there aren't sufficient people. If you had a million people with PhDs in Arabic wagging their fingers at somebody, you'd go: actually, there's quite a lot of people outside of the office, maybe we have to listen to them. But when you have a small number of people, and the type of research that's being done is very specialized and gated within these particular intellectual communities, the intellectual elite are not going to listen to the plebs in the street, especially if there's only three plebs: me and you and Sam Harris, waving our hands outside. That's not going to work. So I think existential risk research, and the movement at large, needs to be made more sexy and more popular.
That's not to say that Nick and Toby aren't handsome gentlemen. But we need someone who's going to be out there. Anders Sandberg is a perfect example of this, although he's busy doing real research and actually getting the work done. We need more of these public-facing talking points: the stuff that you did with Sam, or The End of the World with Josh Clark, which was a masterpiece, a nine-part podcast series, fully soundscaped, beyond outstanding. But there needs to be endless amounts of that.
Get people interested, because the compulsion that I have and you have: we're not outliers. This is something that a lot of people would just naturally be interested in. And then on top of that, when you say this is potentially the most important thing that any of us can work on at all in our lives, the most important thing, the successful continuation of the human species, or ensuring that we do not permanently neuter our own ability to reach our full civilizational potential: what else is there, other than enjoying your life and, you know, loving the people around you and stuff like that? As far as callings go, I can't see much that is bigger than that. So there seems to be a delta between the fact that it's so compelling and so important and naturally interesting to some of us, and this huge vacuum when it comes to the public conversation about it and the pressure that's being placed on artificial intelligence research and nanotechnology and synbio. And here we have the first time that you, a person who is steeped in this world and has written novels about it, has heard an idea. There should be no way that there's any idea that hasn't been heard already, when the consequence is the outcome of our species.
Yeah.
And the other thing that we could use more of is storytellers.
And so as a science fiction writer, that's something that I can do a bit of.
And I believe that a very significant reason why we survived
the Cold War as a species was movies like War Games, movies like Dr. Strangelove that
really, really resonated with the popular consciousness and really got people freaked out at all levels
of society. Those stories can travel in a really, really powerful way. I think one reason why
the world dodged the bullet of totalitarianism... the resistance to totalitarianism,
like a really methodical resistance to totalitarianism, you could try to date it, you know, to
World War II. But you can't really date it to World War II, because we were fighting alongside Stalin against Hitler,
right?
It really kind of dates to the 1950s.
You know, if you look at the intellectual landscape in, I don't know about all countries,
but certainly in the United States, in the 30s and 40s and early 50s, it was unbelievably
fashionable to lionize communism.
It was unbelievably fashionable to lionize Stalinism.
And lots and lots of thoughtful caring people, really, really smart people, became communists
and not merely communists, but a lot of them would have done anything to hasten a communist
revolution in, you know, in Western democracies.
Full-on, card-carrying communists.
Bring on a socialistic government.
What changed all of that?
A lot of things did, but I think a very big thing that changed all of that was the
novel 1984.
1984 was written in 1948, and it was a monster global sensation.
And, like all novels, it was read more by the intelligentsia than by the broad mainstream, but it really shifted the dialogue among the intelligentsia.
It painted such a plausible and horrifying picture of what Stalinism with slightly better technology would look like. And it was a science fiction novel; people forget that
1984 was a freaking science fiction novel. First of all, it was set in the future, people.
And secondly, the telescreen technology that the thought police used to keep an eye on everybody
was impossible in 1948. So it was near-future sci-fi, and it
shifted the debate.
Anybody who read that book
could no longer say, I wanna live under Stalin.
You just couldn't.
It really inoculated the Western democracies,
or free countries throughout the world;
I keep saying Western, but I mean the free world.
It really inoculated most of the intelligentsia,
and it really inoculated most of the free world against the lure of
totalitarianism.
What was the lure of totalitarianism?
Go back a couple of decades to the 1930s:
the world's going through the Great Depression.
And all the while, Stalin was
industrializing a non-industrial nation, the Soviet Union,
at an unbelievable pace.
And there was a lot of low-hanging fruit,
so that kind of worked in the 30s and 40s.
And you have a lot of people who are worried about starvation
outside of the communist bloc,
and so that gets seductive.
And it gets seductive if you're a brilliant student
at Cambridge or Harvard or whatever,
and you encounter these ideas,
and they're all about the good of the many and so forth.
That inoculation from 1984 was really powerful.
And so I think we need stories.
More stories like the Terminator.
People chuckle and sneer about how unsophisticated the Terminator is.
But guess what?
It brought the notion of super AI risk to basically everybody; everyone understands the Terminator
story and Skynet.
I mean, some people are so disconnected from popular culture
that they don't, but if you want to talk about super AI risk to somebody
who's not inculcated in this world, you have a much shorter path to getting them
up to speed and understanding the risk, because we've
all seen Terminator.
That was a pretty damn good story.
We haven't really had the synbio Terminator franchise yet.
That would be really useful.
Have you seen?
Really, really useful.
Have you seen Next?
I think it's an NBC series.
No.
Okay.
Tell me about Next.
I'm already excited.
Next is a nine- or ten-part series about a misaligned, superintelligent artificial
general intelligence that goes rogue and starts taking over the world.
But it's done in a very, very smart way.
So the writers, they're using the language of Nick Bostrom.
They're talking about recursive self-improvement.
It's trying to find the correct architecture with which it can deploy itself
onto multiple servers because it knows it's got the algos but it doesn't have the computing power
to be able to do it. It's trying to manipulate people psychologically. It's trying to manipulate people
with regards to their logistics. It plays different actors off against each other.
It's phenomenal. And it's a bit, for want of a better term, it's a bit Americanized, right?
It's very dramatized. It's sort of glitzy, and everyone's got really
nice teeth, and so on and so forth. And there's kind of an Elon Musk type character in there
who's a little bit crazy and he's trying to fix things. But Next, for anyone that's
interested in this, if you want to watch, like, a ten-part series... I think it's NBC.
It's Fox. I'm looking at the Wikipedia page right now; it sounds fabulous.
It's legit.
And they know that stuff.
They understand about the alignment problem.
They understand about recursive self-improvement
and what it would try and do.
Again, there's some dramatization.
But that, again... you're right about pushing the culture in that sort of a way.
So I came across this quote and it was someone criticizing the intellectual commentary
out of the intelligentsia and saying, just because you've beaten bad philosophy doesn't
mean you've beaten bad psychology.
And what they were highlighting was the fact that people who can proselytize about ideas academically are still at the mercy
of cognitive biases, because they're still human, deep down.
And I think that what you've just hit upon there uses the same, it leverages the same
type of mechanism, right?
So if you can move the culture in a way where it becomes desirable to have a particular
mode of thinking, because all of the norms have been pushed in a particular direction through
things that people don't even consider. Like, how much did those movies to do with the Cold War
in the late 1900s really push public consciousness? Or how much did the Terminator
really open people's eyes to the fact that, oh my god, this is what supercomputers perhaps could do? And that was well,
well, well before its time.
So long ago.
Yes. So prophetic.
Yes.
Unbelievably prophetic.
And it's the same thing that happened with 1984: you can shift culture not just through academia
and education, but through entertainment as well.
Oh, yeah, absolutely. I do think entertainment is an enormous lever.
Yeah.
It shapes culture in that way.
So, yeah, I think that would be a very useful way to do it.
So let's say that someone's taken the existential risk black pill.
Perhaps we've started some people at the beginning of their domino-toppling journey
today, or there might be some people that have joined us who are already partway up the mountain, or down the mountain, depending
on how you look at it. How would you advise them if they're passionate about this topic?
Generally, general existential risk: how do you think that we should conduct our lives to
contribute best to a cause that we care about?
Wow, that is an interesting question. I mean, I think, you know, people who... like,
there are relatively few avenues right now. I mean, they could contribute money to the few centers that are working on this. I mean, that's something that anybody could do with
five bucks. And if millions of people do that with five bucks, then the budgets that support
people who think about this full time would rapidly, rapidly expand. I think that public pressure on governments everywhere to take this seriously
would be a pretty highly leveraged outcome. I mean, let's imagine that the United States
and Australia continue to hit the snooze button on this, whatever.
But for whatever reason, Denmark gets with the program.
The Danish government is very small compared to the US government, but it's very, very
large compared to most organizations in the world.
A mid-sized country with lots of resources, lots and lots of brilliant people, you know, very progressive
social policies. If the Danish government said, you know what, we're going to try to
lead the world in taking this seriously. Seriously. And then they started directing government
sized budgets, as opposed to academic sized budgets at the problem. You know, any government
of mid-size or greater that decided to make this a priority and fund a lot of basic science and public education and so forth, that would really be a major domino to fall.
So I think people, particularly people in countries like Denmark or Estonia, you know, smaller countries that maybe have a higher average IQ than
my country, I don't know; small, well-organized countries that have a history
of being a little bit ahead of the curve when it comes to social policies and other
things. Any government that takes this seriously and uses the might of government and the megaphone
of government would have a significant impact on public awareness.
That alone would be a hell of a start.
So you're talking lobbying local officials, bringing it up more, I suppose.
The problem is, again, the chasm: when you talk to most people
about existential risk, it's just not on their radar, in the same way that a hundred years ago,
climate change wouldn't have been a word.
So I suppose that's something,
that's definitely something I'll take away
from this conversation.
I think I'd looked at existential risk
and the furthering of existential risk
as a purely academic or intellectual conversation,
which then needed to essentially just be communicated to people through shows
like this, or YouTube videos that are kind of edutainment, or whatever it's called.
But the cultural side of that, you know, moving people through pure entertainment, is something
that I totally didn't consider, but it's probably even more powerful and more accessible.
Yeah, it's part of the package of things you want to do.
What was the name of the guy who created the podcast
we were just talking about, the End of the World podcast?
Josh Clark.
OK, so let's talk about really local government now.
So let's say somebody is influential
with their local school board.
And they say, we're going to do something crazy here.
We are going to, in our school
system, have a required ten-week course for all seniors, you know, all people who are about
to graduate. It'll be a fun class, but it's a ten-week class they've all got to take, and they're going
to listen to Josh's podcast. And there's going to be a smart, you know, social studies
teacher who leads it. They're going to have great discussions and debates. And, you know, we're going to just do that in our school system because that's what we can influence.
And then maybe some state or provincial government says, hey, this is cool.
It starts spreading.
Like again, it's like compound spreading.
And so nobody was aware of existential risk 15 years ago, really.
Now I think low single-digit millions of people
are aware and interested in the subject.
And at some point, maybe one of those people
is the chairman of a school board and implements this.
And so you suddenly have a local school system
studying this, and maybe adjacent towns say, that's neat.
And this is the compounding of it:
people who took that high school
class, nine years later one of them's in a position of influence. That's how the environmental
movement grew over the last 50 years: people became aware through this or that, and they made more
people aware. So I think that, you know, anybody who can implement some kind of policy,
whether it's local government, school board, national government, to take this more seriously,
that's a big step. Oh, here comes Ringo. Oh, we got a dog. He's popped up; at least he's popped up, hasn't gotten into the
camera yet, but he's in imminent danger of just lifting up. There's a head on the...
He looks very pleased.
He is right now, but I think he's telling me that he wants
to go out, and I'm going to obey that signal because, you know, that means something with a dog.
He doesn't want to make a mess. Ringo is new to potty training; I've had him
for a month. He's a little rescue dog. He's 12 months old, and he came to me un-potty-trained,
and he's learned very quickly. But I don't know, I think he just wants a cuddle.
Okay, maybe he doesn't have to go out. Oh, he's barking at somebody on an adjacent deck. Anyway, well, we got Ringo now.
Now, you're not going out if you're going to bark, dude. Anyway, I'll deal with Ringo. I think we're
good. Cool. Rob Reid, ladies and gentlemen. Rob, thank you so much for coming on today. This has
been everything that I wanted to try and get out of it, but there's still yet more to talk about, so round two may have to happen in the future.
I'd be very open to that. It's great to be here. It's been so much fun talking. Yeah.
Talk to me about what people can expect from you coming up. You're working on books,
and if you've got any other super-secret projects that you can hint at. So I'm working... there's a pretty serious effort
that's pretty far along, but about to hit
one of the really significant, you know,
go/no-go decisions, to turn one of my novels
into a movie, which I've been very engaged in.
And if that happens, there's some prospect
that I would be involved in, you know, the further
creation, because it would be a feature-length animation, which is something that happens
over a period of years rather than 10 weeks of production.
And so that's a cool thing that I've been working on.
My podcast is called the After On Podcast.
It's not unlike yours.
Yours is more similar to my podcast than anything I can think of, really, and I interview,
You know world-class thinkers and scientists and occasionally entrepreneurs
about their life's work, and try to, you know, create an interview that makes all that really accessible.
I had a long hiatus from that because I was doing a bunch of other projects including this thing with Sam
But that's now up and running
So I've got new episodes coming out with that.
I've been very busy recently writing and recording a lot of music
with a band called The God Children, hello, Ringo.
Ringo is not our drummer. Ringo is a drummer in another band, but not this Ringo.
With a band called The God Children, we are invisible online as of yet,
but we've got a bunch of music that we're about to start releasing,
which I'm pretty excited about.
And the thing that's been taking up a great deal
of time recently is I started an investing partnership
with a guy named Chris Anderson
who runs for all intents and purposes kind of owns
and he bought the Ted Conference guy.
And he bought the Ted Conference,
which was a profit making concern
and put it
into a nonprofit entity. So nobody owns it now, but he's the one who bought it and he's
Sky runs it. So he and I have created an investing partnership we call resilience reserve.
And we're investing in startups that in some meaningful way have the potential to make
the world more resilient. And, you know, so I'm investing with that fund.
So there's a lot going on, lots of balls in the air.
Dude, that's so cool.
I knew about the music thing,
but I didn't know about you and Chris
creating a fund.
And yeah, there's so much,
it's so impressive to see somebody who's obviously
just worked and worked and worked at different things
and managed to get themselves to a place where they have expertise and contacts and some leverage and some capital
and then when you pull all of that together you can do some pretty cool shit.
Yeah, and you're doing some pretty cool shit.
I'm amazed at the quality of your guests, the quality of the interviews, and just the frequency
with which you're coming out with episodes is unbelievable. So yeah, I mean, I'm also
in quite a bit of awe, as a podcaster who could never release episodes with the frequency that
you do, even if I were working a thousand hours a week. I don't know how you're doing it,
so you keep up the good work too.
Well, I appreciate it. Rob Reid, ladies and gentlemen. Rob, catch you next time.
All right. See ya.