Stuff You Should Know - How Existential Risks Work
Episode Date: April 28, 2020
An existential risk is a special kind of threat, different from other types of risks in that if one ever befalls us, it would spell the permanent end of humanity. It just so happens we seem to be headed for just such a catastrophe.
Transcript
Hey everybody, Josh here.
We wanted to include a note before this episode,
which is about existential risks,
threats that are big enough
to actually wipe humanity out of existence.
Well, we recorded this episode just before the pandemic,
which explains the weird lack of mention of COVID
when we're talking about viruses.
And when this pandemic came along,
we thought perhaps a wait and see approach
might be best before just willy nilly
releasing an episode about the end of the world.
So we decided to release this now,
still in the thick of things,
not just because the world hasn't ended,
but because one of the few good things
that's come out of this terrible time
is the way that we've all kind of come together
and given a lot of thought
about how we can look out for each other.
And that's exactly what thinking about existential risks
is all about.
So we thought there would be no better time
than right now to talk about them.
We hope this explains things
and that you realize we're not releasing this glibly
in any way.
Instead, we hope that it makes you reflective
about what it means to be human
and why humanity is worth fighting for.
Welcome to Stuff You Should Know,
a production of iHeartRadio's How Stuff Works.
Hey, and welcome to the podcast.
I'm Josh Clark, and there's Charles W. Chuck Bryant
over there.
And there is guest producer Dave C.
Sitting in yet again.
At least the second time, I believe.
He's already picked up that he knows not to speak.
He's nodding.
The custom established by Jerry.
But yeah, he did nod, didn't he?
So yeah, I guess it is twice that Dave's been sitting in.
What if he just heard, two times from the other side
of the room, you're like, didn't have the heart
to tell him not to do that?
Right.
I think he would catch the drift
from like the record scratching.
Right.
It just like materialized out of nowhere.
Right.
Not many people know that we have someone
on permanent standby by a record player.
It's just waiting.
Just in case we do something like that.
And that person is Tommy Chong.
Hi, Tommy.
Do I smell bong water?
Yeah.
Man, I bet he reeks of it.
Yeah, probably.
So I mean hats off to him for sticking to his bit, you know?
Cheech was like, hey, hey, I want a good long spot
on Nash Bridges.
So I'll say whatever you want me to about pot.
Like I'm just into gummies now.
Right.
Tommy Chong like tripled down.
Yeah.
And he sold the bongs, didn't he?
That, and P-test beaters.
P-test beaters.
Okay.
I can't suddenly think of how to say something like that.
It's a way to defeat a urine test.
Oh well.
Listen, you fancy pants.
I would say, I don't know.
I know that the street guys call it P-test beaters, but.
P-test beaters is a band name about as good as say like.
Diarrhea Planet.
Sure.
Actually, I think Diarrhea Planet's got it beat,
but still.
All right.
So Chuck, we're talking today about a topic
that is near and dear to my heart, existential risks.
That's right.
Which I don't know if you've gathered that or not,
but I really am into this topic.
Yeah.
All around.
As a matter of fact, I did a 10 part series on it
called The End of the World with Josh Clark,
available everywhere you get podcasts right now.
I managed to smash that down.
That's kind of what this is.
It's a condensed version.
And forever, like, I wanted to just SYSK-ify
the topic of existential risks, like do it with you.
I wanted to do it with you for years.
This was going to be a live show at one point.
It was.
I think even before that, I was like,
hey, you want to do an episode on this?
You're like, this is pretty dark stuff.
We're doing it now.
No, the only time I said that was when
you actually sent me the document for the live show.
And I went, I don't know about a live version of this.
So I guess, I guess that must have been before
The End of the World then, huh?
Oh yeah, this is like eight years ago.
Well, I'm glad you turned down the live show
because it may have lived and died there.
Yeah.
So one of the other-
You might not have made all those
End of the World big bucks.
Right, exactly, man.
I'm rolling in it.
My mattress is stuffed with them.
So, and you know,
bucks aren't always just the only way of qualifying
or quantifying the success of something, you know?
Yeah, there's also Academy Awards.
Right, Oscars.
And that's it.
Peabodies.
Big money or public award ceremonies.
Okay, granted.
The other reason I wanted to do this episode
was because one of the people who was a participant,
an interviewee in The End of the World with Josh Clark,
a guy named Dr. Toby Ord,
recently published a book called The Precipice.
And it is like a really in-depth look
at existential risks and the ones we face
and what's coming down the pike
and what we can do about them and why.
Who's hot, who's not.
Right, exactly.
Cheers and jeers.
Who wore it best.
Right, exactly.
And it's a really good book.
And it's written just for the average person to pick up
and be like, I hadn't heard about this.
And then reach the end of it and say,
I'm terrified, but I'm also hopeful.
And another reason I wanted to do this episode,
to let everybody know about Dr. Ord's book, or Toby's book,
it's impossible to call him Dr. Ord,
he's just a really likable guy,
is because he actually turned the tone
of The End of the World around.
Almost single-handedly.
It was really grim before I interviewed him.
Early samples.
Really, and also you remember,
I started listening to The Cure a lot.
Sure.
Just got real dark there for a little while,
which is funny that The Cure is my conception
of really dark.
Anyway.
There's death metal guys out there laughing.
Right, so talking to him,
he just kind of just steered the ship a little bit.
And by the end of it, because of his influence,
the End of the World actually is a pretty hopeful series.
So my hat's off to the guy for doing that,
but also for writing this book, The Precipice.
Hats off, sir.
So we should probably kind of describe
what existential risks are.
I know that you know in this document,
it's described many, many times.
But the reason it's described many, many times
is because there's like a lot of nuance to it.
And the reason there's a lot of nuance to it
is because we kind of tend to walk around thinking
that we understand existential risks
based on our experience with previous risks.
Right.
But the problem with existential risks is
they're actually new to us,
and they're not like other risks
because they're just so big.
And if something happens,
one of these existential catastrophes befalls us,
that's it.
There's no second chance, there's no do-over,
and we're not used to risks like that.
That's right.
Nobody is because we are all people.
Right.
The thought of all of human beings being gone,
or at least not being able to live
as regular humans live and enjoy life,
like and not live as matrix batteries.
Sure.
Because you know, technically,
the matrix, those are people.
Yeah.
But that's not a way to live.
The people in the pods?
Yeah, that's what I'm saying.
I wouldn't want to live that way.
But that is another version of existential risks.
It's not necessarily that everyone's dead,
but you could become just a matrix battery.
Yeah.
We don't flourish or move forward as a people.
Right, exactly.
So, but with existential risks in general,
like the general idea of them is that,
like if you are walking along
and you suddenly get hit by a car,
like you no longer exist,
but the rest of humanity continues on existing.
Correct.
With existential risks,
it's like the car that comes along
and hits not a human, but all humans.
So it's a risk to humanity.
And that's just kind of different
because all of the other risks
that we've ever run across either give us
the luxury of time or proximity.
Meaning that we have enough time
to adapt our behavior to it,
to survive it and continue on as a species.
Right.
Or there's not enough of us in one place
to be affected by this risk
that took out, say, one person or a billion people.
Right. Like if all of Europe went away,
that is not an x-risk.
No.
And so people might say-
That'd be sad.
It would be really sad.
And I mean, up to, you know,
say 99% of the people alive on Earth,
if they all died somehow,
it would still possibly not be an existential risk
because that 1% living
could conceivably rebuild civilization.
That's right.
We're talking about giving the world back to mother nature
and just seeing what happens.
Do you remember that series?
I think it was a book first, The World Without Us.
No.
Oh, it's so cool.
I think I know that.
It was a big deal when it came out.
And then they made like a,
maybe a science channel or a Nat Geo series about it
where this guy describes like
how our infrastructure will start to crumble.
Like if humans just vanished tomorrow,
how the Earth would reclaim,
nature would reclaim everything we've done
and undo, you know, after a month,
after a year, after 10,000 years.
Yeah, I've heard of that.
It's really cool stuff.
Yeah, Bonnie 'Prince' Billy, my idol,
has a song called It's Far From Over
and that's sort of a Bonnie 'Prince' Billy look
at the fact that, hey, even if all humans leave,
it's not over.
Like new animals are gonna, new creatures are gonna be born.
Right.
The Earth continues.
Yeah.
And he also has a line though about like,
but you better teach your kids to swim.
That's a great line.
Yeah, it's good stuff.
Did I ever tell you I saw that guy do karaoke
with his wife once?
Oh, really?
You know our friend Toby?
Oh, sure.
At his wedding.
They're friends.
Yeah.
I would have not been able to be at that wedding.
Cause you would have just been such a fanboy.
I don't know what I would do.
I would, it would have ruined my time.
Really?
It really would.
Cause I would second guess everything I did about,
I mean, I even talked to the guy once backstage
and that ruined my day.
It really did cause you spent the rest of the time
just thinking about how you should have said stuff.
No, it was actually fine.
He was a very, very, very nice guy.
And we talked about Athens and stuff.
But that's who I just went to see in DC, Philly and New York.
Nice.
Went on a little, followed him around on tour
for a few days.
Did he sing that song about the world going on
or life going on?
He did.
So let's just cover a couple of things
that people might think are existential risks
that actually aren't, okay?
Yeah. I mean, I think a lot of people might think,
sure, some global pandemic that could wipe out humanity.
There could very well be a global pandemic
that could kill a lot of people,
but it's probably not going to kill every living human.
It would be a catastrophe, but not an x-risk.
Yeah. I mean, because humans have antibodies
that we develop, and so people who survive that flu
have antibodies that they pass on to the next generation.
And so that disease kind of dies out
before it kills everybody off.
And the preppers at the very least.
They'll be fine.
Would be safe.
What about calamities like a mudslide
or something like that?
You can't mudslide the earth.
You can't. And that's a really good point.
This is what I figured out in researching this.
After doing the end of the world,
after talking to all these people,
it took researching this article for me to figure this out.
That it's time and proximity
that are the two things we use to survive,
and that if you take away time and proximity,
we're in trouble.
And so mudslides are a really good example of proximity
where a mudslide can come down a mountain
and take out an entire village of people.
It has.
Yes. And it's really sad and really scary to think of.
I mean, we saw it with our own eyes.
We stood in a field that was now what,
like eight or nine feet higher than it used to be.
Yeah. And you could see the track.
This was in Guatemala when we went down
to visit our friends at Coed.
There was like, the trees were much sparser.
You could see the track of the mud.
They were like, the people are still down there.
This is, it was a horrible tragedy.
And it happened in a matter of seconds.
It just wiped out a village.
But we all don't live under one mountain.
And so if a bunch of people are taken out,
the rest of us still go on.
So there's the time and there's a proximity.
Yeah. I think a lot of people in the 80s might have thought
because of movies like War Games
and movies like The Day After
that global thermonuclear war would be an x-risk.
And as bad as that would be,
it wouldn't kill every single human being.
No, no, they don't think so.
They started out thinking this, like as a matter of fact,
nuclear war was the first,
one of the first things that we identified
as a possible existential risk.
Sure.
And if you kind of talk about the history of the field,
for the first like several decades,
that was like the focus,
the entire focus of existential risks.
Like Bertrand Russell and Einstein wrote a manifesto
about how we really need to be careful with these nukes
because we're gonna wipe ourselves out.
Carl Sagan, you remember our amazing nuclear winter episode?
Yeah, yeah.
That was from, you know, studying existential risks.
And then in the 90s, a guy named John Leslie came along
and said, hey, there's way more than just nuclear war
that we're gonna wipe ourselves out with.
And some of it has taken the form of this technology
that's coming down the pike.
And that was taken up by one of my personal heroes,
a guy named Nick Bostrom.
Yeah, he's a philosopher out of Oxford.
And he is one of the founders of this field.
And he's the one that said, or one of the ones that said,
you know, there's a lot of potential existential risks
and nuclear war is peanuts.
Bring it on.
Right.
But I don't know if Bostrom specifically believes,
though he probably does, that we would be able to recover
from a nuclear war.
That's the idea, that you rebuild as a society
after whatever zombie apocalypse or nuclear war happens.
Yeah, and again, say it killed off 99% of people.
To us, that would seem like an unimaginable tragedy
because we lived through it.
But if you zoom back out and look at the lifespan
of humanity, not just the humans alive today,
but all of humanity, like it would be a very horrible period
in human history, but one we could rebuild from over,
say, 10,000 years to get back to the point where we were
before the nuclear war.
And so ultimately, it's probably not an existential risk.
Yeah, this is a tough topic for people
because I think people have a hard time
with that long of a view of things.
And then whenever you hear the Big Mac comparisons
of how long people have been around
and how old the earth is and that stuff,
it kind of hits home, but it's tough for people
that live 80 years to think about,
oh, well, in 10,000 years we'll be fine.
And even like, when I was researching this,
she brought this up a lot,
like where do we stop caring about people
that are our descendants?
We care about our children, our grandchildren.
That's about, I just care about my daughter.
That's about it.
That's where it ends.
To heck with the grandchildren.
I mean, you don't have grandchildren yet.
Yeah, but wait till they come along.
Everything I've ever heard is that being a grandparent
is even better than being a parent.
And I know some grandparents.
Okay, let's say I'm not dead
before my daughter eventually has a kid if she wants to.
I would care about that grandchild.
But after that, forget it, like my kids, kids, kids,
who cares?
Granted, that's about where it would end up.
I care about people and humanity as a whole.
I think that's what you gotta do.
You can't just think about your eventual descendants,
you just gotta think about people.
Right, yeah, you help people you don't know now.
It's kinda requisite to start caring about existential risks,
to start thinking about people not just,
well, let's talk about it.
So, Toby Ord made a really good point
in his book, The Precipice, right?
That you care about people on the other side of the world
that you've never met.
Yeah, that's what I'm saying.
Like that happens every day.
Right, so what's the difference between people
who live on the other side of the world
that you will never meet?
And people who live in a different time
that you will never meet?
Why would you care any less about these people,
human beings, that you'll never meet,
whether they live on the other side of the world
at the same time, or in the same place you do,
but at a different time?
I think a few, I mean, I'm not speaking for me,
but I think if I were to step inside the brain
of someone who thinks that, they would think like,
A, it's a little bit of a self,
it's a bit of an ego thing,
cause you know like, oh, I'm helping someone else.
So that does something for you in the moment.
Right.
Like someone right now on the other side of the world
that maybe I've sponsored is doing better because of me.
Gotcha, you got a little kick out of it
from helping Sally Struthers or something.
Yeah, that does something, yeah, it helps Sally Struthers.
Helps put food on her plate.
Is she still with us?
I think so.
I think so too, but I feel really bad if...
I certainly haven't heard any news of her death.
People would talk about that.
And the record scratch would have just happened.
Righty.
So I think that is something too.
And I think there are also a certain amount
of people that just believe you're worm dirt.
There is no benefit to the afterlife
as far as good deeds and things.
So like once you're gone, it's just who cares?
Cause it doesn't matter.
There's no consciousness.
Yeah, well that's, I mean, if you were at all like piqued
by that stuff, I would say definitely read The Precipice
because like one of the best things that Toby does,
and he does a lot of stuff really well,
is describe why it matters.
Cause I mean, he's a philosopher after all.
So he says like, this is why it matters.
Like not only does it matter
because you're keeping things going
for the future generation,
you're also continuing on what the previous generations built.
Like who are you to just be like,
oh, we're just going to drop the ball.
No, I agree.
That's a very self-centered way to look at things.
Totally, but I think you're right.
I think there are a lot of people who look at it that way.
So you want to take a break?
Yeah, we can take a break now
and maybe we can dive into Mr. Bostrom's,
or doctor, I imagine.
Sure.
Bostrom's five different types.
Are there five?
No, there's just a few.
Okay, a few different types of existential risks.
We can make up a couple and add 'em.
No, let's not.
All right, Chuck, so one of the things you said earlier
is that existential risks, the way we think of them
typically is that something happens
and humanity is wiped out and we all die
and there's no more humans forever and ever.
That's an existential risk.
That's one kind really, and that's the easiest one
to grasp, which is extinction.
Yeah, and that kind of speaks for itself.
Just like dinosaurs are no longer here, that would be us.
Yes, no more humans.
So we aren't as cool.
And I think that's one of those other things too.
It's kind of like how people walk around like,
yeah, I know I'm gonna die someday,
but if you sat them down and you were like,
do you really understand that you're going to die someday?
That they might start to panic a little bit,
and they realize, I haven't actually confronted that.
I just know that I'm gonna die.
Yeah, or if you knew the date.
That'd be weird.
It'd be like a Justin Timberlake movie.
Would that make things better or worse for humanity?
I would say better probably, right?
I think it'd be a mixed bag.
I think some people would be able to do nothing
but focus on that and think about
all the time they're wasting, and other people would be like,
I'm gonna make the absolute most out of this.
Well, I guess there are a couple of ways you can go
and it probably depends on when your date is.
If you found out your date was a ripe old age,
you might be like, well, I'm just gonna try
and lead the best life I can.
That's great.
You find out you live fast and die hard at 27.
Yeah, die harder.
You might die harder.
You might just be like, screw it.
Or you might really ramp up your good works.
Yeah.
It depends what kind of person you are probably.
Yeah, and more and more I'm realizing is
it depends on how you were raised too, you know?
Like we definitely are responsible
for carrying on ourselves as adults.
Like you can't just say, well, I wasn't raised very well
or I was raised this way, so whatever.
Like you have a responsibility for yourself
and who you are as an adult.
But I really feel like the way that you're raised too
really sets the stage and puts you on a path
that can be difficult to get off of
because it's so hard to see that.
For sure.
You know?
Yeah.
Because that's just normal to you
because that's what your family was.
Yeah, that's a good point.
So anyway, extinction is just one of the ways,
one of the types of existential risks that we face.
A bad one.
Yeah.
A permanent stagnation is another one.
And that's the one we kind of mentioned.
Danced around a little bit.
And that's like some people are around,
not every human died and whatever happened.
But whatever is left is not enough
to either repopulate the world
or to progress humanity in any meaningful way.
Or rebuild civilization back to where it was.
And it would be that way permanently,
which is kind of in itself tough to imagine,
to just like the genuine extinction
of humanity is tough to imagine.
The idea of, well, there's still plenty of humans
running around, how are we never going
to get back to that place?
And there's-
That may be the most depressing one.
I think the next one's the most depressing,
but that's pretty depressing.
But one example that's been given for that is like,
let's say we say, all right, this climate change,
we need to do something about that.
So we undertake a geo-engineering project
that isn't fully thought out.
And we end up causing like a runaway greenhouse gas
effect or something.
Like make it worse.
And there's just nothing we can do to reverse course.
And so we ultimately wreck the earth.
That would be a good example of permanent stagnation.
That's right.
This next one, so yes,
agreed permanent stagnation is pretty bad.
I wouldn't want to live under that.
But at least you can run around and like do what you want.
I think the total lack of personal liberty
in the flawed realization one is what gets me.
Yeah, they all get me.
Sure.
Flawed realization is the next one.
And that's sort of like the matrix example,
which is that there's some technology that we invented
that eventually makes us their little batteries in pods.
And we're like, oops.
Right.
Basically, or there's just some,
someone is in charge, whether it's a group
or some individual or something like that.
It's basically a permanent dictatorship
that we will never be able to get out from under.
Because this technology we've developed.
Like a global one.
Yeah, is being used against us.
And it's so good at keeping tabs on everybody
and squashing dissent before it grows.
There's just nothing anybody could ever do to overthrow it.
And so it's a permanent dictatorship
where we're not doing anything productive.
We're not advancing.
Say it's like a religious dictatorship
or something like that.
All anybody does is go to church
and support the church or whatever.
And that's that.
And so what Dr. Bostrom figured out is that
there are fates as bad as death.
Sure.
There are possible outcomes for the human race
that aren't extinction
that still leave people alive.
Even like in kind of a futuristic kind of thing
like the flawed realization one goes.
But that you wouldn't wanna live the lives
that those humans live.
No.
And so humanity has lost its chance
of ever achieving its true potential.
That's right.
And that those qualify as existential risks as well.
That's right.
Don't wanna live in the matrix.
No.
At all.
Or in a post-apocalyptic altered earth.
Yeah.
Okay.
The matrix.
Basically like Thundarr the Barbarian.
That's what I imagine with the permanent stagnation.
So there are a couple of big categories
for existential risks.
And they are either nature made or man made.
The nature ones we've,
you know, there's always been the threat
that a big enough object hitting planet earth could do it.
Right.
Like that's always been around.
It's not like that's some sort of new realization.
But it's just a pretty rare,
it's so rare that it's not likely.
Right.
All of the natural ones are pretty rare
compared to the human made ones.
Yeah.
Like I don't think science wakes up every day
and worries about a comet or an asteroid or a meteor.
No.
And it's definitely worth saying that the better we get
at scanning the heavens, the safer we are eventually
when we can do something about it.
If we see, say, a comet heading our way.
What do we do?
Just hit the gas and move the earth over a bit.
Just send Superman out there.
Right.
And there's nothing we can do about any of these anyway.
So maybe that's also why science doesn't wake up worrying.
Right. Yeah.
So you've got near earth objects.
You've got celestial stuff,
like collapsing stars that produce gamma ray bursts.
And then even back here on earth,
like a super volcanic eruption
could conceivably put out enough soot
that it blocks photosynthesis.
We did a show on that.
Yeah, sends us into essentially a nuclear winter too.
That would be bad.
But like you're saying, these are very rare
and there's not a lot we can do about them now.
Instead the focus of people who think
about existential risks.
And there are like a pretty decent handful
of people who are dedicated to this now.
They say that the anthropogenic or the human made ones,
these are the ones we really need to mitigate
because they're human made, so they're under our control.
And that means we can do something about them
more than say...
A comet. Yeah.
Yeah, but it's a bit of a double-edged sword
because you think, oh, well, since we could stop this stuff,
that's really comforting to know, but we're not.
Right.
Like we were headed down a bad path
in some of these areas for sure.
So because we are creating these risks
and not thinking about these things in a lot of cases,
they're actually worse,
even though we could possibly control them.
Right.
Which definitely makes it more ironic too.
Yeah.
Right.
So there are a few that have been identified
and there's probably more that we haven't figured out yet
or haven't been invented yet,
but one of the big ones just...
I think almost across the board,
the one that existential risk analysts
worry about the most is AI, artificial intelligence.
Yeah, and this is the most frustrating one
because it seems like it would be the easiest one
to not stop in its tracks,
but to divert along a safer path.
Right.
The problem with that is that people who have dedicated
themselves to figuring out how to make that safer path
are coming back and saying,
this is way harder than we thought it was gonna be.
To make the safer path?
Yeah.
Oh, really?
Yeah.
Right.
While people recognize that there needs to be a safe path
for AI to follow,
this other path that it's on now,
which is known as the unsafe path,
that's the one that's making people money.
So everybody's just going down the unsafe path
while these other people are trying
to figure out the safer one.
Because the computer in War Games would say,
maybe the best option is to not play the game.
Sure.
And that's, if there is no safe option,
then maybe AI should not happen.
Or we need to,
and this is almost heresy to say,
we need to put the brakes on AI development
so that we can figure out the safer way
and then move forward.
But we should probably explain what we're talking about
with safe in the first place, right?
Yeah, I mean, we're talking about creating
a super intelligent AI
that basically is so smart that it starts to self-learn
and is beyond our control.
And it's not thinking, oh, wait a minute,
one of the things I'm programmed to do
is make sure we take care of humans.
Right.
And it doesn't necessarily mean that some AI
is gonna become super intelligent
and say, I wanna destroy all humans.
Right.
That's actually probably not going to be the case.
It will be that this super intelligent AI
is carrying out whatever it was programmed to do.
It would disregard humans.
Exactly.
And so if our goal of staying alive and thriving comes
into conflict with whatever this AI's goal is,
whatever it was designed to do, we would lose.
Yeah, because it's smarter than us.
By definition, it's smarter than us.
It's out of our control.
And probably one of the first things it would do
when it became super intelligent
is figure out how to prevent us from turning it off.
Right, well, yeah, that's the all-important fail-safe
that the AI could just disable.
Exactly, right?
You can just like sneak up behind it
with the screwdriver or something like that.
And then you get, and you get shot.
Right.
The robot's like, see?
Waw, waw.
In a robot voice.
That's called designing friendly or aligned AI.
And some of the smartest people
in the field of AI research have stopped figuring out
how to build AI and have started to figure out
how to build friendly AI.
Yeah, aligned as in aligned with our goals
and needs and desires.
Yeah.
Nick Bostrom actually has a really great thought experiment
about this called the paperclip problem.
Yeah.
And it's, you can hear it on the end of the world.
Oh, nice.
I like that.
Driving listeners over.
Thank you, thank you.
The next one is Nanotech.
And Nanotech is, I mean, it's something
that's very much within the realm of possibility
as is AI actually.
That's not super far-fetched either.
Well, even super intelligent AI.
Yeah. Yeah.
It's definitely possible.
Yeah.
And that's the same with nanotechnology.
We're talking about, and I've seen this everywhere
from little tiny robots that will just be dispersed
and clean your house to like the atomic level
where they can like reprogram our body from the inside.
So little tiny robots that can clean your car.
Yeah, exactly.
Those are the three.
Those are three things.
So.
Two of them are cool.
One of the things about these nanobots
is that because they're so small,
they'll be able to manipulate matter
on like the atomic level,
which is like the usefulness of that is mind boggling.
Yeah, just send them in.
And they're gonna be networked.
So we'll be able to program to do whatever
and control them, right?
The problem is if they're networked
and they're under our control,
if they fall under the control of somebody else
or say a super intelligent AI,
then we would have a problem.
Yes.
Because they can rearrange matter on the atomic level.
So who knows what they would start rearranging
that we wouldn't want them to rearrange.
Yeah, it's like that Gene Simmons sci-fi movie in the 80s.
I wanna say it was Looker.
No, I always confuse those two as well.
Oh, was Looker the other one?
This was Runaway.
Runaway.
I think one inevitably followed the other on HBO.
They had to have been a double feature.
Because they could not be more linked in my mind.
Same here.
You know?
I remember Albert Finney was in one.
I think he was in Looker.
He was.
And Gene Simmons was in Runaway
as the bad guy, of course.
Oh yeah.
And he did a great job.
And Tom Selleck was the good guy.
Yeah.
Tom Selleck.
Yeah.
But the idea in that movie was not nanobots.
They were, but they were little insect-like robots.
Right.
But they just weren't nano-sized.
Right.
And so the reason that these could be so dangerous
is because not their size,
but there's just so many of them.
Yeah.
And while they're not big
and they can't like punch you in the face
or stick you in the neck with a needle
or something like the Runaway robots,
they can do all sorts of stuff to you molecularly
and you would not want that to happen.
Yeah.
This is pretty bad.
There's an engineer out of MIT named Eric Drexler.
He is a big name in molecular nanotech.
If he's listening right now,
right up to when you said his name,
he was just sitting there saying,
please don't mention me.
Really?
Oh no.
He just tried to back off from his gray goo hypothesis.
So yeah, this is the idea
that there are so many of these nanobots
that they can harvest their own energy,
they can self-replicate like little bunny rabbits.
And that there would be a point
where there was Runaway growth such
that the entire world would look like gray goo
because it's covered with nanobots.
Yeah.
And since they can harvest energy from the environment,
they would eat the world.
They'd wreck the world basically.
Yeah.
That's scary.
You're right.
So he took so much flak for saying this even
because apparently it scared people enough back in the 80s
that nanotechnology was like kind of frozen for a little bit.
And so everybody went after Drexler.
And so he's backed off from it saying like,
this would be a design flaw.
This wouldn't just naturally happen with nanobots.
You'd have to design them
to harvest energy themselves and to self-replicate.
So just don't do that.
And so the thing is, is like, yes,
he took a lot of flak for it,
but he also like, it was a contribution to the world.
He pointed out two big flaws that could happen
that now are just like a sci-fi trope.
Right.
But when he thought about them,
they weren't self-evident or obvious.
Yeah.
I mean, I feel bad we even said his name.
But it's worth saying.
Clyde Drexler.
Right.
Clyde the Glide.
Clyde the Glide, that's right.
Biotechnology is another pretty scary field.
There are great people doing great research
with infectious disease.
Part of that, though, involves developing new bacteria,
new viruses, new strains that are even worse
than the pre-existing ones as part of the research.
And that can be a little scary, too.
Because, I mean, it's not just stuff of movies.
There are accidents that happen, protocols that aren't followed.
And this stuff could get out of a lab.
Yeah, and it's not one of those like,
could-get-out-of-a-lab kind of things.
It has gotten out of a lab.
It happens, I don't want to say routinely,
but it's happened so many times
that when you look at the track record
of the biotech industry, it's just like,
how are we not all dead right now?
It's crazy.
It's kind of like broken arrows,
lost nuclear warheads.
Exactly.
But with little tiny horrible viruses.
And then when you factor in that terrible track record
with them actually altering viruses and bacteria
to make them more deadly, to do those two things,
to reduce the time that we have to get over them.
So they make them more deadly.
And then to reduce proximity to make them more easily spread,
more contagious, so they spread more quickly
and kill more quickly as well.
Then you have potentially an existential risk on your hand.
For sure.
We've talked in here a lot about the Large Hadron Collider.
We're talking about physics experiments as the,
I guess this is the last example that we're gonna talk about.
Yeah, and I should point out that this is not,
physics experiments do not show up anywhere
in Toby Ord's book, The Precipice.
Oh, okay.
This one is kind of my pet, my pet theory.
Yeah, I mean, there's plenty of people who agree
that this is a possibility,
but a lot of existential risks theorists are like,
I don't know.
Well, you'll explain it better than me,
but the idea is that we're doing all these experiments
like the Large Hadron Collider
to try and figure stuff out we don't understand.
Right.
And which is great,
but we don't exactly know where that all could lead.
Yeah, because we don't understand it enough.
We can't say, this is totally safe.
Right.
And so if you read some physics papers,
and this isn't like Rupert Sheldrake morphic fields
kind of stuff, right?
It's actual physicists who have said,
well, actually using this version of string theory,
it's possible that this could be created
in a Large Hadron Collider,
or more likely a more powerful collider
that's going to be built in the next 50 years
or something like that.
The super large Hadron Collider.
The dooper.
Yeah.
I think it's the nickname for it.
Oh man, I hope that doesn't end up being the nickname.
The dooper.
It's pretty great.
Right.
Yeah, I guess so, but it also is a little kind of, you know.
I don't know.
I like it.
All right.
So they're saying that a few things could be created
accidentally within one of these colliders
when they smash the particles together.
Microscopic black hole.
My favorite, the low energy vacuum bubble.
No good.
Which is a little tiny version of our universe
that's more stable, like a more stable version.
That's kind of cute actually.
A lower energy version.
And so if it were allowed to grow,
it would grow at the speed of light.
It would overwhelm our universe
and be the new version of the universe.
Yeah, that's like when you buy the baby alligator
or the baby boa constrictor or Python,
you think it's so cute.
Right, and then it grows up and eats the universe.
It's screwed.
The problem is this new version of the universe
is set up in a way that's different than our version.
And so all the matter, including us,
that's arranged just so for this version of the universe
would be disintegrated in this new version.
So it's like the snap.
But can you imagine if all of a sudden
just a new universe grew out of the Large Hadron Collider
accidentally and at the speed of light
just ruined this universe forever?
If that was it, we just accidentally did this
with a physics experiment.
I find that endlessly fascinating and also hilarious.
Just the idea.
I think the world will end ironically somehow.
It's entirely possible.
So maybe before we take a break,
let's talk a little bit about climate change
because a lot of people might think climate change
is an existential threat.
It's terrible and we need to do all we can,
but even the worst case models probably don't mean
an end to humanity as a whole.
Like it means we're living much further inland
than we thought we ever would
and we may be here in much tighter quarters
than we ever thought we might be
and a lot of people might be gone,
but it's probably not gonna wipe out every human being.
Yeah, it'll probably end up being akin to that same line
of thinking and the same path of a catastrophic nuclear war,
which I guess you could just say nuclear war.
Sure.
Catastrophic's kind of built into the idea,
but we would be able to adapt and rebuild.
It's possible that our worst case scenarios
are actually better than what will actually happen.
So just like with a total nuclear war,
it's possible that it could be bad enough
that it could be an existential risk.
It's possible climate change could end up being bad enough
that it's an existential risk,
but from our current understanding,
they're probably not existential risks.
Right.
All right.
Well, that's a hopeful place to leave for another break
and we're gonna come back and finish up
with why all of this is important.
Should be pretty obvious, but we'll summarize it.
Okay, Chuck, one thing about existential risks
that people like to say is, well, let's just not,
let's just not do anything.
And it turns out from people like Nick Bostrom
and Toby Ord and other people around the world
who are thinking about this kind of stuff.
If we don't do anything,
we probably are going to accidentally wipe ourselves out.
Like doing nothing is not a safe option.
Yeah, but Bostrom is one who has developed
a hypothetical concept called technological maturity,
which would be great.
And that is sometime in the future
where we have invented all these things,
but we have done so safely
and we have complete mastery over it all.
There won't be those accidents.
There won't be the gray goo.
There won't be the AI that's not aligned.
Yeah, because we'll know how to use
all this stuff safely, like you said, right?
We're not mature in that way right now.
No, actually we're at a place that Carl Sagan
called our technological adolescence,
where we're becoming powerful, but we're also not wise.
So, the point where we're at now,
technological adolescence,
where we're starting to invent this stuff
that actually can wipe humanity out of existence,
but before we reach technological maturity,
where we have safely mastered it
and have that kind of wisdom to use all this stuff,
that's probably the most dangerous period
in the history of humanity.
And we're entering it right now.
And if we don't figure out how to take on
these existential risks,
we probably won't survive from technological adolescence
all the way to technological maturity.
We will wipe ourselves out one way or another,
because this is really important to remember.
All it takes is one, one existential catastrophe.
And not all of these have to take place.
No, it doesn't have to be some combination.
Just one, just one bug with basically 100% mortality
has to get out of a lab.
Just one accidental physics experiment has to slip up.
Just one AI has to become super intelligent
and take over the world.
Like just one of those things happening and then that's it.
And again, the problem with existential risks
that makes them different is we don't get a second chance.
One of them befalls us and that's that.
That's right.
It depends on who you talk to about
if you wanna get maybe just a projection on our chances
as a whole, as humans.
Toby Ord right now puts it at what, a one in six chance
over the next 100 years.
Yeah, he always follows that with Russian roulette.
Other people say about 10%.
There's some different cosmologists.
There's one named Lord Martin Rees who puts it at 50-50.
Yeah, he actually is a member of the Center
for the Study of Existential Risk.
And we didn't mention before,
Bostrom founded something called
the Future of Humanity Institute, which is pretty great.
F-H-I?
And then there's one more place I wanna shout out.
It's called the Future of Life Institute.
It was founded by Max Tegmark and Jaan Tallinn,
who's a co-founder of I think Skype.
Oh, really?
I think so.
All right, well, you should probably also shout out
the Church of Scientology.
No, the Church is so genius.
Yeah, that's the one, that's what I was thinking about.
Well, they get confused a lot.
This is a pretty cool little thing you did here
with how long, cause I was kinda talking before
about the long view of things
and how long humans have been around.
So I think your rope analogy is pretty spot on here.
So that's J.L. Schellenberg's rope analogy.
Well, I didn't think you wrote it.
I wish it were mine.
I admit that you included it.
So what we were talking about like you were saying
is like it's hard to take that long view,
but if you step back and look
at how long humans have been around.
So Homo sapiens has been on Earth about 200,000 years.
It seems like a very long time.
It does, and even modern humans like us
have been around for about 50,000 years.
Seems like a very long time as well.
That's right.
But if you think about how much longer the human race,
humanity could continue on to exist as a species,
that's nothing.
It's virtually insignificant.
And J.L. Schellenberg puts it like this,
like let's say humanity has a billion year lifespan
and you translate that billion years into a 20 foot rope.
Okay, that's easy.
To show up with just the eighth of an inch mark
on that 20 foot rope,
our species would have to live another 300,000 years.
From the point where we've already lived.
Yes, we would have to live 500,000 years
just to show up as an eighth of an inch,
that first eighth of an inch on that 20 foot long rope.
Says it all.
That's how long humanity might have ahead of us.
And that's actually kind of a conservative estimate.
Some people say once we reach technological maturity,
we're fine, we're not gonna go extinct
because we'll be able to use all that technology
like having AI track all those near Earth objects
and say, well, this one's a little close for comfort.
I'm gonna send some nanobots out to disassemble it.
We will remove ourselves from the risk
of ever going extinct when we hit technological maturity.
So a billion years is definitely doable for us.
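For anyone checking Schellenberg's arithmetic from a minute ago, here is a quick worked version using his own round figures, a billion-year lifespan mapped onto a 20-foot rope:

% Schellenberg's rope analogy, worked out from the figures above
% (1 billion years mapped onto a 20-foot rope; 20 ft = 240 in)
\[
  \frac{1/8\ \text{inch}}{240\ \text{inches}} \times 10^{9}\ \text{years}
  \;=\; \frac{10^{9}}{1920}\ \text{years}
  \;\approx\; 520{,}000\ \text{years}
\]

So a species that has already been around roughly 200,000 years would need about another 300,000 years just to fill in that first eighth of an inch, which is where the figures quoted above come from.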
Yeah, and the reason why we should care about it
is because it's happening right now.
I mean, there is already AI that is unaligned.
We've already talked about the biotech and labs.
Accidents have already happened, they happen all the time.
And there are experiments going on with physics
that we think we know what we're doing.
But accidents happen and an accident
that you can't recover from, there's no whoopses.
Let me try that again.
Right, exactly, because we're all toast.
So this is why you have to care about it.
And luckily, I wish there were more people
that care about it.
Well, it's becoming more of a thing.
And if you talk to Toby Ord, he's like,
it's just like, say, the environmental movement
was a moral push,
and we're starting to see some results from that now,
but back when it was starting in the 60s and 70s,
nobody had ever heard of it.
Yeah, I mean, it took decades.
He's saying like we're about,
that's what we're doing now with existential risk.
People are going to start to realize like,
oh man, this is for real.
And we need to do something about it.
Because we could live a billion years
if we managed to survive the next hundred.
Which makes you and me, Chuck, and all of us alive right now,
in one of the most unique positions
any human's ever been in.
We have the entire future of the human race
basically resting in our hands.
Because we're the ones who happen to be alive
when humanity entered its technological adolescence.
Yeah, and it's a tougher one than save the planet
because it's such a tangible thing
when you talk about pollution.
And it's very easy to put on a TV screen
or in a classroom, and it's not so easily dismissed
because you can see it in front of your eyeballs
and understand it.
This is a lot tougher education wise
because 99% of people hear something
about nanobots and gray goo or AI and just think,
come on man, that's the stuff of movies.
Yeah, and I mean, it's sad that we couldn't just dig
into it further because when you really do start
to break it all down and understand it,
it's like, no, this totally is for real and it makes sense.
Like this is entirely possible and maybe even likely.
Yeah, and not the hardest thing to understand.
It's not like you have to understand nanotechnology
to understand its threat.
Right, exactly.
That's well put.
The other thing about all this is that
not everybody is on board with this.
Even people who hear about this kind of stuff are like,
no, you know, this is-
Overblown.
This is pie in the sky, it's overblown.
Or the opposite of pie in the sky.
It's a-
The pie in the sky is good, right?
Cake in the ground.
Is that the opposite?
We're in a real dark sky territory.
It's a turkey drumstick in the earth.
Okay.
That's kind of the opposite of a pie.
Sure.
Okay.
I think I may have just come up with a colloquialism.
I think so.
So some people aren't convinced,
some people say, no, AI is nowhere near being even close
to human level intelligent,
let alone super intelligent.
Yeah, like why spend money?
Cause it's expensive.
Right, well, and other people are like,
yeah, if you start diverting, you know,
research into figuring out how to make AI friendly,
I can tell you China and India aren't gonna do that.
So they're going to leapfrog ahead of us
and we're going to be toast competitively.
Right.
So there's a cost to it, an opportunity cost,
and there's an actual cost.
So there's a lot of people,
it's basically the same arguments for people
who argue against mitigating climate change.
Yeah.
Same thing kind of.
So the answer is terraforming, terraforming.
Well, that's not the answer.
The answer is to study terraforming, right.
The answer is to study this stuff
and figure out what to do about it.
But it wouldn't hurt to learn how to live on Mars.
Right, or just off of earth,
because in the exact same way like that,
like a whole village is at risk
when it's under a mountain
and a mudslide comes down.
If we all live on earth,
if something happens to life on earth,
that's it for humanity.
But if there's like a thriving population of humans
who don't live on earth, who live off of earth,
if something happens on earth, humanity continues on.
So learning to live off of earth
is a good step in the right direction.
But that's a plan B.
That's plan A dot one.
One A or one B.
Sure.
It's tied for first.
Like it's something we should be doing
at the same time as studying
and learning to mitigate existential risks.
Yeah, and I think it's gotta be multi-pronged
because the threats are multi-pronged.
Sure, absolutely.
And there's one other thing
that I really think you gotta get across.
Well, like we said that if say the US starts
to invest all of its resources
into figuring out how to make friendly AI,
but India and China continue on like the current path,
it's not gonna work.
And the same goes if every country in the world
said, no, we're going to figure out friendly AI,
but just one dedicated itself to continuing on the current path.
The rest of the countries in the world,
their progress would be totally negated by that one.
Yeah, so it's gotta be a global effort.
It has to be a species-wide effort,
not just with AI, but with all these,
understanding all of them and mitigating them together.
Yeah, that could be a problem.
So thank you very much for doing this episode with me.
Oh, me? Yeah.
Oh, you're talking to Dave?
No, well, Dave too.
We appreciate you too, Dave, but big ups to you, Charles.
Because Jerry was like, I'm not sitting in that room.
She's like, I'm not listening to Clark blather on
about existential risk for an hour.
So one more time, Toby Ord's
The Precipice is available everywhere you buy books.
You can get the end of the world with Josh Clark
wherever you get podcasts.
If this kind of thing floated your boat,
check out the Future of Humanity Institute,
the Future of Life Institute.
And they have a podcast hosted by Ariel Conn.
And she had me on back in December of 2018
as part of a group that was talking about existential hope.
So you can go listen to that too.
If you're like, this is a downer.
I want to think about the bright side.
Sure.
There's that whole Future of Life Institute podcast on there.
So what about you?
Are you convinced of this whole thing?
Like that this is an actual thing
we need to be worrying about and thinking of?
Meh.
You know?
Really?
No, I mean, I think that sure,
there are people that should be thinking about this stuff.
And that's great.
As far as like me?
What can I do?
Well, I ran into that.
Like there's not a great answer for that.
It's more like start telling other people
is the best thing that the average person can do.
Hey, man, we just did that in a big way.
We did, didn't we?
That's great.
500 million people.
Now we can go to sleep.
Okay.
You got anything else?
Hmm, I got nothing else.
All right.
Well then, since Chuck said he's got nothing else,
it's time for Listener Mail.
Yeah, this is the opposite of all the smart stuff
we just talked about.
I just realized.
Hey guys, love you.
Love stuff you should know.
On a recent airplane flight,
I listened to and really enjoyed the Coyote episode
wherein Chuck mentioned droppin' wolf bait
as a euphemism for farts.
Coincidentally, on that same flight,
were Bill Nye, the Science Guy.
What?
And Anthony Michael Hall, the actor.
What?
It was a star-studded airplane flight.
Wow.
He said, so naturally, when I arrived at my home,
I felt compelled to watch, rewatch the 1985 film,
Weird Science, in which Anthony Michael Hall stars.
In that movie, and I remember this now that he mentions it,
in that movie, Anthony Michael Hall uses the term Wolfbait
as a euphemism for pooping.
Dropping Wolfbait, which makes sense now,
that it would be actual poop and not a fart.
Did you say his name before?
Who wrote this?
No, your friend who used the word Wolfbait.
Oh, Eddie, yeah, sure.
Okay, so is Eddie like a big Weird Science fan
or Anthony Michael Hall fan?
I think he just, I don't know.
Kelly LeBrock fan?
Yeah, that must be it.
Okay.
It has been a full circle day for me
and one that I hope you will appreciate hearing about,
and that is from Jake.
Man, can you imagine being on a flight
with Bill Nye and Anthony Michael Hall?
Who do you talk to?
Who do you hang with?
I'd just be worried that somebody was gonna take over
control of the plane and fly it somewhere
to hold us all hostage and make those two perform.
Or what if Bill Nye and Anthony Michael Hall are in cahoots?
Maybe.
And they take the plane hostage.
Yeah, it'd be very suspicious
if they didn't talk to one another, you know what I mean?
I think so.
Who was that?
That was Jake.
Thanks, Jake, that was a great email.
And thank you for joining us.
If you wanna get in touch with us like Jake did,
you can go on to stuffyoushouldknow.com
and get lost in the amazingness of it.
And you can also just send us an email
to stuffpodcast@iheartradio.com.
Stuff You Should Know is a production
of iHeartRadio's How Stuff Works.
For more podcasts from iHeartRadio,
visit the iHeartRadio app,
Apple Podcasts, or wherever you listen
to your favorite shows.