Lex Fridman Podcast - Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot
Episode Date: November 12, 2019
Elon Musk is the CEO of Tesla, SpaceX, Neuralink, and a co-founder of several other companies. This is the second time Elon has been on the podcast. You can watch the first time on YouTube or listen to the first time on its episode page. You can read the transcript (PDF) here. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):
00:00 - Introduction
01:57 - Consciousness
05:58 - Regulation of AI Safety
09:39 - Neuralink - understanding the human brain
11:53 - Neuralink - expanding the capacity of the human mind
17:51 - Neuralink - future challenges, solutions, and impact
24:59 - Smart Summon
27:18 - Tesla Autopilot and Full Self-Driving
31:16 - Carl Sagan and the Pale Blue Dot
Transcript
The following is a conversation with Elon Musk, part 2.
The second time we spoke on the podcast, with parallels, if not in quality, then in outfit,
to the objectively speaking greatest sequel of all time, Godfather Part 2.
As many people know, Elon Musk is a leader of Tesla, SpaceX, Neuralink, and The Boring
Company.
What may be less known is that he's a world-class engineer and designer, constantly emphasizing
first principles thinking and taking on big engineering problems that many before him
would consider impossible.
As scientists and engineers, most of us don't question the way things are done.
We simply follow the momentum of the crowd. But the revolutionary ideas that change the world on the small and large scales happen
when you return to the fundamentals and ask, is there a better way?
This conversation focuses on the incredible engineering and innovation done in brain
computer interfaces at Neuralink.
This work promises to help treat neurobiological diseases
to help us further understand the connection
between the individual neuron
to the high level function of the human brain.
And finally, to one day expand the capacity of the brain
through two-way communication
with computational devices, the internet,
and artificial intelligence systems.
This is the Artificial Intelligence Podcast.
If you enjoy it, subscribe on YouTube, Apple Podcasts, Spotify, support it on Patreon, or
simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N.
And now, as an anonymous YouTube commenter referred to our previous conversation,
as the, quote, historic first video of two robots
conversing without supervision,
here's the second time,
the second conversation with Elon Musk. Let's start with an easy question about consciousness.
In your view, is consciousness something that's unique to humans, or is it something that
permeates all matter, almost like a fundamental force of physics?
I don't think consciousness permeates all matter.
Panpsychists believe that.
There's a philosophical question: how would you tell?
That's true, that's a good point.
I believe in the scientific method,
don't blow your mind or anything,
but the scientific method is like,
if you cannot test the hypothesis,
then you cannot reach
a meaningful conclusion that it is true.
Do you think consciousness, understanding consciousness, is within the reach of science
of the scientific method?
We can dramatically improve our understanding of consciousness.
You know, we'd be hard-pressed to say that we understand anything with complete accuracy, but can we dramatically improve
our understanding of consciousness?
I believe the answer is yes.
Does an AI system in your view have to have consciousness
in order to achieve human level or superhuman level intelligence?
Does it need to have some of these human qualities:
consciousness, maybe a body,
maybe a fear of mortality,
a capacity to love, those kinds of silly human things?
There's, you know, the scientific method, which I pretty much believe
in, where something is true to the degree that it is testably so. And otherwise, you're really just talking about
preferences or
untestable beliefs, that kind of thing.
So it ends up being somewhat of a semantic question,
where we're conflating a lot of things
with the word intelligence.
If we parse them out and say,
you know, are we headed
towards a future where an AI will be able to
outthink us in every way,
then the answer is unequivocally yes.
In order for an AI system that needs to outthink us in every way, does it also need to have
a capacity for consciousness, self-awareness, and understanding?
It will be self-aware, yes. That's different from consciousness.
I mean, to me in terms of what consciousness feels like, it feels like consciousness is in a different dimension.
But this could be just an illusion.
If you damage your brain in some way physically, you damage your consciousness,
which implies that consciousness is a physical phenomenon in my view.
The thing is that I think it's really quite likely that digital intelligence will outthink us
in every way, and it will certainly be able to simulate what we consider consciousness.
So to a degree that you would not be able to tell the difference.
And from the aspect of the scientific method it might as well be consciousness
if we can simulate it perfectly. If you can't tell the difference, well, this is sort of the Turing test, but think of
a more advanced version of the Turing test.
If you're talking to a digital superintelligence and can't tell if that is a computer or
a human, like, let's say you're just having a conversation over a phone or a video conference or something
where you think you're talking to a person; it makes all of the right inflections
and movements and all the small subtleties that constitute a human, talks like a human,
makes mistakes like a human.
And you literally just can't tell: are you
really conversing with a person or an AI?
Might as well be human. So on a darker topic, you've expressed serious concern about existential threats of AI.
It's perhaps one of the greatest challenges our civilization faces, but since
I would say we're kind of an optimistic descendant of apes, perhaps we can find several paths
of escaping the harm of AI. So if I can give you three options, maybe you can comment on which
you think is the most promising. So one is scaling up efforts on AI safety and beneficial AI research in the hope of finding an
algorithmic or maybe a policy solution.
Two is becoming a multi-planetary species
as quickly as possible.
And three is merging with AI and riding the
wave of that increasing intelligence as it
continuously improves.
What do you think is most promising,
most interesting, as a civilization that we should invest in?
I think there's a lot of investment going on in AI. Where there's a lack of investment is
in AI safety. And there should be, in my view, a government agency that oversees
anything related to AI to
confirm that it does not represent a public safety risk. Just as there is a regulatory authority
for food and drugs, the Food and Drug Administration, and one is necessary for automotive safety, and there's the FAA
for aircraft safety, we've generally come to the conclusion that it is important to have a government referee
that is serving the public interest in ensuring that things are safe when there's
potential danger to the public. I would argue that AI is unequivocally something that
has potential to be dangerous to the public and therefore should have a regulatory agency
just as other things that are dangerous to the public have a regulatory agency.
But let me tell you, the problem with this is that the government moves very slowly,
and usually the way a regulatory agency comes into being is that
something terrible happens. There's a huge public outcry,
and years after that,
there's a regulatory agency or a rule put in place.
Take something like seat belts.
It was known for a decade or more
that seat belts would have a massive impact on safety
and save so many lives and serious injuries. And the car industry
fought the requirement to put seat belts in tooth and nail. That's crazy. And hundreds
of thousands of people probably died because of that. And they said people wouldn't buy
cars if they had seat belts, which is obviously absurd. Or look at the tobacco industry and how long they fought anything about smoking.
That's part of why I helped make that movie, Thank You for Smoking.
You can see just how pernicious it can be when you have these companies effectively achieve
regulatory capture of government. That's bad.
People in the AI community refer to the advent of digital superintelligence as a singularity.
That is not to say that it is good or bad, but it is very difficult to predict what will happen after that point. And that there's some probability it will be bad,
some probability it will be good.
Obviously, I want to affect that probability
and have it be more good than bad.
Well, let me, on the merger with AI question
and the incredible work that's being done in your link.
There's a lot of fascinating innovation here
across different disciplines going on.
So the flexible wires, the robotic sewing machine, the responsiveness to brain movement, everything
around ensuring safety, and so on.
So we currently understand very little about the human brain.
Do you also hope that the work at Neuralink will help us understand more about the human
mind, about the brain?
Yeah, I think the work at Neuralink will definitely shed a lot
of insight into how the brain and the mind work. Right now,
the data we have regarding how the brain works is very
limited. You know, we've got fMRI, which is kind of like putting a
stethoscope on the outside of a factory wall
and putting it all over the factory wall.
And you can hear the sounds, but you don't know
what the machines are doing really.
So it's hard.
You can infer a few things, but it's very broad brushstroke.
In order to really know what's going on in the brain, you have to have high precision sensors
and then you want to have stimulus and response.
If you trigger a neuron, how do you feel?
What do you see?
How does it change the perception of the world?
You're speaking to physically just getting close to the brain; being able to measure signals
from the brain will sort of open the door into the factory.
Yes, exactly.
Being able to have high precision sensors that tell you what individual neurons are doing
and then being able to trigger the neuron and see what the response is in the brain.
So you can see the consequences: if you fire this neuron, what happens? How do you feel? What changes?
It'll be really profound to have this in people, because people can articulate
their change: if there's a change in mood, or if they, you know,
can see better or hear better, or are able to form sentences better or worse, or their memories are jogged,
that kind of thing.
So on the human side, there's this incredible general malleability, plasticity of the human
brain.
The human brain adapts, adjusts, and so on.
It's actually not that plastic, to be totally frank.
So there's a firm structure, but there's some plasticity, and the open question, sort of a
broad question, is how much of that plasticity can be utilized.
So on the human side there's some plasticity in the human brain, and
on the machine side we have
neural networks, machine learning, artificial intelligence that's able to adjust and figure out signals.
So there's a mysterious language that we don't perfectly understand that's within the human
brain. And then we're trying to understand that language to communicate both directions.
So the brain is adjusting a little bit. We don't know how much and the machine is adjusting.
As they try to sort of reach toward each other, almost
like with an alien species trying to find a communication protocol that works,
where do you see the biggest benefit arriving from: the machine side or the human side?
Do you see both of them working together?
I actually think the machine side is far more malleable than the biological side, by a huge amount. So it will be the machine that adapts to the brain.
That's the only thing that's possible.
The brain can't adapt that well to the machine.
You can have neurons start to regard an electrode as another neuron, because all a neuron
sees is the pulse, and so something else can be the thing pulsing.
So there is
that elasticity in the interface, which we believe is something that can happen. But the
vast majority of the malleability will have to be on the machine side.
But it's interesting, when you look at that synaptic plasticity at the interface side,
there might be like an emergent plasticity. Because it's a whole other, it's not like in the brain,
it's a whole other extension of the brain.
You know, we might have to redefine
what it means to be malleable for the brain.
So maybe the brain is able to adjust to external interfaces.
There will be some adjustments to the brain
because there's gonna be something reading
and stimulating the brain.
And so it will adjust to that thing.
But the vast majority of the adjustment
will be on the machine side.
It just has to be that; otherwise it will not work.
Ultimately, we currently operate on two layers.
We have the limbic system, a primitive brain layer,
which is where all of our impulses are coming from.
It's sort of like we've got a monkey brain with a computer stuck on it.
That's the human brain. A lot of our impulses and everything are driven by the monkey brain.
And the computer of the cortex is constantly trying to make the monkey brain happy.
It's not the cortex that's steering the monkey brain; the monkey brain is steering the cortex.
You know, but the cortex is the part that tells the story of the whole thing. So
we convince ourselves it's more interesting than just the monkey brain.
The cortex is like what we call human intelligence. You know, so it's like
that's like the advanced computer relative to other creatures. The other
creatures do not have,
either really, they don't have the computer,
or they have a very weak computer relative to humans.
It sort of seems like surely the really smart thing
should control the dumb thing,
but actually the dumb thing controls the smart thing.
So do you think some of the same kind of machine learning methods, whether that's natural
language processing applications, are going to be applied for the communication between
the machine and the brain, to learn how to do certain things like movement of the body,
how to process visual stimuli and so on. Do you see the value of using machine learning
to understand the language of the two-way communication
with the brain?
Sure.
Yeah, absolutely.
I mean, we're neural net.
And that AI is basically neural net.
So it's like digital neural net will interface
with biological neural net.
And hopefully bring us along for the ride. Yeah.
But the vast majority of our intelligence will be digital.
It's like, think of the difference in intelligence between
your cortex and your limbic system; it is gigantic.
Your limbic system really has no comprehension
of what the hell the cortex is doing. It's just literally hungry or tired or angry or
sexy or something. And it conveys that impulse to the cortex and tells the cortex to go satisfy that.
So then a great deal, like a massive amount of thinking,
like truly a stupendous amount of thinking,
has gone into sex without purpose,
without procreation,
which is actually quite a silly action in the absence of procreation.
It's a bit silly.
So why are you doing it?
Because it makes the limbic system happy, that's why.
That's why.
But it's pretty absurd, really.
Well, the whole of existence is pretty absurd in some kind of sense.
Yeah.
But I mean, a lot of computation has gone into,
how can I do more of that, with procreation
not even being a factor?
This is, I think, a very important area
of research by NSFW,
an agency that should receive a lot of funding,
especially after this conversation.
I'll propose the formation of a new agency. Oh, boy.
What is the most exciting or some of the most exciting things
that you see in the future impact of Neuralink?
Well, it's on the science, engineering,
and societal broad impact.
So Neuralink, I think at first we'll
solve a lot of brain-related diseases.
These could be anything from, like, autism, schizophrenia, memory loss; everyone experiences
memory loss at certain points in age.
Parents can't remember their kids' names, that kind of thing.
So there's a tremendous amount of good that Neuralink can do in solving critical damage to the brain or spinal cord. There's a lot that can be done to improve
quality of life of individuals, and those will be steps along the way.
And then ultimately, it's intended to address the existential risk associated with digital superintelligence. We will not be able to be smarter than a digital supercomputer.
So therefore, if you cannot beat them, join them.
And at least we'll have that option.
So you have hope that Neuralink will be able to be a kind of
connection to allow us to merge, to ride the wave of the
improving AI systems.
I think the chance is above 0%.
So it's non-zero.
Yes.
There's a chance.
And that's...
So, have you seen Dumb and Dumber?
Yes.
So I'm saying there's a chance.
He's saying one in a billion or one in a million, whatever it was, in Dumb and Dumber.
You know, we're going from maybe one in a million
to, it's improving; maybe it'll be one in a thousand,
and then one in a hundred, then one in ten.
It depends on the rate of improvement of Neuralink
and how fast we're able to make progress, you know.
Well, I've talked to a few folks here
that are quite brilliant engineers, so I'm excited.
Yeah, I think it's fundamentally good. Giving somebody back full motor control
after they've had a spinal cord injury,
restoring brain functionality after a stroke,
solving debilitating genetically-oriented diseases:
these are all incredibly great, I think.
And in order to do these,
you have to be able to interface with neurons
at a detailed level, and you need to be able to
fire the right neurons, read the right neurons, and then
effectively you can create a circuit, replace what's broken with silicon, and essentially
fill in the missing functionality. Then over time, we can develop a tertiary layer.
So if, like, the limbic system is the primary layer, then the cortex is like the second layer.
And I said that, you know, obviously the cortex is vastly more intelligent than the limbic
system. But people generally like the fact that they have a limbic system and a cortex.
I haven't met anyone who wants to delete either one of them. They're like, okay, I'll
keep them both. That's cool. The limbic system is kind of fun. That's what the fun is.
Absolutely. And then people generally don't want to lose the cortex either.
Right. So they like having the cortex and the limbic system.
And then there's a tertiary layer which will be digital superintelligence.
And I think there's room for optimism,
given that the cortex is very intelligent and the limbic system is not.
And if they work together, well,
perhaps they can be a tertiary layer
where digital superintelligence lies.
And that will be vastly more intelligent than the cortex,
but still co-exist peacefully
and in a benign manner with the cortex and the limbic system.
That's a super exciting future, both in the low-level engineering that I saw
being done here and the actual possibility in the next few decades.
It's important that Neuralink solves this problem sooner rather than later,
because the point at which we have digital super intelligence, that's when we pass
the singularity and things become just very uncertain. It doesn't mean that they're necessarily bad or good. But at the point
at which we pass the singularity, things become extremely unstable. So we want to have a human
brain interface before the singularity, or at least not long after it, to minimize existential
risk for humanity and consciousness as we know it.
So there's a lot of fascinating, actual engineering, a lot
of problems here at Neuralink.
Yeah, quite exciting.
What are the problems that we face at Neuralink?
Material science, electrical engineering, software,
mechanical engineering, microfabrication.
It's a bunch of engineering disciplines, essentially.
That's what it comes down to. You have to have a tiny electrode, so small it doesn't hurt neurons,
but it's got to last for as long as a person.
So it's got to last for decades.
And then you've got to take that signal, and you've got to process that signal locally at low power.
So we need a lot of chip design engineers,
because we're going to do signal processing,
and do so in a very power-efficient way
so that we don't heat your brain up;
the brain is very heat-sensitive.
And then we're going to take those signals,
and we're going to do something with them,
and then we're going to stimulate back,
so you can have bidirectional communication.
So if somebody's good at material science, software, mechanical engineering, electrical engineering,
chip design, microfabrication, those are the things we need to work on.
We need to be good at material science so that we can have tiny electrodes that last a long
time. And the material science problem is actually a tough one, because you're
trying to read and stimulate electrically in an electrically active area.
Your brain is very electrically active, electrochemically active. So how do you have a coating on the electrode that doesn't dissolve over time
and is safe in the brain? This is a very hard problem.
And then how do you collect those signals in a way that is most efficient,
because you really just have very tiny amounts of power
to process those signals.
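The low-power, on-device processing Musk describes, detecting sparse spike events locally instead of streaming raw waveforms off the implant, can be sketched with a simple threshold detector. This is only a toy illustration of the general idea, not Neuralink's actual pipeline; the threshold rule and numbers are invented for the example.

```python
# Toy spike detection: flag samples where the signal crosses a threshold
# derived from a robust noise estimate. Real implants do this on-chip so
# only sparse spike events, not raw waveforms, need to be transmitted.

def detect_spikes(samples, k=4.0):
    """Return indices where |sample| first exceeds k times the noise level."""
    # Median of absolute values as a robust stand-in for noise amplitude.
    sorted_abs = sorted(abs(s) for s in samples)
    noise = sorted_abs[len(sorted_abs) // 2]
    threshold = k * noise
    spikes = []
    above = False
    for i, s in enumerate(samples):
        if abs(s) > threshold and not above:
            spikes.append(i)  # rising edge: count each crossing once
            above = True
        elif abs(s) <= threshold:
            above = False
    return spikes

# Mostly low-amplitude noise with two large "spikes" at indices 3 and 8.
signal = [0.1, -0.2, 0.15, 5.0, 0.1, -0.1, 0.2, -0.15, -4.5, 0.1]
print(detect_spikes(signal))  # -> [3, 8]
```

The point of the design is the power budget: comparing each sample to a threshold costs almost nothing, while transmitting full-bandwidth waveforms would.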
And then we need to automate the whole thing,
so it's like LASIK.
If this is done by neurosurgeons,
there's no way it can scale to a large number of people.
And it needs to scale to large numbers of people,
because I think ultimately we want the future
to be determined by a large number of humans.
Do you think that this has a chance to revolutionize surgery, period? So neurosurgery and surgery broadly?
Yeah, absolutely. Yeah, for sure. If LASIK had to be done by hand, by a person, that wouldn't be great.
It's done by a robot.
And the ophthalmologist kind of just needs to make sure your head's in the right position, and then they just press a button and go.
So Smart Summon, and soon Autopark, takes on the full beautiful
mess of parking lots and their human nonverbal communication.
I think it has actually the potential
to have a profound impact in changing how our civilization
looks at AI and robotics,
because this is the first time human beings,
people that don't own a Tesla,
who may have never seen a Tesla or heard about a Tesla,
get to watch hundreds of thousands of cars
without a driver.
Yeah.
Do you see it this way, almost like an education tool for the world about AI?
Do you feel the burden of that, the excitement of that?
Or do you just think it's a smart parking feature?
I do think you are getting at something important, which is most people have never really seen
a robot.
And what is a car that is autonomous?
It's a four-wheeled robot.
Right. Yeah.
It communicates a certain sort of message with everything from safety to the possibility
of what AI could bring, its current limitations, its current challenges, and what's possible.
Do you feel the burden of that?
Almost like a communicator, educator, to the world about AI?
We're just really trying to make people's lives easier with autonomy. But now that you
mention it, I think it will be an eye-opener to people about robotics, because most people
have never seen a robot. And there are hundreds of thousands of
Teslas; it won't be long before there's a million of them that have autonomous capability
and drive without a person in them.
And you can see the kind of evolution of the car's personality and thinking with each iteration
of autopilot.
You can see it's uncertain about this, or it gets more certain; now it's moving in a slightly
different way. Like, I can tell immediately if a car is on Tesla Autopilot, because it's
got just little nuances of movement; it just moves in a slightly different way.
Cars on Tesla Autopilot, for example, on the highway are far more precise about
being in the center of the lane than a person. If you drive down the highway and look at
where the human-driven cars are within their lane, they're like bumper cars.
They're moving all over the place.
The car on Autopilot is dead center.
Yes, so the incredible work that's going into that neural network is learning fast.
Autonomy is still very, very hard.
We don't actually know how hard it is fully, of course.
You look at most problems you tackle, this one included, with an exponential lens,
but even with exponential improvement, things can take longer than expected sometimes.
So where does Tesla currently stand on its quest for full autonomy?
What's your sense?
When can we see successful deployment of full autonomy?
Well, on the highway already,
the probability of intervention is extremely low.
Yes.
So for highway autonomy,
with the latest release especially, the probability of needing to intervene is really
quite low.
In fact, I'd say for stop-and-go traffic, it's far safer than a person right now.
The probability of an injury or an impact is much, much lower for
Autopilot than for a person.
And then with Navigate on Autopilot, it can change lanes and take highway interchanges. And then we're coming at it from the other direction, which is
low-speed full autonomy. And in a way, it's like, how does a
person learn to drive? You learn to drive in the parking lot. You know,
the first time you learn to drive probably wasn't jumping on Market Street in San Francisco.
That'd be crazy. You learn to drive in the parking lot, get things right at low speed. And then the missing piece that we're working on is traffic lights
and stop streets. Stop streets, I would say, are actually also relatively easy,
because you kind of know where the stop street is, worst case it's geocoded, and then you
use visualization to see where the line is and stop at the line
to eliminate the GPS error.
So, actually, I'd say complex traffic lights and very windy roads are the two things
that need to get solved.
What's harder, perception or control, for these problems?
So being able to perfectly perceive everything or figuring out a plan
once you perceive everything, how to interact with all the agents in the environment, in your sense
from a learning perspective, is perception or action harder, in that giant, beautiful multi-task learning
neural network?
The hardest thing is having an accurate representation of the physical objects in vector space.
So taking the visual input, primarily visual input,
some sonar and radar, and then creating
an accurate vector space representation
of the objects around you.
Once you have an accurate vector space representation,
the planning and control is relatively easier.
That part is relatively easy.
Basically, once you have accurate vector space representation,
then you're kind of like a video game.
Like, cars in Grand Theft Auto or something, they work pretty well.
They drive down the road, they don't crash, pretty much, unless you crash into them.
That's because they've got an accurate vector space representation of where the cars are.
And then rendering that as the output.
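The pipeline Musk describes, perception building a vector space representation of objects, after which planning is comparatively easy, can be illustrated with a toy sketch. The object format and the one-step speed planner below are invented for illustration and are nothing like Tesla's actual stack.

```python
# Toy illustration: once objects exist as positions/velocities in a shared
# vector space, a planner can reason about them like a video-game engine.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float  # longitudinal distance ahead of the ego car, meters
    v: float  # object speed along the lane, m/s

def plan_speed(ego_speed, objects, horizon=2.0, min_gap=20.0):
    """Pick the fastest speed (<= ego_speed) that keeps at least min_gap
    meters to every tracked object over `horizon` seconds. Kinematic toy."""
    allowed = ego_speed
    for obj in objects:
        # Gap after `horizon` seconds at candidate speed s:
        #   obj.x + obj.v * horizon - s * horizon >= min_gap
        # Largest s satisfying the constraint:
        max_speed = (obj.x + obj.v * horizon - min_gap) / horizon
        allowed = min(allowed, max_speed)
    return max(allowed, 0.0)

# A slower car 30 m ahead doing 20 m/s; ego currently wants 30 m/s.
scene = [TrackedObject(x=30.0, v=20.0)]
print(plan_speed(30.0, scene))  # -> 25.0
```

The hard part, as Musk says, is everything upstream of `scene`: turning camera, radar, and sonar input into those accurate positions and velocities in the first place.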
Do you have a sense, high level,
that Tesla's on track,
on being able to achieve full autonomy?
So on the highway?
Yeah, absolutely.
And still no driver state sensing?
We have driver sensing with torque on the wheel.
That's right.
By the way, just a quick comment on karaoke: most people think it's fun, but I also think it is a driving feature. I've been saying for a long time, singing in the car is really good for
attention management and vigilance management.
That's right, Tesla karaoke.
Yeah, it's great.
It's one of the most fun features of the car.
Do you think of the connection between fun and safety sometimes?
Yeah, you can do both at the same time that's great.
I just met with Ann Druyan, the wife of Carl Sagan.
Oh, cool.
I'm generally a big fan of Carl Sagan; he was super cool.
And he had a great way of putting things. All of our consciousness, all civilization, everything we've ever known and done is on this tiny blue dot.
People get too trapped in their squabbles amongst humans
and don't think of the big picture.
They take civilization and our continued existence for granted.
They shouldn't do that.
Look at the history of civilizations.
They rise and they fall.
And now civilization is globalized,
and so civilization, I think, now rises and falls together.
There's not geographic isolation.
This is a big risk.
Things don't always go up.
That should be, that's an important lesson of history.
In 1990, at the request of Carl Sagan, the Voyager 1 spacecraft, which is a spacecraft
that's reaching out farther than anything human made into space, turned around to take a
picture of Earth from 3.7 billion miles away. And as you were
talking about, the Earth in that picture takes up less than a single
pixel, appearing as a tiny blue dot, the pale blue
dot, as Carl Sagan called it. So he spoke about this dot of ours in 1994, and if you could humor me, I was wondering if in
the last two minutes you could read the words that he wrote describing this. Would you be able to
do that?
Sure.
Yes. It's funny: the universe appears to be 13.8 billion years old; Earth is like 4.5 billion years old.
In another half billion years or so, the sun will expand and probably evaporate
the oceans and make life impossible on Earth, which means that if consciousness
had taken 10% longer to evolve, it would never have evolved at all. It's just 10% longer.
And I wonder how many dead, one-planet civilizations
are out there in the cosmos
that never made it to the other planet
and ultimately extinguished themselves
or were destroyed by external factors.
Probably a few.
It's only just possible to travel to Mars. Just barely.
If g were 10% higher, it wouldn't work, really.
If g were 10% lower, it would be easy.
You can go single-stage from the surface of Mars all the way to the surface of the Earth, because Mars has 37% of Earth's gravity.
We need a giant boost to get off the Earth.
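Musk's point about gravity can be made concrete with the Tsiolkovsky rocket equation: the required propellant mass ratio grows exponentially with delta-v, which is roughly why single-stage ascent is plausible from Mars but not from Earth. The delta-v and exhaust-velocity figures below are ballpark public numbers, not SpaceX's.

```python
import math

def mass_ratio(delta_v, exhaust_velocity):
    """Tsiolkovsky rocket equation: m_initial / m_final = exp(delta_v / v_e)."""
    return math.exp(delta_v / exhaust_velocity)

# Ballpark figures (assumed for illustration):
V_E = 3400.0       # m/s, effective exhaust velocity of a good chemical engine
DV_EARTH = 9400.0  # m/s, roughly the delta-v to reach low Earth orbit
DV_MARS = 4100.0   # m/s, roughly the delta-v to reach low Mars orbit

for name, dv in [("Earth", DV_EARTH), ("Mars", DV_MARS)]:
    r = mass_ratio(dv, V_E)
    # Propellant fraction of liftoff mass is 1 - 1/r.
    print(f"{name}: mass ratio {r:.1f}, propellant fraction {1 - 1 / r:.0%}")
```

Because the delta-v sits in an exponent, Mars's shallower gravity well cuts the required mass ratio several-fold, which is the quantitative version of "we need a giant boost to get off the Earth."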
Channeling Carl Sagan.
Look again at that dot.
That's here. That's home. That's us.
On it, everyone you love, everyone you know,
everyone you've ever heard of, every human being who ever was, lived out their
lives. The aggregate of our joy and suffering, thousands of confident
religions, ideologies and economic doctrines, every hunter and forager, every hero
and coward, every creator and destroyer of civilization, every king
and peasant, every young couple in love, every mother and father, hopeful child, inventor
and explorer, every teacher of morals, every corrupt politician, every superstar, every
supreme leader, every saint and sinner in the history of our species lived there,
on a mote of dust suspended in a sunbeam.
Our planet is a lonely speck in the great, enveloping cosmic dark.
In our obscurity, in all this vastness, there is no hint that help will come from elsewhere
to save us from ourselves.
The Earth is the only world known so far to harbor life.
There is nowhere else, at least in the near future, to which our species could migrate.
This is not true.
This is false.
Mars.
And I think Carl Sagan would agree with that.
He couldn't even imagine it at that time.
So thank you for making the world dream.
And thank you for talking to me.
I really appreciate it.
Thank you.