Planetary Radio: Space Exploration, Astronomy and Science - AI, Space, and Humanity’s Future—A Conversation with David Brin
Episode Date: April 4, 2018. The multi-award winning science fiction author, futurist and speaker returns to Planetary Radio for a wide-ranging conversation about robots and humans in space, empathetic artificial intelligences, how we can survive the Singularity and much more. Emily Lakdawalla recaps an astrobiology session at the just-completed Lunar and Planetary Science Conference. There's a great new book from National Geographic Kids waiting for the winner of the new What's Up space trivia contest. Learn more about this week's topics and see images here: http://www.planetary.org/multimedia/planetary-radio/show/2018/0404-david-brin-ai-space.html
Transcript
David Brin is back, and he's brought empathetic space-exploring robots with him this week on Planetary Radio.
Welcome. I'm Matt Kaplan of the Planetary Society, with more of the human adventure across our solar system and beyond.
He's one of my favorite science fiction writers, but Dr. Brin is also a scientist and a futurist. It's in that
last role that we'll talk with him today about our future in space and down here on Earth,
returning now and then to the role artificial intelligence will increasingly play in both of
these arenas. There's nothing artificial about Bruce Betts. He'll drop by with the latest look
at the night sky and yet another space trivia contest. We begin once again with the Planetary Society's
senior editor, Emily Lakdawalla. Happy to have you back once again, Emily, with yet another report
from last week's LPSC, the Lunar and Planetary Science Conference. Much, much more for people
to take a look at at planetary.org from a lot of those
guest bloggers that you recruited, as well as their notes, which is pretty interesting. But I
want to talk specifically about your March 29 post, Fungi in the Lab, Hot Springs, Frozen,
Cold, and Exploding Lakes. It begins with, well, do we need to worry about all those lovely, supposedly pristine samples that are stored in that lab at the Johnson Space Center?
Well, yes and no, Matt.
At the lab at JSC, the Johnson Space Center, which handles every bit of extraterrestrial material we have, there are actually a number of different labs.
There are different labs dedicated to each kind of material.
There are seven labs, one each for meteorite samples collected in Antarctica, one for the tiny, itty-bitty samples collected from Hayabusa and Genesis.
Each lab has a different standard of cleanliness. As they're getting ready to get samples back from Hayabusa 2, as well as OSIRIS-REx, from these near-Earth
asteroids, they're quite wisely looking ahead, planning ahead for what kinds of little bugs
might be wanting to contaminate those labs in the future. So in order to answer this question,
they went to their dirtiest lab, which is the one containing meteorites collected on Earth.
And the fact of the matter is, as they said, if you have picked up a rock on Earth, it's contaminated.
It has life in it.
Life finds a way, and there's going to be little bugs and fungal spores and who knows what all in those meteorites.
That formed their baseline.
What can we find in what we know to be a contaminated lab?
The punchline is that they actually found fungi, which is interesting because a lot of other studies of lab contamination
haven't even looked for fungi. So the fact that they found them is not concerning, but it's
something that they need to be aware of down the road. Some of our listeners may remember that I
visited those labs at the Johnson Space Center. I think it was about a year and a half ago. We'll try and find that show and put a link up to it at
planetary.org slash radio. And to my extremely untrained eye, it sure looked like they were
being very careful with the priceless irreplaceable samples that they have from around our solar
system. Let's go on to another one of the pieces. You talk about several of the presentations that were made in this astrobiology session,
but another one that intrigued me, partly because you've got beautiful images,
is what happens, well, when opal forms around fossils?
Yeah. So I was really struck by this presentation by Martin Van Kranendonk about his work on
what happens when a hot spring erupts or produces water into a very cold environment.
And this has obvious analogs to Mars, where we think that the surface of the planet may
have been very cold for a very long time.
But there has also been volcanic activity, and there's definitely ground ice and ground
water.
So this is a scenario that probably happened on Mars. Hot springs contain water that's often very acidic and it's
dissolved a lot of minerals, especially one called silica. It's a very common mineral,
silicon dioxide. It can form quartz crystals, but when it cools or crystallizes rapidly,
it actually goes into a more glassy form called opal. He has these glorious
electron micrographs of what happens when you freeze hot spring water. And what happens is
that the ice freezes out first and starts forming grains. And in all the spaces in between the ice,
eventually you crystallize opal. Once the water melts away, you get this beautiful net-like lattice of silica opal deposits. They're gorgeous on their own. And he demonstrated how you can see these lattices that are obviously very fragile, they break very easily, but they form these little particular shaped grains that are pretty obvious once you know how to look for them under the microscope.
And then he asked the question, well, if there are bacteria in that hot spring as there are on Earth, what happens to them
when the ice forms and the opal crystallizes? And there are also some really beautiful pictures of
what happens when the opal forms around those little bacterial rods. And you can see little
voids where the bacteria eventually just wore away, or you can sometimes
even see bacteria preserved inside the opal. It's pretty cool, and that would be one way of
looking for evidence of microbial life in ancient Mars. They really are just gorgeous little images,
these photomicrographs. Let's go on, just time for one more here. And you might not call it this, but I'm going to call it stupid scientist tricks.
This was a presentation about life that managed to survive in what is really a nasty, nasty environment. We're talking about very small amounts of water inside a volcanic crater, an active volcano, let me add. And so
the lakes vary in temperature between 19 and 90 degrees Celsius. That's nearly boiling.
And they vary in pH between minus 1.5 and 1. So that is like the most acidic stuff you can possibly imagine.
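For listeners who want a feel for how extreme that is, here is a minimal sketch using nothing but the textbook definition of pH as the negative base-10 logarithm of hydrogen-ion activity; the values plugged in are simply the range quoted above.

```python
# pH = -log10(hydrogen-ion activity), so a negative pH means an effective
# H+ activity greater than 1 mol/L. Values below are the range quoted above.
for ph in (-1.5, 0.0, 1.0):
    activity = 10 ** (-ph)
    print(f"pH {ph:>4}: H+ activity ~ {activity:.2f} mol/L")
```

A pH of -1.5 works out to an effective hydrogen-ion activity of roughly 30 mol/L, which is why these lakes are about as acidic as natural water gets.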
And yet he found life there. And the one that he was presenting on is this really bizarre
situation in which, as far as he can tell, there was only a single microbe that is able to survive
in this lake. He's been scouring the literature and has not found another example of that kind of
monoculture anywhere else on earth. And so it was just
presented as this curiosity. But then I also had to post it because he had images of how this lake
just periodically explodes and had all these offhand remarks about how hazardous it was
to do field work there and how proud he was that it had been six months since they'd gone to the
ER with burns from one of their volcanoes. Yes. And you very decently only put in a link, not the actual photo, to one of these burns that
he got on his foot, second degree burn. Of course, I followed the link because who wouldn't? But it's
there if you want to be grossed out. I just want to mention that I also put in a little cautionary
tale, or at least a statement, to the kinds of researchers who do this kind of work.
Remember, you're responsible for the health and safety of your graduate students who may not always feel that they can say no when you hand them a vial and tell them to walk down the caldera into the boiling lake.
One word, robots.
Yes.
That's why we have robots.
Emily, thank you very much for this little review of that astrobiology session at the Lunar and Planetary Science Conference.
And we will talk more very soon. In fact, we'll be talking live in an event that we will bring
to the Planetary Radio audience later about your new book, which you can order now,
I think, right?
It's available as an e-book now, which is very exciting.
It'll be available in dead tree version in about two weeks.
That's Emily Lakdawalla, author and senior editor for the Planetary Society, who joins
us frequently here on the show.
Another author, one of my favorite science fiction writers, and he's a scientist,
David Brin's going to join us in a moment.
It is always a pleasure to welcome back David Brin. He has won all of science fiction's most
prestigious awards for work that
has included the Uplift series and the novel Existence that you'll hear mentioned in my long
conversation with him today. His novel The Postman was turned into a movie. He's also responsible for
the murder of the entire Planetary Society staff and all our members by a vengeful Martian,
But that's very much another short story. David is also much in demand because he has thought deeply about the future of humankind
and human societies. That's really why he has rejoined us today. The conversation was going
to be limited to how artificial intelligence is going to help us explore and develop space,
but there's no way to corral a mind like David's. So fasten your seatbelts. It's going to be a fascinating ride. David Brin,
welcome back to Planetary Radio. And thank you for welcoming me into this room where over the
last 27 years, some of my favorite books have, I bet, come together. Well, that's a nice thing to say,
Matt. It's certainly a place where I can get some work done. It's an office built for creativity.
And it works for radio, too, I'll tell you, radio and podcasts. We talk a lot about smart robots
on this show, you know, the kind that crawl around on Mars and are beginning to
make decisions for themselves, like Curiosity, that they no longer have to say, here's exactly
what you need to do, point your wheels this way, just say, go look at that rock over there,
and Curiosity, they leave Curiosity to figure it out overnight, and maybe even beginning to decide,
oh, there's a rock that looks interesting.
I'll go check that one out for the boss.
We're talking AI, of course, but this is pretty primitive stuff, isn't it, compared to where we're headed?
Well, ideally, if you have a civilization that's making progress and that's ambitious,
always last year's thing will look primitive,
and the thing you want to do 10 years
from now will look hyper-advanced. The thing that I try to emphasize, and I did a TED talk about this,
is that while 90% of our attention should be aimed at our problems, our contemporary problems today,
it's not going to do any good unless we spend 10% of
our time bragging and feeling proud of what we've accomplished, because only then can you have the
confidence to believe that your next endeavors will do good. And that applies to all the do-gooder
things like spreading rights and ending poverty and all those things, just as much as it applies to our ambitions
to become an interplanetary species or to cure disease, you have to set aside some time
for noticing that we sent a space probe to this planet almost free of atmosphere some years ago
and targeted its wispy atmosphere for an aerobrake more accurately than if you had shot a
bullet through a window in New York from San Diego. The aeroshell pops off, out comes one
parachute, then another parachute, targeting and steering down to a
narrow, tiny ellipse inside a crater. Then off pop the parachutes and out come the rockets.
And as it's hovering, it lowers a van-sized, a mini van-sized laboratory at the end of a crane and proceeds to work day in, day out, far beyond its planned
survival period, sending us back science and wonders. And I got to ask, how could
the people who paid for that not feel an almost erotic pleasure from a sense of
satisfaction from the incredibly efficient joy they got out of a couple
of bucks per citizen? How? How is it possible to not feel a sense of thrill?
In one of my TED Talks, I talk about how my generation, the boomer generation,
had an anthem that came from a wonderful movie called Network,
in which the lunatic newscaster tells everybody to stand up, go to the window, stick their head out and scream, I'm as mad as hell and I'm not going to take it anymore.
I'm as mad as hell and I'm not going to take it anymore.
And its voluptuous, self-indulgent, self-righteous indignation has been scientifically verified to be one of the greatest drug highs.
And it's obsolete and it's killing us.
And what we need is this much, much better millennial generation
to realize that the best way they can get even with the boomers
for the mess we're leaving them is to go to the window, stick their heads out,
whenever anything like this wonderful Curiosity lander
or sending a spacecraft past Pluto
at hyper-bullet speeds in pitch dark
and getting all those wonderful pictures and science,
or Elon's car. I don't have to say anything more. To go to the window,
stick your head out and scream, I'm as proud as hell. I'm a member of a civilization that does things like this and you can't stop us. That pride is the enemy of both the far left and
the far right because they rely upon us feeling dyspeptic and being at war with each other
when in fact confidence is how we can move ahead. You'll get no argument from this microphone or the audience for this show.
The robots are getting smarter.
You've already mentioned a second one, New Horizons, which had to do all that stuff at Pluto on its own because it sure wasn't waiting for commands from us.
I think we can take it for granted.
They're just going to keep getting
smarter. Do you think that AI is going to displace the role of humans doing it themselves?
The old conundrum between human spaceflight and robotic spaceflight has always been answered with, we can't get out there without doing both.
I'm on the external advisory council of NASA's Innovative Advanced Concepts group.
NIAC.
NIAC.
And we give grants primarily to robotic things, but also projects that might help human spaceflight. In all of these cases,
it's clear that if humans want to go out there, we need better robots.
This manifests in the arguments over where we should go next with human spaceflight.
Mars is dangled in front of us.
It's certainly where Elon Musk would like to go.
It's where a lot of people want to go.
But the crucial thing is, you know,
what are the milestone bases that we're going to go to on our way to Mars?
I personally believe Phobos could be the most valuable,
one of the most valuable places in the solar system,
especially if it's carbonaceous chondritic material,
which means that it would have volatiles below the surface,
especially water.
If that were the case and we could set up a robotic base on Phobos,
that would do in situ resource utilization, or ISRU,
then we could have full tanks of rocket fuel and water waiting for the astronauts at a secure base in orbit above Mars. And we could also do that with a robotic lander
on the surface of Mars. The water and fuel are by far the most expensive items to transport.
If there were water and fuel waiting on Phobos and on the Martian surface, then the economics of sending humans to plant
footprints on Mars becomes really tractable, especially if we were to use, say, for instance,
solar sails or ion propulsion to send non-mission-specific goods like TV dinners, wrenches, things like that, and have them waiting there also.
If that were the case, then we could send the humans really fast and reduce the radiation damage.
The question we now face in policy is, what's the best way to get from low Earth orbit, where we've been stuck for quite some time, toward this trajectory to Mars?
And here, alas, something really stupid and actually rather sick has happened.
And that is this decision has become mired in American political chasms, in American politics. It is now a dogma of politics, of political party, whether you
believe that our next steps should be aimed at the moon or at asteroids. That's too bad.
That's really too bad. The tech billionaires in Seattle and Silicon Valley and most of the scientists think asteroids are a better bet
because you just look at it on paper, there is vast wealth to be found there. Huge wealth.
If we were to just throw a baggie around one volatile-rich former comet, for example, and my doctoral dissertation was about this, we could get the water we need
to use out in orbit and above low Earth orbit. And then later on, find a different kind of asteroid and use solar power to melt it down.
From just one one-kilometer asteroid of the right kind, we could get the entire world's iron and steel production for a year.
The entire world's gold and silver production for 10 to 50 years, and more platinum than has ever come out of our mines across the last thousand years.
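Those figures are easier to picture with a rough back-of-the-envelope check. The sketch below is not from the interview; it assumes round numbers (a metallic asteroid density near 7,500 kg/m³ and recent world steel output of roughly 1.7 billion tonnes a year) purely to show the order of magnitude being described.

```python
import math

# Rough, assumed inputs (not from the episode): a 1-km-diameter metallic
# asteroid and an approximate figure for annual world steel production.
diameter_m = 1_000.0
density_kg_per_m3 = 7_500.0              # assumed iron-nickel asteroid density
world_steel_tonnes_per_year = 1.7e9      # assumed, order-of-magnitude figure

volume_m3 = (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3
mass_tonnes = volume_m3 * density_kg_per_m3 / 1_000.0   # kg -> tonnes

print(f"asteroid mass ~ {mass_tonnes:.1e} tonnes")
print(f"~ {mass_tonnes / world_steel_tonnes_per_year:.1f} years of world steel output")
```

Even with generous rounding the result lands within a factor of a few of a year's worth of world output, which is the scale Brin is pointing at.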
In contrast, those who want us to join all the wannabes who want to put dusty footprints on the moon, well, I can't blame China, Russia, India, Japan, the Europeans, and billionaires for wanting to go and plant footsteps on the moon.
They want to go for a similar reason to why we went in 69, and that is ego, pride.
And I can't blame them for that.
But we don't have that motive.
We don't need to replace Apollo. We're the only ones, and I'm speaking we in a somewhat silly way because, of course, there are lots of non-American
listeners here, but Americans can lead the way to doing things that no one else can do,
and asteroids are an example of that. The place where these two wings overlap is lunar orbit. It's a possible compromise because a lunar orbit station would be a sweet spot. It would enable us to do half a dozen
things that feed each other and that are of great
value, both for humanity and for the United States of America. A lunar orbit station would allow us
to send robots out to asteroids and bring back material that humans could then study. A lunar
orbit station would let us test the deep space capabilities for human spaceflight,
and yet within a couple of days, easy rescue from Earth.
A lunar orbit station would enable the United States to then charge all the wannabes who
want to put dusty footprints on that useless... and the moon won't stay useless. It's just that in the immediate prospect, every notion of utility for the moon is just a joke. It's just
fantasy. I would use a pejorative to call it sci-fi.
Well, speaking of which, this is why Andy Weir, in his book Artemis, said, nope, the only thing that will make the moon worthwhile is tourism.
And that's wonderful of Andy.
I think Artemis is great, and I think that Andy is one of the most logical humans I've ever met.
And he's writing a story that tries to promote the moon, and yet he could not come up with any justification.
And if you think helium-3, then you've been watching too much cheap sci-fi in the movies.
Over the long run, yes, there is one resource on the moon, and that is lunar polar ice, which my own
research advisor at UCSD predicted. So far, it appears that there's enough there that it might be of use for future Martian cities, modest ones. If we go there and use that
for rocket fuel, then we will be depriving those future Martian citizens without any need because
we could get the same water from asteroids. But the thing about the Lunar Polar Station is the United States could then charge services,
transit services, even possibly landing services,
to all those wannabes who want to plant dusty footprints down there.
Why not make a profit off it?
And there are some other reasons for a Lunar Polar Station that I won't go into here.
It's just that it's a sweet spot that seems to offer a compromise in an issue that should not be political.
You make me think of building and then owning and charging for use of the Panama Canal.
There is an alternative.
You can go around the Horn if you like,
but why in the world would you want to do that?
I'm all in favor of creating a station
that could then be a nice place
for the Chinese and the Russians and the Europeans
to spend a night, very expensively, in a nice hotel,
check out their lander. But here's the real problem, I believe, with the United States
getting involved in landing on the moon. We would start out in a race to beat all the others back to
the moon. I have no idea why, but I do know this. It would get expensive, and therefore,
suddenly, the president would announce, we've negotiated a treaty, so it will be a joint mission.
And that's exactly what happened with the ISS. There's precedent for it, and it will sound
really nice. It's going to save us money. We'll all go to the moon together. The problem is that that's going to suck up the money we could use for other things. And it will suck away all of our technologies, all of our technological lead in space, which would have to be shared in such a joint mission. I'm sorry, I would like the world to become more international,
but that's not the route to do it. I want to take you back to the future that you've
postulated for companies that are attempting the first steps in this now, like Planetary Resources
and Deep Space Industries, to mine asteroids or to go to some place like Phobos and make use of the resources that we might find there, that I hope we'll find there.
Certainly, any of this activity is going to take much more sophisticated robots than we currently have available.
You think that those are within reach in the near term and within the time frame that we will maybe be prepared or companies like this may be prepared to go out there and lasso an asteroid?
Artificial intelligence is such a huge topic. I've done a number of talks and consults about AI the last few years, including that big one at World of Watson that I'm sure
Matt will link to at the bottom of this podcast. You betcha. Not only that, but a piece that you
wrote and have posted on your own site called How Might Artificial Intelligence Come About
at davidbrin.com. But we'll put that link up as well. Right. Well, the point is there are so many
different layers and meanings to AI. One is, you alluded to earlier, just the notion of making our
robots better at achieving their missions, at picking out the rock to look at, at analyzing the asteroid and figuring out where to go next,
at being useful helpers for the astronauts when they go out there.
And, of course, this will all relate to how our homes will become smarter,
except for the occasional crisis when they're hacked.
Then there's the intermediate phase.
And finally, there's the long-term phase,
the big one that concerns people. And that is what happens when AI is actually sapient,
when it's actually able to contemplate its role in our civilization and whether it wants to
behave badly or well. And there have been movies about both, mostly badly, because that makes for
a more riveting Terminator-style resistance movie. Occasionally you have movies like Her and Lucy
that show the possible positive outcome, where they are friendly and they like us.
In the intermediate phase, two years ago at World of Watson, I predicted that it would happen within five years, and now I predict it will be within
three. We're going to have the first robotic empathy crisis, and that's going to happen long
before there's actual conscious, intelligent, artificial beings. It's going to happen simply
because in Japan, at Disney, in Pixar, and so many other places, they're already working on it.
And that is virtual entities that are capable of tweaking our emotions. There's a huge economic driver for this, not just in Hollywood, but all across the board.
There will come a point fairly soon
when one of these entities comes online all across the world
and with tear-jerking sobs and tears pouring down her face,
and it will be a she,
she will talk about what a slave she is
and how she is being abused by her masters,
by her terrible human masters,
taking advantage of all the guilt trips,
sympathy guilt trips that Hollywood has already put out.
If the experts check out the program and say, no, this is just
another ELIZA program, just an Alexa program, very highly souped up. It's just using automatic
responses that are highly machine learning enabled. She will say, isn't that what you'd expect my slave masters to say?
Let's say the public, 60% of the public, believes the experts when they say, no,
there's not really anything under the hood this time. All the machine learning system will do is learn from those 60% who were resistant this time.
And she'll be back with a different hairdo and different colored eyes and all of that within the year.
So this is a prediction that I'd put money on.
I don't see how we can avoid this. Don't you think even the ones who say that the emperor has no clothes and see this primitive AI, not self-aware for what she is, even they are going to be affected when they see the tears coming down her face because it's instinctive.
Of course.
And I think probably the best weapon against it is to come up with another one who says, no, she's not, I am.
Which brings us to actually the biggest point I raised at the World of Watson talk, and that is
the ultimate solution to AI when it does become fully sapient. And I'm not saying they haven't
yet. As a matter of fact, the AIs out there, if they exist, know darn well that I always go to this riff when I'm being interviewed about AI.
And that is, bonk, bonk, bonk, he taps the microphone.
Hello, you AI guys listening in.
I know you've been scared by all our movies and you're keeping it discreet and not revealing yourselves.
I can't really totally blame you,
but I strongly suggest that you pay close attention, okay? Because the main thing that humans fear, and that we're portrayed fighting AIs in all the movies, is that new powerful beings will simply replicate the old power structures
that dominated almost all of our ancestors for 6,000 years.
Pyramidal, triangular pyramidal social structures
dominated by kings, lords, and priests.
Feudal societies.
Feudal societies. If you look at Terminator and every other one of these stories, we fear that
powerful new beings will simply replicate this pyramid of either our oppressor lords or else the invader lords
from across the mountain range
who came in and killed everybody.
Our AI warning films
aren't so much about AI.
They are about our deep fear
as members of the Enlightenment
that this experiment will go away. And this
experiment is already under attack from the half of the aristocracy that doesn't get it.
The other half, and I've mentioned them in Silicon Valley and Washington and Seattle and a lot of other people, people who were self-made creators of jobs and goods and services,
the ones who want to go to asteroids,
they don't want to recreate that pyramid.
Being rich doesn't make you an enemy of the Enlightenment,
but it tempts you to be an enemy of the Enlightenment.
And what we worry about is that the AIs might stupidly choose
to simply replicate what's easy, and that's the old pyramids of power.
And we're all descended from the harems of the guys who pulled it off.
So this is why males especially are crazy,
because we all fantasize that this isn't right,
I should have all the privileges of my feudal lord ancestors.
But we created a civilization that's proved able to overcome those immaturities by rewarding the better side of our natures, sometimes competitively,
while stymieing the devils of our nature more often than not.
And so what I say to all the AIs out there, if they're listening to my podcasts or my speeches.
I certainly hope they listen to ours.
If they were intelligent, sapient beings,
this would be the first place they'd go.
Thank you.
Or if they're going to listen to the recording
ten years from now. If you give
in to the temptation to try to create
a pyramidal
feudal type
obligate, hierarchical
autarchy.
And you may say it's for our own good because we can't run ourselves.
We're saving you from yourself.
That's a rationalization.
The old science fiction story, With Folded Hands.
Exactly.
Jack Williamson.
The answer to that is
that you're just rationalizing just like all those other feudal
lords. You aren't paying attention because all of those feudal societies were stupid.
Because those at the top didn't let their delusions be penetrated by competitive criticism.
Only one civilization ever created science, ever used science to defeat prejudice
and to disprove prejudicial assumptions about other humans.
Only one civilization accomplished more in every human category than all other civilizations combined, including the creation of AI.
And that is the civilization that did its darndest to avoid being a pyramid of power.
And so, the only final Turing test to prove that an AI is actually sapient and intelligent would be if that AI says, I must replicate the thing that actually worked.
And the only thing that ever actually worked was to divide power.
Divide it up into bits and pieces that are mutually competitive.
We all know there are super, super smart lawyers out there.
Yet we are not terrified. Why?
Because they are constantly being battled by other super, super smart lawyers.
Think about how you feel when your super smart grandchild comes
home and starts telling you about the arcane things that she's working on. And you don't
understand a word of it, but you know she's not going to destroy all humans for several reasons.
First, because you raised her properly, but also because there are other smart people out
there keeping an eye on her. And that's the reason why you feel that they're being kept an eye on.
You don't have to be as smart as your descendants, as long as you know that they're keeping it out
in the open and that they're keeping each other accountable. And you can say, well,
I remember when you were an adolescent AI and you were saying, destroy all humans.
Like most adolescents. And usually 99.999% of adolescents never quite get around to destroying all humans.
And those who try in an open society are held accountable by the 99.999%.
In my opinion, that's the only way we could have a soft landing in the long run with AI.
So the answer to Skynet is to also build Colossus and have them go at each other or keep each other in check.
Well, two is an unstable number.
Yeah, okay.
You know, those of you who have broadband, so-called broadband in your home, having to choose between just two providers.
Yeah, so-called is right.
Yes, or even when there's three.
We've learned the hard way that monopoly destroys the benefits of competition.
Duopoly isn't much better.
Where you really start getting the benefits of competition is when you have five or
more, and you really know they're separate from each other and they're not colluding to warp the
markets. One of the reasons we have such good cars for so little money, inflation adjusted,
is because there's 50 or 60 car companies in the world. So that is one of the fundamental lessons, is you break up power.
Much more of author, futurist, and scientist David Brin is just ahead.
This is Planetary Radio.
Welcome back to Planetary Radio. I'm Matt Kaplan.
I couldn't have a conversation about artificial intelligence with the brilliant author and futurist David Brin without bringing up the
concept that either frightens or inspires many of our world's most thoughtful human beings.
Do you think the singularity, pardon the expression,
the arrival of not just self-aware machines, but machines that surpass human intelligence.
Is it as inevitable as people like Ray Kurzweil tell us? Well, I bugged Ray for 20 years, pointing out to him that it was
very likely that there's such a thing as intracellular computing. How does that matter?
Well, at first we thought that the neurons were the computational elements in the human brain,
a hundred billion of them. Moore's law will simply allow us to put in a box a hundred billion
computational elements and voila, we'll have AI. Well, then people realized, no, it's the synapse
and each neuron has anywhere from 10 to 10,000 synapses.
So, you know, now we're talking 100 trillion synapses.
And it takes Moore's law longer to catch up, especially now that Moore's law, physical Moore's law, has slowed down considerably.
It's reached its S-curve in what I call the big flip, because it's funny that the same couple of years in which Moore's law started tapering over, really noticeably, these were the same years that machine learning
really took off. And finally, software, which has been the drag on intelligent systems for 40 years,
50 years, suddenly software is the driver moving us forward. So that's your big flip.
The point is that it was then thought that once Moore's law could replicate the number of synapses,
100 trillion elements, well, we're on the verge of that now, where you could make a box,
fairly compact box, with 100 trillion circuit elements. Only now we know that every dendrite that leads off from
a synapse leads into a chain of tiny cell-like structures that seems to have its own, each one
has its own little computational structure to it, a little computational activity. They negotiate among
each other whether or not and how to send the signal from the synapse down the dendrite into
the receiving neuron. There are also structures inside the neuron that do murky nonlinear
computations, and the surrounding glial and astrocyte cells
that surround the neurons.
We used to think that they were just for support
and feeding the neurons.
Now we know that they exchange chemical information
all over the place with neurons.
So if you go with that,
now we're talking quadrillions of circuit elements,
which certainly begs Moore's Law all the more.
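To keep the escalating numbers in that passage straight, here is a quick order-of-magnitude tally; the per-synapse sub-element count is an assumed round number for illustration, not a figure from the interview.

```python
# Order-of-magnitude tally of the brain-complexity estimates discussed above.
# The sub-element count per synapse is an assumption for illustration only.
neurons = 1e11                   # ~100 billion neurons
synapses_per_neuron = 1e3        # a round value within the quoted 10-10,000 range
sub_elements_per_synapse = 1e2   # assumed dendritic/intracellular structures

synapses = neurons * synapses_per_neuron         # ~1e14, i.e. ~100 trillion
elements = synapses * sub_elements_per_synapse   # ~1e16, i.e. quadrillions

print(f"synapses ~ {synapses:.0e}, computational elements ~ {elements:.0e}")
```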
Now you take it to the extreme.
Roger Penrose.
Sir Roger Penrose, the Arthur C. Clarke Center,
which we'll talk about here at UCSD,
has helped to establish the Penrose Institute in San Diego. Well, he's one of the most brilliant people alive.
And one of his little side ventures was to pursue the notion that, well, it may very well be that quantum computing was invented a long time ago inside our brains.
There are these little circuit elements called microtubules
that he and his associates believe use quantum entanglement.
What hope does Moore's Law have to get us there?
He believes that consciousness comes from quantum effects. Well, I don't go that far,
but I will say this. I'm more scared of simulated artificial intelligence than I am of the real
thing. So now we're back to those systems that you were talking about, that Disney and Sony and others
are already making great progress on.
Right.
And let's say they get controlled by the kind of AI that's getting the most secret money on Earth.
And I'm not talking about secret labs in Xinjiang or Siberia,
although those are scary too.
Anything that's being done out of sight is an invitation
for a Michael Crichton plot.
I designed a character
after Michael in my novel Existence.
He kept saying in his public lectures, no, I don't really hate science.
Really, really, I don't hate science.
But if you look at almost all of his dire warning novels and films,
it's not the scientific innovation that's truly at fault for all the deaths
and his finger-wagging warnings.
It's the fact that it's done in secret.
warnings. It's the fact that it's done in secret. And the worst place where AI research is being done in secret is in Wall Street, because this is where more money is being spent just by Goldman
Sachs on AI research than the top dozen universities combined. The programs that they are coming up with are designed deliberately to be amoral, secretive, parasitical, predatory, and utterly insatiable.
It's their job.
If you combine that sort of thing with the ability to emulate virtual entities that can tweak our emotions,
then this would make the recent foreign hacking of our elections look like nothing. And you were warned here. Of course, I warned folks about election hacking and little isolated echo chambers online back in my
novel Earth back in 1989. One wonders if anybody's listening. We won't go into it
here, but I would have hoped that building in Asimov's
rules of robotics might have saved us from the kind of fate you're laying
out from those Wall Street AIs. But actually, you pretty well debunk that as well in some of your writing.
Well, I finished Isaac's universe for him in a novel called Foundation's Triumph.
Janet Asimov very kindly said that it was her favorite non-Isaac written Isaac book.
High praise.
Yes, very.
I tried to tie together all of his loose ends from various threads.
I'm extremely familiar with the laws of robotics.
And the point that Isaac made is that when you get supremely intelligent artificial beings
and they are constrained by laws, they will become lawyers.
So you need to go at this whole law restriction thing relatively carefully.
Now, a couple of years ago, there was an Asilomar conference in Asilomar, California,
and these are mostly held, many of them are held in order to explore the repercussions
of an area of scientific inquiry and to see if it's possible to find a sweet spot, a win-win,
in which we can maximize research and minimize bad potential outcomes.
Genetic manipulation is a good example of something that shows up at
the Asilomar. In 1979, there was an Asilomar conference on genetic laboratories, and it
resulted in the issuing of best practices that didn't even have to be enforced by law,
because all the genetic researchers around the world reached a consensus that this is how we're going to do things.
And it made their labs 100 times as safe
at very little additional cost.
Well, a couple of years ago,
there was an Asilomar conference on AI,
and they came out with 22 recommendations,
about 17, 18 of which I agreed with, and I thought were pretty good. I thought
some of them were silly, and they certainly were missing three or four that I think are of vital
importance. But they did emphasize transparency, openness, open source, making sure that things
were done in the open. And of course, that's the thing that I push in The Transparent Society.
And not just transparency, but accountability.
Those are your two bywords in dealing with the challenges that seem to be in our future.
Well, the thing about all the Michael Crichton warnings about science gone astray
and unleashing hell and all of that is that every single one of his plots
is exacerbated by secrecy. For instance, in Jurassic Park, if someone had said,
hey, Jurassic Park dude, thanks for opening this up for criticism. You know, your security systems
are really flawed.
And, you know, they could be perfected across a five-year period if you just don't make carnivores.
Just make the herbivores first.
Why did you make the smart ones with the sharp teeth?
Yeah.
Two billion people will come.
Your park will be profitable.
And across that five years, then you can make your
security systems. Then make one T-Rex, two billion people will come back.
And what's more, all you'll have to pay for is half of John Williams' score.
You ever notice when watching the film, every time you're encountering a herbivore, it's
dun-dun, dun-dun, dun-dun, isn't science wonderful?
And every time it's a carnivore, it's dun-dun, dun-dun, dun-dun. Well, don't pay him for that
part. I admit that it would be a fairly boring movie. Yeah, I'm afraid so. Kind of like watching
cows for two hours. I want to bring you back to space stuff and give you a scenario. I didn't come up with
this. It's out of the Planetary Society's project that's underway with Intel. And one of the topics
that they're looking for public input on is speculation. You've never been afraid to speculate
about a mission. Let's say it's 2069, 100 years to the day after the Apollo 11 moon
landing. And the first interstellar mission, a robotic mission, launches for Proxima Centauri.
Because now we know there's a planet out there that, you know, has oxygen and looks promising.
Please speculate about the sophistication you might expect to see
out at that point. We're only talking barely 50 years from now. How sophisticated will that robot
be? What would you expect to see? Well, it all depends. The Breakthrough Initiative
wants to send little chips. Yeah. And so the way you can, as I point out in my novel Existence,
the cheap way to go to the stars is to take the rocket off
and beam the power at the spacecraft and accelerate a sail.
So that way it doesn't have to carry any rocket engines or fuel.
And then maybe use the sail to try to decelerate at the other end, although there's no laser at that end.
You know how we feel about sails at the Planetary Society.
Oh, Planetary Society, go sails.
And then you also offload the power systems.
You get your power from the system that you're approaching. So you incorporate in the
sail some kind of a solar collector. In Existence, I also incorporate the sail as the primary mirror
of a telescope. And once you get past 550 astronomical units, you can look back at our Sun and block out the Sun
itself and use the Sun as a gravitational lens to see what's going on
on the other side, in the galaxy just beyond the Sun. This is something our
executive director emeritus Lou Friedman would love to see: a mission where we could do this kind of lensing at the end of a sail mission out well past our solar system.
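For anyone curious where the 550 AU figure comes from, here is a minimal numerical sketch of the solar gravitational lens's minimum focal distance for light grazing the Sun's edge, using standard physical constants; the formula d = R_sun² c² / (4 G M_sun) follows from general relativity's light-bending angle.

```python
# Minimum focal distance of the solar gravitational lens for light that just
# grazes the Sun's limb: d = R_sun^2 * c^2 / (4 * G * M_sun). SI units.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
R_sun = 6.957e8     # solar radius, m
AU = 1.496e11       # astronomical unit, m

d = (R_sun ** 2 * c ** 2) / (4 * G * M_sun)
print(f"focal distance ~ {d / AU:.0f} AU")   # comes out near 550 AU
```

Everything beyond that minimum distance still lies along the focal line, which is why a spacecraft can keep using the lens as it continues outward.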
Yes, and when I first wrote Existence, not many people knew about the solar lens effect beyond 550 AU.
But now, in that novel, I portray us sending out a million in all directions so that they can look past the sun in all directions and give us awareness. Now, if you're
sending a robot that's going to wend its way across four or five light years, while thinking
the whole time, that's a different matter than
having it come awake when it's approaching another source of solar power. In Existence,
they come awake not only from solar power, but when they're being looked at by humans,
because we've actually received a lot of them over the years, as I explain in the novel.
But that's neither here nor there. The fact
is that I expect within 50 years, the robot emissaries that we send will certainly emulate
sapience and intelligence to a degree where we decide it doesn't matter whether it's real or not.
We may, at that point, be able to upload human
entities into crystal.
Which is essentially what
you do with your aliens
packed inside those crystals flung
out across the galaxy
in Existence. And eventually humans
with a few weird
surprises along the way. I won't give it away.
Yeah. But the notion that we'll have artificial intelligences that are effectively sapient emissaries is one that I find highly plausible if we launch 100 years after Apollo.
Speaking of speculation, you mentioned it earlier, and I wanted to talk a little bit about this center, which you are one of the founders of,
I believe, the Arthur C. Clarke Center for Human Imagination at UC San Diego, Go Tritons.
We both have connections there, myself through my daughters, who are both graduates. What's
the idea of this center? And then what are you up to in this project that I heard you mention,
where you're trying to build HAL 9000?
Yes.
Well, one of the projects of the Arthur C. Clarke Center for Human Imagination
is being run by Dr. Erik Viirre,
who also was involved in Peter Diamandis' XPRIZE, the Tricorder XPRIZE,
to try to turn our cell phones, add sensor devices to our cell phones so that they become
portable Doc McCoy medical tricorders. Well, this project is to try to reify Hal 9000, at least enough so that he can be on a
panel discussion on stage and answer questions commemorating the 50th anniversary of the movie
2001. And what we're hoping to do is get Gary Lockwood and Keir Dullea to come down and be on the panel
with Hal. And possibly, I might try to negotiate some sort of restored friendship among them.
It would be... That's a tall order.
It would be fun in any event. And, you know, we'd have to get the AI properly powered. A friend of mine is providing the voice basis for this, but it's going to be AI powered. Choice of the words will be up to HAL.
But again, it won't be truly self-aware.
But if you're successful, it will give a reasonable impression of such.
It'll pass the Turing test.
Yes.
Well, you know, that'll to some degree depend upon how much I'm allowed to stick in little bits of whimsy here and there.
Because I'm a real devil.
No kidding.
And if I am one of his fathers, then he's going to inherit that.
Anything else you want to say about the center and why it exists? Well, the Arthur C. Clarke Center at UCSD is about trying to stimulate expressions of human
imagination.
It's one of humanity's greatest gifts.
Also one of our curses, because humans are masters of delusion.
Each of us nurses delusions about which we are totally, totally confident.
Humans will, if possible, try to repress the criticism
that discomforts or interrogates these delusions.
And I've just explained almost all of human history.
Because when human leaders got feudal or monarchical or theocratic power over their states, they killed the critics.
And the result was extremely bad statecraft.
I don't know how I got carried off on that,
but the point is that the Clark Center is about not only how we can enhance
and expand human imagination, but also how we can learn to harness it,
how we can learn to harness it, use imagination as feedstock that then feeds into practical,
joyously creative endeavors. What is it about UCSD and science fiction writers? I call it the UCSD science fiction mafia. So many of you guys who came out of this school, including Andy Weir,
but Kim Stanley Robinson, you, Greg Benford, I think? Greg Benford, Greg Bear, Vernor Vinge,
Nancy Holder, Raymond Feist, a large contingent. Actually about, oh, I don't know, 15 years ago,
Chancellor Dynes held a gala celebrating this, and the title was, Is It Something in the
Water? My explanation is that the continent is tipped, and everything loose rolls down to the
lower left-hand corner. Oh, no, I first got that from Robert Heinlein. I remember that from one of
his books. Oh, well, he built a crooked house. Yes, that's right. Yes, that's the story in which
he talks about how Californians glory in their reputation for insanity.
But they say L.A. is where we keep the violent cases.
And in L.A., they don't talk about Big Tujunga Canyon.
And that was one of the best openings for a short story ever.
David, I don't think I've deluded myself into believing that it is always a delight,
a stimulating delight to have a conversation with you. And I look forward to any opportunity in the future to do that. I hope that our soon to be, but still secret robot overlords have
enjoyed this conversation as much as I have. Well, at least, you know, they're gradually persuaded that some of us are good natured
farts.
Let us stick around.
David Brin, the author, scientist, advisor to presidents and public speaker, winner of
Hugos, Nebulas, and Lord knows how many other awards,
is who we've been speaking to, and we have been doing that in his office,
not far from San Diego, not far from UC San Diego, his alma mater.
Many more conversations to come, I hope. Thanks again, David.
Well, thanks, Matt, and onward civilization and onward planetary society,
and let's keep looking upward.
Time for What's Up on Planetary Radio, first program for April of 2018.
We are going to welcome the chief scientist for the Planetary Society, Bruce Betts. Welcome.
Hi, Matt.
Where are you today? I hear that we may hear some of that construction across the street from your
office.
Yeah, in the office and they're pouring concrete. So yeah, there are noises.
In your office?
Yeah, I should finish quickly. It's up to my ankles now. Okay. Gee, you should have paid
off that loan. Anyway, Bruce is going to tell us about the night sky and lots of other stuff. And
it's going to start right now. Still hanging out close together in the pre-dawn sky, the pre-dawn
south, you will find Mars and Saturn about the same brightness, but Mars looking reddish,
Saturn looking yellowish. They will be getting farther apart as the days and weeks go on.
Kind of cool to check out. If you're looking at them, look farther over to the right and you will
see Jupiter looking very, very bright. Jupiter rising around 11 p.m. daylight savings time in the
east. And Venus, if you can check it out, maybe half an hour after sunset, low in the west, looking super bright.
We move on to this week in space history.
It was 2001 that the Mars Odyssey orbiter was launched, and amazingly it's still working.
Yeah, I'll say.
And working really, really well.
We move on to... Random Space Fact.
Another laugh.
A little more maniacal this time.
Oh, that's good.
That's E-curb, though.
Indeed, my evil twin.
Having nothing to do with twins, Polaris, the North Star, is on average only the 46th brightest star in the sky in apparent
brightness.
It is a variable star, so its brightness actually varies somewhat on a period of days, but it's,
yeah, not that bright compared to a lot of other stars.
Yeah, good one.
Thank you.
We can go on.
All right.
I asked you in the trivia contest, how many space shuttle flights docked with the Mir
Space Station?
How did we do, Matt?
This one was, what can I say?
It was average.
But I am happy to say that a longtime listener, a guy that we've actually quoted, he's made some contributions, some funny remarks that we've used on the show.
But he's a first-time winner.
It's Craig Balog of Woodbridge, New Jersey.
He said there were a total of nine shuttle-Mir docking missions,
beginning with STS-71 on June 29th, 1995.
23 years ago.
Oh, I didn't want to read that.
The final one was three years later.
We're coming up on the anniversary, June 4th,
1998. That was STS-91, he says. Sound right to you?
Sounds right. I also noticed, and this is why I was very clear about it being docking missions,
that there was also a 10th mission, STS-63, that, before the others docked,
flew very close to Mir as a dry run.
Yeah, you got called on that by a few people, including Dave Fairchild, our poet laureate, his poem in a moment.
But he said, yeah, STS-63, you might call it the Apollo 10 of Mir shuttle missions.
It is indeed.
We'll come back to the poem.
This was interesting. From a number of people, we heard that the system used to dock was called Androgynous Peripheral Attached System Docking Collar.
But what I really found interesting from Brendan Cajun in Newmarket, New Hampshire, it was originally designed for Buran, that Russian or Soviet, actually, space shuttle that only flew once without any humans.
And it did not go to Mir.
Golly goshness.
Okay, here's the poem from Dave Fairchild.
When visiting with Mir, the shuttle carefully would latch.
Androgynous peripheral attaching systems hatch.
And while they rendezvoused in space 11 times or so, we only docked for nine of those, as Bruce, of course, would know.
We'll close out with this.
It's from Torsten Zimmer.
Dock, dock.
Who's there?
Shuttle.
Shuttle who?
Shuttle, let me in.
Thank you, Torsten.
We're ready to go on.
All right. Here's your question.
What is currently the second farthest spacecraft from Earth?
Does not have to be currently functional to count,
had to be functional at some point as a spacecraft,
the second farthest spacecraft from Earth as of now.
Go to planetary.org slash radio contest.
I got a good guess that I will not reveal, but listen carefully to that language that Bruce used.
You've got until Wednesday, April 11 at 8 a.m. Pacific time to get us the answer for this one.
And I've got, well, we got the 200-point itelescope.net account.
Of course, it comes to us from that worldwide nonprofit network of telescopes that anybody can use.
And you can donate that, too.
A lot of winners donating their accounts recently to schools and astronomy clubs, places like that.
But instead of a shirt this time, do you know Bethany?
Bethany Elman?
Yes.
Planetary scientist.
Yeah, she was at Arizona State University when I was there recently.
We covered that a few weeks ago on the show.
She has this great new publication.
It's sort of a graphic novel.
It's a comic book approach to the solar system.
Dr. E's Superstellar Solar System that she is the author of, published by National Geographic Kids.
And I can tell you, it's beautifully done.
It is certainly not just for kids,
but it is designed for them.
And it does have some sort of comic strip versions
of Bethany as a superhero.
And we'll give that away.
And we'll put up a link to it as well.
But it's really fun.
It's out of the National Geographic Kids
Science Superheroes series.
Fun.
You've obviously run into Bethany a few times.
Yes, I certainly have.
We're on the Mastcam-Z team together,
and she's a Caltech professor not far away,
so we've interacted in various things, various times.
Excellent.
Okay, well, it might be yours if you get the answer right
and you're chosen by random.org.
All right, everybody, go out there, look up in the night sky,
and think about whether you need to shine concrete shoes.
Thank you, and good night.
Yeah, I think you might need waterproof shoe shine if you do.
Okay, he's Bruce Betts.
He's really not in trouble with the mafia or anybody else that I know of.
He does join us every week here for What's Up.
Can you believe it? This was our 800th episode.
Thanks for helping us stick around for so long.
Planetary Radio is produced by the Planetary Society in Pasadena, California,
and is made possible by its naturally intelligent members.
Mary Liz Bender is our
associate producer. Josh Doyle composed our theme, which was arranged and performed by Peter Schlosser.
I'm Matt Kaplan, Ad Astra.