Lex Fridman Podcast - #217 – Rodney Brooks: Robotics
Episode Date: September 4, 2021
Rodney Brooks is a roboticist, former head of CSAIL at MIT, and co-founder of iRobot, Rethink Robotics, and Robust.AI. Please support this podcast by checking out our sponsors:
- Paperspace: https://g...radient.run/lex to get $15 credit
- GiveDirectly: https://givedirectly.org/lex to get gift matched up to $300
- BiOptimizers: http://www.magbreakthrough.com/lex to get 10% off
- Four Sigmatic: https://foursigmatic.com/lex and use code LexPod to get up to 60% off
- SimpliSafe: https://simplisafe.com/lex and use code LEX to get a free security camera
EPISODE LINKS:
Rodney's Twitter: https://twitter.com/rodneyabrooks
Rodney's Blog: http://rodneybrooks.com/blog/
PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips
SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman
OUTLINE:
Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(07:33) - First robots
(28:58) - Brains and computers
(1:01:47) - Self-driving cars
(1:21:57) - Believing in the impossible
(1:32:47) - Predictions
(1:43:49) - iRobot
(2:11:11) - Sharing an office with AI experts
(2:23:21) - Advice for young people
(2:27:07) - Meaning of life
Transcript
The following is a conversation with Rodney Brooks, one of the greatest roboticists in history.
He led the Computer Science and Artificial Intelligence Laboratory at MIT, then co-founded iRobot,
which is one of the most successful robotics companies ever. Then he co-founded Rethink Robotics,
that created some amazing collaborative robots like Baxter and Sawyer. Finally, he co-founded robust.ai,
whose mission is to teach robots common sense,
which is a lot harder than it sounds.
To support this podcast, please check out our sponsors
in the description.
As a side note, let me say that Rodney
is someone I've looked up to for many years,
throughout my now over two-decade journey in robotics,
because one, he's a legit, great engineer of real-world systems, and two, he's not afraid
to state controversial opinions that challenge the way we see the AI world.
But of course, while I agree with him on some of his critical views of AI, I don't agree
with some others,
and he's fully supportive of such disagreement. Nobody ever built anything great by being
fully agreeable. There's always respect and love behind our interactions, and when a conversation
is recorded, like it was for this podcast, I think a little bit of disagreement is fun.
As usual, I'll do a few minutes of ads now, no ads in the middle.
I try to make these interesting, so hopefully you don't skip, but if you do, please still
check out the sponsor links in the description.
It is the best way to support this podcast.
I use their stuff and enjoy it, maybe you will too.
This show is brought to you by PaperSpace Gradient.
These guys are amazing.
It's a platform that lets you build, train, and deploy machine learning models of any size
and complexity.
I love how powerful and intuitive it is.
I'm likely going to use Paperspace for a couple of machine learning experiments I'm doing as part of an upcoming video.
Fast.ai, of course, I highly recommend you use it.
That's a course by Jeremy Howard, who is as legit as it gets in the machine learning
space as an educator, as a programmer, I highly recommend his stuff.
And plus, he's just a good and brilliant human being.
You can host notebooks on there, you can swap out the compute instance anytime, so you
can start out on a small scale GPU or CPU instance and then swap out once your compute needs increase. To give Gradient a
try visit gradient.run slash lex and use the sign up link there. You'll get
$15 in free credit, which you can use to power your next machine learning
application. That's gradient.run slash lex. I hope you use it. I hope you enjoy it.
And I hope to make a bunch of machine learning videos in the near future.
This show is also brought to you by GiveDirectly, a nonprofit
that lets you send money directly to people living in extreme poverty. GiveDirectly donors
include previous guests of this very podcast, like Jack Dorsey, Elon Musk, Vitalik Buterin, Will
MacAskill, and Peter Singer.
It may seem like there's more optimal ways to do it, but that's not actually the case.
Hundreds of independent studies have shown that direct giving can have positive impacts
on health, nutrition, income, education, and more.
The studies show giving cash unconditionally can more than double incomes, increase school
enrollment and entrepreneurship, decrease skipped meals, illness and depression, and cut
domestic violence by one-third.
It does not decrease hours worked, or increase spending on temptations like tobacco and alcohol.
There's a spillover effect too, where every dollar given amounts to $2.60 in the local
economy.
In the last decade, GiveDirectly has delivered $400 million to over 900,000 recipients across
nine countries. Visit givedirectly.org/lex and your first gift will be matched up to $300.
That's givedirectly.org/lex.
The next sponsor is BiOptimizers, who have a new magnesium supplement.
When I fast or am doing keto or carnivore, sodium, potassium, and magnesium are essential.
Magnesium, I think, is the trickiest one of those to get right.
That's why I use Magnesium Breakthrough from BiOptimizers.
Most supplements contain only one or two forms of magnesium, like
glycinate or citrate, when in reality there are at least seven that your body needs and
benefits from. I recently did an Instagram live with Andrew
Huberman, where we talked about magnesium, and then we talked about magnesium offline as well.
He educated me about it quite a bit. Again, it's tricky to get right, and there's a lot
of benefits if you get it right. And BiOptimizers is just an easy way to get it right.
It's kind of incredible when you get this part right, how easy it is to do the keto or the fast thing.
Anyway, go to magbreakthrough.com slash lex for a special discount. That's magbreakthrough.com slash lex.
This show is also sponsored by Four
Sigmatic, the maker of delicious mushroom coffee
and plant-based protein. Does the coffee taste like mushrooms, you ask? No,
it does not. It's delicious. And also, just the ritual of drinking hot coffee
that has this aromatic flavor and obviously smell to it in the morning.
That's just something that's like a catalyst for my mind to get extremely focused in the
morning.
It's the caffeine, the aroma, the hot coffee, the cup, the steam coming off of it.
That's when I know I'm, once again, in the zone.
In the first two, three hours, I try to make sure I allocate to deep work and deep thinking,
really focusing on the difficult tasks where my mind is sharp and jumping from one element of a
task to another and getting stuff done. Anyway, get up to 40% off and free shipping on mushroom coffee bundles if you go to foursigmatic.com
slash lex.
That's foursigmatic.com slash lex.
This show is also brought to you by SimpliSafe, a home security company designed to be simple
and effective.
It takes only 30 minutes to set up.
You can customize the system for your needs at simplisafe.com slash lex.
I have a set up in my place and love it.
It's the whole thing.
The cameras, the monitoring, the response from the setup to the day-to-day operation.
It's super simple.
Like I've said many times before, I just like it when people design solutions and implement
them really well.
And in the home security company space, SimpliSafe is the best.
You can go on the internet and ask around on Reddit
and elsewhere like what is the best home security company
and a lot of people say SimpliSafe.
Anyway, go to simplisafe.com slash lex
to customize your system and get a free security camera
plus a 60 day risk free trial.
Again, that's simplisafe.com slash lex.
This is the Lex Fridman Podcast, and here is my conversation with Rodney Brooks. What is the most beautiful robot you've ever had the chance to work with?
I think it was Domo, which was made by one of my grad students, Aaron Edsinger. It now sits in Daniela
Rus's office, the director of CSAIL, and it was just a beautiful robot. And Aaron was really clever.
He didn't give me a budget ahead of time. He didn't tell me what he was going to do. He just started
spending money. He's a lot of money. He and Jeff Weber, who is a mechanical engineer, who Aaron insisted he bring with him when
he became a grad student, built this beautiful gorgeous robot, Donma, which is a Apertor
So, humanoid, two arms with three finger hands and face eyeballs, not the eyeballs, but everything else,
series elastic actuators.
You can interact with it, cable driven,
all the motors are inside, and it's just gorgeous.
The eyeballs are actuated too, or no?
Oh yeah, the eyeballs are actuated with cameras,
and so it had a visual attention mechanism.
Wow.
Looking when people came in and looking in their face
and talking with them.
Why was it amazing?
The beauty of it.
You said what was the most beautiful?
What is the most beautiful?
It's just mechanically gorgeous.
Everything Aaron builds
has always been mechanically gorgeous.
It's just exquisite in the detail.
I was talking about mechanically,
like literally the actuators,
the cables. He, um, anodizes different parts different colors, and it just looks like a
work of art. What about the face? Do you find the face beautiful in robots? Um, when you make
a robot, it's making a promise for how well it will be able to interact.
So I always encourage my students not to over-promise.
Even with its essence,
like the thing it presents, it should not over-promise.
Yeah, so the joke I make, which I think you'll get,
is if your robot looks like Albert Einstein,
it should be as smart as Albert Einstein.
So the only thing in Domo's face is the eyeballs
and because that's all it can do,
it can look at you and pay attention.
And so there is no, it's not like one of those
Japanese robots that looks exactly like a person at all.
But see, the thing is, us humans, and dogs too,
don't just use eyes as
attentional mechanisms. They also use them to communicate, as part of the communication.
Like a dog can look at you, looking at another thing and look back at you and that
designates that we're going to be looking at that thing. Yeah, or intent, you know,
in both Baxter and Sawyer at Rethink Robotics, they had a screen with graphic eyes,
so it wasn't actually where the cameras were pointing,
but the eyes would look in the direction
it was about to move its arm,
so people in the factory nearby
were not surprised by its motions,
because it gave that intent away.
Before we talk about Baxter,
which I think is a beautiful robot, let's go back to
the beginning. When did you first fall in love with robotics? We're talking about beauty and love
to open the conversations. This is great. I've got these, I was born in the end of 1954 and I grew up in
Adelaide, South Australia, and I had these two books that are dated 1961.
So I'm guessing my mother found them in a store in 62 or 63.
How and Why Wonder Books.
The How and Why Wonder Book of Electricity,
and the How and Why Wonder Book of Giant Brains and Robots.
And I learned how to build circuits, you know, when I was eight or nine, simple circuits.
And I read, you know, the binary system and sort of all these drawings mostly of robots.
And then I tried to build them for the rest of my childhood.
Wait, 61, you said?
That was the date on the two books; I've still got them at home. What did a robot mean in that context?
No, they were. Some of the robots that they had were
arms, big arms, to move nuclear material around, but they had pictures of welding robots that
look like humans under the sea, welding stuff underwater. So they weren't real robots, but they were, you know,
what people were thinking about for robots. What were you thinking about? Were you thinking
about humanoids? Were you thinking about arms with fingers? Were you thinking about faces or
cars? No, actually, to be honest, I realized my limitation on building mechanical stuff. So I just built the brains mostly out of different technologies as I got older.
I built a learning system which was chemical based and I had this ice cube tray each
well was a cell, and by applying voltage to two electrodes, it would build up a copper bridge.
So over time, it would learn a simple network
so I could teach it stuff.
And mostly, things were driven by my budget,
and nails as electrodes and an ice cream,
I mean, an ice cube tray, were about my budget
at that stage.
Later I managed to buy transistors and then I could build gates and flip flops and stuff.
So, one of your first robots was an ice cube tray.
Yeah.
And it was very cerebral because it went to ad.
Very nice.
Well, just a decade or so before in 1950, Alan Turing wrote a paper that formulated the
Turing test, and he opened that paper with a question, can machines think?
So let me ask you this question.
Can machines think?
Can your ice cube tray one day think?
Certainly machines can think, because I believe you're a machine and I'm a machine, and I believe we both think.
I think any other philosophical position is sort of a little ludicrous.
What does think mean, if it's not something that we do, and we are machines?
So yes machines can but do we have a clue how to build such machines?
That's a very different question.
Are we capable of building such machines? Are we smart enough? We think we're smart enough to do
anything, but maybe we're not. Maybe we're just not smart enough to build something like us.
The kind of computer that Alan Turing was thinking about, do you think there is something fundamentally
or significantly different between the computer
between our ears, the biological computer that humans use, and the computer that he was
thinking about, from a sort of high-level philosophical perspective?
Yeah, I believe that it's very wrong.
In fact, I'm halfway through what I think will be about a 480-page book, whose working
title is Not Even Wrong. And if I may, I'll tell you a bit about that book. So there's,
well, three thrusts to it. One is the history of computation, what we call
computation. It goes all the way back to some manuscripts in Latin from 1614 and 1620 by
Napier and Kepler through Babbage and Lovelace and then
Turing's 1936 paper is what we think of as the invention of modern
computation and that paper, by the way, did not set out to, you know, invent computation.
It set out to negatively answer one of Hilbert's three later problems, which asked whether there
was an effective way of getting answers. And Hilbert really worked with rewriting rules, as did Church, who, at the same time,
a month earlier than Turing, disproved this one of Hilbert's three hypotheses. The other two had
already been disproved by Gödel. So Turing set out to disprove it, because it's always easier to disprove these things than to prove that there is an answer. And so he needed, and it really came from his professor, when he was
an undergraduate at Cambridge, who had turned it into: is there a mechanical process? So he
wanted to show a mechanical process that could calculate numbers,
because that was a mechanical process
that people used to generate tables.
They were called computers, the people at the time.
And they followed a set of rules where they had paper
and they would write numbers down
and based on the numbers, they'd keep writing other numbers.
And they would produce numbers for these tables, engineering tables, that
the more iterations they did, the more significant digits came out.
And so Turing, in that paper, set out to define what sort of mechanical
machine could produce an arbitrary number of digits in the same way a human computer did.
And he came up with a very simple set of constraints where there was an infinite supply of paper,
this is the tape of the Turing machine, and each Turing machine had a set, I mean, came with a set of instructions that as a person could
do with pencil and paper, write down things on the tape and erase them and put new things there.
And he was able to show that that system was not able to do something that Hilbert
had hypothesized, so he disproved it. But he had to show that this system was good enough to do
whatever could be done, but couldn't do this other thing. Yeah, and there he said,
and he says in the paper, I don't have any real arguments for this, but based on intuition,
So that's how he defined computation
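[Editor's aside: to make the machine being described concrete, here is a minimal sketch of the standard textbook model, a finite table of instructions, a tape of symbols, and a head that reads, writes, and moves one cell at a time. The function names and the little increment program are illustrative assumptions of this sketch, not anything from the conversation or from Turing's 1936 paper.]

```python
# A minimal Turing machine sketch: the "program" is a finite table mapping
# (state, symbol under the head) -> (next state, symbol to write, move L/R).
from collections import defaultdict

def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    cells = defaultdict(lambda: blank, enumerate(tape))  # the (conceptually infinite) tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = program[(state, cells[head])]
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip(blank)

# Example program: add 1 to a binary number written on the tape.
increment = {
    ("start", "0"): ("start", "0", "R"),  # scan right to the end of the number
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),  # fell off the end; go back and add one
    ("carry", "1"): ("carry", "0", "L"),  # 1 plus carry -> 0, keep carrying
    ("carry", "0"): ("halt", "1", "R"),   # 0 plus carry -> 1, done
    ("carry", "_"): ("halt", "1", "R"),   # carried past the front of the number
}

print(run_turing_machine(increment, "1011"))  # prints "1100"
```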
And then if you look over the next period, from 1936 up until really around 1975,
you see people struggling with, is this really what computation is?
And so Marvin Minsky, very well known in AI, but also a fantastic mathematician
in his book, Computation: Finite and Infinite Machines, from the mid-60s, which is a beautiful,
beautiful mathematical book,
says at the start of the book, well, what is computation? Turing says this, and
yeah, I sort of think it's that. It doesn't really matter whether the stuff's made of wood or plastic. It's just, you know, relatively cheap stuff can do this stuff. And so yeah, it seems like computation. And Donald Knuth, in the first volume of his, you know,
art of computer programming in around 1968 says, well, what's computation? It's the stuff
like Turing says that a person could do each step without too much trouble. And so one of his
examples of what would be too much trouble was a step which required
knowing whether Fermat's last theorem was true or not because it was not known at the
time.
And that's too much trouble for a person to do as a step.
And Hopcroft and Ullman sort of said a similar thing later that year.
And by 1975, in the Aho, Hopcroft, and Ullman book, they're saying, well, you know, we don't really know what computation
is, but intuition says this is sort of about right. And this is what it is. That's computation.
It's a sort of agreed-upon thing, which happens to be really easy to implement in silicon.
And then we had Moore's Law, which took off,
and it's been an incredibly powerful tool. Certainly, I'm not arguing with that. The version we have
of computation, incredibly powerful. Can we just take a pause? So what we're talking about is there's
an infinite tape with some simple rules of how to read and write on that tape, and that's what we're
kind of thinking about. This is computation. Yeah, and it's modeled after humans,
how humans do stuff.
And I think,
Turing says in the '36 paper,
one of the critical facts here is that a human
has a limited amount of memory.
So that's what we're gonna put
onto our mechanical computers.
So,
so, you know, unlike mass,
unlike mass or charge, it's not given by the universe.
It was: this is what we're going to call computation.
Yeah.
And then it has this really, you know, it had this really good implementation, which has
completely changed our technological world.
That's computation.
Second part of the book, or second argument in the book: I have this 2x2 matrix with science in the top row, engineering
in the bottom row, left column is intelligence, right column is life.
So in the bottom row, the engineering, there's artificial intelligence and there's artificial life.
In the top row, there's neuroscience and abiogenesis: how does non-living matter become living matter? Four disciplines. These four disciplines all
came into their current form in the period 1945 to 1965.
That's interesting.
There was neuroscience before, but it wasn't effective neuroscience.
There were, you know, ganglia and electrical charges,
but no one knew what to do with it.
Furthermore, there were a lot of players who were common across them.
I've identified common players. For artificial intelligence and abiogenesis
I don't have one, but for any other pair, I can point you to people who worked in both. And a whole bunch of them, by the way,
were at the research lab for electronics at MIT, where Warren McCulloch held forth.
And in fact, McCulloch, Pitts, Lettvin, and Maturana wrote the first paper on functional
neuroscience, called What the Frog's Eye Tells the Frog's Brain, where instead of it just
being a bunch of nerves, they sort of showed what different anatomical components were doing
and telling other anatomical components and, you know and generating behavior in the frog.
Would you put them as basically the fathers or one of the early pioneers of what are now
called artificial neural networks?
Yeah, I mean, McCulloch and Pitts, who was much younger than him, in 1943 had written a paper, inspired by Bertrand Russell, on a calculus
for the ideas immanent in nervous systems, where they had tried to, without any real proof,
they had tried to give a formalism for neurons, basically in terms of logic, AND gates, OR gates, and NOT gates, with no real
evidence that that was what was going on, but they talked about it. That was picked up by Minsky
for his 1953 dissertation, which was on what we would call a neural network today. It was picked up by
John von Neumann when he was designing the EDVAC computer in 1945.
He talked about its components being neurons, based on that, and in references, he's only got
three references, and one of them is the McCulloch-Pitts paper.
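[Editor's aside: the McCulloch-Pitts formalism just described, neurons treated as threshold logic units, is easy to sketch. The Python snippet below is only an illustration of that idea; the particular weights, thresholds, and names are assumptions of this sketch, not anything taken from the 1943 paper.]

```python
# A McCulloch-Pitts style unit "fires" (outputs 1) when the weighted sum of its
# inputs reaches a threshold. With suitable weights and thresholds, such units
# behave like AND, OR, and NOT gates.
def neuron(weights, threshold):
    return lambda *inputs: int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

AND = neuron([1, 1], threshold=2)   # fires only if both inputs fire
OR  = neuron([1, 1], threshold=1)   # fires if at least one input fires
NOT = neuron([-1], threshold=0)     # an inhibitory input suppresses firing

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))
```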
So all these people and then the AI people and the artificial life people, which was John
von Neumann originally, there's a lot of overlap, lots of them. They were all around at the same time.
And three of these four disciplines turned to computation as their primary metaphor.
So I've got a couple of chapters in the book. One is titled,
wait, computers are people, because that's where our computers came from.
Yeah. And you know, from people who are computing stuff.
And then I've got another chapter.
Wait, people are computers, which is about computational neuroscience.
Yeah.
So there's this whole circle here.
And that computation is it.
And, you know, I have talked to people about, no, maybe it's not computation that goes
on in the head.
Of course it is.
Yeah. Okay. Well, when Elon Musk's rocket goes up, is it computing? Is that how it gets
into orbit by computing? But we've got this idea, if you want to build an AI system,
you write a computer program. Yeah. In a sense, the word computation very quickly starts doing a lot of work that
it was not initially intended to do.
It's like when some say, if you talk about the universe as essentially performing a computation.
Yeah, right.
Well, for some, everything turns into computation.
You don't turn rockets into computation.
Yeah.
By the way, when you say computation in our conversation, do you tend to think of computation
narrowly in the way Turing thought of computation?
It's gotten very...
Okay.
It's, you know, squishy.
Yeah, squishy.
Okay.
But computation in the way Turing thinks about it and the way most people think about it actually fits very well with
Thinking like a hunter-gatherer
There are places and there can be stuff in places and the stuff in places can change and it stays there until someone changes it
and it's this
Metaphor of place and container, which, you know, is a combination of our place cells in our hippocampus and cortex.
But this is how we use metaphors, mostly, to think.
And when we get outside of our metaphor range, we have to invent tools,
which we can sort of switch on to.
So calculus is an example of a tool.
It can do stuff that
our raw reasoning can't do. And we've got conventions of when you can use it or not.
But sometimes, you know, people try to, all the time, we always try to get physical
metaphors for things, which is why quantum mechanics has been such a problem for a hundred
years. Because it's a particle. No, it's a wave. It's got to be something we understand. And I say no,
it's some weird mathematical logic that's different from those, but we want
that metaphor. I suspect that a hundred years or two hundred years from now,
neither quantum mechanics nor dark matter will be talked about in the same terms. In the same way that, um,
phlogiston theory eventually went away, um, because it just wasn't an adequate explanatory metaphor.
That metaphor was the stuff.
There is stuff in the burning. The burning is in the matter.
It turns out the burning was outside the matter.
It was the oxygen.
So our desire for metaphor, combined
with our limited cognitive capabilities,
gets us into trouble.
That's my argument in this book.
Now, people say, well, what is it then?
And I say, well, I wish I knew that, I'd
write about that.
But I give some ideas.
So there's the three things.
Computation is sort of a particular thing we use.
Can I tell you one beautiful thing? Yes, please. So, you know, I use an example of a thing that's different
from computation. You hit a drum and it vibrates. And there are some stationary points on
the drum surface, you know, because the waves are going up and down around the stationary points. Now, you could compute them to arbitrary precision, but the drum just
knows them. The drum doesn't have to compute. What was the very first computer program ever
written by Ada Lovelace? To compute Bernoulli numbers. Bernoulli numbers are exactly what
you need to find those stable points on the drum surface. And there was a bug in her program.
The arguments to divide were reversed in one place and it still worked.
Well, hers never got to run. They never built the Analytical Engine. She wrote the program
without it. So the computation is sort of a thing that's become dominant as a metaphor, but is it the right
metaphor?
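[Editor's aside: for reference, the Bernoulli numbers just mentioned can be computed from their standard recurrence, B_0 = 1 and B_m = -1/(m+1) * sum over k < m of C(m+1, k) * B_k. The Python sketch below is a modern illustration of that recurrence; it makes no attempt to mirror Lovelace's Note G program for the Analytical Engine.]

```python
# Compute the first Bernoulli numbers exactly, using rational arithmetic.
from fractions import Fraction
from math import comb

def bernoulli(n):
    B = [Fraction(1)]                                   # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))                          # B_m from the recurrence
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```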
Three of these four fields adopted computation, and a lot of it revolved around Warren
McCulloch and all his students and he funded a lot of people.
And our human metaphors, our limitations of human thinking, play into these three themes of the book.
So I have a little to say about computation.
So you're saying that there is a gap
between the computer or the machine that performs
computation and this machine that appears to have consciousness and intelligence.
Yeah.
Can we, um, that piece of meat in your head, piece of meat, and maybe it's not just the
meat in your head, it's the rest of you too.
I mean, you have, you have, you actually have a neural system
in your gut.
I tend to also believe, not believe,
but we're now dancing around things we don't know,
but I tend to believe other humans are important.
Like, so we are almost like,
I just don't think we would ever have achieved
the level of intelligence we have
without other humans. I'm not saying this confidently, but I have an intuition that some of the intelligence
is in the interaction.
Yeah, and I think it seems to be very likely, again, this is speculation, but we are a species,
and probably Neanderthals to some extent, because you
can find old bones where they seem to be counting on them by putting notches, which even the
Neanderthals had done, that is able to put some of our stuff outside our body
into the world, and then other people can share it.
And then we get these tools that become shared tools.
And so there's a whole coupling that would not occur
in a single deep learning network
which was fed, you know, the whole of literature or something.
Yeah, the neural network can't step outside of itself.
But is there some, can we explore this dark room a little bit and try to get at something?
What is the magic? Where does the magic come from in the human brain that creates the mind?
What's your sense? As scientists that try to understand it and try to build it,
what are the directions that followed might be productive?
Is it creating interactive robots? Is it creating large deep neural networks that do self-supervised
learning, and just like we'll discover that when you make something large enough, some interesting
things will emerge? Is it through the physics, chemistry, biology, like the artificial life angle, like we'll sneak up on it in this four-quadrant matrix
you mentioned? Is there anything you're most, if you had to bet all your money on it?
Financially, I wouldn't. Okay. So every intelligence we know, human intelligence, animal intelligence, dog intelligence, octopus intelligence, which
is a very different sort of architecture from us.
All the intelligences, as we know, perceive the world in some way and then have action in
the world, and they're able to perceive objects in a way which is actually
pretty damn phenomenal and surprising. We tend to think that the box over here between us,
which is a sound box, I think, is a blue box. But the
blueness is something that we construct with color constancy. It's not, the
blueness is not a direct function of the photons we're receiving. It's actually context, which is why you can turn, you know, you've maybe seen the examples where someone turns
a stop sign into a, some other sort of sign by just putting a couple of marks on them and
the deep learning system gets it wrong.
Everyone says, but the stop sign's red.
You know, why does it think it's some other sort of sign?
Because redness is not intrinsic in just the photons. It's actually a construction of an understanding of the whole world
and the relationship between objects to get color constancy.
But our tendency, in order that we get an arXiv paper out really quickly,
is to just show a lot of data and give the labels and hope it figures it out.
But it's not figuring it out in the same way we do.
We have a very complex perceptual understanding of the world. Dogs have a very different perceptual understanding based
on smell. They go smell a post. They can tell how many different dogs have visited in the last
10 hours and how long ago. There's all sorts of stuff that we just don't perceive about the world.
And just taking a single snapshot is not perceiving about the world. It's not perceiving the registration between us and
the object. And registration is a philosophical concept. Brian Cantwell Smith
talks about it a lot, a very difficult, squirmy thing to understand. But I think none of
our systems do that. We've always talked in
AI about the symbol grounding problem, how symbols that we talk about are grounded in
the world. And when deep learning came along and started labeling images, people said,
ah, the grounding problem has been solved. No, the labeling problem was solved with some
percentage accuracy, which is different from the grounding problem. So you agree with Hans Marveck and what's called the Marveck's paradox that highlights
this counterintuitive notion that reasoning is easy, but perception and mobility are hard.
Yeah, we shared an office when I was working on computer vision and he was
working on his first mobile robot. What were those conversations like? That
was great. Do you still, maybe you can elaborate, do you still
believe this kind of notion that perception is really hard? And can you
make sense of why we humans have this poor intuition
about what's hard and not?
Well, let me give us sort of another story.
If you go back to the original teams working on AI
from the late 50s into the 60s,
and you go to the AI lab at MIT.
Who was it that was doing that?
Was it a bunch of really smart kids who got into MIT?
And they were intelligent.
So what's intelligence about?
Well, the stuff they were good at, playing chess,
doing integrals, that was hard stuff.
But, you know, a baby could see stuff. That wasn't intelligent.
Anyone could do that. It's not intelligence. There was this intuition that the hard stuff
is the things they were good at. The easy stuff was the stuff that everyone could do.
Maybe I'm overplaying it a little bit, but I think there's an element of that. Yeah, I mean, I don't know how much truth there is to it, but chess, for example,
was for the longest time seen as the highest level of intellect.
Right, until we got computers that were better at it than people, and then we realized, you know.
But if you go back to the 90s, you'll see, you know, the stories in the press around when
Kasparov was beaten by Deep Blue. Oh, this is the end of all sorts of things, computers are gonna be better than us at anything from now on. And we saw exactly the same stories with AlphaZero, the Go-playing program.
Yeah, but still to me
Reasoning is a special thing and perhaps, actually, we're really bad at reasoning.
We just use these analogies based on our hunter-gatherer
intuitions.
But why is that not, don't you think the ability
to construct metaphor is a really powerful thing?
Oh, yeah, it is.
That's a story.
It is.
It's the constructing the metaphor and registering that
to something called registering.
Like, isn't that what we're
doing with vision too, and we're telling
our stories, we're constructing good models of the world.
Yeah, yeah. But I think we jumped between what we're capable of and how we're doing it.
Right. There was a little confusion that went on. Sure.
As we were telling each other stories. Yes, exactly.
And trying to delude each other.
No, I just, I'm not, exactly,
so I'm trying to pull apart this Moravec's paradox.
I don't view it as a paradox.
What did evolution, what did evolution spend its time on?
Yes, it spent its time on getting us to perceive
and move in the world.
That was, you know, 600 million years
as multicellular creatures doing that.
And then it was relatively recent that we were able to hunt or gather, or even animals
hunting.
That's much more recent.
And then anything that we, you know, speech, language, those things are, you know, a couple of hundred thousand years probably,
if that long, and then agriculture, 10,000 years, you know, all that stuff was built on top
of those earlier things, which took a long time to develop.
So, if you then look at the engineering of these things, so building it into robots, what's the hardest part of robotics do you think?
As the decades that you worked on robots, in the context that we're talking about,
vision, perception, the actual sort of the biomechanics of movement, I'm kind of
drawing parallel here between humans and machines always like what do you think is the hardest part of robotics?
I sort of think all of them.
There are no easy parts to do well.
We sort of go reductionist and we reduce it.
If only we had all the location of all the points in 3D, things would be great.
If only we had labels on the images,
things would be great. But as we see, that's not good enough.
Some deeper understanding.
But if I came to you and I could solve
one category of problems in robotics instantly,
what would give you the greatest pleasure?
I mean, is it, you know, you look at robots
that manipulate objects.
What's hard about that, you know?
Is it the perception?
Is it the reasoning about the world, the common sense reasoning, is
it the actual building of a robot that's able to interact with the world?
Is it like human aspects of a robot that's interacting with humans in that game theory
of how they work well together?
Let's talk about manipulation and perception, because I had this really blinding moment.
You know, I'm a grandfather, so
grandfathers have blinding moments. Yes. Just three or four miles from here, last year, my 16-month-old
grandson was in his new house, first time, right? First time in this house. And he'd never been
able to get to a window before but
this had some low windows and he goes up to this window with a handle on it
that he's never seen before and he's got one hand pushing the window and the
other hand turning the handle to open the window. He knew two different
hands, two different things he knew how to put together.
Yeah.
And he's 16 months old.
And there you are watching in awe.
Yeah.
In an environment he'd never seen before.
How did he do that?
Yes, that's a good question.
How do we do that?
That's why it's like, okay, like you could see the leap of genius from using one hand
to perform a task to combining two. I mean, first of all, in manipulation that's really difficult,
like two hands both necessary to complete the action. And completely different, and he'd never
seen a window open. Yeah. But he inferred somehow that a handle opens something. Yeah, there may have been a lot of
slightly different failure cases that you didn't see. Yeah, not with a window, but with other
objects, of turning and twisting handles. There's a
great counter to, you know, reinforcement learning. We'll just give, you know,
you give the robot plenty of time to try everything. Yes. Actually, can I tell a little side story here? So I'm at DeepMind in London, this is three, four years ago, where, you know, there's a big
Google building, and then you go inside and
then you go through this more security, and then you get to deep mind where the other
Google employees can't go.
And I'm in a conference room, a bare conference room with some of the people, and they tell
me about their reinforcement learning experiments with robots, which are just trying stuff out. And they're my robots, they're Sawyers that we sold them.
And they really like them because Sawyers are compliant and
can sense forces, so they don't break when they're bashing into walls. They stop and they do all this
stuff. And you know, so you just let that robot do stuff and eventually it figures stuff out.
By the way, so we're talking about robot manipulation, so robot arms and so on.
Yeah, so Sawyer is a robot.
Yeah, I'm just going to go with Sawyer here.
Sawyer is a robot arm that my company Rethink Robotics built.
Thank you for the context.
Sorry.
Okay, cool.
So we're in deep mind.
And you know, in the next room, these robots are just bashing around, trying to use reinforcement learning to learn how to act, and I go, can I go see them? Oh, no, they're secret.
That's hilarious. Okay. Anyway, the point is, you know, this idea that you just let reinforcement learning figure everything out is so counter to how a kid does stuff. So again, story about my grandson,
I gave him this box that had lots of different lock mechanisms. He didn't randomly, you know,
and he was 18 months old. He didn't randomly try to touch every surface or push everything.
He could see where the mechanism was, and he started exploring the mechanism
for each of these different lock mechanisms.
There was reinforcement, no doubt, of some sort going on there. But he applied a pre-filter
which cut down the search space dramatically. I wonder to what level we're able to
introspect what's going on because what's also possible is you have something like reinforcement
learning going on in the mind and the space of imagination. So like you have a good model
of the world you're predicting and you may be running those tens of thousands of like
loops, but you're like as a human, you're just looking at yourself trying to tell a story
of what happened. And it might seem simple, but maybe there's a lot of computation going on.
Whatever it is, but there's also a mechanism that's being built up.
It's not just random search.
That mechanism prunes it dramatically.
Yeah, that pruning step.
But it's possible that that's, so you don't think that's akin to a neural network
inside a reinforcement learning algorithm. Is it possible?
It's, yeah, well, it's possible. But, you know, I would, I'll be incredibly surprised
if that's how it happens. I'll also be incredibly surprised because, you know, after all the decades that I've been doing this,
where every few years someone thinks, now we've got it.
Now we've got it.
You know, four or five years ago, I was saying, I don't think we've got it yet.
And then people were saying, you don't understand how powerful AI is.
I had people tell me, you don't understand how powerful it is.
I, you know, I sort of had a
track record of what the world had done, to think, well, this is no different from before.
Well, we have bigger computers. We had bigger computers in the 90s and we could do more stuff.
But okay, so let me push back. I'm generally sort of optimistic and try to find the beauty in things.
I think there's a lot of surprising and beautiful things that neural networks,
this new generation of deep learning revolution,
has revealed. To me, it's continually been very surprising,
the kind of things it's able to do.
Now, generalizing that over to saying we've solved intelligence, that's another
big leap. But is there something
surprising and beautiful to you about neural networks, where you actually sat back and said,
I did not expect this? Oh,
I think the performance,
the performance on ImageNet was shocking.
The computer vision, those early days, it was just very like, wow, okay.
That doesn't mean that they're solving everything in computer vision
that we need solved in vision for robots.
What about Alpha Zero and self-play mechanisms and reinforcement learning?
Isn't that?
Yeah, that was all in Donald Michie's 1961 paper.
Everything there was there,
which introduced reinforcement learning.
No, but come on.
So, no, you're talking about the actual techniques,
but isn't it surprising to you the level
that it's able to achieve with no human supervision
at chess play, like,
to me, there's a big, big difference.
Maybe it is.
And maybe what that's saying is how overblown our view of ourselves is.
You know, chess is easy.
Yeah, I mean, I came across this 1946 report, I'd seen this as a kid in one of those books that my
mother had given me actually, a 1946 report which pitted someone with an abacus against an electronic
calculator, and he beat the electronic calculator. So there, at that point, it was, well,
humans are still better than machines at calculating. Are you surprised today that a machine can
do a billion floating point operations a second, and you're puzzling for minutes to do one?
So, you know, I am, I mean, I don't know, but I am certainly surprised. There's something to me different about learning. So, a system that's able to learn. Learning, now,
see, now you're going to one of the deadly sins, of using terms overly broadly.
Yeah, I mean, there's so many different forms of learning.
Yeah.
There's so many different forms.
You know, I learned my way around the city.
I learned to play chess.
I learned Latin.
I learned to ride a bicycle.
All of those of, you know,
are very different capabilities.
Yeah.
And if someone, you know, as I,
well, in the old days, people would write a paper about learning something.
Now the corporate press office puts out a press release about how company X is leading
the world because they have a system that can learn.
Yeah, but here's the thing.
Okay.
So what is learning?
What I refer to as learning is many things.
It's a suitcase word.
It's a suitcase word, but let's say there's
a dumb system, and over time it becomes smart.
Well, it becomes less dumb at the thing that it's doing. Yeah, smart is a
loaded word. Yes, let's say the thing is, it gets better performance under some measure, yeah, under some set of conditions, at that thing.
And, and most of these learning algorithms, learning systems fail when you change
the conditions just a little bit in a way that humans don't.
So, right, I was at DeepMind.
Um, AlphaGo had just come out. And I said, what would have happened if you'd
given it a 21 by 21 board instead of a 19 by 19 board? They said it would fail totally. But a human player would
actually, you know, well, would actually be able to play. And actually, funny enough, if you look
at DeepMind's work since then, they are presenting a lot of algorithms that would do well at the
bigger board.
So they're slowly expanding this generalization.
To me, there's a core element there.
It is very surprising to me that even in a constrained game of chess or go, that through
self-play, by a system playing itself, it can achieve superhuman-
level performance through learning alone.
So like, okay.
So, you know, you didn't like it when I referred to Donald Michie's
1961 paper.
There, in the second part of it, it came a year later, they had self-play on an
electronic computer at Tic-Tac-Toe. Okay, it's not Go, but it learned to play Tic-Tac-Toe
through self-play. Not that it learned to play optimally.
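[Editor's aside: to illustrate what self-play learning at Tic-Tac-Toe can look like, here is a minimal Python sketch. It is not Michie's MENACE, which used matchboxes and beads, and not anything DeepMind built; it simply keeps a table of values for board positions, plays against itself with occasional random moves, and nudges the value of each visited position toward the game's outcome. With enough games, the greedy policy read off the table typically stops making obvious blunders.]

```python
# Tabular self-play for Tic-Tac-Toe: Monte Carlo value updates, epsilon-greedy play.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play_game(values, epsilon=0.1):
    """One self-play game; returns the visited (player, afterstate) pairs and the winner."""
    board, history, player = "." * 9, [], "X"
    while True:
        moves = [i for i, c in enumerate(board) if c == "."]
        if not moves:
            return history, None                     # draw
        if random.random() < epsilon:
            move = random.choice(moves)              # explore
        else:                                        # exploit: pick the best-valued afterstate
            move = max(moves, key=lambda i: values[(player, board[:i] + player + board[i+1:])])
        board = board[:move] + player + board[move+1:]
        history.append((player, board))
        if winner(board):
            return history, player
        player = "O" if player == "X" else "X"

def train(games=50000, lr=0.05):
    values = defaultdict(float)                      # (player, afterstate) -> estimated value
    for _ in range(games):
        history, result = play_game(values)
        for player, state in history:                # push each visited value toward the outcome
            target = 0.0 if result is None else (1.0 if result == player else -1.0)
            values[(player, state)] += lr * (target - values[(player, state)])
    return values

values = train()
print("learned values for", len(values), "afterstates")
```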
What I'm saying is, I have a little bit of a bias, but I find ideas beautiful, but only when they actually
realize the promise, that's another level of beauty. Like, for example,
what Bezos and Elon Musk are doing with rockets. We had rockets for a long time, but doing reusable, cheaper rockets,
it's very impressive. In the same way, I, okay, yeah, I would have not predicted. First of all, when I
started and fell in love with AI, the game of Go was seen to be impossible to solve. Okay, so I thought
maybe, you know, maybe it'd be possible, with big leaps in a Moore's Law style of
way in computation, we'd be able to solve it.
But I would never have guessed that you could learn your way.
However, I mean, in the narrow sense of learning,
learn your way to beat the best people in the world
at the game of go without human supervision,
not studying the game of experts.
But okay, so that's surprising.
Using a different learning technique,
Arthur Samuel in the early 60s,
and he was the first person to use machine learning,
got a program that could beat the world champion
at checkers.
And at that time it was considered amazing.
By the way, Arthur Samuel had some fantastic advantages.
Do you wanna hear Arthur Samuel's advantages?
Two things.
One, he was at the 1956 AI conference.
I knew Arthur later in life.
He was at Stanford when I was grad student there.
He wore a tie and a jacket every day.
The rest of us didn't.
The life for man, the life for man.
It turns out Claude Shannon,
in a 1950 Scientific American article
on chess playing,
outlined the learning mechanism that Arthur Samuel used
and they had met in 1956.
I assume there was some communication,
but I don't know that for sure.
But Arthur Samuel had been a vacuum
tube engineer, working on the reliability of vacuum tubes, and then had overseen the first transistorized
computers at IBM. And in those days, before you shipped a computer, you ran it for a week
to get early failures. So here you have this whole farm of computers running random code
for hours and hours,
for each computer.
He had a whole bunch of them.
So he ran his chess learning program
with self-play on IBM's production line.
He had more computation available to him than anyone else
in the world.
And then he was able to produce a chess-playing program, I mean, a checkers-playing program,
that could beat the world champion.
So that's amazing.
The question is, what I mean by surprised, I don't just mean it's nice to have that accomplishment,
is it a stepping stone towards something that feels
more intelligent than before? And that question is, that's in your view of the world.
Okay, well, let me, then, that doesn't mean I'm wrong.
No, it doesn't.
So the question is if we keep taking steps like that, how far that takes us,
are we going to build better recommender systems,
are we going to build like a better robot, or will we solve intelligence? So, you know, where I'm putting my bet is,
it's still missing a whole lot, a lot. And why would I say that? Well, in these games,
they're all, you know, 100% information games. But again, each of these systems uses a very short description of the current state, which
is different from registering and perception in the world.
Which gets back to the Moravec paradox.
I'm definitely not saying that chess is somehow harder than perception or any kind of even any kind of robotics in the
physical world.
I definitely think that is way harder than the game of chess.
So I was always much more impressed by like the workings of the human mind that is incredible.
The human mind is incredible.
I believed that from the very beginning. I wanted to be a psychiatrist for the longest time.
I always thought that's way more incredible than the game of chess.
I think the game of chess is, I love the Olympics.
It's just another example of us,
humans picking a task and then agreeing
that a million humans will dedicate their whole life
to that task and that's the cool thing
that the human mind is able to focus on one task
and then compete against each other
and achieve like weirdly incredible levels of performance.
That's the aspect of chess that's super cool. Not that chess in itself is really difficult.
It's like the Fermat's last theorem is not in itself to me that interesting, the fact that
thousands of people have been struggling to solve that particular problem is fascinating.
So can I tell you my disease in this way? Sure. Which actually is closer to what you're saying. So as a child, you know,
I was building various, I called them computers. They weren't general-purpose
computers. The ice cube tray was one, but I built other machines.
And what I liked to build was machines that could beat adults at a game.
And the adults couldn't beat my machine. Yes.
So that was, you were like, that's powerful. Like, that's a way to rebel.
Yeah, I, I by the way, um,
did you, when was the first time you built something
that outperformed you? Do you remember? Like, well, I,
I knew how it worked. I was probably nine years old,
and I built a thing that was a game where you take turns
taking matches from a pile,
and either the one who takes the last one or the one
who doesn't take the last one wins, I forget.
And so it was pretty easy to build that out of wires
and nails and little coils that were like plugging in
the number and a few light bulbs.
The one, though, that I was proud of,
I was 12 when I built a thing out of old telephone and switchboard
switches that could always win at Tic Tac Toe. That was a much harder circuit to design. But again,
it was just, it had no active components. It was just three-position switches, empty, X, O,
just three positions, empty, X, O, nine of them, and light bulbs for which move it wanted next.
And the human would go and make that move.
See, there's magic in that creation.
It was.
Yeah.
I tend to see magic in robots, like, I also think that intelligence is a little bit overrated.
I think we can have deep connections with robots very soon.
And we'll come back to connections.
But I do want to say, I think too many people make the mistake of seeing that magic and
thinking, well, we'll just continue. But each one of those, each one of those, is a hard-fought battle for the next step, the next step.
Yes.
Maybe the open question here is, and this is why I'm playing devil's advocate, but I often do
when I read your blog posts, in my mind, because I have like this eternal optimism, is, it's not clear
to me, so I don't do what, obviously, the journalists do, or like give into the hype, but it's not obvious to me how many steps away we are
from a truly
transformational
understanding of what it means
to build intelligent systems, or how to build intelligent systems. I'm also aware of the whole history of artificial intelligence, which is where your deep grounding
of this is, is that there has been an optimism for decades.
And that optimism, just reading the old optimism, is absurd, because people were, like, they
were saying things are trivial, for decades, since the 60s.
They were saying everything is trivial.
Computer vision is trivial. But
I think my mind is working crisply enough, to where, I mean, we can dig into it if you want. I'm
really surprised by the things DeepMind has done. I don't think they're, so, they're not yet
close to solving intelligence, but I'm not sure it's not 10 years away. What I'm referring to
is, it's interesting to see when the engineering takes that idea to scale and the idea works.
And it fools people. Okay, honestly, Rodney, if it was you and me inside this
room, forget the press, forget all those things, just as a scientist, as a roboticist.
You know, wasn't that surprising to you, that at scale,
So we're talking about very large numbers.
Okay, let's pick one that's the most surprising to you.
Okay, please don't yell at me.
GPT-3.
Okay.
Well, that's a colossal thing.
I was gonna say, I'm gonna bring that up.
Okay, thank you.
AlphaZero, or Alpha, AlphaGo Zero, AlphaZero,
and then AlphaFold 1 and 2.
So, do any of these kind of have this core of,
forget usefulness or application or so on,
which you could argue for AlphaFold,
like as a scientist,
were those surprising to you, that they worked
as well as they did?
Okay, so if we're gonna make the distinction between
surprise and usefulness, then I have to explain this.
I would say AlphaFold. And one of the problems
at the moment with AlphaFold is, you know, it gets a lot of them right,
which is a surprise to me,
because it's a really complex thing.
But you don't know which ones it gets right,
which then is a bit of a problem.
Now, they've come out with a reason
to me in the structure of the protein,
it gets a lot of those right.
Yeah, it's a surprising number of them right.
Yeah, it's been a really hard problem.
So that was a surprise how many it gets right.
So far, the usefulness is limited because you don't know which ones are right or not. And
now they've come out with a thing in the last few weeks, which is trying to get a useful
tool out of it, and they may well do it.
And in that sense, at least, AlphaFold is different, or AlphaFold 2 is different,
because now it's producing datasets that are actually
potentially revolutionizing computational biology,
like they will actually help a lot of people.
But you would say potentially revolutionizing.
We don't know yet.
But yeah, that's true.
Yeah, but I got you.
I mean, this is, okay, so you know what?
This is gonna be so fun.
So let's go
right into it. Speaking of robots that operate in the real world. Let's talk about self-driving cars.
Because you do, you have built robotics companies. You're one of the greatest roboticists in
history and that's not just in
the space of ideas. We'll also probably talk about that, but in the actual building and execution
of businesses that make robots that are useful for people and that actually work in the real
world and make money. You also sometimes are critical of Mr. Elon Musk, or let's be more specifically focused on this
particular technology, which is Autopilot in Teslas.
What are your thoughts about Tesla autopilot or more generally vision-based machine learning
approach to semi-autonomous driving?
These are robots that are being used in the real world by hundreds of thousands of people and
If you want to go there, I can go there, but let's not go too much there. Let's say they're on par, safety-wise, with humans currently,
meaning human alone versus human plus
robot. So first let me say, I really like the car I came here in today, which is
a 2021
model Mercedes E450. I am impressed by the
machine vision, and other things, I'm impressed by what it can do. I'm really impressed with
many aspects of it.
It's able to stay in lane? Oh yeah, it does the lane stuff.
It's looking to the side of me. It's telling me about nearby cars in my blind spots and so on.
Yeah, when I'm getting in close to something when I park, I get this beautiful, gorgeous,
top-down view of the world.
I am impressed up the wazoo at how, you know, registered and symmetrical it is.
Oh, so it's like multiple cameras in this all very thing together to produce the 360.
You kind of like 360 view, synthesized.
So it's above the car to go to produce the 360 view. 360 view synthesized.
So it's above the car.
And it is unbelievable. I got this car in January. It's the longest I've ever owned a car without dinging it. So it's not that it's better than me — it's that me and it together are better.
So I'm not saying the technology is bad or not useful. But here's my point: it's just a replay of the same movie. Okay, so maybe you've seen me ask this question before. When did the first car go over 55 miles an hour for over 10 miles on a public freeway, with other traffic around, driving completely autonomously? When did that happen?
Was it in the 80s or something? It was a long time ago.
It was actually in 1987, in Munich.
In Munich, yeah.
At the Bundeswehr University.
Yeah. So they had it running in 1987.
When do you think — and Elon has said he's going to do this — when do you think we'll have the first car drive coast to coast in the US, hands off the wheel, feet off the pedals, coast to coast?
As far as I know, a few people have claimed to do it. 1995, I think, was the first time.
I didn't know about that.
They didn't claim — did they claim 100%?
Not 100%.
Yeah, not 100%.
And then there's a few marketing people who have claimed 100% since then. But my point is, what I see happening again is someone sees a demo and they overgeneralize and say, we must be almost there. Well, we've been working on it for 35 years.
So that's demos. But this is gonna take us back to the same conversation as with AlphaZero, is it not?
Okay, I'll just say where I am, because when I first started interacting with the Mobileye implementation of Autopilot — I've driven a lot of, you know, I've been in Google self-driving cars since the beginning — before I sat down and used Mobileye, I thought there was just no way, in computer vision, that it could work as well as it was working. So my model of the limits of computer vision was way more limited than the actual Mobileye implementation. So that's one example where I was really surprised. It's like, wow, that was incredible. The second surprise came when Tesla threw away Mobileye and started from scratch. I thought there's no way they can catch up to Mobileye. I thought what Mobileye was doing was kind of incredible, like the amount of work and the annotation.
Yeah, Mobileye, started by Amnon Shashua, used a lot of traditional, you know, hard-fought computer vision techniques. But they also did a lot of good, sort of non-research stuff — actual, just good, what-you-do-to-make-a-successful-product stuff, right? Scaling, all that kind of stuff. And so I was very surprised when Tesla, from scratch, were able to catch up to that. That's very impressive. And I've talked to a lot of the engineers that were involved. That was impressive.
And the recent progress, especially with the involvement of Karpathy — what they're doing with the data engine, which is converting the driving task into these multiple tasks, and then doing this edge case discovery where they pull data back — the level of engineering made me rethink what's possible. I don't know, I still... I always thought it was very difficult to solve autonomous driving, with all the sensors, with all of the computation. I just thought it was a very difficult problem, but I've been continuously surprised how much you can engineer your way there.
First of all, the data acquisition problem. Because, you know, just from working with a lot of car companies, they're a little bit old school, to where I didn't think they could do this at scale — like AWS-style data collection. So when Tesla was able to do that, I started to think, okay, so what are the limits of this?
I still believe that driver sensing, and the interaction with the driver, and studying the human factors psychology problem, is essential. It's always going to be there, even with fully autonomous driving. But I've been surprised at what the limit is, especially with vision alone — how far that can take us.
So that's my level of surprise. Now, can you explain it in the same way? You said AlphaZero is a homework problem that scaled large — it's chess, who cares, it's Go. Here are actual people using an actual car and driving; many of them drive more than half their miles using the system.
Right. And, yeah, they're doing well with pure vision.
Pure vision, yeah. And now no radar, which — I suspect that can't go all the way, and one reason is, without new cameras that have a dynamic range closer to the human eye. The human eye has incredible dynamic range, and we make use of that dynamic range — it's some enormous number of orders of magnitude, some crazy number like that. The cameras don't have that, which is why you see the bad cases where the sun is on something white and it washes out the lines, in a way that wouldn't blind a person.
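A toy way to see the point he's making about dynamic range: here's an illustrative sketch — my own made-up numbers, not anything from any carmaker's pipeline — of how a fixed-bit-depth sensor can't hold both the sun glare and the pavement-versus-paint contrast in a single exposure, whereas an eye with far greater dynamic range can.

```python
# Toy illustration of dynamic-range clipping; all numbers are invented.
import numpy as np

def sensor_reading(luminance, exposure, bits=12):
    """Map relative scene luminance to a clipped digital code for an ideal linear sensor."""
    max_code = 2 ** bits - 1
    return np.clip(np.round(luminance * exposure), 0, max_code)

# Made-up relative luminances: pavement, white lane paint, sun glare off a white surface.
scene = np.array([1_000.0, 3_000.0, 5_000_000.0])

# Expose short enough that the glare stays below saturation...
print(sensor_reading(scene, exposure=0.0008))  # roughly [1, 2, 4000] -> lane contrast is ~1 code
# ...or long enough to see the road, and the paint and the glare both clip at the top code.
print(sensor_reading(scene, exposure=5.0))     # [4095, 4095, 4095] -> contrast gone
```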
I think there's a bunch of things to think about before you say, this is so good, it's just going to work. Okay. And I'll come at it from multiple angles. And I know you've got a lot of time.
Yeah.
Okay.
I have thought about these things.
Yeah.
I know.
You've been writing a lot of great blog posts about it for a while, before Tesla had Autopilot, right? So you've been thinking about autonomous driving for a while, from every angle.
So a few things.
You know, in the US, I think that the death rate from motor vehicle accidents is about 35,000 a year, which is an outrageous number. Not outrageous compared to COVID deaths, but there is no rationality, and that's part of the thing. Engineers say to me, well, if we cut down the number of deaths by 10% by having autonomous driving, that's going to be great; everyone will love it. And my prediction is that if autonomous vehicles kill more than 10 people a year, people will be screaming and hollering, even though 35,000 people a year have been killed by human drivers.
It's not rational.
It's a different set of expectations.
And that will probably continue.
So there's that aspect of it.
The other aspect of it is that when we introduce new technology, we often change the rules
of the game.
So when we introduce cars, first, you know, into our daily lives, we completely rebuilt our cities and we changed
all the laws.
Jaywalking was not an offense.
That was pushed by the car companies so that people would stay off the road so there
wouldn't be deaths from pedestrians getting hit.
We completely changed the structure of our cities and have these foul smelling things,
you know, everywhere around us.
And you know, now you see pushback in cities like Barcelona, it's really trying to exclude
cars, etc.
So I think that to get to self-driving — to large adoption — it's not going to be just take the current situation, take out the driver, and have the same car doing the same stuff, because the edge cases are too many. Here's an interesting question: how many fully autonomous train systems do we have in the US?
I mean, what do you call fully autonomous? I don't know, because there's usually a driver, but they're kind of autonomous, right?
No, let's get rid of the driver.
Okay, I don't know.
I believe it's either 15 or 16. Most of them are in airports. There are a couple that go about five kilometers out of airports.
Yeah.
When is the first fully autonomous train system for mass transit expected to operate fully autonomously — no driver — in a US city? It was expected to operate in 2017 in Honolulu. It's delayed, but they will get there. And by the way, it was originally going to be autonomous here in the Bay Area.
I mean, they're all very close to fully autonomous, right?
Yeah, but getting close is not the same thing. And I have often gone on a fully autonomous train in Japan, one that goes out to that artificial island in the middle of Tokyo Bay — I forget the name of it. And what do you see when you look at that? What do you see when you go on a fully autonomous train in an airport? It's not like regular trains. At every station, there's a double set of doors, so that there's a door on the train and a door on the platform.
And it's really visible in this Japanese one because it goes out in amongst buildings.
The whole track is built so that people can't climb onto it. So there's engineering that then makes the system safe and makes it acceptable. I think we'll see similar sorts of things happen in the US. What surprised me — I thought, wrongly — was that we would have special purpose lanes on 101 in the Bay Area, the leftmost lane, so that it would be normal for Teslas or other cars to move into that lane and then say, okay, now it's autonomous, and have that dedicated lane. I was expecting movement toward that. You know, five years ago, I was expecting we'd have a lot more movement towards that; we haven't. And it may be because Tesla has been overpromising, even calling the system full self-driving. I think they may have gotten there quicker by collaborating to change the infrastructure. This is one of the problems with long-haul trucking being autonomous. I think it makes sense, on freeways at night, for the trucks to go autonomously. But then there's the question of how you get on and off the freeway.
What sort of infrastructure do you need for that?
Do you need to have the human in there to do that? Or can you get rid of the human? So I think there are ways to get there, but it's an infrastructure argument, because the long tail of cases is very long, and the acceptance of it will not be at the same level as for human drivers.
So — and I was with you for a long time — but I am surprised how many edge cases machine learning and vision-based methods can cover. This is what I'm trying to get at: I think there's something fundamentally different with the vision-based methods in Tesla Autopilot, and any company that's trying to do the same.
Okay, well, I'm not going to argue with you, because, you know, we're speculating. But my gut feeling tells me things will speed up when there is engineering of the environment, because that's what happened with every other technology.
I don't know about you, but I'm a bit cynical about infrastructure that relies on government to help out in these cases. If you just look at infrastructure in all domains, government always drags behind — well, in this country.
In this country, sure, yes.
In this country. And of course, there are many, many countries that are actually much worse on infrastructure.
Oh yes, many of them are much worse.
And there aren't many that are much better — you know, like the high-speed rail countries.
I guess my question — which is at the core of what I was trying to think through here and ask — is: how hard is the driving problem, as it currently stands? So you mentioned we don't want to just take the human out and duplicate whatever the human was doing. But if we were to try to do that, how hard is that problem? Because I used to think it's way harder. Like, I used to think, with vision alone, it would be three decades, four decades.
Okay, so I don't know the answer to this thing I'm about to pose, but I do notice that on Highway 280 here in the Bay Area, which largely has a concrete surface rather than a blacktop surface, the white lines that are painted there now have black boundaries around them. And my lane drift system in my car would not work without those black boundaries.
Interesting.
So I don't know whether they started doing it to help the lane drift systems — whether it is an instance of infrastructure following the technology — but my car would not perform as well without that change in the way they paint the lines.
Unfortunately, really good lane keeping is not as valuable. It's orders of magnitude more valuable to have a fully autonomous system. For me, lane keeping is really helpful because I'm lazy at it, but you wouldn't pay ten times as much for it. The problem is there's no financial case — it doesn't make sense to revamp the infrastructure just to make lane keeping easier.
It does make sense to revamp the infrastructure—
Oh, I see — when you have a large fleet of autonomous vehicles. Then you change what it means to own cars, you change the nature of transportation.
But for that, you need autonomous vehicles.
Let me
ask you about Waymo then. I've gotten a bunch of chances to ride in a Waymo self-driving
car. I don't know if you'd call them self-driving, but I mean, I rode in one before they were called Waymo, when it was still at X. So there's currently — another surprising thing I didn't think would happen — they have no driver currently.
Yeah, in Chandler.
In Chandler, Arizona. And I think they're thinking of doing that in Austin as well; they're expanding.
Although, you know, I do an annual checkup on this. So as of late last year, they were aiming for hundreds of rides a week, not thousands. And there is no one in the car, but there are certainly safety people in the loop, and it's not clear what the ratio of cars to safety people is. They're not obviously 100% transparent about this.
No, none of them are 100% transparent. This is a very untransparent area. But at least — I don't want to state it definitively — they're saying there's no teleoperation.
And that sort of fits with YouTube videos I've seen of people being trapped in the car by a red cone on the street. And they do have rescue vehicles that come, and then a person gets in and drives it.
Yeah.
But isn't it incredible to you? It was to me — to get in a car with no driver and watch the steering wheel turn. For somebody who has been studying, certainly, the human side of autonomous vehicles for many years — and you've been doing it for way longer — it was incredible to me that this could actually happen. I don't care if the scale is 100 cars. This is not a demo. This is me as a regular customer.
No, the argument I have is that people make extrapolations from that.
Extrapolations.
That, oh, you know, it's here, it's done, we've solved it. No, we haven't yet. And that's my argument.
Okay, so I'd like to get to this: you keep a list of predictions in your amazing blog posts — people should go read them. But before that, let me ask you about this. You have a harshness to you sometimes in your criticisms of what is perceived as hype. Because people extrapolate, like you said, and they kind of buy into the hype, and then they start to think that the technology is way better than it is.
But let me ask you maybe a difficult question.
Sure.
Do you think, if you look at the history of progress — don't you think that to achieve the, quote, impossible, you have to believe that it's possible?
Absolutely.
Yeah.
Look, here's two great ones — great, unbelievable. 1903, first human heavier-than-air flight. 1969, we land on the moon. That's 66 years. I'm 66 years old. In that span — the span of my lifetime — we went from barely flying, I don't know what it was, 50 feet, the length of the first flight or something, to landing on the moon. Unbelievable.
Yeah.
Fantastic. But that requires — by the way, one of the Wright brothers, maybe both of them, but one of them said it was impossible, like, a year before. Right? So not just possible soon. So, you know, how important is it to believe and be optimistic, is what I'm asking, I guess.
Oh yeah, it is important. It's when it goes crazy that — you know, you said that, what was the word you used for my... harshness?
Harshness, yes.
I just get so frustrated when people make these leaps and tell me that I don't understand.
Right.
Just from iRobot, which I was a co-founder of — I don't know the exact numbers now, because it's 10 years since I stepped off the board, but I believe it's well over 30 million robots cleaning houses from that one company. And now there's lots of other companies. Was that a crazy idea that we had to believe in, in 2002 when we released it? Yeah, we had to believe that it could be done.
Let me ask you about this.
So iRobot — one of the greatest robotics companies ever in terms of, in fact, creating a robot that actually works in the real world, probably the greatest robotics company ever — you were the co-founder of it. If the Rodney Brooks of today talked to the Rodney of back then, what would you tell him? Because I have a sense that — would you pat him on the back and say, what you're doing is going to fail, but go at it anyway? That's what I'm referring to with the harshness. You've accomplished an incredible thing there — one of several things we'll talk about.
Well, that's what I'm trying to get at, that line.
No, my harshness is reserved for people who are not doing it, who claim it's just—
Well, you show that harshness for Elon too.
No, it's a different harshness. I'm with Elon on some things. You know, I think SpaceX is amazing. Hyperloop, on the other hand — you know, in one of my blog posts, I asked, what's easy and what's hard? I said, yeah, SpaceX, vertically landing rockets — it had been done before. Grid fins have been done since the 60s; every Soyuz has them. Reusability — the DC-X was a rocket that landed vertically and was reused.
There's a whole insurance industry in place for rocket launches, all sorts of infrastructure.
That was doable. It took a great entrepreneur, at great personal expense — he almost drove himself bankrupt doing it — and a great belief to do it. Whereas with Hyperloop, there's a whole bunch more stuff that's never been thought about and never been demonstrated. So my estimation is Hyperloop is a long, long way further off. And if I've got a criticism of Elon, it's that he doesn't make distinctions between when the technology is coming along and ready, and when he's just mouthing off about other things — which people then go and compete about and try to do.
So, yeah, I understand what you're saying. I tend to draw a different distinction.
I have a similar kind of harshness towards people who are not telling the truth, who are basically fabricating stuff to make money, or to—
Well, he believes what he says.
I just think that's a very important difference.
Yeah.
Because I think, in order to fly, in order to get to the moon, you have to believe even when most people tell you you're wrong — and most likely you are wrong, but sometimes you're right. I mean, that's the same thing I have with Tesla Autopilot. I think that's an interesting one. Especially when I was at MIT, the entire human factors and robotics communities were very negative towards Elon. It was very interesting for me to observe colleagues at MIT. I wasn't sure what to make of that. It was very upsetting to me, because I understood where that was coming from, and I agreed with them, and I almost felt the same thing in the beginning — until I opened my eyes and realized there's a lot of interesting ideas here. There might be overhype.
If you focus yourself on the idea that you shouldn't call a system full self-driving when it's obviously not fully autonomous, you're going to miss the magic.
Oh, probably.
You are going to miss the magic. But at the same time, there are people who buy it — literally pay money for it — and take those words as given.
So — taking the words as given is one thing, but I haven't actually seen that in people that use Autopilot. The behavior is what's really important, the actual actions. So this is me pushing back on the very thing that you're frustrated about, which is journalists and people in general buying all the hype. In the same way, I think there's a lot of hype about the negatives of this too, that people are buying into without using it.
The way people use a product — this opened my eyes, actually — is very different from the way they talk about it. This is true with robotics, with everything. Everybody has dreams of how a particular product might be used, and then it meets reality. There's a lot of fear of robotics, for example — that robots are somehow dangerous and all those kinds of things. But when you actually have robots in your life, whether it's in the factory or in the home, making your life better, the perceptions of it are way different. And so my contention was: here's an innovator. What is it — sorry — Super Cruise? It's from Cadillac. That was super interesting, too; that's a really interesting system. We should be excited by those innovations.
Okay, so can I tell you something
that's really annoyed me recently? It's really annoyed me that the press and friends of mine on
Facebook are going, these billionaires and their space games — you know, why are they doing that? That really pisses me off. I must say, I applaud them. I applaud it. It's the talk, not necessarily the people who are doing the things, that I keep having to push back against — on realistic expectations of when these things can become real.
Yeah, this is interesting, because there's been a particular focus for me, which is autonomous driving — Elon's predictions of when certain milestones will be hit. There are several things to be said there that I've thought about, because whenever he said them, it was obvious to me, as a person not inside the system, that it was unlikely he'd hit those deadlines.
There's two comments I want to make. One, he legitimately believes it. And two, much more importantly, I think that having ambitious deadlines drives people to do the best work of their life, even when the odds of making those deadlines are very low.
To a point — and I'm not talking about anyone in particular — I'm just saying there's a line there, right? You have to have a line, because if you overextend, it's demoralizing. But I will say that there's an additional thing here,
that those words also drive the stock market. And because of the way that rich people in the past have manipulated the rubes through investment, we have developed rules about what you're allowed to say. There's an area here which is...
I tend to be — maybe I'm naive — but I tend to believe that engineers, innovators, people like that, don't think like that, like manipulating the stock price. But it's possible that I'm wrong. It's a very cynical view of the world, because I think most people that run companies, especially original founders, they—
Yeah, I'm not saying that's the intent.
Eventually you fall into that kind of behavior pattern? I don't know. I tend to—
I wasn't saying it's falling into that intent. It's just that you also have to protect investors in this market.
Yeah. Okay.
So, first of all, you have an amazing blog that people should check out. But you also have in that blog a set of predictions — such a cool idea. I don't know how long ago you started, like three, four years ago?
It was January 1, 2018. And I made these predictions, and I said that every January 1st I was going to check back on how my predictions are doing.
That's such a great thought.
For 32 years.
Oh, I see — you said 32 years.
I said 32 years because on that January 1st I will have just turned 95.
Nice.
And so people know, your predictions, at least for now, are in the space of artificial intelligence—
Yeah, and I didn't say I was going to make new predictions. I was just going to measure this set of predictions that I made. I was sort of annoyed that everyone would make predictions, they didn't come true, and everyone forgot. So I thought I should hold myself to a higher standard.
But also, just putting years and date ranges on things — it's a good thought exercise, reasoning your thoughts out. And so the topics are artificial intelligence, autonomous vehicles, and space.
Yeah.
I was wondering if we could just go through some that stand out, maybe from memory — let's talk about self-driving cars, some predictions that are particularly interesting or that you're particularly proud of, from flying cars to how widespread the deployment of autonomous vehicles is, and there's also just a few fun ones. Is there something that jumps to mind that you remember from the predictions?
Well, I think I did put in there that there would be a dedicated self-driving lane on 101 by some year, and I think I was over-optimistic on that one.
Yeah, actually, I do remember that. But I think you were also mentioning difficulties in different cities — Cambridge, Massachusetts, I think it was, an example like Cambridgeport.
Yeah, I lived in Cambridgeport for a number of years, and, you know, the roads are narrow, and getting anywhere as a human driver is incredibly frustrating. And people drive the wrong way on one-way streets there. It's just...
So your prediction was: driverless taxi services operating on all streets in Cambridgeport, Massachusetts, in 2035.
Yeah, and that may have been too optimistic.
You think so?
You know, internally I've gotten a little more pessimistic on some of these things since I made them.
So can you put a year to a major milestone of deployment of a taxi service in a few major cities — something where you feel like a ton of vehicles are here?
So let's take the grid streets of San Francisco, north of Market.
Okay.
A relatively benign environment. The streets are wide. The major problem is delivery trucks stopping everywhere, which makes things more complicated. A taxi system there, with somewhat designated pickups and drop-offs — not like with Uber and Lyft, where you can sort of get to any place and the drivers will figure out how to get in there — we're still a few years away.
I live in that area, so I see the self-driving car companies' cars, multiple ones, every day. Cruise, Zoox less often, Waymo all the time; different ones come and go. And there's always a driver at the moment, although I have noticed that sometimes the driver does not have the authority to take over without talking to the home office, because they will sit there waiting for a long time before making a decision that should be fast. You can see whether they've got their hands on the wheel or not, and it's the incident resolution time that gives you some clues.
So what year do you think — what's your intuition, what date range are you currently thinking — San Francisco would have an autonomous taxi service from any point A to any point B without a driver? Are you thinking 10 years from now, 20 years from now, 30 years from now?
Certainly not 10 years from now. It's going to be longer. If you're allowed to go south of Market, way longer.
Unless there's reengineering of the roads. By the way, what's the biggest challenge, in your view? Is it the delivery trucks? Is it the edge cases, the computer perception?
Well, here's a case that I saw outside my house a few weeks ago, about 8 p.m. on a Friday night. It was getting dark — it was before the solstice. A Cruise vehicle came down the hill, turned right, and stopped dead, covering the crosswalk. Why did it stop dead? Because there was a human just two feet from it. Now, I just glanced and I knew what was happening. The human was a woman at the door of her car, trying to unlock it with one of those things you use when you don't have the key. The car thought, oh, she could jump out in front of me any second. As a human, I could tell: no, she's not going to jump out, she's busy trying to unlock her car — she's lost her keys, she's trying to get in the car. And it stayed there until I got bored.
Yeah.
And the human driver in there did not take over. But here's the kicker to me. A guy comes down the hill with a stroller — I assume there's a baby in there — and now the crosswalk is blocked by this Cruise vehicle. What's he going to do? Cleverly, I think, he decided not to go in front of the car. But he had to go behind it; he had to get off the crosswalk, out into the intersection, to push his baby around this car, which was stopped there — and no human driver would have stopped there for that length of time. They would have gotten out of the way. And that's another one of my pet peeves: that safety has been compromised for individuals who didn't sign up for having this happen in their neighborhood.
Yeah, but you can say that's an edge case, but—
Yeah, well, I'm in general not a fan of anecdotal evidence for stuff like this. One of my biggest problems with the discussion of autonomous vehicles in general is people who criticize them, or support them, using anecdotes. Okay, so let me—
But I got you. Your question is, when is it going to happen in San Francisco? I say not soon.
But where it is going to happen is in limited domains: campuses of various sorts, gated communities, where the other drivers are not arbitrary people — the people know about these things, they've been warned about them — and at velocities where it's always safe to stop dead. You can't do that on the freeway. That, I think, we're going to start to see. And they may not be shaped like current cars — you know, May Mobility has those sorts of things, and various companies have these.
Yeah, I wonder if that's a compelling experience. To me, it's not just about automation, it's about creating a product that's not just cheaper but makes the ride fun. One of the least fun things is a car that stops and waits. There's something deeply frustrating for us humans about the rest of the world taking advantage of us as we wait.
But think about not you as the customer, but someone who's in their 80s, in a retirement village, whose kids have said, you're not driving anymore. This gives them the freedom to go to the market. That's a hugely beneficial thing — but it's orders of magnitude less impact on the world. It's just a few people in small communities using cars, as opposed to the entire world.
I like that one: the first time that a car equipped with some version of a solution to the trolley problem... What does NIML stand for — not in my lifetime?
Not in my lifetime. I define my lifetime as up to 2050. You know, I ask, when have you ever had to decide which person should I kill? No — you put the brakes on and you brake the best you can. I mean, I do think autonomous vehicles, or semi-autonomous vehicles, need to solve the whole pedestrian problem, and that has elements of the trolley problem within it. But it's not—
Yeah, well, here's one, and I talk about it in one of the blog posts that I wrote. People have told me — one of my co-workers has told me he does this — he tortures autonomously driven vehicles. And pedestrians will torture them, once they realize that putting one foot off the curb makes the car think that they might walk into the road. Kids, teenagers will be doing that all the time.
By the way — and it's a whole other discussion, because my main issue with robotics is HRI, human-robot interaction — I believe that robots that interact with humans will have to push back. They can't just be bullied, because that creates a very uncompelling experience for the humans.
Yeah, well, you know, Waymo, before it was called Waymo, discovered that they had to do that at four-way intersections. They had to nudge forward to give the cue that they were going to go; otherwise, the other drivers would just beat them every time.
So, you co-founded iRobot, as we mentioned, one of the most successful robotics companies ever. What are you most proud of with that company, and the approach you took to robotics?
Well, here's something I'm quite proud of, which may be a surprise. I was still on the board when this happened. It was March 2011, and we sent robots to Japan, and they were used to help shut down the Fukushima Daiichi nuclear power plant. I was there in 2014, and some of the robots were still there. I was proud that we were able to do that. Why were we able to do that? People have said, well, Japan is so good at robotics.
It was because we had had about 6,500 robots deployed in Iraq and Afghanistan — teleoperated, but with some intelligence — dealing with roadside bombs. So we had, I think it was at that time, nine years of in-field experience with the robots in harsh conditions. Whereas the Japanese robots, which were getting — it goes back to what annoys me so much — getting all the hype: look at that Honda robot, it can walk, wow. Those robots couldn't do a thing, because they had never been deployed. We had deployed in really harsh conditions for a long time, and so we were able to do something very positive in a very bad situation.
What about just the simple — and for people who don't know, one of the things that iRobot has created is the Roomba vacuum cleaner — what about the simple robot that is the Roomba, that's deployed in tens of millions of homes? What do you think about that?
Well, I make the joke that I started out life as a pure mathematician and turned into a vacuum cleaner salesman. If you're going to be an entrepreneur, be ready to do anything. But, you know, there was a wacky lawsuit that I got deposed for not too many years ago, and I was the only one who had email from the 1990s — no one else in the company had it. So I went through my email, and it reminded me of the joy of what we were doing.
what we were doing. And what was I doing? What was I doing at the time we were building the Rumba. One of the things was we had this incredibly tight
budget because we wanted to put it on the shelves at $200. There was another home cleaning
worry about at the time. It was the ElectroLux Trilobite, which sold for 2,000 euros, and to us that was not going to be a consumer product.
So we had reason to believe that $200 was a thing that people would buy at.
That was our aim.
But that meant we had, you know, that's on the shelf making profit.
That means the cost of goods has to be minimal. So I find all these emails of me going,
I'd be in Taipei for a MIT meeting.
And I'd stay a few extra days, I'd have a Shin Shu
and talk to these little tiny companies,
lots of little tiny companies outside of TSMC, Taiwan
Semican duct, Taiwan Semican duct,
the manufacturing corporation, which let all these little companies be fabulous.
They didn't have to have their own fab, so they could innovate.
And then they were building their innovations where they built strip down 68.02s.
68.2s, what was in an Apple one?
Get rid of half the silicon and still have it be viable.
And I had previously gotten some of those for some earlier failed products of ours. And then I was in Hong Kong, going to all these companies that — you know, they weren't gaming in the current sense; they made these handheld games that you would play, or birthday cards — because we had about a 50 cent budget for computation. So I'm trekking from place to place, looking at their chips, looking at what they've removed: oh, the interrupt handling is too weak for general purpose use. I was doing deep technical detail. And then I found this one from a company called Winbond which had — I've now forgotten exactly, it had this much RAM — it had 512 bytes of RAM, and it was in our budget, and it had all the capabilities we needed.
Yeah, so you're excited.
Yeah, and I was reading all these emails going, I found this!
So did you ever think that you guys could be so successful — that eventually this company would be this successful? Could you possibly have imagined it?
No, we never did think that.
We had had 14 failed business models up to 2002. Then we had two winners the same year. And, you know, I remember the board — because by this time we had some venture capital in — the board went along with us building some robots aiming at the Christmas 2002 market. We went three times over what they authorized and built 70,000 of them, and sold them all that first season — we released on September 18th and they were all sold by Christmas. So we were gutsy.
But yeah, you didn't think it would take over the world.
So a lot of amazing robotics companies have gone under over the past few decades
Why do you think it's so damn hard to run a successful robotics company?
Well, there's a few things. One is expectations of capabilities by the founders that are off base.
The founders, not the consumer?
The founders, yeah — expectations of what can be delivered. Sure. Mispricing — what a customer thinks is a valid price, which is not necessarily rational. And expectations of customers, and just the sheer hardness of getting people to adopt a new technology. And I've suffered from all three. I've had more failures than successes in terms of companies. I've suffered from all three.
So do you think one day there will be a robotics company — and by robotics company, I mean where your primary source of income is from robots — that is a trillion-plus dollar company? And if so, what would that company do?
I can't, you know — because I'm still starting robot companies, I'm not making any such predictions in my own mind. I'm not thinking about a trillion dollar company. And by the way, I don't think anyone in the 90s was thinking that Apple would ever be a trillion dollar company. So it's very hard to predict.
But, sorry to interrupt — I kind of have a vision, in a small way — it's a big vision in a small way — that there will be robots in the home at scale, like Roomba but more. And that's a trillion dollars.
Right. And I think there's a real market pull for them because of the demographic inversion. You know, who's going to do the stuff for the older people? There's going to be too many of us — you know, I'm heading there myself. But we don't have capable enough robots to make that economic argument at this point. Do I expect that will happen? Yes, I expect it will happen.
But I've got to tell you, we introduced the Roomba in 2002, and I stayed another nine years. We were always trying to find what the next home robot would be, and still today — almost 20 years later, 19 years later — the primary product is still the Roomba. iRobot hasn't found the next one.
Do you think it's possible for one person in a garage to build it, versus, like, Google launching a self-driving car effort that turns into Waymo? Do you think that's almost what it takes to build a successful robotics company — do you think it's possible to go from the ground up, or is it just too much capital investment?
Yeah, so it's very hard to get there without a lot of capital. I mean, there are fair chunks of capital for some robotics companies — you know, there was a Series B announced just yesterday for $80 million, I think it was for Covariant. But it can take real money to get into these things, and you may fail along the way. I certainly failed at Rethink Robotics, and we lost $150 million in capital.
So, okay, Rethink Robotics is another amazing robotics company you founded. What was the vision there, what was the dream, and what are you most proud of with Rethink Robotics?
I'm most proud of the fact that we got robots out of the cages in factories — robots that were safe, absolutely safe, for people and robots to be next to each other.
So these are robotic arms — robotic arms that pick up stuff and interact with humans.
Yeah, and that humans could retarget without writing code. And now that's sort of become an expectation that a lot of other little companies and big companies are advertising.
That's both an interface problem and also a safety problem.
Yeah. So I'm most proud of that. My regret is that I let myself be talked out of what I wanted to do, and, you know, I can't replay the tape. Maybe, you know, if I'd been stronger — and I remember the day, I remember the exact meeting.
Can you take me through that meeting?
Yeah. So
I had set as a target for the company that we were going to build $3,000 robots, with force feedback, that were safe for people to be around.
Wow.
That was my goal. So we started in 2008, and we had prototypes built of plastic, with plastic gearboxes, at that $3,000 price point. I was saying we're going to go after not the people who already have robot arms in factories, but the people who have never had a robot arm. We're going to go after a different market, so we don't have to meet their expectations. And so we're going to build it out of plastic. It doesn't have to have a 35,000-hour lifetime. It's going to be so cheap that it's opex, not capex. And so we had a prototype that worked reasonably well. But the control engineers were complaining about these plastic gearboxes — it was a beautiful little planetary gearbox.
But we could use something called series elastic actuators — we embedded them in there, we could measure forces, we knew when we hit something, et cetera.
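The idea behind a series elastic actuator is simple enough to sketch: put a spring between the motor and the joint, measure how much the spring deflects, and the deflection tells you the torque. Here's a minimal illustrative sketch in Python — the constants and function names are made up for the example, not anything from Rethink's actual controllers.

```python
# Illustrative sketch of series-elastic force sensing; all numbers are invented.

SPRING_STIFFNESS_NM_PER_RAD = 350.0   # assumed torsional stiffness of the elastic element
CONTACT_THRESHOLD_NM = 2.0            # assumed unexpected-torque level that signals contact

def estimated_torque(motor_angle_rad: float, joint_angle_rad: float) -> float:
    """Torque across the elastic element is proportional to how much it deflects."""
    deflection = motor_angle_rad - joint_angle_rad
    return SPRING_STIFFNESS_NM_PER_RAD * deflection

def hit_something(motor_angle_rad: float, joint_angle_rad: float,
                  expected_torque_nm: float) -> bool:
    """Flag contact when the measured torque deviates from what the planned motion expects."""
    measured = estimated_torque(motor_angle_rad, joint_angle_rad)
    return abs(measured - expected_torque_nm) > CONTACT_THRESHOLD_NM
```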
The control engineers were saying, yeah, but there's torque ripple, because these plastic gears are not great gears, and there's this ripple, and trying to do force control around this ripple is so hard. And I'm not going to name names, but I remember one of the mechanical engineers saying, we'll just build a metal gearbox with spur gears, it'll take six weeks, we'll be done, problem solved.
Two years later, we got the spur gearbox working. We cost-reduced it every possible way we could, but now the price went up. And then the CEO at the time said, well, we have to have two arms, not one arm. So our first robot product, Baxter, now cost $25,000. And the only people who were going to look at that were people who already had arms in factories, because that was somewhat cheaper, for two arms, than the arms in factories. But they were used to 0.1 millimeter repeatability of motion, and certain velocities. And I kept thinking, but that's not what we're giving you. You don't need position repeatability — use force control, like a human does. No, no, we want that repeatability. All the other robots have that repeatability.
Why don't you have that repeatability?
So can you clarify force control — is it that you can grab the arm and move it around?
Well, suppose you want to — suppose this thing is a, you know, precise thing, and it's got to fit here at this right angle. With position control, you have fixed where this is — you know where this is precisely — and you just move it over and it goes there. With force control, you would do something like slide it over here until we feel that, and then slide it in there. And that's how a human gets precision.
Yeah, they use force feedback.
Yes, and gets the things to mate, rather than just going straight to it.
Yeah.
We couldn't convince our customers who were in factories and were used to thinking about things a certain way. They wanted that repeatability.
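To make that contrast concrete, here's a rough sketch of the two strategies in Python — the arm API, target coordinates, and force threshold are hypothetical, just to illustrate the difference between relying on position repeatability and feeling the part into place with force control.

```python
# Hypothetical arm API (move_to, move_near, slide); not Rethink's or any vendor's interface.

PEG_TARGET = (0.512, 0.130, 0.045)   # made-up taught coordinates, metres
CONTACT_FORCE_N = 5.0                # made-up force that counts as "we feel it"

def insert_with_position_control(arm):
    # Classic industrial approach: relies on ~0.1 mm repeatability.
    # If the fixture has shifted even slightly from where it was taught, the part jams.
    arm.move_to(PEG_TARGET)

def insert_with_force_control(arm):
    # Human-like approach: get roughly there, then feel your way in.
    arm.move_near(PEG_TARGET, tolerance_m=0.005)                # within ~5 mm is good enough
    arm.slide(direction="-x", until_force_n=CONTACT_FORCE_N)    # slide until we feel the edge
    arm.slide(direction="-z", until_force_n=CONTACT_FORCE_N)    # then slide down into the hole
```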
So then we said, okay, we're going to build an arm that gives you that. So now we end up building a $35,000 robot with one arm, with — oh, what are they called — a certain sort of gearbox made by a company whose name I can't remember right now, but it's the name of the gearbox. And it's got torque ripple in it. So now there's an extra two years of solving the problem of doing force control with the torque ripple. So we had to do the thing we had avoided doing for the plastic gearboxes anyway, and the robot was now overpriced.
And that was your intuition from the very beginning, kind of — that you're opening a door to a lot of problems there; you're eventually going to have to solve this problem anyway.
Yeah, and also I was aiming at a low price to go after a different market. A low price — it didn't have to be exactly $3,000.
$3,000 would have been amazing.
Yeah, I think we could have done it for five.
But, you know, you talked about setting the goal — maybe a little too far for the engineers.
Exactly.
So why would you say that company — not failed, but went under?
We had buyers. And there's this thing called the Committee on Foreign Investment in the US, CFIUS. It had previously been invoked twice, where the government could stop foreign money coming into a US company based on defense requirements. We went through due diligence multiple times. We were going to get acquired, but every consortium had Chinese money in it, and all the bankers would say at the last minute, you know, this isn't going to get past CFIUS, and the investors would go away. And then, when we were about to run out of money, we had two buyers. And one used heavy-handed legal stuff with the other one — said we're going to take it and pay more, dropped out when we were out of cash, and then bought the assets at one thirtieth of the price they had offered a week before.
That was a tough week.
Does it hurt to think about it — an amazing company that didn't find a way?
Yeah, it was tough. I said I was never going to start another company. I was pleased that everyone liked what we did so much that the team was hired by three companies within a week. Everyone had a job in one of those three companies. Some stayed at their same desks, because another company came in and rented the space. So I felt good about people not being out on the street.
So Baxter has a screen with a face. That's a revolutionary idea for a robot manipulation platform, a robotic arm. Did you get opposition to the screen?
Well, first, the screen was also used during codeless programming — where you teach by demonstration — to show you what its understanding of the task was. So it had two roles. Some customers hated it, and so we made it so that when the robot was running, it could be showing graphs of what was happening instead of the eyes. Other people — and some of them surprised me, who they were — were saying, this one doesn't look as human as the old one, we like the human-looking one. So there was a mixed bag.
I don't know — I'm kind of disappointed whenever I talk to roboticists, the best robotics people in the world: they seem to not want to do the eyes type of thing. They seem to see it as a machine, as opposed to a machine that can also have a human connection. I'm not sure what to do with that. It seems like a lost opportunity. I think the trillion dollar company will have to do the human connection very well, no matter what it does.
Yeah, I agree.
Can I ask you a ridiculous question?
Sure. I'm not promising a non-ridiculous answer.
Do you think — well, maybe by way of asking the question, let me first mention that you're kind of critical of the idea of the Turing test as a test of intelligence. But let me first ask this: do you think we'll be able to build an AI system that humans fall in love with, and that falls in love with the human — like a romantic love?
Well, we've had that with humans falling in love with cars, even back in the 50s.
It's a different love, right? I mean a lifelong partnership where you can communicate and grow.
I think we're a long way from that. I think we're a long, long way. I think Blade Runner got the timescale totally wrong.
Yeah. But to me, honestly, the most difficult part is the thing that you said with Moravec's paradox — creating a human form that interacts with and perceives the world. But if we just look at a voice, like the movie Her, or an Alexa-type voice, I tend to think we're not that far away.
Well, for some people, maybe not. But, you know, as humans, as we think about the future, we always try to — and this is the premise of most science fiction movies — you've got the world just as it is today, and you change one thing.
Right.
But that's not how it works — and it's the same with the self-driving car. You don't change one thing; everything changes, everything grows together. So — you might be surprised to hear me say this, or not — I think the best movie about this stuff was Bicentennial Man.
And what was happening there? It was schmaltzy, but what was happening there was that as the robot was trying to become more human, the humans were adopting the technology of the robot and changing their bodies. So there was a convergence happening, in that sense. So we will not be the same.
So we will not be the same.
You know, we're already talking about genetically modifying our babies.
You know, there's more and more stuff happening around that.
We will want to modify ourselves even more for all sorts
of things. We put all sorts of technology in our bodies to improve it. I've got things
in my ears so that I can sort of hear you. So we're always modifying our bodies. So I think it's hard to imagine exactly what it will be like in the future.
But on the Turing test side — so, forget about love for a second. Let's talk about just, like, the Alexa Prize. Actually, I was invited to be — what is it — an interviewer for the Alexa Prize or whatever; that's in two days. Their idea of success looks like a person wanting to talk to an AI system for a prolonged period of time, like 20 minutes. How far away are we, and why is it difficult to build an AI system with which you'd want to have a beer and talk for an hour or two — not to check the weather or play music, but just to talk, as friends?
Yeah, well, we saw Weizenbaum back in the 60s, with his program ELIZA, being shocked at how much people would talk to ELIZA. I remember in the 70s typing stuff into an ELIZA to see what it would come back with. I think right now — and this is a thing Amazon has been trying to improve with Alexa — there is no continuity of topic.
You can't refer to what we talked about yesterday. It's not the same as talking to a person, where there seems to be an ongoing existence.
where it seems to be an ongoing existence.
Right.
It changes.
We share moments together and they last in our memory together.
Yeah, but there's none of that.
And there's no sort of intention in these systems — they don't have any goal in life, even if it's just to be happy; they don't even have a semblance of that. Now, I'm not saying this can't be done. I'm just saying I think this is why we don't feel that way about them. That's a sort of minimal requirement — if you want the sort of interaction you're talking about, it's a minimal requirement. Whether it's going to be sufficient, I don't know. We haven't seen it yet, so we don't know what it feels like.
I tend to think it's not as difficult as solving
intelligence, for example.
And I think it's achievable in the near term.
But on the Turing test — why don't you think the Turing test is a good test of intelligence?
Oh, because, you know, if you read the paper, Turing wasn't saying this is a good test. He was using it as a rhetorical device to argue that if you can't tell the difference between a computer and a person, you must grant that the computer is thinking, because you can't tell the difference when it's thinking. But what it has become is this sort of weird game of fooling people. So back at the AI Lab
in the late 80s, we had this thing, which still goes on, called the AI Olympics. And one of the events we had one year was the original imitation game, as Turing described it — because he starts by saying, can you tell whether it's a man or a woman? So we did that at the lab: you'd go and type, the thing would come back, and you had to tell whether it was a man or a woman. And one man came up with a question he could ask which was always a dead giveaway of whether the other person was really a man or a woman.
What he would ask them was: did you have those green plastic toy soldiers as a kid? Yeah? What did you do with them? And a woman trying to be a man would say, oh, I lined them up, we had wars, we had battles. And the man, just being an asshole, would say: I stomped on them, I burned them.
So, you know, that's what the Turing test with computers has become: what's the trick question?
That's right. It's sort of devolved into this.
Nevertheless, conversation — not formulated as a test — is a fascinatingly challenging task. To me, conversation, when not posed as a test, is a more intuitive illustration of how far away we are from solving intelligence than, say, computer vision. Computer vision is harder for me to pull apart, but with language, with conversation, you can see it — language is so human, we can so clearly see it.
Shit, you mentioned something that I was going to go off on. Okay.
I mean, I have to ask you, because you were the head of CSAIL, the AI Lab, for a long time. To me, when I came to MIT, you were like one of the greats at MIT. So what was that time like? And plus, you're — I don't know if friends is the word — but you knew Minsky and all the folks there, all the legendary AI people, of which you're one. So what was that time like? What are memories that stand out to you from that time at MIT, at the AI Lab — from the dreams that the lab represented to the actual revolutionary work?
Let me tell you first the disappointment I have in myself.
You know, as I've been researching this book — so many of the players were active in the 50s and 60s, and I knew many of them when they were older. I didn't ask them all the questions I now wish I had asked. I'd sit with them at Thursday lunches — we had a faculty lunch — and I didn't ask them so many questions that now I wish I had.
I ask you that question because you wrote that you were fortunate to know and rub shoulders with many of the greats, those who founded AI, robotics, and computer science, and the World Wide Web. And you wrote that your big regret nowadays is that often you have questions for those who have passed on—
Yeah, and I didn't think to ask them any of these questions, even as I saw them and said hello to them on a daily basis.
So maybe another question I want to ask: if you could talk to them today, what question, or questions, would you ask?
Oh, Licklider.
I would ask him... he had the vision for humans and computers working together, and he really founded that at DARPA, and he gave the money to MIT which started Project MAC in 1963. And I would have talked about what the successes were, what the failures were, what he saw as progress, etc. I would have asked him more questions about that, because now I could use it in my book, you know, but I think it's lost. It's lost forever, a lot of the motivation.
So that's lost. I should have asked Marvin why he and Seymour Papert came down so hard on neural networks in 1968 in their book Perceptrons, because Marvin's PhD thesis was on neural networks.
How do you make sense of that?
That he would destroy the field?
Do you think he knew the effect that book would have?
All the theorems and negative theorems.
Yeah.
Yeah.
So, yes.
That's the way of life.
Yeah.
It's still kind of tragic that he was both the proponent and the destroyer of neural networks.
Yeah.
Are there other memories that stand out from the robotics and AI work at MIT?
Well, yeah, but you're going to have to be more specific.
Well, I mean, like it's such a magical place.
I mean, to me, it's a little bit also heartbreaking
that with Google and Facebook, like DeepMind and so on,
so much of the talent,
it doesn't stay necessarily for prolonged periods of time
in these universities.
Oh yeah, I mean, some of the companies are more guilty than others of paying fabulous salaries to some of the highest producers.
And then you just never hear from them again; they're not allowed to give public talks, they sort of walk away.
It's sort of like collecting Hollywood stars or something, and then they're not allowed to make movies anymore, because I own them.
Yeah, that's tragic, because I mean, there's an openness to the university setting where you do research, both in the space of ideas and in, like, publication and all those kinds of things.
Yeah, you know, there's the publication and all that, and often, although these places say they publish, there's this pressure.
But I think, for instance, you know, net-net, I think Google buying those eight or nine robotics companies was bad for the field, because it locked those people away. They didn't have to make the company succeed anymore. It locked them away for years, and then it all sort of frittered away.
So do you have hope for MIT?
For MIT?
Yeah, why shouldn't I?
Well, I could be harsh and say that I'm not sure I would say MIT is leading the world in AI, or even Stanford or Berkeley.
I would say DeepMind, Google AI, Facebook AI.
I would take a slightly different approach, a different answer. I'll come back to Facebook in a little bit, but I think those other places are following a dream of one of the founders, and I'm not sure the dream is well-founded, and I'm not sure it's going to have the impact he believes it will.
You're talking about Facebook and Google and so on.
I don't know about Google.
Google.
But the thing is, with those research labs, there's the big dream. And I'm usually a fan of, no matter what the dream is, a big dream being a unifier, because what happens is you have a lot of bright minds working together on a dream, and what results is a lot of adjacent ideas. I mean, so much progress is made.
Yeah, so I'm not saying the universities are actually leading.
But I don't think those companies are leading in general either, because, you know, we saw this incredible spike in attendees at NeurIPS. And as I said in my January 1st review this year, 2020 will not be remembered as a watershed year for machine learning or AI.
Nothing surprising happened anywhere, unlike when deep learning hit ImageNet. That was a shakeup.
There are a lot more people writing papers, but the papers are fundamentally boring.
Yeah.
And uninteresting, incremental work.
Yeah.
Is there a particular memory, with Minsky or somebody else at MIT, that stands out? Funny stories?
I mean, unfortunately, he's another one that's passed away.
You've known some of the biggest minds in AI.
Yeah, and they did amazing things
and sometimes they were grumpy.
But he was interesting, because he was very grumpy.
I remember him saying in an interview that the key to success, or being productive, is to hate everything you've ever done in the past.
Maybe that explains the Perceptrons book.
Yeah, there you go, that tells you exactly. But I mean, maybe it's a way to not take yourself too seriously, to just always be moving forward. That was his idea.
I mean, that crankiness, that's the scary thing.
So let me tell you what the real joyful memories are: having access to technology before anyone else has seen it.
So, you know, I got to Stanford in 1977, and we had terminals that could show live video on them. A digital sound system. We had a Xerox graphics printer. We could print, and it wasn't like a typewriter ball hitting characters, it printed arbitrary things. There were the first personal computers, and they cost $100,000 each.
And if I got there early enough in the day, I got one for the day. You couldn't stand up; you had to keep working.
So, having that, like, direct glimpse into the future.
Yeah, and you know, I've had email every day since 1977, and the host field was only eight bits, so there weren't that many places, but I could send email to other people at a few places.
So it was pretty exciting to be in that world, so different from what the rest of the world knew.
Let me ask you, I'll probably edit this out, but just in case you have a story. I'm hanging out with Don Knuth for a while tomorrow. Did you ever get a chance to interact with him?
He's a very different world than yours, very much theoretical computer science, the puzzles of computer science and mathematics, and you're so much about the magic of robotics, the practice of it. You mentioned him earlier, when we were talking about computation. Did your worlds cross?
They didn't. You know, I know him now; we've talked.
But let me tell you my Donald Knuth story.
So besides the analysis of algorithms, he's well known for writing TeX, which is in LaTeX, the academic publishing system.
He did that at the AI lab, and he would work overnight at the AI lab. And one day, the mainframe computer went down.
A guy named Robert Paul was there, who later did his PhD at the Media Lab at MIT, and he was an engineer.
And so he and I tracked down what the problem was. It was one of the big refrigerator-size, or washing-machine-size, disk drives that had failed, and that was what brought the whole system down.
So we got the panels pulled off, and we were pulling circuit cards out, and Donald Knuth, who's a really tall guy, walks in, and he's looking down and says, when will it be fixed? Because he wanted to get back to writing; his whole system was stuck.
And so we figured out it was a particular chip, a 7400-series chip, which was socketed. We popped it out, put a replacement in, put it back in, and smoke comes out, because we'd put it in backwards, because we were so nervous with Don Knuth standing over us.
Anyway, we actually got it fixed and got the mainframe running again.
So that was your little... when was that again?
Well, that must have been before October '79, because we moved out of that building then, so sometime probably '78, or early '79.
Yeah, all those figures, all the people who have passed through MIT, it's really fascinating.
Let me ask you to put on your big wise man hat. Is there advice you can give to young people today, whether in high school or college, thinking about their career or thinking about life, on how to live a life they're proud of, a successful life?
Yeah, so many people ask me for advice, and I talk to a lot of people all the time, and there is no one way.
There's a lot of pressure to produce papers that will be acceptable and be published. Maybe I come from an age where I could be a rebel against that and still succeed. Maybe it's harder today.
But I think it's important not to get too caught up with what everyone else is doing. And it depends on what you want in life.
If you want to have real impact, you have to be ready to fail a lot of times. You have to make a lot of unsafe decisions, and the only way to make that work is to keep doing it for a long time, and then one of them will work out, and that will make something successful.
Or not. You may just end up having a lousy career. That's certainly possible. Taking the risk is the thing. But there's no way to make all safe decisions and actually really contribute.
Do you think about your death, about your mortality?
I've got to say, when COVID hit, I did, because in the early days we didn't know how bad it was going to be.
That made me work on my book harder for a while. But then I started this company, and now I'm doing full time, more than full time, at the company, so the book's on hold. But I do want to finish this book.
And do you think about it? Are you afraid of it?
I'm afraid of dribbling, of losing the details. But the fact that the ride ends, I've known that for a long time.
So, yeah, but there's knowing and knowing.
Yeah, and it really sucks. It feels a lot closer.
So in my blog with my predictions, my sort of pushback against that was, I said, I'm going to review these every year for 32 years. That puts me into my mid-90s.
So, you know, every time you write the blog post, you get closer and closer to your own predicted death.
Yeah. Yeah.
What do you hope your legacy is? You're one of the greatest roboticists, AI researchers, of all time.
What I hope is that I actually finish writing this book, and that there's one person who reads it and sees something that changes the way they're thinking, and that leads to the next big thing.
And then there'll be a podcast a hundred years from now where someone says, I once read that book, and that changed everything.
What do you think is the meaning of life?
This whole thing, the existence, all the hurried things we do on this planet. What do you think is the meaning of it all?
Well, I think we're all really bad at it.
At life, or at finding meaning?
Both.
Yeah, we get caught up in... it's easy to do the stuff that's immediate and not do the stuff that's not immediate.
So the big picture, we're bad at it.
Yeah.
Do you have a sense of what that big picture is? Like, do you ever look up at the stars and ask, why the hell are we here?
You know, my atheism tells me it's just random, but I want to understand the ways of that randomness. It's what I talk about in this book, how order comes from disorder.
Yeah. But it kind of sprung up, like, most of the whole thing is random, but there's this little pocket of complexity we call Earth. Like, why the hell does that happen?
And what we don't know is how common those pockets of complexity are, or how often they arise, because they may not last forever.
Which is more exciting slash sad to you: if we're alone, or if there's an infinite number of...
Oh, I think it's impossible for me to believe that we're alone. That would just be too horrible, too cruel.
It could be, like, the sad thing, it could be a graveyard of intelligent civilizations.
Oh, everywhere.
Yeah, that's... I mean, on average, everyone has been forgotten in history.
Yeah, right. Most people are not remembered beyond a generation or two. I mean, not just on average, basically very close to 100% of people who have ever lived are forgotten.
Yeah, I mean, you know, long gone. I don't know anyone alive who remembers my great-grandparents, because we never met them.
So still, this life is pretty fun somehow. Yeah, even with the immense absurdity and, at times, meaninglessness of it all, it's pretty fun. And for me, one of the most fun things is robots, and I've looked up to your work, I've looked up to you, for a long time.
So, Rod, it's an honor that you would spend your valuable time with me today and have this amazing conversation. Thank you so much for being here.
Well, thanks for talking with me. I enjoyed it.
Thanks for listening to this conversation with Rodney Brooks.
To support this podcast, please check out our sponsors in the description.
And now, let me leave you with the three laws of robotics from Isaac Asimov.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
And 3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
Thank you for listening.
I hope to see you next time.