Lex Fridman Podcast - #374 – Robert Playter: Boston Dynamics CEO on Humanoid and Legged Robotics
Episode Date: April 28, 2023

Robert Playter is CEO of Boston Dynamics, a legendary robotics company that over 30 years has created some of the most elegant, dexterous, and simply amazing robots ever built, including the humanoid robot Atlas and the robot dog Spot.

Please support this podcast by checking out our sponsors:
- NetSuite: http://netsuite.com/lex to get free product tour
- Linode: https://linode.com/lex to get $100 free credit
- LMNT: https://drinkLMNT.com/lex to get free sample pack

EPISODE LINKS:
Boston Dynamics YouTube: https://youtube.com/@bostondynamics
Boston Dynamics Twitter: https://twitter.com/BostonDynamics
Boston Dynamics Instagram: https://www.instagram.com/bostondynamicsofficial
Boston Dynamics Website: https://bostondynamics.com

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(07:18) - Early days of Boston Dynamics
(15:39) - Simplifying robots
(19:37) - Art and science of robotics
(24:20) - Atlas humanoid robot
(41:14) - DARPA Robotics Challenge
(55:34) - BigDog robot
(1:09:23) - Spot robot
(1:30:48) - Stretch robot
(1:33:36) - Handle robot
(1:39:10) - Robots in our homes
(1:47:57) - Tesla Optimus robot
(1:56:39) - ChatGPT
(1:59:43) - Boston Dynamics AI Institute
(2:01:14) - Fear of robots
(2:11:36) - Running a company
(2:17:13) - Consciousness
(2:24:46) - Advice for young people
(2:26:42) - Future of robots
Transcript
The following is a conversation with Robert Playter, CEO of Boston Dynamics, a legendary robotics
company that over 30 years has created some of the most elegant, dexterous, and simply amazing robots
ever built, including the humanoid robot Atlas and the robot dog Spot. One or both of whom you've probably seen on the internet, either dancing, doing
back flips, opening doors, or throwing around heavy objects. Robert has led both the development
of Boston Dynamics' humanoid robots and their physics-based simulation software. He has been
with the company from the very beginning, including its roots at MIT, where
he received his PhD in aeronautical engineering.
This was in 1994, at the legendary MIT Leg Lab.
He wrote his PhD thesis on robot gymnastics, as part of which he programmed a bipedal robot
to do the world's first 3D robotic somersault.
Robert is a great engineer, roboticist and leader, and Boston Dynamics, to me as a roboticist, is a truly inspiring company. This conversation was a big honor and pleasure,
and I hope to do a lot of great work with these robots in the years to come.
And now a quick few second mention of each sponsor. Check them out in the description.
It's the best way to support this podcast.
We've got NetSuite for business management software, Linode for Linux systems, and Element
for zero sugar electrolytes.
Choose wisely my friends.
Also, if you want to work with our team, we're always hiring.
Go to lexfridman.com slash hiring.
And now onto the full ad reads.
As always, no ads in the middle.
I try to make this interesting,
but if you must commit the horrible, terrible crime
of skipping them, please do check out our sponsors.
I do enjoy their stuff.
I really do.
And maybe you will as well.
This show is brought to you by NetSuite
an all-in-one cloud business management system.
Running a business, as this podcast reveals, from Robert Playter and Boston Dynamics, is really hard. It's not just about the design of the system, it's not just about the engineering, the software, the hardware, all the complicated research that goes into it, all the different prototypes, all the failure upon failure upon failure in the early stages and the middle stages of getting these incredible robots to work.
It's all the glue that ties the company together.
And for that, you have to use the best tools for the job.
I hope to run a business, a large business that actually builds stuff one day, and boy, it's so much more than just the innovation and the engineering. I understand that deeply, and you should be hiring the best team for that job and use the best tools for that job, and that's where NetSuite can help out. Hopefully it can help you out. You can start now with no payment or interest for six months, go to netsuite.com slash
Lex to access their one-of-a-kind financing program that's netsuite.com slash Lex. This episode is
also brought to you by Linode, now called Akamai and their incredible Linux virtual machines. I
sing praises to the greatest operating system of all time,
which is Linux.
There's so many different beautiful flavors of Linux.
My favorite is probably the different sub-flavors of Ubuntu, Ubuntu MATE.
That's what I use for my personal development projects in general
when I want to feel comfortable and fully customized,
but I've used so many other Linux distributions.
But that's not what Linode is about, or it is in part, but it actually takes those Linux
boxes and scales them arbitrarily to where you can do compute, not just on one machine,
but on many machines, customize them, make sure everything works reliably and when it
doesn't, there's amazing human customer
service with real humans.
That's something that should be emphasized in this day of ChatGPT.
Real human beings that are good at what they do and figure out how to solve problems
if they ever come up.
Linode, now called Akamai, is just amazing.
If compute is something you care about for your business, for your personal life, for
your happiness, for anything, then now you should check them out.
Visit linode.com slash lex for free credit.
This episode is brought to you by a thing that I'm currently drinking as I'm saying these words: the Element electrolyte drink mix, spelled LMNT. My favorite is the watermelon, that's what I was drinking. You know, we have all explored.
In college, things got wild, things got a little crazy,
things got a little out of hand.
All of us have done things we regret.
We've eaten ice cream we should not have eaten. I've eaten ice cream at Dairy Queen so many times in my life,
especially through my high school years.
And to contradict what I just said, I regret nothing.
I think Snickers, and if memory serves me correctly,
there's something called the dairy queen blizzard
where you could basically shove in whatever you want
into the ice cream and blend it and it tastes delicious. Like I think my favorite would be
like the Snickers bar, any kind of bar, Mars bar, and anything with kind of chocolate, caramel, maybe a little bit of coconut, that kind of stuff. You know, I don't
regret it but we've experimented all of us have experimented with different
flavors with different things in life.
And I regret nothing. You should not regret any of it either because that path is what created the beautiful person that you are today.
And that path is also the reason I mostly drink the watermelon flavor of, I guess it's called watermelon salt. I don't know what it's called, but watermelon is in the word, of element. Highly recommended. You could try other flavors. Chocolate is pretty good too, like chocolate mint, I think it's called. Totally different thing. All the flavors are very different, and that's why I love it. So you should explore. Anyway, it's a good way to get all the electrolytes in your system: the salt, the magnesium, the potassium. Not salt, sodium is what I meant to say. It doesn't matter what I meant to say. What matters is it's delicious and I'm consuming it and I'm singing its praises
and I will toast you when we see each other in person one day, friend. And we should drink
element, drink to our deepest fulfillment together as brothers and sisters in arms. Get a sample pack for free with any purchase.
Try it at drinkLMNT.com slash Lex.
This is the Lex Fridman Podcast.
To support it, please check out our sponsors in the description.
And now, dear friends, here's Robert Playter.
When did you first fall in love with robotics?
Let's start with love and robots. Well, love is relevant because I think the fascination, the deep fascination, is really about movement. And I was visiting MIT,
looking for a place to get a PhD. And I wanted to do some laboratory work. And one of my professors in the Aero Department said,
go see this guy, Marc Raibert, down in the basement of the AI lab.
And so I walked down there and saw him.
He showed me his robots.
And he showed me this robot doing a somersault.
And I just immediately went, whoa, you know, robots can do that.
And because of my own interest in gymnastics, there was like this immediate
connection.
I was interested in, I was in an aero-astro degree because flight and movement was also
fascinating to me.
And then it turned out that robotics had this big challenge.
How do you balance?
How do you build a legged robot that can get around?
And that just, that was a fascination. And it still exists today. You're still working on perfecting motion in robots. What about the elegance and the beauty of the movement itself?
Is there something maybe grounded in your appreciation of movement from your gymnastics days, did you...
Was there something you just fundamentally appreciate about the elegance and beauty of movement?
We had this concept in gymnastics of letting your body do what it wanted to do.
When you get really good at gymnastics, part of what you're doing is putting your body into a position
where the physics and the body's inertia and momentum will kind of push you in the right
direction in a very natural and organic way. And the thing that Marc was doing in the
basement of that laboratory was trying to figure out how to build machines to take advantage
of those ideas. How do you build something so that the physics of the machine just kind of inherently wants to do what it wants to do?
And he was building these springy pogo stick type things. You know, his first cut at legged locomotion was a pogo stick where it's bouncing and there's a spring-mass system
that's oscillating, has its own sort of natural frequency there. And sort of figuring out how to augment those natural physics with also intent,
how do you then control that, but not overpower it.
It's that coordination that I think creates real potential.
We could call it beauty, you could call it, I don't know, synergy.
People have different words for it, but I think that that was inherent from the beginning, and that was clear to me, that that's part
of what Marc was trying to do. He asked me to do that in my research work. So, you know,
that's where it got going.
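To make the spring-mass idea concrete, here is a minimal sketch of a one-dimensional hopper in the spirit of those pogo-stick machines: a point mass on a springy leg, with a small thrust during stance to make up for damping losses, so the physics does most of the work and the control only nudges it. The numbers and the controller are invented for illustration; this is not code from the Leg Lab or Boston Dynamics.

```python
# Illustrative 1D spring-mass hopper (invented parameters, not Leg Lab / Boston Dynamics code).
# A point mass bounces on a springy leg; a small thrust while the leg re-extends replaces
# the energy lost to damping, so the natural spring-mass oscillation sustains the hop.
m, g = 10.0, 9.81        # body mass [kg], gravity [m/s^2]
k, c = 4000.0, 20.0      # leg spring stiffness [N/m] and damping [N*s/m]
l0 = 0.5                 # leg rest length [m]; contact whenever body height < l0
thrust = 60.0            # extra push applied only while the leg is extending [N]
dt, steps = 1e-3, 3000   # 1 kHz time step, 3 simulated seconds

z, zdot = 0.8, 0.0       # start in flight, above the leg's rest length
for i in range(steps):
    if z < l0:                                   # stance phase: spring-damper leg force
        f = k * (l0 - z) - c * zdot
        if zdot > 0.0:                           # leg extending: add thrust to sustain hopping
            f += thrust
    else:                                        # flight phase: ballistic
        f = 0.0
    zdot += (f / m - g) * dt                     # semi-implicit Euler integration
    z += zdot * dt
    if i % 300 == 0:
        phase = "stance" if z < l0 else "flight"
        print(f"t={i*dt:4.2f}s  height={z:5.3f} m  ({phase})")
```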
So part of the thing that I think I'm calling elegance and beauty in this case, which was there even with the pogo stick, is maybe the efficiency: letting the body do what it wants to do, trying to discover the efficient movement.
It's definitely more efficient.
It also becomes easier to control in its own way
because the physics are solving some of the problem itself.
It's not like you have to do all this calculation
and overpower the physics.
The physics naturally inherently want to do the right thing.
There can even be,
you know, feedback mechanisms, stabilizing mechanisms that occur simply by virtue of
the physics of the body. And it's, you know, not all in the computer or not even all in your mind as a person. And there's something interesting in that melding.
You were with Marc for many, many, many years, and you were there in this kind of legendary space of the Leg Lab at MIT in the basement.
All great things happen in the basement.
Are there some memories from that time that you have?
Because it's such cutting-edge work in robotics and artificial intelligence.
The memories are the distinctive lessons, I would say, that I learned in that time period, and that I think Marc was a great teacher of: it's okay to pursue your interests, your curiosity, do something because you love it.
You'll do it a lot better if you love it.
That is a lasting lesson that I think we apply at the company still, and really is a core value. So the interesting thing is I got to, with people like Russ Tedrake and others, like the students that work at those robotics labs are like some of the happiest people I ever met.
I don't know what that is.
I meet a lot of PhD students,
a lot of them are kind of broken by the wear and tear
of the process,
but roboticists are, while they work extremely hard
and work long hours, there's a happiness there.
The only other group of people I met like that
are people that skydive a lot.
Like for some reason there's a deep, fulfilling happiness.
Maybe from like a long period of struggle
to get a thing to work and it works
and there's a magic to it
I don't know exactly because it's so fundamentally hands-on and you're bringing a thing to life
I don't know what it is, but they're happy
We see, you know, our attrition at the company is really low. People come and they love the pursuit, and I think part of that is that there's perhaps a natural connection to it. It's a little bit easier to connect
when you have a robot that's moving around in the world. And part of your goal
is to make it move around in the world. You can identify with that.
And this is one of the unique things about the kinds of robots we're building: this physical interaction lets you perhaps identify with it.
So I think that is a source of happiness.
I don't think it's unique to robotics.
I think anybody also who is just pursuing something
they love, it's easier to work hard at it and be good at it.
And not everybody gets to find that.
I do feel lucky in that way.
And I think we're lucky as an organization
that we've been able to build a business around this
and that keeps people engaged.
So if it's all right, let's linger on Marc for a little bit longer, Marc Raibert.
So he's a legend.
He's a legendary engineer and roboticist.
What have you learned about life, about robotics, from Marc through all the many years you've worked with him?
I think the most important lesson,
which was, you know, have the courage of your convictions
and do what you think is interesting.
Be willing to try to find big, big problems to go after.
And at the time, you know, like legged locomotion,
especially in a dynamic machine, nobody had solved it.
And that felt like a multi-decade problem to go after.
And so, you know, have the courage to go after that because you're interested.
Don't worry if it's going to make money.
You know, that's been a theme.
So that's really probably the most important lesson I think that
I got from Marc. How crazy is the effort of doing legged robotics at that time, especially? You know, Marc got some stuff to work
starting from the simple ideas. So maybe the other, another important idea that has really become a value of the
company is try to simplify a thing to the core essence. And while Mark was showing videos
of animals running across the savanna or climbing mountains, what he started with was a
pogostick because he was trying to reduce the problem to something that was manageable and getting the pogostick to balance.
Had in it the fundamental problems that if we solve those, you could eventually extrapolate
to something that galloped like a horse.
And so look for those simplifying principles.
How tough is the job of simplifying a robot?
So I'd say in the early days, the thing that made the researchers at Boston Dynamics special is that we worked on figuring out what that central principle was, and then building software or machines around that principle. And that was not easy in the early days. And it took
real expertise in understanding the dynamics of motion and feedback control principles. How to
build and, you know, with computers at the time, how to build a feedback control algorithm that was
simple enough that it could run in real time at a thousand hertz and actually get that machine to work. And that was not something everybody was doing
at that time. Now the world's changing now, and I think the approaches to controlling robots
are going to change, and they're going to become more broadly available. But at the time, there weren't
many groups who could really sort of work at that principled level with both the software
and make the hardware work. And I'll say one other thing, since you were sort of asking about what the special things were. The other thing was it's good to break stuff.
Use the robots, break them, repair them, fix and repeat, test fix and repeat, and that's also
a core principle that has become part of the company. And it lets you be fearless in your work.
Too often if you are working with a very expensive robot,
maybe one that you bought from somebody else
or that you don't know how to fix,
then you treat it with kid gloves
and you can't actually make progress.
You have to be able to break something.
And so I think that's been a principle as well.
So just to linger on that, psychologically, how do you deal with that?
Because I remember I built an RC car
that had some custom stuff, like compute on it
and all that kind of stuff, cameras.
And because I didn't sleep much,
the code I wrote had an issue where it didn't stop the car and the car got confused
and it went full speed at like 20, 25 miles an hour and slammed into a wall. And I just remember sitting
there alone in a deep sadness. Sort of full of regret, I think, almost anger, but also sadness because you think about, well, these robots,
especially for autonomous vehicles, like you should be taking safety very seriously, even
in these kinds of things, but just no good feelings.
It made me more afraid, probably, to do these kinds of experiments in the future.
Perhaps the right way to have seen that is positively.
It depends if you could have built that car
or just gotten another one, right?
That would have been the approach.
I remember when I got to grad school,
I got some training about operating a lathe and a mill up in the machine shop,
and I could start to make my own parts.
And I remember breaking some piece of equipment in the lab.
And then realizing,
because maybe this was a unique part,
and I couldn't go buy it.
And I realized, oh, I can just go make it.
That was an enabling feeling.
Then you're not afraid. You know, it might take time. It might
take more work than you thought. It was going to be required to get this thing done. But you can
just go make it. And that's freeing in a way that nothing else is. You mentioned controlling the dynamics. Sorry for the romantic question, but in the early days and even now, and this is probably more appropriate for the early days, is it more art or science?
There's a lot of science around it.
And trying to develop, you know, scientific principles that let you extrapolate from like one-legged machine to another, you know, develop a core set of principles like a spring mass bouncing system.
And then figure out how to apply that from a one-legged machine to a two or a four-legged machine.
Those principles are really important, and were definitely a core part of our work.
There's also, you know, when we started to pursue humanoid robots,
there was so much complexity in that machine that, you know, one of the benefits of the humanoid form is you have some intuition about how it should look while it's moving.
And that's a little bit of an art, I think.
And now it's just, or maybe it's just tapping into a knowledge
that you have deep in your body,
and then trying to express that in the machine,
but that's an intuition that's a little bit more
on the art side.
Maybe it predates your knowledge.
And before you have the knowledge of how to control it,
you try to work through the art channel.
And humanoid sort of make that available to you.
If it had been a different shape, maybe you wouldn't have had the same intuition about it.
Yes, so your knowledge about moving through the world is not made explicit to you.
So you just, that's why it's art.
It might be hard to actually articulate exactly.
There's something about, and being a competitive athlete, there's something about seeing a
movement.
A coach, one of the greatest strengths a coach has, is being able to see some little change
in what the athlete is doing, and then being able to articulate that to the athlete.
Then maybe even trying to say, and you should try to feel this. So there's something just in seeing. And again, you know,
sometimes it's hard to articulate what it is you're seeing. But there's a just perceiving
the motion at a rate that is, again, sometimes hard to put into words. Yeah, I wonder how it is possible to achieve sort of truly elegant movement.
You have a movie like Ex Machina, I'm not sure if you've seen it, but the main actress
in that who plays the AI robot, I think, is a ballerina.
I mean, just the natural elegance and the, I don't know, eloquence of movement. It's, it looks
efficient and easy and just, it looks right. It looks right. It's sort of the key.
And then you look at, especially, early robots. I mean, they're so cautious in the way they move that it's not the caution that looks wrong. It's something about the movement that looks wrong, that feels like it's very inefficient,
unnecessarily so, and it's hard to put that into words exactly.
We think that part of the reason why people are attracted to the machines we build is because the inherent
dynamics of movement are closer to right.
Because we try to use walking gaits where we build a machine around this gait, where you're trying to work with the dynamics of the machine instead of to stop them. Some of the early walking machines, you're essentially really trying hard to not let them fall over. So you're always stopping the tipping motion. And sort of the insight of dynamic stability in a legged machine is to go with it.
Let the tipping happen. Let yourself fall, but then catch yourself with that next foot.
And there's something about getting those physics
to be expressed in the machine that people interpret
as life-like or elegant or just natural-looking.
And so I think if you get the physics right,
it also ends up being more efficient, likely. There's a benefit that it probably ends up being more stable in the long run. You know, it could walk stably over a wider range of conditions. And it's more beautiful and attractive at the same time.
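One standard way to formalize "let the tipping happen and catch yourself with the next foot" is the linear inverted pendulum and capture point result from the legged-robotics literature (a generic textbook relation, not something specific to Boston Dynamics): treat the center of mass at height $z$ as an inverted pendulum over the stance foot, and place the next step at

\[
x_{\text{capture}} = x + \frac{\dot{x}}{\omega}, \qquad \omega = \sqrt{\frac{g}{z}},
\]

where $x$ and $\dot{x}$ are the horizontal position and velocity of the center of mass and $g$ is gravity. Stepping to that point brings the falling motion to rest over the new foothold instead of fighting the tipping.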
So how hard is it to get the humanoid robot Atlas to do some of the things it's recently been doing?
Let's forget the flips and all of that.
Let's just look at the running.
Maybe you can correct me, but there's something about running.
I mean, that's not careful at all.
That's, you're falling forward. You're jumping forward and you're falling.
So how hard is it to get that right?
Our first humanoid, we needed to deliver natural looking walking.
You know, we took a contract from the army. They wanted a robot that could walk naturally.
They wanted to put a suit on the robot and be able to test it in a gas environment.
And so they wanted the motion to be natural. And so our goal was a natural looking gate. It was surprisingly hard to get that to work.
But we did build an early machine, we called it Petman prototype.
It was the prototype before the Petman robot.
And it had a really nice looking gait where, you know, it would stick the leg out, it would do heel strike first, before
it rolled onto the toes. You didn't land on the flat foot, you extended your leg a little bit.
But even then it was hard to get the robot to walk where when you're walking that it fully
extended its leg and essentially landed on an extended leg. And if you watch closely how you walk,
you probably land on an extended leg but then you immediately flex your knee as you start to make that contact. And getting
that all to work well took such a long time. In fact, I probably didn't really see the
nice natural walking that I expected out of our humanoids until maybe last year. And the team was developing, on our newer generation of Atlas,
some new techniques for developing a walking control algorithm.
And they got that natural looking motion
as sort of a byproduct of just a different process
that we're applying to developing the control.
So that probably took 15 years,
10 to 15 years to sort of get that.
From the Petman prototype was probably in 2008
and what was it, 2022?
Last year that I think I saw a good walking on Atlas.
If you could just, like, linger on it,
what are some challenges of getting good walking?
So is it, is this partially like a hardware like actuator problem?
Is it the control? Is it the artistic element of just observing the whole system operating
in different conditions together? I mean, is there some kind of interesting quirks or
challenges you can speak to like the heel strike or all?
Yeah. So one of the things that makes this straight leg a challenge is you're sort
of up against a singularity, a mathematical singularity where you know when your leg is
fully extended, it can't go further the other direction, right?
There's only, you can only move in one direction.
And that makes all of the calculations around how to produce torques at that joint or positions makes it more complicated.
And so having all of the mathematics,
so it can deal with these singular configurations,
is one of many challenges that we face.
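For readers who want the straight-leg singularity in symbols, here is the generic kinematics picture (standard manipulator math, not Boston Dynamics' specific formulation). For a planar two-link leg with hip and knee angles $q_1, q_2$ and segment lengths $l_1, l_2$, the foot velocity is $\dot{x} = J(q)\,\dot{q}$, and

\[
\det J(q) = l_1 l_2 \sin q_2,
\]

so at full knee extension ($q_2 = 0$) the Jacobian loses rank: joint motion can no longer produce foot velocity along the leg axis, and the usual mappings between joint torques and foot forces become ill-conditioned near that configuration, which is exactly why the calculations get harder there.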
And I'd say in those earlier days, again,
we were working with these really simplified models.
So we're trying to boil all the physics of the complex human body into a simpler subsystem
that we can more easily describe in mathematics. And sometimes those simpler subsystems don't have
all of that complexity of the straight leg built into them. And so what's
happened more recently is we're able to apply techniques that let us take the
full physics of the robot into account and deal with some of those strange
situations like the straight leg. So is there a fundamental challenge here that
it's maybe you can correct me, but is it underactuated?
Are you falling?
Underactuated is the right word, right?
You can't push the robot in any direction you want to,
right?
And so that is one of the hard problems
of, like, legged locomotion.
And you have to do that for a natural movement.
That's not necessarily required for natural movement.
It's just required, you know,
we don't have, you know, a gravity force that you can hook yourself onto to apply an external
force in the direction you want at all times, right? The only, the only external forces
are being mediated through your feet and how they get mediated depend on how you place
your feet. And, you know, you can't just, you know, God's hand can't reach down and push in any direction you want, you know. So is there some extra challenge to the fact that Atlas is
such a big robot? There is. The humanoid form is attractive in many ways but it's also a challenge
in many ways. You have this big upper body that has a lot of mass and inertia
and throwing that inertia around increases the complexity of maintaining balance.
And as soon as you pick up something heavy in your arms, you've made that problem even harder.
And so in the early work in the leg lab and in the early days at the company,
you know, we were pursuing these quadruped robots, which had a kind of built-in simplification.
You had this big rigid body and then really light legs. So when you swing the legs,
the leg motion didn't impact the body motion very much. All the mass and inertia was in the body.
But when you have the humanoid,
that doesn't work. You have big heavy legs, you swing the legs, it affects everything else.
And so dealing with all of that interaction does make the humanoid a much more complicated platform.
And I also saw that at least recently you've been doing more explicit modeling of the stuff you pick up.
Yeah.
Yeah.
Just very, really interesting.
So you have to, what, model the shape, the weight distribution, I don't know, like you have to include that as part of the modeling, as part of the planning. Because, okay, so for people who don't know, Atlas, at least in like a recent video, throws a heavy bag.
Throws a bunch of stuff.
So what's involved in picking up a thing, a heavy thing,
and when that thing is a bunch of different
non-standard things, I think it's also picked up
like a barbell.
And to be able to throw in some cases,
what are some interesting challenges there? So we were definitely trying to show that the robot
and the techniques we're applying to the robot to Atlas, let us deal with heavy things in the world.
Because if the robot's going to be useful, it's actually got to move stuff around. And that needs to be significant stuff.
That's an appreciable portion of the body weight of the robot.
And we also think this differentiates us from the other humanoid robot activities that
you're seeing out there.
Mostly, they're not picking stuff up yet.
Not heavy stuff, anyway.
But just like you or me, you need to anticipate that moment.
You're reaching out to pick something up and as soon as you pick it up, your center of
mass is going to shift.
And if you're going to turn in a circle, you have to take that inertia into account.
And if you're going to throw a thing, you've got all of that has to be included in the
model of what you're trying to do.
So the robot needs to have some idea or expectation of what that weight is, and then sort of predict, you know, think a couple of seconds ahead: how do I manage my body plus this big heavy thing together and still maintain balance, right? And so that's a big change for us. And I think the tools we've built are really allowing that to happen quickly now.
Some of those motions that you saw in that most recent video, we were able to create
in a matter of days.
It used to be that it took six months to do anything new on the robot.
And now we're starting to develop the tools and let us do that in a matter of days.
And so we think that's really exciting.
That means that the ability to create new behaviors for the robot is going to be a quicker
process.
So being able to explicitly model new things that it might need to pick up, new type of
things.
And to some degree, you don't want to have to pay too much attention to each specific
thing, right?
There's sort of a generalization here.
Obviously, when you grab a thing, you have to conform your hand, your end effector to the
surface of that shape.
But once it's in your hands, it's probably just the mass and inertia that matter.
The shape may not be as important. And so, you know, for some, in some ways, you want to pay attention to that detailed shape.
And in others, you want to generalize it and say, well, all I really care about is the
center of mass of this thing, especially if I'm going to throw it up on that scaffolding.
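A hedged sketch of the bookkeeping this implies, treating a rigidly held payload as part of one combined rigid body: add the masses, average the centers of mass, and shift the inertias with the parallel axis theorem. The function and all the numbers are invented for illustration, not taken from Atlas's software.

```python
# Illustrative only: fold a rigidly held payload into a combined mass / CoM / inertia.
# All values and names are invented; this is not Atlas code.
import numpy as np

def combine_bodies(m_r, c_r, I_r, m_p, c_p, I_p):
    """Robot (m_r, c_r, I_r) plus payload (m_p, c_p, I_p) as one rigid body.
    c_* are 3-vectors in the world frame; I_* are 3x3 inertias about each body's own CoM."""
    m = m_r + m_p
    c = (m_r * c_r + m_p * c_p) / m                      # combined center of mass
    def shifted(I, mass, d):                             # parallel axis theorem
        return I + mass * (np.dot(d, d) * np.eye(3) - np.outer(d, d))
    I = shifted(I_r, m_r, c_r - c) + shifted(I_p, m_p, c_p - c)
    return m, c, I

# Example: a ~90 kg robot holding a 15 kg tool bag out in front (made-up values).
m, c, I = combine_bodies(
    m_r=90.0, c_r=np.array([0.0, 0.0, 0.9]), I_r=np.diag([12.0, 10.0, 4.0]),
    m_p=15.0, c_p=np.array([0.4, 0.0, 1.0]), I_p=np.diag([0.2, 0.2, 0.2]),
)
print("combined mass:", m)
print("combined CoM :", c)   # shifted toward the payload, which is what balance must account for
```

The point is just that, once the object is held rigidly, balance and throwing can be planned around this combined center of mass and inertia rather than the robot's own.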
And it's easier if the body is rigid.
What if there's some, doesn't it throw like a sandbag type thing?
That tool bag, you know, had loose stuff in it.
So it managed that.
There are harder things that we haven't done yet.
You know, we could have had a big jointed thing
or I don't know, a bunch of loose wire or rope.
What about carrying another robot?
How about that?
Yeah, we haven't done that yet.
Carry Spot.
I guess we did a little bit of a,
we did a little skit around Christmas where we had
two Spots holding up another Spot that was trying to put, you know, a bow on a tree. So I guess we're doing that in a small way.
Okay, that's pretty good. Let me ask the all-important question. Do you know how much Atlas can curl?
I mean, you know. For us humans, that's really one of the most fundamental questions you can ask another human being, curl, bench.
It's not as much as we can yet, but a metric that I think is interesting, another way of looking at that strength, is, you know, the box jump. So how high of a box can you jump onto?
Question.
And Atlas, I don't know the exact height, it was probably a meter high or something like that. It was a pretty tall jump that Atlas was able to manage when we last tried to do this.
And I have video of my chief technical officer doing the same jump and he
really struggled, you know, for the human to get all the way on top of this box,
but then Atlas was able to do it. We're now thinking about the next generation of Atlas and
we're probably going to be in the realm of a person can't do it, you know, with this, with the next
generation, the robots, the actuators are going to get stronger.
Where it really is the case that at least some of these joints, some of these motions will
be stronger.
And to understand how high it can jump, you probably had to do quite a bit of testing.
Oh, yeah.
And there's lots of videos of it trying and failing.
And that's, you know, that's all, yeah.
We don't always release those, those videos, but they're a lot of fun to look at.
Uh, so we'll talk a little bit about that.
But can you talk to the jumping?
Because you talked about the walking
and it took a long time, many, many years
to get the walking to be natural.
But there's also really natural looking
robust, resilient jumping.
How hard is it to do the jumping?
Well, again, this stuff is really evolved rapidly in the last few years.
The first time we did a somersault, there was a lot of manual iteration. What is the trajectory? How hard do you throw? In those early days,
I actually would, when I'd see early experiments that the team was doing, I might make suggestions
about how to change the technique. Again, kind of borrowing from my own intuition about
how backflips work. But frankly, they don't need that anymore. So in the early days, you
had to iterate kind of in almost a manual way, trying to change these trajectories of
the arms or the legs to try to get a successful
backflip to happen.
But more recently, we're running these model predictive control techniques where we're
able to, the robot essentially can think in advance for the next second or two about
how its motion is going to transpire.
And you can solve for optimal trajectories
to get from A to B.
So this is happening in a much more natural way.
And we're really seeing an acceleration
happen in the development of these behaviors.
Again, partly due to these optimization techniques,
sometimes learning techniques.
So it's hard in that there's a lot of
mathematics behind it.
But we're figuring that out.
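To give a flavor of what "thinking ahead for the next second or two" can mean, here is a toy receding-horizon controller on a one-dimensional point mass: enumerate short candidate force sequences, roll the simple model forward to score each, apply only the first force of the best sequence, then re-plan from the new state. Real model predictive control uses a proper optimizer and a much richer model; the dynamics, cost weights, and force set here are invented for illustration.

```python
# Toy receding-horizon ("MPC-style") loop on a 1D point mass. Illustrative only.
import itertools

m, dt = 1.0, 0.05            # mass [kg], control period [s]
hold = 6                     # each candidate force is held for 6 steps -> ~0.9 s lookahead
forces = [-10.0, 0.0, 10.0]  # coarse candidate force set [N]
target = 1.0                 # desired position [m]

def rollout_cost(x, v, seq):
    """Simulate a force sequence forward and accumulate a quadratic cost."""
    cost = 0.0
    for f in seq:
        v += (f / m) * dt
        x += v * dt
        cost += (x - target) ** 2 + 0.5 * v ** 2 + 1e-4 * f ** 2
    return cost

x, v = 0.0, 0.0
for step in range(60):
    # Brute-force search over 3^3 piecewise-constant force plans (real MPC uses a solver).
    plan = min(itertools.product(forces, repeat=3),
               key=lambda seq: rollout_cost(x, v, [f for f in seq for _ in range(hold)]))
    f = plan[0]                              # apply only the first action, then re-plan
    v += (f / m) * dt
    x += v * dt
    if step % 10 == 0:
        print(f"t={step*dt:4.2f}s  x={x:+.3f}  v={v:+.3f}  applied f={f:+.1f}")
```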
So you can do model predictive control for, I mean, I don't
even understand what that looks like when the entire
robot is in the air, flying and doing a backflip.
Yeah, I mean, but that's the cool part, right?
So, you know, yeah, you know, the physics,
we can calculate physics pretty well using Newton's laws
about how it's going to evolve over time
and the robot, you know, this, the sick trick, which was a front somersault with a half twist
is a good example, right?
You saw the robot on various versions of that trick.
I've seen it land in different configurations and it still manages to stabilize itself.
And so, you know, what this model predictive control means is, again, in real time, the
robot is projecting ahead, you know, a second into the future and sort of exploring options.
And if I move my arm a little bit more this way,
how is that gonna affect the outcome?
And so it can do these calculations, many of them,
and basically solve where,
given where I am now,
maybe I took off a little bit screwy
from how I had planned, I can adjust.
So you're adjusting in the air.
Adjust on the fly. So the model predictive control lets you adjust on the fly.
And of course, I think this is what people adapt as well.
When we do it, even a gymnastics trick,
we try to set it up so it's as close to the same every time.
But we figured out how to do some adjustment on the fly.
And now we're starting to figure out
that the robots can do this adjustment on the fly
as well, using these techniques.
In the air.
I mean, it just feels from a robotics perspective,
just so real.
Well, that's sort of the, you talked about under-actuated,
right?
So when you're in the air, there's something,
there's some things you can't change, right?
You can't change the momentum while it's in the air
because you can't apply an external force or torque.
And so the momentum isn't going to change.
So how do you work within the constraint of that fixed momentum
to still get yourself to be where you want to be? That's really the trick.
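The in-flight constraint he is describing is just conservation of momentum, written here in generic notation rather than Boston Dynamics' own: with no contact forces, the center of mass $\mathbf{c}$ follows a ballistic arc and the angular momentum about it cannot change,

\[
\ddot{\mathbf{c}} = \mathbf{g}, \qquad \mathbf{L} = \sum_j \Big[ I_j \boldsymbol{\omega}_j + m_j \,(\mathbf{c}_j - \mathbf{c}) \times (\dot{\mathbf{c}}_j - \dot{\mathbf{c}}) \Big] = \text{const},
\]

with the sum over the robot's links. Swinging the arms or legs one way makes the rest of the body rotate the other way; redistributing that fixed total is the only lever the controller has until touchdown.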
You're in the air.
I mean, you become a drone for a brief moment in time.
No, you're not even a drone, because you can't hover.
You can't hover, you can't.
You're gonna, you're gonna impact soon.
Be ready.
Yeah.
Have you considered like a hover-type thing? No, no, it's too much weight.
I mean, it's just, it's just incredible.
It's just, even to have the guts to try a backflip with such a large body.
That's wild.
What, like, uh... We definitely broke a few robots trying. But that's where the build it, break it, fix it strategy comes in. You gotta be willing to break it. And what
ends up happening is you end up by breaking the robot repeatedly, you find the weak points,
and then you end up redesigning it. So it doesn't break so easily next time. The breaking
process, you learn a lot,
like a lot of lessons and you keep improving
not just how to make the backflip work,
but everything, just how to build a machine better.
Yeah.
Yeah.
I mean, is there something about just the guts
to come up with an idea of saying,
you know, let's try to make it do a backflip?
Well, I think the courage to do a backflip in the first place
and to not worry too much about the ridicule of somebody saying,
why the heck are you doing backflips with robots?
Because a lot of people have asked that, you know, why?
Why are you doing this?
Why go to the moon in this decade and do the other things, JFK?
Not because it's easy, because it's hard.
Yeah, exactly.
Don't ask questions. Okay, so the jump thing, I mean, there's a lot of incredible stuff. If we
can just rewind a little bit to the DARPA Robotics Challenge in 2015, I think, which was for
people who are familiar with the DARPA challenges, it was first with autonomous vehicles,
and there's a lot of interesting challenges around that.
And the DARPA Robotics challenges when humanoid robots
were tasked to do all kinds of manipulation, walking, driving car,
all these kinds of challenges, if I remember correctly, some slight
capability to communicate with humans, but the communication was very poor.
So basically, it has to be almost entirely autonomous.
You can have periods where the communication was entirely interrupted, and the robot had
to be able to proceed.
But you could provide some high-level guidance to the robot,
basically low bandwidth communications to steer it.
I watched that challenge with tears in my eyes eating popcorn.
That's hard to do.
But I wasn't personally losing hundreds of thousands or millions of dollars,
and many years of incredible hard work by
some of the most brilliant roboticists in the world. So that was why the tragic, that's why
the tears came. So you know, what have you just looking back to that time, what have you learned
from that experience? Maybe if you could describe what it was, sort of the setup for people who
haven't seen it. Well, so there was a contest where a bunch of different robots were asked to do a series of tests,
some of those that you mentioned, drive a vehicle, get out, open a door, go identify a valve,
shut a valve, use a tool to maybe cut a hole in a surface and then crawl over some stairs and maybe some rough terrain. So it was, the idea was
have a general-purpose robot that could do lots of different things. It had to have mobility and manipulation and on-board perception. And there was a contest, which DARPA likes. At the time, it was run as sort of a follow-on to the Grand Challenge,
which was, let's try to push vehicle autonomy along, right?
They encouraged people to build autonomous cars.
So they're trying to basically push an industry forward.
And we were asked, our role in this was to build
a humanoid, at the time it was our first generation Atlas robot. And we built maybe ten
of them. I don't remember the exact number. And DARPA distributed those to various teams that sort of won a contest, showed that they could,
you know, program these robots and then use them to compete against each other. And then other
robots were introduced as well. Some teams built their own robots, Carnegie Mellon, for example,
built their own robot. And all these robots competed to see who could sort of get through this
maze of the fastest. And again, I think the purpose was to kind of push the whole industry
forward. We provided the robot and some baseline software, but we didn't actually compete
as a participant where we were trying to drive the robot through this maze.
We were just trying to support the other teams.
It was humbling because it was really a hard task.
And honestly, the robots, the tears were because
mostly the robots didn't do it.
You know, they fell down repeatedly.
It was hard to get through this contest. Some did, and they were rewarded and won, but it was humbling because of just how hard it was. These tasks weren't all that hard. A person could
have done it very easily, but it was really hard to get the robots to do it.
The general nature of it, the variety of it, the variety. And also, that I don't know if the tasks were sort of,
the task in themselves help us understand
what is difficult, what is not.
I don't know if that was obvious
before the contest was designed.
So you kind of try to figure that out.
And I think Atlas is really a general robot platform
and it's perhaps not best suited for the specific
tasks of that contest.
For just, for example, probably the hardest task is not the driving of the car, but getting
in and out of the car.
And Atlas probably, if you were to design a robot that can get into the car easily and
get out easily, you probably would not make Atlas.
That particular car.
Yeah, the robot was a little bit big
to get in and out of that car, right?
It doesn't fit.
This is the curse of a general-purpose robot
that they're not perfect at any one thing,
but they might be able to do a wide variety of things.
And that is the goal at the end of the day.
I think we all wanna build general purpose robots
that can be used for lots of different activities,
but it's hard.
And the wisdom in building successful robots
up until this point have been
go build a robot for a specific task
and it'll do it very well. And as long as you control
that environment, it'll operate perfectly. But robots need to be able to deal with uncertainty.
If they're going to be useful to us in the future, they need to be able to deal with unexpected
situations. And that's sort of the goal of a general purpose or multi-purpose robot.
And that's just darn hard.
And so some of the, you know, there's these curious little failures.
Like I remember one of the, a robot, you know, the first, the first time you start to try
to push on the world with a robot, you forget that the world pushes back and will push you over if you're not ready for it.
And the robot, you know, reached to grab the door handle.
I think it missed the grasp of the door handle, but was expecting that its hand was on the door handle.
And so when it tried to turn the knob,
it just threw itself over.
It didn't realize, oh, I had missed the door handle.
I didn't have, I was expecting a force back
from the door, it wasn't there.
And then I lost my balance.
So these little simple
things that you and I would take totally for granted and deal with the robots don't know
how to deal with yet. And so you have to start to deal with all of those circumstances.
Well, I think a lot of us have experienced this even when sober, but drunk too. You pick up a thing and expect it to be,
what is it?
Heavy, and it turns out to be light.
Yeah, and then you woo.
Oh, yeah.
And then, so the same, and I'm sure if your depth perception for whatever reason is screwed up,
if you're drunk or some other reason,
and then you think you're putting your hand on the table,
and you miss it, I mean, it's the same kind of situation.
Yeah. But there's a... That's why you need to be able to predict forward
just a little bit.
And so that's where this model predictive control stuff
comes in.
Predict forward, what you think's gonna happen.
And then if that does happen, you're in good shape.
If something else happens, you better start predicting again.
So you regenerate a plan when it doesn't match.
I mean, that also requires a very fast feedback loop
of updating what your prediction
how it matches to the actual real world.
Yeah, those things have to run pretty quickly.
What's the challenge of running things pretty quickly?
A thousand hertz of acting and sensing quickly.
You know, there's a few different layers of that. You want at the lowest level, you like
to run things typically at around a thousand hertz, which means that, you know, at each
joint of the robot, you're measuring position or force and then trying to control your actuator,
whether it's a hydraulic or electric motor,
trying to control the force coming out of that actuator. And you want to do that really fast,
something like a thousand hertz. And that means you can't have too much calculation going on at that
joint. But that's pretty manageable these days and it's fairly common. And then there's another
layer that you're probably calculating, you know, maybe
at a hundred hertz, maybe ten times slower, which is now starting to look at the overall body
motion and thinking about the larger physics of the robot. And then there's yet another loop that's
probably happening a little bit slower, which is where you start to bring your perception and your vision and things like that.
So you need to run all of these loops simultaneously.
You do have to manage your computer time so that you can squeeze in all the calculations
you need in real time in a very consistent way.
The amount of calculation we can do is increasing as computers get better,
which means we can start to do more sophisticated calculations. I can have a more complex model
doing my forward prediction. And that might allow me to do even better predictions as I
get better and better. And it used to be, again, we had, you know, 10 years ago,
we had to have pretty simple models
that we were running, you know, at those fast rates,
because the computers weren't as capable
about calculating forward with a sophisticated model.
But as computation gets better, we can do more of that.
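A hedged sketch of the layered-rate structure described here: a roughly 1000 Hz joint-level loop, a roughly 100 Hz whole-body loop, and a slower perception update, all driven from one clock. The rates echo the numbers in the conversation, but the function names and placeholder calculations are invented, not the actual Atlas software.

```python
# Illustrative multi-rate control skeleton (invented names, not Atlas software).
import time

JOINT_HZ, BODY_HZ, PERCEPTION_HZ = 1000, 100, 20
DT = 1.0 / JOINT_HZ

desired_torque = 0.0      # written by the 100 Hz layer, consumed by the 1 kHz layer
terrain_estimate = 0.0    # written by the perception layer, consumed by the body layer

def joint_servo():
    # ~1000 Hz: read joint position/force sensors and drive the actuator toward desired_torque.
    _ = desired_torque    # placeholder for the actual low-level servo calculation

def whole_body_update():
    # ~100 Hz: larger-scale physics of the whole body; here just a placeholder calculation.
    global desired_torque
    desired_torque = 0.1 * terrain_estimate

def perception_update():
    # slower: vision / terrain estimation feeding the whole-body layer; placeholder.
    global terrain_estimate
    terrain_estimate = 0.0

next_tick = time.monotonic()
for tick in range(2000):                          # run ~2 simulated seconds
    joint_servo()                                 # every tick   (1000 Hz)
    if tick % (JOINT_HZ // BODY_HZ) == 0:
        whole_body_update()                       # every 10 ticks (100 Hz)
    if tick % (JOINT_HZ // PERCEPTION_HZ) == 0:
        perception_update()                       # every 50 ticks (20 Hz)
    next_tick += DT
    time.sleep(max(0.0, next_tick - time.monotonic()))   # keep the loops on a consistent schedule
```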
What about the actual pipeline of software engineering?
How easy is it to keep updating Atlas?
Like, do continuous development on it?
So how many computers are on there?
Is there a nice pipeline?
It's an important part of building a team around it,
which means you need to also have software tools, simulation tools.
So we have always made strong use of physics-based simulation tools to do some of this calculation,
basically test it in simulation before you put it on the robot.
But you also want the same code that you're running in simulation to be the same code you're running on the hardware.
And so even getting to the point where it was the same code going from one to the other, we probably didn't really get that working until you know a few years ago.
But that was a bit of a milestone.
And so you want to work, certainly work these pipelines so that you can make it as easy as possible and have a bunch of people working in parallel, especially when you, we only have, you know, four of the Atlas robots, the modern Atlas robots at the company.
And, you know, we probably have, you know, 40 developers there all trying to gain access to it. And so you need to share resources and use some of these software pipelines.
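One common way to get "the same code in simulation as on the hardware" is to hide both behind a thin interface so the controller never knows which one it is talking to. The class and method names below are invented to illustrate the pattern; this is not Boston Dynamics' actual architecture.

```python
# Generic pattern for running identical control code in simulation and on hardware.
# All names here are invented for illustration.
from abc import ABC, abstractmethod

class RobotInterface(ABC):
    @abstractmethod
    def read_state(self) -> dict: ...
    @abstractmethod
    def send_command(self, torques) -> None: ...

class SimulatedRobot(RobotInterface):
    def __init__(self, physics_step):
        self.physics_step = physics_step          # a physics-based simulator step function
        self.state = {"q": [0.0] * 28, "qd": [0.0] * 28}
    def read_state(self):
        return self.state
    def send_command(self, torques):
        self.state = self.physics_step(self.state, torques)

class RealRobot(RobotInterface):
    def read_state(self):
        raise NotImplementedError("would read encoders/IMU over the real-time bus")
    def send_command(self, torques):
        raise NotImplementedError("would write actuator commands over the real-time bus")

def control_step(robot: RobotInterface):
    """Sees only RobotInterface, so the exact same controller runs in sim or on hardware."""
    state = robot.read_state()
    torques = [-10.0 * q - 0.5 * qd for q, qd in zip(state["q"], state["qd"])]  # toy PD law
    robot.send_command(torques)
```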
Well, that's a really exciting step, to be able to run the exact same code in simulation as on the actual robot. How hard is it to do realistic simulation, physics-based simulation of Atlas, such that, I mean, the dream is like, if it works in simulation, it works perfectly in reality.
How hard is it to sort of keep working
on closing that gap?
The root of some of our physics-based simulation tools
really started at MIT.
And we built some good physics-based modeling tools there.
The early days of the company, we were trying
to develop those tools as a commercial product.
So we continued to develop them.
It wasn't a particularly successful commercial product, but we ended up with some nice physics-based
simulation tools so that when we started doing legged robotics again, we had a really
nice tool to work with.
And the things we paid attention to were things that weren't necessarily handled very well
in the commercial tools you could buy off the shelf, like interaction with the world, like foot-ground contact.
So, trying to model those contact events well in a way that captured the important parts
of the interaction was a really important element to get right, and to also do in a way that was computationally feasible
and could run fast,
because if your simulation runs too slow,
then your developers are sitting around
waiting for stuff to run and compile.
So it's always about efficient fast operation as well.
So that's been a big part of it.
I think developing those tools in parallel to
the development of the platform and trying to scale them has really been essential, I'd say,
to us being able to assemble a team of people that could do this. Yeah, how do you simulate contact, period? Foot-ground contact, but also for manipulation, because don't you want to model all kinds of surfaces?
Yeah.
So it will be even more complex with manipulation, because there's a lot more going on, you
know.
And you need to capture, I don't know, things slipping and moving, you know, in your
hand.
It's a level of complexity that I think goes above foot-ground contact when you really
start doing dexterous manipulation.
So there's challenges ahead still.
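For readers curious what "modeling foot-ground contact" can look like in practice, here is the simplest common approach, a penalty-style spring-damper normal force with a friction cap. Real simulators use more careful contact solvers, and every constant here is invented for illustration.

```python
# Simplest penalty-style ground contact sketch (invented constants, illustrative only).
import math

def contact_force(pz, vz, vx, k=1e5, c=2e3, mu=0.8, c_t=2e3):
    """pz: foot height above ground [m] (negative = penetrating),
    vz, vx: vertical and horizontal foot velocity [m/s]."""
    if pz >= 0.0:
        return 0.0, 0.0                            # no contact, no force
    fn = max(0.0, -k * pz - c * vz)                # stiff spring-damper on penetration; never pulls
    ft_mag = min(mu * fn, c_t * abs(vx))           # viscous friction capped by the Coulomb limit
    ft = -math.copysign(ft_mag, vx) if vx else 0.0 # oppose sliding
    return fn, ft                                  # normal and tangential force on the foot

print(contact_force(pz=-0.002, vz=-0.1, vx=0.05))  # a light touchdown with a small forward slip
```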
So how far are we away from me being able to walk with Atlas in the sand along the beach
while we're both drinking a beer.
Maybe Atlas could spill his beer because he's got nowhere to put it.
Atlas could walk on the sand.
So can it.
Yeah.
Yeah.
I mean, have we really had him out on the beach?
We take them outside often.
You know, rocks, hills, that sort of thing, even around our lab in Waltham. We probably haven't been on the sand, but on a soft surface,
I don't doubt that we could deal with it.
We might have to spend a little bit of time
to sort of make that work, but we did take,
we had to take big dog to Thailand years ago.
And we did this great video of the robot walking in the sand, walking into the ocean, up to, I don't
know, its belly or something like that, and then turning around and walking out, all while playing
some cool beach music. Great show, but then, you know, we didn't really clean the robot off and the
saltwater was really hard on it. So, you know, we put it in a box, shipped it back by the time it came back, we had some problems.
So it's a salt water. It's not like cold stuff. It's not like sand getting into the components
or something like this. But I'm sure if this is a big priority, you can make it like waterproof.
That just wasn't our goal at the time. Well, it's a personal goal of mine to walk on the beach. But it's a human problem, too.
You get sand everywhere.
It's just a jam mess.
So soft surfaces are OK.
So I mean, can we just linger on the robotics challenge? There's a pile of rubble to walk over. How difficult is that task?
In the early days of developing BigDog,
the loose rock was the epitome of the hard walking surface
because you step down and then the rock
and you have these little point feet on the robot
and the rock can roll.
And you have to deal with that last minute,
change in your foot placement.
Yes, so you step on the thing and that thing responds to you stepping on it.
Yeah. And it moves where your point of support is. And so it's really that that became kind of the
essence of the test. And so that was the beginning of us starting to build rock piles in our
parking lots. And we would actually build boxes full of rocks and bring them into the lab and then we
would have the robots walking across these boxes of rocks because that became the essential test.
So you mentioned BigDog. Can we maybe take a stroll through the history of Boston Dynamics? So
what and who is BigDog? By the way, is it a who?
Do you try not to anthropomorphize the robots? Do you try not to? Do you try to remember that they're... This is like the division I have, because for me it's impossible.
For me, there's a magic to the being that is a robot. It is not human, but it is
the same magic
that a living being has when it moves about the world is there in the robot.
So, I don't know what question I'm asking, but should I say what?
I guess who is Big Dog?
What is Big Dog?
Well, I'll say, to address the semantic question, we don't try to draw hard lines around it
being an it or a him or her. It's okay, right?
People, I think part of the magic of these kinds of machines is by nature of their organic movement
of their dynamics. We tend to want to identify with them. We tend to look at them and sort of attribute maybe feeling to that because we've only seen
things that move like this that were alive.
And so this is an opportunity.
It means that you could have feelings for a machine and you know, people have feelings
for their cars.
You know, they get attracted to them, attached to them. So that's inherently could be a good thing as long as we
manage what that interaction is. So we don't put strong boundaries around this and ultimately think
it's a benefit, but it's also can be a bit of a curse because I think people look at these machines
and they attribute a level of intelligence that the machines don't have. Why? Because again, they've seen things move like
this that were living beings, which are intelligent. And so they want to attribute intelligence to
the robots that isn't appropriate yet, even though they move like an intelligent being. But you try to acknowledge that the anthropomorphization is there and try to, for sure, acknowledge
it's there and have a little fun with it.
You know, our most recent video, it's just kind of fun, you know, to look at the robot.
We started off the video with Atlas
kind of looking around for where the bag of tools was because the guy up on the scaffolding says,
send me some tools.
Atlas has to kind of look around and see where they are.
And there's a little personality there.
That is fun.
It's entertaining.
It makes our jobs interesting.
And I think in the long run,
can enhance interaction between humans and
robots in a way that isn't available to machines that don't move that way.
This is something to me personally is very interesting.
I've been, I happen to have a lot of legged robots. I hope to have a lot of Spots in my possession. I'm interested in celebrating robotics and celebrating companies, and I also don't want to... companies that are doing incredible stuff like Boston Dynamics. And there's, you know, I'm a little crazy. And you say, you don't want to, you want to align, you want to help the company, because I ultimately want a company like Boston Dynamics to succeed. And part of that, we'll talk about, success kind of requires making money. So the kind of stuff I'm particularly interested in may not be the thing that makes money in the short term. I can make an argument that it will in the long term. But the kind of stuff I've been playing with is a robust way of having the quadrupeds, the robot dogs, communicate emotion with their body movement.
The same kind of stuff you do with the dog, but not hard coded, but in a robust way, and
be able to communicate excitement or fear, boredom, all these kinds of stuff.
And I think as a base layer of function of behavior,
to add on top of a robot,
I think that's a really powerful way
to make the robot more usable for humans,
for whatever application.
I think it's gonna be really important.
And it's a thing we're beginning to pay attention to. A differentiator for the company has always been that we really want the robot to work. We want it to be useful. Making it work at first meant that the locomotion really works: it can really get around and it doesn't fall down. But beyond that, now it needs to be a useful tool. Our customers are, for example, factory owners, people who are running a process or manufacturing facility, and the robot needs to be able to get through this complex facility in a reliable way, you know, taking measurements.
We need for people who are operating those robots to understand what the robots are doing. If the robot needs help, or is in trouble or something, it needs to be able to communicate. A physical indication of some sort, so that a person looks at the robot and goes, oh, I know what the robot's doing. The robot's going to go take measurements of my vacuum pump with its thermal camera. You know, you want to be able to indicate that. And even if the robot is just about to turn in front of you, maybe it indicates that it's about to turn, so that you sort of see and can anticipate its motion. So
this kind of communication is going to become more and more important. It wasn't sort of our starting point, but now the robots are really out in the world,
and we have about 1,000 of them out with customers right now.
This layer of physical indication, I think, is going to become more and more important.
We'll talk about where it goes, because there's a lot of interesting possibilities, but if we can return back to the origins of Boston Dynamics, to the research, the R&D side, before we talk about how to build robots at scale: so, Big Dog. Who's Big Dog?
So the company started in 1992. And probably 2003, I believe, is when we took a contract from DARPA. So basically for 10, 11 years, we weren't doing robotics. We did a little bit of robotics with Sony. They had Aibo, their Aibo robot; we were developing some software for that, and that kind of got us a little bit involved with robotics again. Then there was this opportunity to do a DARPA contract where they wanted to build a robot dog, and we won a contract to build that. And so that was the genesis of Big Dog. It was a quadruped.
And it was the first time we built a robot
that had everything on board.
You could actually take the robot out into the wild
and operate it.
So it had an onboard power plant, it had onboard computers,
it had hydraulic actuators that needed to be cooled.
So we had cooling systems built in.
Everything integrated into the robot.
And that was a pretty rough start, right? It was 10 years that we were not a robotics company; we were a simulation company. And then we had to build a robot in about a year. So that was a little bit of a rough transition.
Can you comment on the roughness of that transition? Because Big Dog, I mean, it's just a big quadruped, a four-legged robot.
We built a few different versions of them.
But the first one, the very earliest ones, you know, didn't work very well. And we would take them out, and it was hard to get, you know, a go-kart engine driving a hydraulic pump, and have that all work while trying to get the robot to stabilize itself.
So what was the power plant? What was the engine? In my vague recollection, I don't know, it felt very loud and aggressive and kind of thrown together.
Absolutely was, right?
We weren't trying to design the best robot hardware at the time.
And we wanted to buy an off-the-shelf engine.
And so many of the early versions of Big Dog had literally go-kart engines or something like that.
So it was gas-powered?
A gas-powered two-stroke engine. And the reason it was two-stroke is that two-stroke engines are lighter weight. We also generally didn't put mufflers on them, because we were trying to save the weight and we didn't care about the noise. Some of these things were horribly loud.
But we were trying to manage weight, because managing weight in a legged robot is always important; it has to carry everything. That said, that thing was big.
I've seen the videos. Yeah, I mean the early versions stood about, I don't know, belly high,
chest high. They probably weighed maybe a couple of hundred pounds. But, you know, over the course of probably five years,
we were able to get that robot
to really manage a remarkable level of rough terrain.
So, you know, we started out with just walking on the flat
and then we started walking on rocks
and then inclines and then mud and then slippery mud.
And, you know, by the end of that program, we were convinced that legged locomotion in a robot could actually work,
because going into it, we didn't know that.
We had built quadrupeds at MIT,
but they were, they used a giant hydraulic pump,
you know, in the lab, they used a giant computer
that was in the lab, they're always tethered to the lab.
This was the first time something that was sort of self-contained,
you know, walked around in the world and balanced.
But the purpose was to prove to ourselves that legged locomotion could really work. And so Big Dog really opened that up for us.
And it was the beginning of what became a whole series of robots.
So once we showed DARPA that you could make a legged robot that could work, there
was a period at DARPA where robotics got really hot and there was lots of different programs.
And we were able to build other robots.
We built other quadrupeds, like LS3, designed to carry heavy loads. We built Cheetah, which was designed to explore the limits of how fast you can run. We began to build sort of a portfolio
of machines and software that let us build not just one robot, but a whole family of robots.
So push the limits in all kinds of directions. Yeah, and to discover those principles. You know, you asked earlier about the art and science of legged locomotion: we were able to develop principles of legged locomotion so that we knew how to build a small legged robot or a big one. So the leg length, you know, was now a parameter that we could play with.
Payload was a parameter we could play with.
So we built the LS3, which was an 800-pound robot
designed to carry a 400-pound payload.
And we learned the design rules,
basically developed the design rules.
How do you scale different robot systems
to their terrain, to their walking speed, to their payload?
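One classic scaling relation from the legged-locomotion literature, dynamic similarity via the Froude number, gives a flavor of the kind of design rule being described here; it is a textbook illustration, not necessarily the specific rules Boston Dynamics developed.

```latex
% Dynamic similarity: gaits of legged machines with different leg lengths look
% alike when their Froude numbers match.
\[
  \mathrm{Fr} \;=\; \frac{v^{2}}{g\,L}
\]
% where $v$ is forward speed, $g$ is gravitational acceleration, and $L$ is leg length.
% Holding $\mathrm{Fr}$ constant implies walking speed scales as
\[
  v \;\propto\; \sqrt{g\,L},
\]
% so a robot with four times the leg length walks comfortably at roughly twice
% the speed, and actuator torques scale with payload and limb length accordingly.
```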
So when was Spot born? Around 2012 or so. So again, almost 10 years into sort of a run with DARPA, where we built a bunch of different quadrupeds. We had a different thread where we started building humanoids. We saw that probably an end was coming, where the government was going to back off from a lot of robotics investment. And in order to
maintain progress, we just deduced that, well, we probably need to sell ourselves to somebody
who wants to continue to invest in this area. And that was Google.
And so at Google, we would meet regularly with Larry Page, and Larry just started asking
us, you know, well, what's your product going to be?
And you know, the logical thing, the thing that we had the most history with, that we wanted to continue developing, was a quadruped. But we knew it needed to be smaller, we knew it couldn't have a gas engine, we thought
it probably couldn't be hydraulically actuated. So that began the process of exploring if we
could migrate to a smaller, electrically actuated robot. And that was really the genesis of Spot.
So not a gas engine, and the actuators are electric. Yes. So can you maybe comment on what it was like at Google, working with Larry Page, having those meetings and thinking about what a robot would look like that could be built at scale? Like, starting to think about a product.
Larry always liked the toothbrush test.
He wanted products that you used every day.
What they really wanted was,
you know, a consumer level product,
something that would work in your house.
We didn't think that was the right next thing to do,
because to be a consumer level product,
cost is going to be very important.
Probably needed to cost a few thousand dollars.
And we were building these machines that cost hundreds of thousands of dollars,
maybe a million dollars to build.
Of course, we were only building two,
but we didn't see how to get all the
way to this consumer level product in a short amount of time. And he suggested that we
make the robot really inexpensive. And part of our philosophy has always been build the
best hardware you can, make the machine operate well so that you're trying to solve, you know,
discover the hard problem that you don't know about.
Don't make it harder by building a crappy machine, basically.
You build the best machine you can.
There's plenty of hard problems to solve that are going to have to do with, you know,
underactuated systems and balance.
And so we wanted to still build these high-quality machines. And we thought that was important for us to continue learning about what the really important parts of making robots work were. And so there was a little bit of a philosophical
difference there. And so ultimately, that's why we're building robots for the industrial sector now. Because
the industry can afford a more expensive machine because their productivity depends on keeping
their factory going. And so if spot costs, you know, $100,000 or more, that's not such a big
expense to them. Whereas at the consumer level, no one's going to buy a robot like that. And I think we might eventually get to a consumer-level product that will be that cheap, but I think the path to getting there needs to go through these really nice machines, so we can then learn how to simplify. So what can you say about the engineering challenge of bringing down the cost of the robot?
Presumably, when you try to build the robot at scale, that also comes into play, when you're trying to make money on a robot, even in the industrial setting. How interesting, how challenging of a thing is that, particularly as something new to an R&D company?
Yeah, I'm glad you brought that last part up.
The transition from an R&D company to a commercial company,
that's the thing you worry about,
because you've got these engineers who love hard problems,
who want to figure out how to make robots work.
And you don't know if you have engineers
that want to work on the quality and reliability
and cost that is ultimately
required. And indeed, we have brought on a lot of new people who are inspired by those
problems, but the big takeaway lesson for me is we have good people. We have engineers
who want to solve problems. And the quality and cost and manufacturability is just another kind of problem.
And because they're so invested in what we're doing,
they're interested in and we'll go work on those problems as well.
And so I think we're managing that transition very well.
In fact, I'm really pleased. I mean, it's a huge undertaking, by the way, right?
So, you know, even having to get reliability to where it needs to be,
we have to have fleets of robots that were just operating 24-7 in our offices
to go find those rare failures and eliminate them.
It's just a totally different kind of activity than the research activity
where you get the one robot you have to work in a repeatable way, you know, for the high-stakes demo. It's just very different.
But I think we're making remarkable progress, I guess.
So one of the cool things, I got a chance to visit Boston Dynamics, and one of the things that's really cool is to see a large number of robots moving about. Because one of the things you notice in the research environment, at MIT for example, is that I don't think anyone ever has a working robot for prolonged periods. Most robots are just sitting there in a sad state of despair, waiting to be brought to life for a brief moment of time. I just remember there was a Spot robot that had a cowboy hat on and was just walking around randomly, for whatever reason, I don't even know. But there was a kind of sense of sentience to it, because it didn't seem like anybody was supervising it. It was just doing its thing.
I'm going to stop way short of the sentience.
It is the case that if you come to our office today and walk around the hallways, you're
going to see a dozen robots just kind of walking around all the time.
And that's really a reliability test for us. So we have these robots programmed to do autonomous missions,
get up off their charging dock,
walk around the building, collect data at a few different places,
and go sit back down.
And we want that to be a very reliable process,
because that's what somebody who's running a brewery, a factory,
that's what they need the robot to do.
And so we have to dog food our own robot, we have to test it in that way.
And so on a weekly basis, we have robots that are accruing something like 1500 or maybe 2000
kilometers of walking and you know, over a thousand hours of operation every week.
And that's something that almost,
I don't know if anybody else in the world can do,
because, hey, you have to have a fleet of robots
to just accrue that much information.
You have to be willing to dedicate it to that test.
And so that's essential.
That's how you get the reliability.
That's how you get it.
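As a rough illustration of the bookkeeping a reliability fleet like this implies, here is a minimal sketch; the record fields and weekly numbers are made-up placeholders in the ballpark mentioned above, not Boston Dynamics data or interfaces.

```python
from dataclasses import dataclass

@dataclass
class WeeklyRobotLog:
    """Illustrative per-robot telemetry for one week (hypothetical fields)."""
    robot_id: str
    km_walked: float
    hours: float
    failures: int  # count of mission-ending faults observed

def fleet_reliability(logs: list[WeeklyRobotLog]) -> dict:
    """Aggregate fleet mileage/hours and a rough mean distance between failures."""
    km = sum(log.km_walked for log in logs)
    hours = sum(log.hours for log in logs)
    failures = sum(log.failures for log in logs)
    return {
        "total_km": km,
        "total_hours": hours,
        "mean_km_between_failures": km / failures if failures else float("inf"),
    }

# Example with invented numbers in the ballpark mentioned in the conversation:
# a dozen-plus robots adding up to ~1,500-2,000 km and ~1,000 hours per week.
logs = [WeeklyRobotLog(f"spot-{i:02d}", km_walked=120.0, hours=80.0,
                       failures=(1 if i == 3 else 0))
        for i in range(14)]
print(fleet_reliability(logs))
```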
What about some of the cost cutting from the manufacturing side? What have you learned on the manufacturing side of the transition from R&D?
We're still learning a lot there. We're learning how to cast parts instead of milling it all out of, you know, billet aluminum. We're learning how to get plastic molded parts. And we're learning about how to control that process so that you
can build the same robot twice in a row. There's a lot to learn there. And we're only partway
through that process. We've set up a manufacturing facility in Waltham. It's about a mile from
our headquarters. And we're doing final assembly and test of both spots and stretches, you know, at that factory.
And it's hard because, to be honest, we're still iterating on the design of the robot.
As we find failures from these reliability tests, we need to go engineer changes.
And those changes need to now be propagated to the manufacturing line.
And that's a hard process, especially when you want to move as fast as we do.
And that's been challenging.
And it makes it hard for the folks who are working supply chain, who are trying to get the cheapest parts for us; that kind of requires that you buy a lot of them to make them cheap. And then we go change the design from underneath them, and they're like, what are you doing? And so, you know, getting everybody on the same page here, that yep, we still need to move fast, but we also need to try to figure
out how to reduce costs. That's one of the challenges of this migration we're going
through. And over the past few years, there have been challenges to the supply chain. I mean, I imagine you've been a part of a bunch of stressful meetings. Yeah, things got more expensive and harder to get. And yeah, it's gotten a bit better now.
Is there still room for simplification?
Oh, yeah, much more.
And these are really just the first generation
of these machines.
We're already thinking about what the next generation of
spots going to look like.
Spot was built as a platform.
So you could put almost any sensor on it.
We provided data communications, mechanical connections,
power connections.
But for example, in the applications
that we're excited about, where you're monitoring
these factories for their health, there's probably
a simpler machine that we could build that's really
focused on that use case.
And that's the difference between the general-purpose machine, or the platform, versus the purpose-built machine. Even in the factory, we'd still like the robot to do lots of different tasks. But if we really knew on day one that we're going to be operating in a factory with these three sensors on it, we would have it all integrated in a package that would be easier, less expensive, and more reliable.
So we're contemplating building, you know, a next generation of that machine.
So we should mention, for people who are somehow not familiar, that Spot is a yellow robotic dog and has been featured in many dance videos. It also has gained an arm. So what can you say about the arm that Spot has, both the challenges of its design and the manufacture of it?
We think the future of mobile robots is mobile manipulation.
That's where, you know, in the past 10 years, it was getting mobility to work, getting the legged locomotion to work. If you ask what's the hard problem in the next 10 years, it's getting a mobile robot to do useful manipulation for you. And so we wanted Spot to have an arm to experiment with those problems. And the arm is almost as complex as the robot itself.
And it's an attachable payload.
It has several motors and actuators and sensors.
It has a camera in the end of its hand, so it can see something and the robot will control the motion of its hand to go pick it up autonomously. So, in the same way the robot walks and balances, managing its own foot placement to stay balanced, we want manipulation to be mostly autonomous, where you indicate to the robot, okay, go grab that bottle, and then the robot will just go do it, using the camera in its hand and sort of closing in on that grasp. But it's a whole other complex robot on top of a complex legged robot. And of course we made the hand look a little like a head, you know, because again, we wanted it to be sort of identifiable.
In the last year, a lot of our sales
have been people who already have a robot now buying an arm
to add to that robot.
Oh, interesting.
And so the arm is for sale.
Oh yeah, it's an option.
What's the interface like to work with the arm? I could ask that question in general about robots from Boston Dynamics: is it designed to be easily and efficiently operated remotely by a human being, or is there also the capability to push towards autonomy?
We want both. In the next version of the software that we release, which will be version 3.3, if you have an autonomous mission for the robot, we're going to include the option that it can go through a door, which means it's going to have to have an arm, and it's going to have to use that arm to open the door. And so that'll be an autonomous manipulation task that you can program easily with the robot, strictly through the tablet interface that we have. On the tablet you sort of see the view that Spot sees, and you say, there's the door handle, the hinges are on the left, and it opens in; the rest is up to you, take care of it. So it just takes care of everything. Yeah. So for a task like opening doors, you can automate
most of that. And we've automated a few other tasks. We had a customer who had a high-powered breaker switch, essentially. It's an electric utility, Ontario Power Generation. When they're going to disconnect their power supply, which could be a gas generator or a nuclear power plant, from the grid, they have to disconnect this breaker switch. Well, as you can imagine, there are hundreds or thousands of amps and volts involved in this breaker switch. And it's a dangerous event, because occasionally you'll get what's called an arc flash: as you do the disconnect, the sparks jump across, and people die doing this. And so Ontario Power Generation used our Spot and the arm, through the interface, to operate this disconnect in an interactive way.
And they showed it to us.
And we were so excited about it and said,
you know, I bet we can automate that task.
And so we got some examples of that breaker switch. And I believe in the next generation of the software, which we're going to deliver back to Ontario Power Generation, they're going to be able to just point the robot at that breaker. They'll indicate, that's the switch. There's sort of two actions you have to do: you have to flip up this little cover and press a button, then get a ratchet, stick it into a socket, and literally unscrew this giant breaker switch. So there's a bunch of different tasks, and we basically automated them, so that the human says, okay, there's the switch, go do that part; that right there is the socket where you're going to put your tool and you're going to open it up. You can remotely indicate this on the tablet, and then the robot just does everything in between. And it does it all in coordinated movement of all the different actuators, including the body. It maintains its balance, it walks itself into position so it's within reach and the arm is in a position where it can do the whole task. So it manages the whole body.
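A minimal sketch of the idea described here, where the operator indicates targets on the tablet and the robot chains everything in between, might look like the following; all names and method calls are hypothetical illustrations, not the actual Spot SDK.

```python
from dataclasses import dataclass, field

@dataclass
class IndicatedTarget:
    """A target the operator marks in the tablet's camera view (hypothetical structure)."""
    label: str       # e.g. "breaker_cover", "socket", "door_handle"
    pixel_xy: tuple  # where the operator tapped in the image
    hint: str = ""   # e.g. "hinges on left, opens inward"

@dataclass
class ManipulationStep:
    action: str            # e.g. "flip_open", "press", "insert_tool", "unscrew"
    target: IndicatedTarget

@dataclass
class AutomatedTask:
    name: str
    steps: list = field(default_factory=list)

    def run(self, robot) -> None:
        """The robot fills in everything between the operator's indications:
        it walks its body into reach, keeps its balance, and closes a visual
        servo loop with the hand camera for each step (all calls hypothetical)."""
        for step in self.steps:
            robot.walk_into_reach(step.target)       # whole-body positioning
            robot.servo_hand_to(step.target)         # hand-camera visual servoing
            robot.execute(step.action, step.target)  # primitive: press, turn, ...

# Example: the breaker-disconnect sequence described above, expressed as
# operator-indicated steps that the robot then executes autonomously.
breaker_task = AutomatedTask(
    name="breaker_disconnect",
    steps=[
        ManipulationStep("flip_open", IndicatedTarget("breaker_cover", (412, 288))),
        ManipulationStep("press", IndicatedTarget("button", (430, 300))),
        ManipulationStep("insert_tool", IndicatedTarget("socket", (455, 320))),
        ManipulationStep("unscrew", IndicatedTarget("breaker_screw", (455, 320))),
    ],
)
```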
So how does one become a big enough customer
to request features?
Because I personally want a robot that gets me a beer.
I mean, that has to be one of the most requested features. I suppose in the industrial setting that's a non-alcoholic beverage: picking up objects and bringing the objects to you.
We love working with customers
who have challenging problems like this.
And this one in particular,
because we felt like what they were doing, A, it was a safety feature, B, we saw that the robot
could do it, because they tele-operated it the first time, probably took them an hour
to do it the first time, right? But the robot was clearly capable. And we thought, oh, this
is a great problem for us to work on to figure out how to automate a manipulation task.
And so we took it on, not because we were going to make a bunch of money from it and selling
the robot back to them, but because it motivated us to go solve what we saw as the next logical
step.
But with many of our customers, in fact, our bigger customers, typically ones who are going to run a utility or a factory or something like that, we take that kind of direction from them. And especially if they're
going to buy 10 or 20 or 30 robots, and they say, I really needed to do this, well, that's
exactly the right kind of problem that we want to be working on. And so note to self,
buy 10 spots and aggressively push for beer manipulation.
I think it's fair to say it's notoriously difficult to make a lot of money as a robotics
company.
How can you make money as a robotics company?
Can you speak to that?
It seems that a lot of robotics companies fail.
It's difficult to build robots.
It's difficult to build robots at a low enough cost
where customers, even the industrial setting, want to purchase them.
And it's difficult to build robots that are useful, sufficiently useful.
So can you speak to that? Boston Dynamics has been successful for many years at finding a way to make money.
Well, in the early days, of course, the money we made was from doing contract R&D work.
And we made money, but we weren't growing
and we weren't selling a product.
And then we went through several owners who had a vision
of not only developing advanced technology,
but eventually developing products.
And so Google and SoftBank and now Hyundai all had that vision, and were willing to provide that investment.
Now our discipline is that we need to go find applications that are broad enough that you
could imagine selling thousands of robots because it doesn't work if
you don't sell thousands or tens of thousands of robots. If you only sell hundreds, you will
commercially fail. And that's where most of the small robot companies have died.
And that's a challenge because, you know, A, you need to field the robots, and they need to start to become reliable. And as we said, that takes time and investment to get there.
And so it really does take visionary investment to get there.
But we believe that we are going to make money in this industrial monitoring space. Because in a chip fab, if the line goes down because a vacuum pump failed someplace, that can be very expensive. It can be a million dollars a day in lost production, and maybe you have to throw away some of the product along the way.
And so the robot, if you can prevent that by inspecting the factory every single day, maybe every hour
if you have to, there's a real return on investment there.
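A back-of-the-envelope version of that return-on-investment argument, using the million-dollar-a-day downtime and roughly $100,000-class robot figures from the conversation plus assumed values for everything else, could look like this:

```python
def inspection_roi(robot_cost: float,
                   annual_service_cost: float,
                   downtime_cost_per_day: float,
                   downtime_days_avoided_per_year: float,
                   years: int = 2) -> float:
    """Rough ROI multiple over `years`: value of avoided downtime vs. robot spend.
    All inputs are illustrative assumptions, not Boston Dynamics figures."""
    benefit = downtime_cost_per_day * downtime_days_avoided_per_year * years
    cost = robot_cost + annual_service_cost * years
    return benefit / cost

# Assumed: a $100k robot, $20k/year of service, $1M/day downtime, and only half
# a day of downtime avoided per year.
print(inspection_roi(robot_cost=100_000,
                     annual_service_cost=20_000,
                     downtime_cost_per_day=1_000_000,
                     downtime_days_avoided_per_year=0.5))
# Roughly a 7x return over two years under these assumptions, which is why a
# payback period of about two years is plausible even with conservative numbers.
```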
But there needs to be a critical mass of this task.
And we're focusing on a few that we believe are ubiquitous in the industrial production
environment.
And that's using a thermal camera
to keep things from overheating,
using an acoustic imager to find compressed air leaks,
using visual cameras to read gauges, measuring vibration.
These are standard things that you do
to prevent unintended shutdown of a factory.
And this takes place in a beer factory; we're working with AB InBev. It takes place in chip fabs; we're working with GlobalFoundries. It takes place in electric utilities and nuclear power plants. And so the same robot can be applied in all of these industries. And as I said, we have about, actually, it's 1,100 Spots out now. To really get to profitability, we need to be at 1,000 a year, maybe 1,500 a year, for that part of the business. So it still needs to grow, but we're on a good path, so I think that's totally achievable. So the application needs to support demand in that thousand-robots-a-year range. It really should, yeah. I want to mention our second robot, Stretch.
Yeah. Tell me about Stretch. What is Stretch? Who is Stretch? Stretch started differently than Spot. You know, Spot we built because we had decades of experience building quadrupeds. We just had it in our blood. We had to build a quadruped product, but we had to go figure out what the application was. And we actually discovered this factory patrol application, basically preventative maintenance, by seeing what our customers did with it. Stretch is very different. We started knowing that there were warehouses all over the world, and shipping containers moving all around the world, full of boxes that are mostly being moved by hand.
By some estimates, we think there's a trillion boxes,
cardboard boxes shipped around the world each year
and a lot of it's done manually.
It became clear early on that there was an opportunity
for a mobile robot in here to move boxes around.
And the commercial experience has been very different between Stretch and Spot. As soon as we started talking to people, potential customers, about what Stretch was going to be used for, they immediately started saying, oh, I'll buy that robot. You know, I'm going to put in an order for 20 right now. We just started shipping the robot in January, after several years of development. This year? This year. So our first deliveries of Stretch to customers were DHL and Maersk in January. We're delivering to Gap right now. We have about seven or eight other customers, all of whom have already agreed in advance to buy between 10 and 20 robots. And so we've already got commitments for a couple
hundred of these robots. This one's going to go, right? It's so obvious that there's a need. And we're not just going to unload trucks; we're going to do any box-moving task in the warehouse. So it, too, will be a multi-purpose robot, and we'll eventually have it doing palletizing or depalletizing, or loading trucks or unloading trucks.
There's definitely thousands of robots.
There's probably tens of thousands of robots of this in the future.
So it's going to be profitable.
Can you describe what stretch looks like?
It looks like a big, strong robot arm on a mobile base.
The base is about the size of a pallet,
and we wanted it to be the size of a pallet
because that's what lives in warehouses, right?
Pallets of goods sitting everywhere. So we needed it to be able to fit in that space. It's not a legged robot? It's not a legged robot. And so it was actually a bit of a commitment from us, a challenge for us, to build a non-balancing robot, in some ways a much easier problem, because it wasn't going to have this balance problem. And in fact, the very first version of the logistics robot we built was a balancing robot, and that's called Handle. And that thing was epic.
All right, it's a beautiful machine.
It's an incredible machine.
I mean, it looks epic. It looks like something out of a sci-fi movie of some sort. Can you actually just linger on the design of that thing? Because that's another leap into something you probably hadn't done.
It's a different kind of balancing.
Yeah, so let me. I love talking about the history of how Handle came about, because it connects all of our robots, actually.
So I'm gonna start with Atlas.
When we had Atlas getting fairly far along,
we wanted to understand,
I was telling you earlier,
the challenge of the human form
is that you have this mass up high and balancing that inertia, that mass up high is its own unique
challenge.
And so we started trying to get Atlas to balance standing on one foot like on a balance
beam using its arms like this.
And you know, you can do this, I'm sure you can do this, right? Like if you're walking a tightrope, how do you do that balance? So that's sort of controlling the inertia, controlling the momentum of the robot. We were starting to figure that out on Atlas. And so our first concept of Handle was a robot that was going to be on two wheels, so it had to balance, but it was going to have a big long arm so it could reach a box at the top of a truck. And it needed yet another counterbalance, a big tail, to help it balance while it was using its arm.
So the reason why this robot sort of looks epic, some people said it looked like an ostrich, or maybe an ostrich moving around, was the wheels, the legs; it has legs, so it can extend its legs. So it's wheels on legs. We always wanted to build wheels on legs. It had a tail and had this arm, and they're all moving simultaneously and in coordination to maintain balance, because we had figured out the mathematics of doing this momentum control, how to maintain that balance. And so part of the reason why we built this two-legged robot was that we had figured this thing out. We wanted to see it in this kind of machine, and we thought maybe this kind of machine would be good in a warehouse, and so we built it.
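To make the "mathematics of momentum control" slightly more concrete, here is a textbook-style statement of momentum-based balance (centroidal dynamics); this is a standard formulation from the literature, not necessarily the exact one used for Handle.

```latex
% c   : center-of-mass (CoM) position of the whole machine
% L   : angular momentum about the CoM ("centroidal" angular momentum)
% p_i : the i-th contact point (wheel or foot), f_i : the contact force there
\[
  m\,\ddot{c} \;=\; m\,g \;+\; \sum_i f_i ,
  \qquad
  \dot{L} \;=\; \sum_i \,(p_i - c)\times f_i .
\]
% A whole-body controller chooses joint motions (arm, tail, legs, wheels) so the
% resulting contact forces track desired CoM acceleration and momentum rate,
% which is why the tail and the long arm can swing in coordination to keep the
% machine upright while it reaches for a box.
```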
And it's a beautiful machine.
It moves in a graceful way, like nothing else we've built. But it wasn't the right machine for a logistics application. We decided it was too slow and couldn't pick boxes fast enough, basically. And it was doing it beautifully, elegantly, but it just wasn't efficient enough. So we let it go. Yeah. But I think we'll come back to that machine eventually. The fact that it's possible, the fact that you showed you could do so many things at the same time, in coordination, and so beautifully, there's something there. Yeah. That was a demonstration of what's possible.
Basically, we made a hard decision, and this was really kind of a hard-nosed business decision. It indicated us not doing it just for the beauty of the mathematics or the curiosity; no, we actually need to build a business that can make money in the long run. And so we ended up building Stretch, which has a big heavy base with a giant battery in it that allows it to run for two shifts, 16 hours of operation. And that big battery sort of helps it stay balanced, right?
So you can move a 50 pound box around with its arm
and not tip over.
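A quick static check shows why a heavy battery in a pallet-sized base makes this work; the base mass and reach below are guesses for illustration, with only the 50-pound payload and pallet-sized footprint taken from the conversation.

```python
def tipping_margin(base_mass_kg: float,
                   base_half_length_m: float,
                   payload_mass_kg: float,
                   arm_reach_m: float) -> float:
    """Static tipping check about the base edge nearest the payload.
    A positive result means the restoring moment exceeds the tipping moment,
    so the robot stays upright. All numbers below are illustrative assumptions,
    not actual Stretch specifications."""
    g = 9.81
    restoring = base_mass_kg * g * base_half_length_m                   # base weight at its center
    tipping = payload_mass_kg * g * (arm_reach_m - base_half_length_m)  # payload beyond the edge
    return restoring - tipping

# ~23 kg (50 lb) box held ~2 m out, with a guessed 600 kg base about a pallet (~1.2 m) long.
print(tipping_margin(base_mass_kg=600.0, base_half_length_m=0.6,
                     payload_mass_kg=23.0, arm_reach_m=2.0))  # large positive margin
```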
It's omnidirectional, it can move in any direction.
So it has a nice suspension built into it.
So it can deal with gaps or things on the floor
and roll over it.
But it's not a balancing robot.
It's a mobile robot arm that can carry, pick, or place a box of up to 50 pounds anywhere in the warehouse. Take a box from point A to point B, anywhere. Palletize, depalletize.
We're starting with unloading trucks, because there are so many trucks and containers in which goods are shipped.
And it's a brutal job.
You know, in the summer, it can be 120 degrees inside that container.
People don't want to do that job.
And it's back breaking labor, right?
Again, these can be up to 50 pound boxes.
And so we feel like this is a productivity enhancer.
And for the people who used to do that job unloading trucks,
they're actually operating the robot now.
By building robots that are easy to control,
and it doesn't take an advanced degree to manage,
you can become a robot operator.
As we've introduced these robots to DHL and Maersk and Gap, the warehouse workers who were doing that manual labor are now the robot operators.
And so we see this as ultimately a benefit to them as well.
Can you say how much stretch costs?
Not yet, but I will say that when we engage
with our customers, they'll be able to see
a return on investment in
typically two years.
Okay.
So that's something you're constantly thinking about.
Yeah.
And I suppose you have to do the same kind of thinking with Spot.
So it seems like with Stretch, the application is directly obvious.
Yeah.
It's a slam dunk.
Yeah.
And so you have a little more flexibility. Well, I think we know the target. We know what we're going after. Yeah. And with Spot, it took us a while to figure out what we
were going after. Well, let me return to that question about the conversation you were having a while ago with Larry Page, maybe looking to the longer future of social robotics, of using Spot to connect with human beings, perhaps in the home. Do you see a future there, if we were to hypothesize or dream about a future where Spot-like robots are in the home as pets, as a social robot? We definitely think about it, and we would like
to get there. We think the pathway to getting there is likely through these industrial applications and then mass
manufacturing. Let's figure out how to build the robots, how to make the software so that they can
really do a broad set of skills that's going to take real investment to get there. Performance first,
right? The principle of the company has always been really make the robots
do useful stuff. And so, you know, the social robot companies that tried to start someplace else, by just making a cute interaction, mostly they haven't survived. And so we think the
utility really needs to come first. And that means you have to solve some of these hard problems.
And so to get there, we're going to go through the design and software development in industrial
and then that's eventually going to let you reach a scale that could then be addressed
to a consumer level market.
And so yeah, maybe we'll be able to build a smaller Spot with an arm that could really go get your beer for you. But there are things we need to figure out still. How to do it safely, really safely. If you're going to be interacting with children, you'd better be safe. And right now we count on a little bit of standoff distance between the robot and people, so you don't pinch a finger in the robot.
So you've got a lot of things you need to go solve before you jump to that consumer level product.
Well, there's a kind of trade-off in safety, because it feels like in the home, you can fall. You don't have to be as good; you're allowed to fail in different ways,
in more ways, as long as it's safe for the humans.
So it just feels like an easier problem to solve,
because it feels like in the factory,
you're not allowed to fail.
That may be true, but I also think the variety of things,
a consumer level robot would be expected to do
will also be quite
broad.
It's going to have to get the beer and know the difference between the beer and the Coca-Cola or my snack. And they're going to want it to clean up the dishes from the table without breaking them.
Those are pretty complex tasks.
And so there's still work to be done. So to push back on that, here's an application I think will be very interesting: the application of being a pet, a friend. So, no tasks, just be cute. Well, not cute; a dog is more than just cute. A dog is a friend, a companion. There's something about just having interacted with them. And maybe it's because I'm hanging out alone with robot dogs a little too much, but there's a connection there, and it feels like that connection should not be disregarded. No, it should not be disregarded.
Robots that can somehow communicate through their physical gestures, you're going to be more attached to in the long run. Do you remember Aibo, the Sony Aibo? Yeah. They sold over a hundred thousand of those, maybe 150,000. It probably wasn't considered a successful product for them; they suspended it eventually and then they brought it back.
So they brought it back.
And people definitely treated this as a pet as a companion.
And I think that will come around again.
Can you get away without having any other utility? Maybe, in a world where we can really talk to our simple little pet, because, you know, ChatGPT or some other generative AI has made it possible for you to really talk in what seems like a meaningful way.
Maybe that'll open the social robot up again.
That's probably not a path we're gonna go down
because again, we're so focused on
performance and utility, we can add those other things also, but we really want to start
from that foundation of utility, I think.
But I also want to predict that you're wrong on that, which is that the very path you're taking, which is creating a great robot platform, will very easily take a leap to adding a ChatGPT-like capability, maybe GPT-5, and there are just so many open-source alternatives that you could just plop on top of Spot. Because you have this robot platform, and you're figuring out how to mass-manufacture it, how to drive the cost down, and how to make it reliable, all those kinds of things, it'll be a natural transition to just adding ChatGPT on top of it.
I do think that being able to verbally converse, or even converse through gestures, part of these learning models is that you can now look at video and imagery and associate intent with that. Those will all help in the communication between robots and people, for sure. And that's going to happen, obviously, more quickly than any of us were expecting.
I mean, what else do you want from life? A friend, a beer, and then just talking shit about the state of the world. I mean, there's a deep loneliness within all of us, and I think a beer and a good chat solves so much of it, or at least takes us a long way toward solving it.
It'll be interesting to see, you know, when a generative AI can give you that
warm feeling that you connected, you know, and that, oh yeah, you remember me, you're
my friend, you know, we have a history. You know, that history matters, right? Memory
of joy, like memory of having witnessed.
I mean, that's what friendship, that's what connection, that's what love is.
In many cases, some of the deepest friendships you have is having gone through a difficult
time together and having a shared memory of an amazing time or a difficult time and
kind of that memory creating this like foundation based on which you can then experience the world together.
The silly, the mundane stuff of day to day is somehow built on a foundation of having gone through some shit in the past.
And the current systems are not personalized in that way, but I think that's a technical problem,
not some kind of fundamental limitation.
So combine that with an embodied robot like Spot,
which already has magic in its movement.
I think it's a very interesting possibility of where that takes us.
But of course, you have to build that on top of a company that's making money with real applications, with real customers, and with robots that are safe, that work, that are reliable, and that are manufactured at scale. And I think we're in a unique position in that respect because of our investors, primarily Hyundai, but also SoftBank, which alone owns 20% of us.
They're not totally fixated on driving us to profitability as soon as possible.
That's not the goal.
The goal really is a longer-term vision of creating what does mobility mean in the future?
How is this mobile robot technology going to influence us?
Can we shape that?
And they want both.
And so we are, as a company,
we're trying to strike that balance between let's build a business that makes money.
I've been describing that to my own team as self-determination. If I want to drive my own ship, we need to have a business that's profitable in the end. Otherwise, somebody else is going
to drive the ship for us. So that's really important, but we're going to retain the aspiration
that we're going to build the next generation of technology at the same time. And the real trick
will be if we can do both. Speaking of ships, let me ask you about a competitor, and sometimes a competitor becomes a friend. Elon Musk and Tesla have announced that they're in the early days of building a humanoid robot. How does that change the landscape of your work?
So there's sort of from the outside perspective,
it seems like, well, as a fan of robotics,
it just seems exciting.
Very exciting, right?
When Elon speaks, people listen.
And so it suddenly brought a bright light onto the work that we've been doing for over a decade.
And I think that's only going to help.
And in fact, what we've seen is that in addition to Tesla, we're seeing a proliferation of
robotic companies arise now.
Including humanoid?
Yes.
Oh, yeah.
And interestingly, many of them, as they're raising money, for example, will cite whether or not they have a former Boston Dynamics employee on their staff as a criterion. Yeah, that's true. I would do that too, if I were that company. Yeah, for sure. It shows you're legit. Yeah, so it has brought a tremendous validation to what we're doing, and excitement. Competitive juices are flowing, you know, the whole thing. So it's all good.
You know, Elon also kind of stated, or maybe implied, that the problem is solvable in the near term: a low-cost humanoid robot that's a relatively general-use-case robot. And Elon is known for setting these kinds of incredibly ambitious goals, maybe missing deadlines, but actually pushing not just the particular team he leads, but the entire world, to accomplish those. Do you see Boston Dynamics in the near future being pushed in that kind of way, this excitement of competition kind of pushing Atlas to do more cool stuff, trying to drive the cost of Atlas down, perhaps? I guess I want to ask if there's some kind of exciting energy in Boston Dynamics due to this little bit of competition.
Oh, yeah, definitely.
When we released our most recent video of Atlas, I think you've seen it, on the scaffolding, throwing the bag of tools around and then doing the flip at the end. Yeah, we were trying to show the world that not only can we do this parkour mobility thing, but we can pick up and move heavy things.
Because if you're going to work in a manufacturing environment,
that's what you gotta be able to do.
And for the reasons I explained to you earlier,
it's not trivial to do so.
Changing the center of mass by picking up a 50-pound block, for a robot that weighs 150 pounds, that's a lot to accommodate.
So we're trying to show that we can do that.
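For a rough sense of why that is "a lot to accommodate": the combined center of mass shifts by a payload-weighted fraction of the reach. The half-meter reach assumed below is illustrative; only the 50-pound payload and 150-pound robot come from the conversation.

```latex
% Combined CoM of robot (mass $m_r$) plus payload (mass $m_p$) held a horizontal
% distance $d$ from the robot's own CoM:
\[
  \Delta x_{\mathrm{CoM}} \;=\; \frac{m_p}{m_r + m_p}\, d .
\]
% With $m_r = 150\,\mathrm{lb}$, $m_p = 50\,\mathrm{lb}$, and an assumed reach of
% $d \approx 0.5\,\mathrm{m}$, the combined CoM jumps by
% $\Delta x_{\mathrm{CoM}} = \tfrac{50}{200}\cdot 0.5\,\mathrm{m} \approx 12.5\,\mathrm{cm}$,
% which the balance controller must absorb the instant the block is picked up.
```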
And so it's totally been energizing.
We see the next phase of Atlas being more dexterous hands that can manipulate and grab more things. We're going to start by moving big things around that are heavy and that affect balance. And why is that? Well, really tiny, dexterous things are probably going to be hard for a while yet. And maybe you could go build a special-purpose robot arm for stuffing chips into electronic boards, but we don't really want to do really fine work like that. I think more coarse work, where you're using two hands to pick up and balance an unwieldy thing, maybe in a manufacturing environment, maybe in a construction environment. Those are the things that we think robots are going to be able to do with the level of dexterity that they're going to have in the next few years, and that's where we're headed.
And I think, and, you know, Elon has seen the same thing, right?
He's talking about using the robots in a manufacturing environment.
We think there's something very interesting there about having this, a two armed robot.
Because when you have two arms, you can transfer a thing from one hand to the other, you can turn it around, you can reorient it in a way that you can't if you just have one hand on it. And so there's a lot that extra arm brings to the table.
So I think in terms of mission, you mentioned that Boston Dynamics really wants to see what the limits of what's possible are. And so the cost comes second, or it's a component, but first figure out what the limits are. I think with Elon, he's really driving the cost down. Is there some inspiration, some lessons you see there, in the challenge of driving the cost down, especially with a humanoid robot?
Well, I think the thing that he's certainly been learning by building car factories is
what that looks like. By scaling, you can get
efficiencies that drive costs down very well. And the smart thing that they have in their
favor is that they know how to manufacture, they know how to build electric motors, they
know how to build computers and vision systems. So there's a lot of overlap between modern automotive companies and robots.
But hey, we have a modern automotive company behind us as well. And Hyundai is doing pretty well, right? The electric vehicles from Hyundai are doing pretty well. I love it.
So we've talked about some of the low-level controls, some of the incredible stuff that's going on there, and basic perception. But how much do you see, currently and in the future of Boston Dynamics, of more high-level machine learning applications? Do you see customers adding on those capabilities, or do you see Boston Dynamics doing that in-house?
Some kinds of things, we really believe, are probably going to be more broadly available, maybe even commoditized: using machine learning for something like a vision algorithm, so a robot can recognize something in the environment. That ought to be something you can just download. Like, I'm going to a new environment, I have a new kind of door handle or piece of equipment I want to inspect; you ought to be able to just download that. And I think people besides Boston Dynamics will provide that. We've actually built an API that lets people add these vision algorithms to Spot, and we're currently working with some partners who are providing that. Levatas is an example of a small provider who's giving us software for reading gauges. And another partner in Europe is doing the same thing. So we see that. We see it ultimately
an ecosystem of providers doing stuff like that. And I think ultimately, you might even be
able to do the same thing with behaviors. So this technology will also be brought to bear
on controlling the robot, the motions of the robot, and you know, we're using learning,
reinforcement learning to develop algorithms for both locomotion and manipulation.
And ultimately, this is going to mean you can add new behaviors to a robot quickly. And that could potentially be done outside of Boston Dynamics. Right now, that's all internal to us; I think you need to understand the robot control at a deep level to do that. But eventually that could be outside. It's certainly a place where these approaches are going to be brought to bear in robotics.
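As an illustration of the "downloadable vision algorithm" idea, here is a hypothetical plugin-style sketch; the interface and class names are invented for illustration and are not the actual API Boston Dynamics provides.

```python
from typing import Protocol

class VisionModel(Protocol):
    """Hypothetical interface a third-party provider would implement."""
    name: str
    def infer(self, image_bytes: bytes) -> dict: ...

class GaugeReader:
    """Illustrative stand-in for a partner-supplied gauge-reading model."""
    name = "analog_gauge_reader"
    def infer(self, image_bytes: bytes) -> dict:
        # A real provider would run a trained detector/regressor here.
        return {"gauge_value": 42.0, "units": "psi", "confidence": 0.93}

class InspectionRobot:
    """Hypothetical robot-side registry: vision capabilities are attached by name."""
    def __init__(self) -> None:
        self._models: dict[str, VisionModel] = {}

    def register_model(self, model: VisionModel) -> None:
        self._models[model.name] = model

    def inspect(self, model_name: str, image_bytes: bytes) -> dict:
        return self._models[model_name].infer(image_bytes)

robot = InspectionRobot()
robot.register_model(GaugeReader())
print(robot.inspect("analog_gauge_reader", image_bytes=b"\x00" * 10))
```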
So reinforcement learning is part of the process; you do use reinforcement learning? Yes. So there's increasing levels of learning with these robots? Yes. And that's for locomotion, for manipulation, and for perception? Yes. Well, what do you think in general about all the exciting advancements of transformer neural networks, most beautifully illustrated through the large language models like GPT-4?
Like everybody else, we're all surprised at how far they've come.
I'm a little bit nervous about the anxiety around them, obviously, for I think good reasons.
Disinformation is a curse that's an unintended consequence of social media that could be exacerbated
with these tools.
So, if you use them to deploy disinformation, it could be a real risk.
But I also think that the risks associated with these kinds of models don't have a whole
lot to do with the way we're going to use them in our robots. If I'm using a robot,
I'm building a robot to do, you know, a manual task of some sort. I can judge very easily: is it doing the task I asked it to? Is it doing it correctly? There's sort of a built-in mechanism for judging whether it's doing the right thing. Did it successfully do the task?
Yeah, physical reality is a good verifier.
It's a good verifier.
That's exactly it.
Whereas if you're asking for, yeah, I don't know,
you're trying to ask a theoretical question in chat GPT,
it could be true or it may not be true,
and it's hard to have that verifier.
What is that truth that you're comparing
against? Whereas in physical reality, you know the truth. And this is an important difference.
So I think there is reason to be a little bit concerned about how these tools, these large language models, could be used, but I'm not very worried about how they, or learning algorithms in general, are going to be used in robotics. It's really a different application that has different ways of verifying what's going on.
Well, the nice thing about language models is, I'm ultimately really excited about the possibility of having conversations with Spot. Yeah, there are no, I would say, negative consequences to that; it's just increasing the bandwidth and the variety of ways you can communicate with this particular robot. Yeah, so you can communicate visually, you can communicate through some interface, and to be able to communicate verbally, again, about getting the beer and so on, I think that's really exciting, to make that much, much easier. We have this partner, Levatas, that's adding the vision algorithms for gauge reading for us. Just this week, I saw a demo where they hooked up a language tool to Spot, and they're talking to Spot to get it to read a gauge.
Can you tell me about the Boston Dynamics AI Institute? What is it, and what is its mission? So it's a separate organization, the Boston Dynamics Artificial Intelligence Institute, and it's led by Marc Raibert, the founder of Boston Dynamics, the former CEO, and my old advisor at MIT. Marc has always loved the research, the pure research, without the confinement or demands of commercialization. And he wanted to continue to pursue that unadulterated research. So he suggested to Hyundai that they set up this institute, and they agreed that it's worth additional investment to continue pushing this frontier. And we expect to be working together, with Boston Dynamics again doing both commercialization and research.
But the sort of time horizon of the research we're gonna do
is in the next, let's say five years.
What can we do in the next five years?
Let's work on those problems.
And I think the goal of the AI Institute
is to work even further out.
Certainly, the analogy of legged locomotion again: when we started that, that was a multi-decade problem.
And so I think Mark wants to have the freedom
to pursue really hard over the horizon problems.
And that'll be the goal of the Institute.
So we mentioned some of the dangers of,
some of the concerns about large language models
That said, there's been a long-running fear of these embodied robots. Why do you think people are afraid of legged robots?
Yeah, I wanted to show you this.
So this is a Wall Street Journal, and this is all about chat GPT, right?
But look at the picture. Yeah.
It's a humanoid robot. It looks scary, and it's saying, I'm going to replace you. And so the humanoid robot is sort of the embodiment of this ChatGPT tool, which there's reason to be a little bit nervous about, in terms of how it's deployed. So I'm nervous about that connection. It's unfortunate that they chose to use a robot as that embodiment.
For, as you and I just said, there's big differences in this.
But people are afraid because we've then taught to be afraid for over 100 years.
So, you know, the word robot was coined by a playwright named Karel Čapek in 1921, a Czech playwright, in Rossum's Universal Robots. And in that first depiction of a robot, the robots took over by the end of the story.
And, you know, people love to be afraid.
And so we've been entertained by these stories
for a hundred years.
And I think that's as much why people are afraid as anything else: we've been sort of taught, through fiction, that this is the logical progression.
I think people more and more will realize, just like you said, that the threat, say you have a super-intelligent AI embodied in a robot, is much less threatening, because it's visible, it's verifiable, it's right there in physical reality, and we humans know how to deal with physical reality. I think it's much scarier when you have arbitrary scaling of intelligent AI systems in the digital space, where they could pretend to be human. The robot Spot is not going to pretend it's human. You could put ChatGPT on top of it, but you're going to know it's not human, because you have contact with physical reality, and you're going to know whether or not it's doing what you asked it to do. Yeah. I mean, sure, it can try to lie to you, just like a dog lies to you: I wasn't part of tearing up that couch. It can try to lie that it wasn't me, but you're going to figure it out eventually, especially if it happens multiple times.
But I think that humanity has figured out how to make machines safe.
And there's regulatory environments and certification protocols that we've developed in order to figure
out how to make machines safe. We don't know and don't have that experience with software that can be propagated worldwide
in an instant.
So I think we need to develop those protocols and those tools.
And so that's work to be done, but I don't think the fear of that and that work should
necessarily impede our
ability to now get robots out.
Because again, I think we can judge when a robot's being safe.
So again, just like in that image, there's a fear that robots will take our jobs.
I just, I took a ride, I was in San Francisco, I took a ride in the Waymo vehicles and
the autonomous vehicle.
And I was on several times that they're doing
incredible work over there. But people flick that off.
Right.
They're not the car. So I mean, that's a long story of what the psychology of that is.
It could be maybe big tech or what I don't know exactly what they're flicking off.
Yeah. But there is an element of fear that these robots are taking our jobs,
or irreversibly transforming society such that it will have
economic impact and the little guy will lose a lot,
will lose their well-being.
Is there something to be said about the fear
that robots will take our jobs? You know, at every significant technological transformation,
there's been a kind of automation anxiety,
a fear that it's going to have a broader impact than we expected.
And jobs will change.
Sometime in the future, we're going to look back at people who manually unloaded these boxes from trailers,
and we're going to say, why did we ever do that manually?
But there are a lot of people doing that job today who could be impacted.
But I think the reality is, as I said before, we're going to build the technologies.
Those very same people can operate it.
And so I think there's a pathway to upskilling and operating it. Just like, look, we used to farm
with hand tools.
And now we farm with machines.
And nobody has really regretted that transformation.
And I think the same can be said for a lot of manual labor that we're doing today.
And on top of that, you know, look, we're entering a new world where demographics are going to have a strong impact on economic growth.
The advanced world, the first world, is losing population quickly. In Europe, they're worried about hiring enough people
just to keep the logistics supply chain going.
And part of this is the response to COVID,
with everybody sort of rethinking
what they really want to do with their life.
But these jobs are getting harder and harder to fill.
And I'm hearing that over and over again.
So I think frankly, this is the right technology
at the right time, where we're gonna need some of this work
to be done and we're gonna want tools
to enhance that productivity.
And as for the scary impact, I think, again,
GPT comes to the rescue in terms of being much more terrifying.
I guess I'm a software person, so I program a lot, and
the fact that people like me could be easily replaced by GPT, that's going to have an impact. Well, and a lot of people, you know. Anyone who deals with text. Writing a draft proposal might be easily done with
ChatGPT now.
Consultants.
Journalists, yeah.
Everybody. But on the other hand, you also want it to be right, and
they don't know how to make it right yet.
But it might give you a good starting point to iterate from.
Boy, do I have to talk to you about modern journalism?
That's another conversation altogether.
But yes, more right than the average, than the mean journalist. Yes.
You spearheaded the anti-weaponization letter that Boston Dynamics put out. Can you
describe what that letter states and the general topic of the use of robots in war?
We authored a letter and then got several leading robotics companies around the world, including, you know, Unitree in China, Agility here in the United States,
ANYbotics in Europe, and some others, to co-sign a letter that said, we won't put weapons
on our robots. And part of the motivation there is, you know, as these robots start to become
commercially available, you can see videos online of people who've gotten a robot and strapped
a gun on it and shown that they can operate the gun remotely
while driving the robot around.
And so having a robot that has this level of mobility and that can easily be configured
in a way that could harm somebody from a remote operator is justifiably a scary thing.
And so we felt like it was important to draw a bright line there and say, we're not going
to allow this.
The reason is that we think it's ultimately better for the whole industry if it grows in
a way where robots are ultimately going to help us all and make our lives more fulfilled
and productive. But by goodness, you're going
to have to trust the technology to let it in.
And if you think the robot's going to harm you, that's going to hurt and impede the growth
of that industry.
So we thought it was important to draw a bright line and then publicize that. And our plan is to begin to engage with lawmakers and regulators.
Let's figure out what the rules are going to be around the use of this technology.
And use our position as leaders in this industry and technology to help force that issue.
And so we are. In fact, I have a policy director at my company whose job it is to engage with
the public, with interested parties, including regulators, to begin these discussions.
Yes, it's a really important topic, and it's an important topic for people that worry about
the impact of robots on our society through autonomous weapons systems.
So I'm glad you're sort of leading the way on this.
You are the CEO of Boston Dynamics.
What does it take to be the CEO of a robotics company? You started as a humble engineer,
a PhD,
just looking at your journey. What does it take to go
from building the thing to leading a company? What are some of the big challenges for you? Courage, I would put
front and center for multiple reasons. I talked earlier about the courage to
tackle hard problems. So I think there's courage required not just of me but of
all of the people who work at Boston Dynamics. I also think we have a lot of
really smart people. We have people who are
way smarter than I am. And it takes a kind of courage to be willing to lead them. And to trust that
you have something to offer to somebody who probably is maybe a better engineer than I am.
Adaptability. It's been a great career for me. I never
would have guessed I'd stay in one place for 30 years. And the job has always changed. I didn't really aspire to be CEO from the very beginning, but it was the natural progression of
things. There was always some level of management that was needed. And so, you know, when
I saw something that needed to be done that wasn't being done, I just stepped in to go
do it. And oftentimes, because we were full of such strong engineers, oftentimes that
was in the management direction, or it was in the business development direction or
organizational hiring. Geez, I was the main person hiring at Boston Dynamics for probably 20 years.
So I was the head of HR basically.
So, you know, just a willingness to sort of tackle any piece of the business that needs it, and then be willing to shift. Is there something you could say about what it takes to hire a great team? What's a good
interview process? How do you know if a guy or gal is going to make a great member of
an engineering team that's doing some of the hardest work in the world?
We developed an interview process that I was quite fond of.
It's a little bit of a hard interview process, because in the best interviews you ask somebody
about what they're interested in and what they're good at.
And if they can describe to you something that they worked on and you saw they really did
the work, they solved the problems and you saw their passion for it.
And you can ask about that.
But what makes that hard is you have to ask a probing question about it.
You have to be smart enough about what they're telling you, where they're the expert, to ask a good question.
And so it takes a pretty talented team to do that.
But if you can do that,
that's how you tap into,
ah, this person cares about their work.
They really did the work.
They're excited about it.
That's the kind of person I want at my company.
At Google, they taught us about their interview process.
And it was a little bit different.
You know, we evolved the process at Boston Dynamics, where it didn't matter if you were an engineer,
or an administrative assistant, or a financial person, or a technician.
You gave us a presentation.
You came in and you gave us a presentation.
You had to stand up and talk in front of us.
And I just thought that was great to tap into those things I just described to you.
At Google, they taught us, and I think I understand why, right?
They're hiring tens of thousands of people.
They need a more standardized process.
So they would sort of err on the other side where they would ask you a standard question.
I'm going to ask you a programming question, and I'm just going to ask you to, you know, write code in front of me.
That's a terrifying, you know, application process. Yeah.
It does let you compare candidates really well, but it doesn't necessarily let you tap into
who they are, right? Because you're asking them to answer your question instead of asking them about what they're interested in.
But frankly, that process is hard to scale.
And even at Boston Dynamics,
we're not doing that with everybody anymore.
But we are still doing that with the technical people.
But because we too,
now need to sort of increase our rate of hiring, not everybody's giving
a presentation anymore.
But you're still ultimately trying to find that basic seed of passion for the work.
Did they really do it?
Did they find something interesting or curious, you know, and do they care about it? Somebody I admire is Jim Keller,
and he likes details.
So one of the tests,
if you get a person to talk about what they're interested in,
is how many details,
like how much of the whiteboard can you fill out?
Yeah, well, I think you figure out,
did they really do the work if they know some of the details?
Yes. And if they have to wash over the details, well, then they didn't really do it.
Especially with engineering, the work is in the details.
I have to go there briefly just to get your thoughts on the long-term future of robotics.
There have been discussions on the GPT side, on the large language model
side, of whether there's consciousness inside these language models. And I think there's
fear, but I think there's also excitement, or at least a wide world of opportunity and possibility, in embodied robots having something like, let's start with
emotion, love towards other human beings, and perhaps the display, real or fake, of consciousness.
Is this something you think about in terms of the long-term future? Because, as we've talked about, people do anthropomorphize
these robots. It's difficult not to project some level of, I used the word, sentience, some
level of sovereignty, identity, all the things we think of as human. That's what anthropomorphization is: we project humanness onto mobile, especially legged, robots. Is that something, almost from a science
fiction perspective, that you think about? Or do you try to avoid the topic
of consciousness altogether? I'm certainly not an expert in it, and
I don't spend a lot of time thinking
about this, right? And I do think it's fairly remote for the machines that we're dealing
with. Our robots, you're right, people anthropomorphize them. They read into the robots
an intelligence and emotion that isn't there, because they see physical gestures that are similar to things
they might even see in people or animals.
I don't know much about how these large language models really work.
I believe it's a kind of statistical averaging of the most common responses, you know, to
a series of words, right? It's sort of a very elaborate word completion.
And I'm dubious that has anything to do with consciousness. And I even wonder about that model of
sort of simulating consciousness by stringing words together that are statistically
associated with one another, whether that kind of knowledge, if you want to call it knowledge,
would be the kind of knowledge that allows a sentient being to grow or evolve. It feels to me like there's something about truth or emotions that's just a very
different kind of knowledge, that is absolute. The interesting thing about truth is it's absolute,
and it doesn't matter how frequently it's represented on the worldwide web. If you know it to be true,
it may only be there once, but my God, it's true.
And I think emotions are a little bit like that too. You know something, you know. And I just think that's a different kind of
knowledge than the kind these large language models derive, or sort of simulate.
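A minimal sketch of that "elaborate word completion" idea, for readers who want to see it concretely: greedy next-token prediction with an off-the-shelf causal language model via the Hugging Face Transformers library. The model name, prompt, and 20-token loop below are illustrative assumptions, not details from the conversation.

```python
# Illustrative sketch: a language model repeatedly predicts the statistically
# most likely next token, which is the "word completion" loop described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # small public model, chosen only for illustration
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The robot walked into the room and"
ids = tokenizer(prompt, return_tensors="pt").input_ids   # token ids, shape [1, seq_len]

for _ in range(20):                                  # extend the prompt by 20 tokens, one at a time
    with torch.no_grad():
        logits = model(ids).logits                   # scores over the vocabulary at each position
    next_id = logits[0, -1].argmax()                 # greedily pick the most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))                      # the prompt plus its statistical completion
```

Production chat systems layer sampling temperature, instruction tuning, and human feedback on top of this loop, but the core operation is still next-token prediction.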
It does seem like things that are true very well might be statistically well represented on the internet, because the internet is made up of humans.
So I tend to suspect that large language models are going to be able to simulate consciousness
very effectively.
And I actually believe that the current GPT-4, when fine-tuned correctly, would be able to do just that.
And there are going to be a lot of very complicated ethical questions
that have to be dealt with.
They have nothing to do with robotics, and everything to do with...
There needs to be some process of labeling, I think, what is true.
Because there is also disinformation available on the web, and these
models are going to consider that kind of information as well. And again, you can't average
something that's true and something that's untrue and get something that's moderately true.
It's either right or it's wrong. So how does that process work?
And this is obviously something that the purveyors of these tools, Bard and ChatGPT, are working on, I'm sure.
Well, if you interact with these models on some controversial topics, they're actually
refreshingly nuanced.
Because you realize there's no one truth, you know.
What caused the war in Ukraine, right? Any geopolitical conflict, you can ask any kind of question,
especially the ones that are politically tense, divisive, and so on. GPT is very good at
presenting the different hypotheses.
It presents calmly the amount of evidence for each one.
It's really refreshing.
It makes you realize that truth is nuanced,
and it does that well.
And I think with consciousness, it would very accurately say, well, it sure feels like
I'm one of you humans, but where's my body?
I don't understand.
It's going to be confused.
The cool thing about GPT is it seems to be easily confused in the way we are.
You wake up in a new room and you ask,
where am I?
It seems to be able to do that extremely well.
It'll tell you one thing, like a fact about
when a war started.
And when you correct it and say,
well, this is not consistent,
it'll be confused.
It'll say, yeah, you're right.
It'll have that same child-like element,
with humility,
trying to figure out its way in the world.
And I think that's a really tricky area
for us humans to sort of figure out:
what do we want to allow AI systems to say to us?
Because if there are elements of sentience
on display, you can then start to manipulate
human emotion, all that kind of stuff.
But I think that's something that's a really serious
and aggressive discussion that needs to be had
on the software side.
I think, again, embodiment, robotics, is actually
saving us from the arbitrary scaling of software systems, versus creating more problems.
But I really believe in that connection between human and robot. There's magic there,
and I think
there's also a lot of money to be made there, and Boston Dynamics is leading the world in
the most elegant movement done by robots.
So I can't wait to see what comes next, whether from other people that build on top of Boston Dynamics
robots or from Boston Dynamics itself.
So you've had one wild career, in one place, with one set of problems, but incredibly successful.
Can you give advice to young folks today, in high school, maybe in college, looking out at this
future where so much of robotics and AI seems to be defining the trajectory of human civilization? Can you give them advice on
how to have a career they can be proud of, or how to have a life they can be proud of?
Well, I would say,
you know, follow your heart and your interest. Again, this was an organizing principle, I think, behind
the Leg Lab at MIT that turned
into a value at Boston Dynamics, which was: follow your curiosity, love what you're doing.
You'll have a lot more fun and you'll be a lot better at it as a result. I think it's hard to plan, you know, don't get too hung up on planning
too far ahead, find things that you like doing and then see where it takes you. You can always
change direction. You will find things where, you know, that wasn't a good move, I'm going
to back up and go do something else. So when people are trying to plan a career, I always
feel like, yeah, there are
a few happy mistakes that happen along the way, and you just live with that, you know,
but you make choices then. So avail yourself of these interesting opportunities, like when I happened
to run into Marc down in the basement of the AI lab, but be willing to make a decision
and then pivot if you see something exciting to go at,
you know, because if you're out and about enough, you'll find things like that that get
you excited.
So there was a feeling, when you first met Marc and saw the robots, that there was something
interesting.
Oh boy, I've got to go do this.
There is no doubt.
What do you think Boston Dynamics is doing in a hundred years?
Even bigger,
what do you think is the role of robots in society?
Do you think we'll be seeing billions of robots everywhere?
Do you think about that long-term vision? Well, I do think that
the robots will be ubiquitous and they will be out amongst us.
And they'll certainly be doing
some of the hard labor that we do today.
But I don't think people don't want to work.
People want to work.
People need to work to, I think, feel productive.
We don't want to offload all of the work to the robots because I'm not sure if people
would know what to do with themselves.
And I think just self-satisfaction and feeling productive is such an ingrained part of being
human that we need to keep doing
this work.
So we're definitely going to have to work in a complementary fashion.
And I hope that the robots and the computers don't end up being able to do all the creative
work, right?
Because that's the part that's rewarding.
The creative part of solving a problem is the thing that gives you that serotonin rush, or that adrenaline rush, that you never forget.
And so, you know, people need to be able to do that creative work and just feel productive. And sometimes
you can feel productive over fairly simple work that's just well done, you know, and where you can see the result.
So, you know, there's a cartoon,
was it WALL-E, where they had this big ship
and all the people were just overweight,
lying on their lounge chairs,
kind of sliding around on the deck of the ship because they didn't do anything anymore.
Well, we definitely don't want to be there.
You know, we need to work in some complementary fashion where we keep all of our faculties and our physical health and we're doing some labor, right, but in a complementary fashion somehow.
And I think a lot of that has to do with the interaction, the collaboration with robots and with the
AI systems. I'm hoping there are a lot of interesting possibilities. I think that could be really cool,
right? If robots can work in an interaction and really be helpful, you can ask a robot to
do a job you wouldn't ask a person to do, and that would be a real asset. You wouldn't feel guilty
about it, you know? You'd say, just do it, it's a machine, and I don't have to have qualms about that, you know.
On the subject of the ones that are machines, I also hope to see a future, and it is hope, I do have optimism
about the future, where some of the robots are pets, that have an emotional connection to us humans.
Because one of the problems that humans have to solve is
this kind of general loneliness.
The more love you have in your life, the more friends you have in your life,
I think that makes for a more enriching life and helps you grow.
And I don't fundamentally see why some of those friends can't be robots.
There's an interesting long-running study, maybe it's at Harvard.
There was a nice article written about it recently. They've been studying this group of a few
thousand people now for 70 or 80 years. And the conclusion is that companionship and friendship
are the things that make for a better and happier life. And so I agree with you.
And I think that could happen with a machine
that is probably, you know, simulating intelligence.
I'm not convinced there will ever be true intelligence
in these machines, sentience, but they could simulate it
and they could collect your history.
And, you know, I guess it remains to be seen whether they can establish that
real, deep connection.
You know, when you sit with a friend and they remember something about you and bring that
up and you feel that connection, it remains to be seen if a machine is going to be able
to do that for you.
Well, I have to say, things like that have already started happening for me. Some of my best friends are robots.
And I have you to thank for leading the way in the
accessibility and the ease of use of such robots, and the elegance of their movement.
Robert, you're an incredible person. Boston Dynamics is an incredible company.
I've just been a fan for many, many years, for everything you stand for, for everything you do in the world.
If you're interested in great engineering and robotics, go join them, build
cool stuff. I'll forever celebrate the work you're doing. And it's just a big honor that
you would sit with me today and talk. It means a lot. So thank you so much. You're doing
great work. Thank you, Lex. I'm honored to be here, and I appreciate it. It was fun.
Thanks for listening to this conversation with Robert Playter. To support this podcast, please check out our sponsors in the description.
And now, let me leave you with some words from Alan Turing in 1950
defining what is now termed the Turing Test.
A computer would deserve to be called intelligent
if it could deceive a human into believing that it was human.
Thank you for listening and hope to see you next time.