Lex Fridman Podcast - #208 – Jeff Hawkins: The Thousand Brains Theory of Intelligence
Episode Date: August 8, 2021

Jeff Hawkins is a neuroscientist and cofounder of Numenta, a neuroscience research company. Please support this podcast by checking out our sponsors:
- Codecademy: https://codecademy.com and use code LEX to get 15% off
- BiOptimizers: http://www.magbreakthrough.com/lex to get 10% off
- ExpressVPN: https://expressvpn.com/lexpod and use code LexPod to get 3 months free
- Eight Sleep: https://www.eightsleep.com/lex and use code LEX to get special savings
- Blinkist: https://blinkist.com/lex and use code LEX to get 25% off premium

EPISODE LINKS:
A Thousand Brains (book): https://amzn.to/3AmxJt7
Numenta's Twitter: https://twitter.com/Numenta
Numenta's Website: https://numenta.com/

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(10:35) - Collective intelligence
(17:17) - The origin of intelligence in the human brain
(30:31) - How intelligent life evolved on Earth
(41:30) - Why humans are special in the universe
(44:48) - Neurons
(49:02) - A Thousand Brains theory of intelligence
(57:42) - How to build superintelligent AI
(1:15:41) - Sam Harris and existential risk of AI
(1:27:43) - Neuralink
(1:34:34) - Will AI prevent the self-destruction of human civilization?
(1:40:05) - Communicating human knowledge to alien civilizations
(1:50:22) - Devil's advocate
(1:55:16) - Human nature
(2:03:39) - Hardware for AI
(2:10:18) - Advice for young people
Transcript
The following is a conversation with Jeff Hawkins, a neuroscientist seeking to understand
the structure, function, and origin of intelligence in the human brain. He previously wrote the
seminal book on the subject titled On Intelligence, and recently a new book called A Thousand Brains,
which presents a new theory of intelligence that Richard Dawkins, for example, has been raving about, calling the book
quote, brilliant and exhilarating. I can't read those two words and not think of him saying it
in his British accent. Quick mention of our sponsors: Codecademy, BiOptimizers, ExpressVPN,
Eight Sleep, and Blinkist. Check them out in the description to support this podcast.
As a side note, let me say that one small but powerful idea that Jeff Hawkins mentions
in his new book is that if human civilization were to destroy itself, all of our knowledge,
all our creations, will go with us. He proposes that we should think about how to save that
knowledge in a way that long outlives us, whether that's on Earth, in orbit around Earth, or in deep space. And then,
as part of the same message, advertise this backup of human knowledge to other intelligent
alien civilizations. The main message of this advertisement is not that we are here,
but that we were once here. This little difference somehow was deeply humbling to me,
that we may, with some non-zero likelihood, destroy ourselves, and that an alien civilization,
thousands or millions of years from now, may come across this knowledge store, and they would only
with some low probability even notice it, not to mention be able to interpret it.
And the deeper question here for me is what information in all of human knowledge is even essential?
Does Wikipedia capture it, or not at all? This thought experiment forces me to wonder: what are the
things we've accomplished and are hoping to still accomplish that will outlive us? Is it things like complex buildings, bridges, cars, rockets?
Is it ideas like science, physics, mathematics?
Is it music and art?
Is it computers, computational systems or even artificial intelligence systems?
I personally can't imagine that aliens wouldn't already have all of these things.
In fact, much more and much better.
To me, the only unique thing we may have is consciousness itself,
and the actual subjective experience of suffering, of happiness, of hatred, of love.
If we can record these experiences in the highest resolution directly from the human brain,
such that aliens would be able to replay them, that is what we should store and send as a message.
Not Wikipedia, but the extremes of conscious experiences, the most important of which,
of course, is love.
As usual, I'll do a few minutes of ads now.
I try to make these interesting, but I give you time stamps, so if you skip,
please still check out the sponsors by clicking the links in the description. It is actually
the best way to support this podcast. I'm very fortunate to be able to be picky: way more
sponsors want to sponsor this podcast than we can take on, so we only take on the
good ones. So hopefully, if you buy their stuff, you'll find value in it just as I have. This show is brought to you by a new amazing sponsor called
Codecademy. It's a website I highly recommend you go to if you want to learn to
code. It doesn't matter if you're totally new or somewhat experienced there's a
course there for you. If you haven't programmed before and are curious how to dive in,
I would highly recommend you sign up and take their Learn Python 3 course. They say it takes 25
hours to complete, but it is so clear and accessible, I would say even fun, that that time
is going to just fly by. They're very selective with the kind of content they present to you,
is going to just fly by. They're very selective with the kind of content they present to you,
and they focus on the most important basics.
Honestly, if you want to learn to program, I think Python is the right programming language
to start with.
If you want to learn Python, I think you should use Codecademy.
Their Learn Python 3 course I highly recommend.
Anyway, get 15% off your Codecademy Pro membership when you go to codecademy.com and use
promo code Lex.
By the way, that's spelled Codecademy.
It's not Code Academy.
There's no A. It's C-O-D-E-C-A-D-E-M-Y, Codecademy.
That's promo code Lex at codecademy.com.
And you get 15% off of their pro membership.
The next sponsor, also a new one, is BiOptimizers.
They have a new Magnesium Breakthrough supplement that they want me to tell you about, and it is
in fact the supplement I use.
When I fast, or when I'm doing keto or carnivore,
which is what I've been doing a lot for the past several years now actually,
Getting the electrolytes is really important.
And that means sodium, potassium, and magnesium.
Those are essential.
The amount and the type is also very important.
And I think magnesium is the tricky one.
It's difficult to get it right.
That's why I use the Magnesium
Breakthrough supplement from BiOptimizers. Most supplements contain only one or two forms of
magnesium, like glycinate or citrate, when in reality there are at least seven that your body needs
and benefits from, and of course the Magnesium Breakthrough supplement has all of them. You should definitely go to their website to check out all the different benefits.
That's magbreakthrough.com slash Lex.
If you go there, you get a special discount.
That's mag, as in M-A-G, breakthrough.com slash Lex.
This show is also sponsored by ExpressVPN.
I use them to protect my privacy on the internet. A lot of you probably go to a bunch of shady websites.
So I should probably let you know that when you browse using incognito mode, that doesn't actually protect you.
Your ISP, like Comcast or Verizon, still knows every single website you visit.
And ISPs can sell this information legally to ad companies and tech giants who then use
your data to target you.
I think there's a lot of ways in which this can go wrong from the book Brave New World
to the book 1984.
And I think the way to resist that is to have tools that allow you to regain control
over your data. A VPN, I think, is the most essential, the most basic tool everybody should
be using. And my favorite VPN is ExpressVPN. I've been using it for many years. It started out with
a sexy red button. It still has a big power-on button; it's no longer red, but it's still sexy.
Anyway, go to ExpressVPN.com slash LexPod to get an extra three months free. That's ExpressVPN.com slash LexPod.
This episode is also brought to you by Eight Sleep and its Pod Pro mattress. It controls temperature with an app, is packed with sensors, and can cool down to as low
as 65 degrees on each side of the bed separately.
Matt Walker, the sleep scientist, my new friend, just started a podcast you should definitely
check out.
He also did a podcast with Andrew Huberman, and he also did a conversation with me.
I won't use this ad or the conversation with Matt to try to tease apart my philosophy
on the pursuit of happiness in life.
But let me say that once I do get into bed,
I love a cold bed with a warm blanket, and that is something Eight Sleep provides, and it makes me look forward to naps.
It makes me look forward to sleep, and I truly enjoy it.
A good night's sleep in a cold bed is one of the best rewards in life after some hard-fought battles during a productive day.
Anyway, they have a Pod Pro cover, so you can just add that to your mattress without having
to buy theirs, but their mattress is nice too.
Just so you know, it can track a bunch of metrics like heart rate variability, but cooling
alone is honestly worth the money.
Go to 8Sleep.com slash Lex to get special savings.
That's 8Sleep.com slash Lex.
This episode is supported by Blinkist.
My favorite app for learning new things.
Blinkist takes the key ideas from thousands of nonfiction books and condenses them down
into just 15 minutes that you can read or listen to.
I use it three ways.
One, to pick which books I want to read next in full. Two, to review books I've already
read, sort of to give me a very clean summary of what the book was about.
And three, sometimes there are too many amazing books
in this world that you never get a chance to read,
and it's good to sort of build up an intuition,
a high-level understanding of the key insights
in the book.
I've actually been thinking about doing a project where I read a book a day and maybe make a video on each of those
books, especially powerful books that mean a lot to me, some of which I've
already read, so rereading them, which I enjoy doing, and some of which I've always
wanted to read and haven't gotten a chance. Part of me wants to return to that place where I read basically full time for like a
few weeks.
I think that's a fascinating experiment I'd like to take on.
Anyway, go to blinkist.com slash lex to start your free 7-day trial and get 25% off of
a Blinkist Premium membership. That's blinkist.com slash lex, spelled B-L-I-N-K-I-S-T, blinkist.com slash
lex.
This is the Lex Fridman Podcast, and here is my conversation with Jeff Hawkins. We previously talked over two years ago. Do you think there are still neurons in your brain
that remember that conversation, that remember me, and got excited?
Like there's a Lex neuron in your brain that just like finally has a purpose.
I do remember our conversation or I have some memories of it and I formed additional
memories of you in the meantime.
I wouldn't say there's a neuron or neurons in my brain that know you. There are synapses
in my brain that have formed that reflect my
knowledge of you and the model of you and the world.
Whether the exact same synapses were formed two years ago, it's hard to say, because these
things come and go all the time.
But one thing we know about brains is that when you think of things, you
often erase the memory and rewrite it again.
So yes, I have a memory of you, and that's instantiated in synapses. There's a simple way to think about it. We have a model
of the world in our heads, and that model is continually being updated. I updated it this
morning. You offered me this water and said it was from the refrigerator. I remember
these things. And so the model includes where we live, the places we know, the words,
the objects in the world.
But it's just one monstrous model, and it's constantly being updated, and people are just part of that model.
So are animals, so are other physical objects, so are events we've done.
So there's no special place in my mind for memories of humans.
I mean, obviously, I know a lot about my wife, and my friends,
and so on, but it's not like there's a special place for humans over here. We model everything,
and we model other people's behaviors too. So if I have a copy of your mind in my mind,
it's just because I've learned how humans behave, and I've learned something about you, and that's part of my world model.
Well, I also mean the collective intelligence of the human species. I wonder if there's something fundamental to the brain that enables that.
So, modeling other humans and their ideas. You're actually jumping into a lot of big topics.
Our collective intelligence is a separate topic that a lot of people like to talk about.
We can talk about that.
But, so that's interesting.
We're not just individuals.
We live in society and so on.
But from our research point of view, and again, let's just talk about what we study:
we study the neocortex.
It's a sheet of neural tissue.
It's about 75% of your brain.
It runs on this very repetitive algorithm.
It's a very repetitive circuit.
And so you can apply that algorithm to lots of different problems,
but underneath it's all the same thing.
We're just building this model.
So from our point of view, we wouldn't
look for these special circuits someplace buried
in your brain that might be related to, you know, understanding other humans. It's more like,
how do we build a model of anything? How do we understand anything in the world?
And humans are just another part of the things we understand.
So there's nothing in the brain that knows the emergent phenomenon
of collective intelligence? Well, I certainly know about that. I've heard the terms, I've read about it.
No, but that's right.
Well, okay, right.
It's an idea.
Well, I think we have language,
which is sort of built into our brains,
and that's a key part of collective intelligence.
So there are some prior assumptions about the world
we're gonna live in.
When we're born, we're not just a blank slate.
And so, did we evolve to take advantage
of those situations? Yes.
But again, we study only part of the brain,
the neocortex.
There are other parts of the brain
that are very much involved in societal interactions
and human emotions and how we interact
and even societal issues about, you know,
how we interact with other people,
when we support them, when we're
greedy, and things like that.
I mean, certainly the brain is a great place to study intelligence.
I wonder if it's the fundamental atom of intelligence.
Well, I would say it's absolutely an essential component, even if you believe
in collective intelligence as, hey, that's where it's all happening,
that's what we need to study. Which I don't believe, by the way.
I think it's really important, but I don't think that is the thing.
But even if you do believe that, then you have to understand how the brain works in doing
that.
It's more like we are intelligent individuals, and together our intelligence is
magnified much more. We can do things we couldn't do individually. But even as individuals we're
pretty damn smart and we can model things and understand the world and interact with
it. So to me if you're going to start someplace you need to start with the brain. Then you
could say, well, how do brains interact with each other? And what is the nature of language?
And how do we share models? I've learned something
about the world; how do I share it with you?
Which is really what sort of communal intelligence is.
I know something, you know something.
We've had different experiences in the world.
I've learned something about brains.
Maybe I can impart that to you.
You've learned something about physics,
and you can impart that to me.
But it also comes down to even just an epistemological question of,
well, what is knowledge, and how do you represent it in the brain?
That's where it's going to reside, right?
Or in our writings.
It's obvious that human collaboration, human interaction,
is how we build societies.
Yeah.
But some of the things you talk about and work on, some of those elements
of what makes up an intelligent entity is there with a single person.
Absolutely. I mean, we can't deny that the brain is the core element here. At least
I think it's obvious: the brain is the core element in all theories of intelligence.
It's where knowledge is represented. It's where knowledge is created.
We interact.
We share.
We build upon each other's work.
But without a brain, you'd have nothing.
There would be no intelligence without brains.
And so that's where we start.
I got into this field because I just
was curious as to who I am.
How do I think?
What's going on in my head when I'm thinking?
What does it mean to know something?
You know, I can ask what it means for me to know something independent of how I learned it from you or from someone else or from society.
What does it mean for me to know that I have a model of you in my head?
What does it mean to know I know what this microphone does and how it works physically, even when I can't see it right now?
How do I know that?
What does it mean?
How do the neurons do that?
At the fundamental level of neurons
and synapses and so on.
Those are really fascinating questions.
And I would just be happy to understand those
if I could.
So in your new book, you talk about our brain,
our mind, as being made up of many brains.
The book is called A Thousand Brains, the Thousand Brains Theory of Intelligence. What is the key idea of this book?
The book has three sections,
and it has maybe three big ideas.
The first section is all about what we've learned about the neocortex, and that's the Thousand Brains Theory.
Just to complete the picture, the second section is all about AI, and the third section
is about the future of humanity.
So the Thousand Brains Theory, the big idea there, if you had to summarize it into one
big idea, is that we think of the brain, the neocortex, as learning this model of the
world.
But what we learned is that actually there are
tens of thousands of independent modeling systems
going on.
And each one, we call it a column in the cortex,
and there's about 150,000 of them,
is a complete modeling system.
So it's a collective intelligence
in your head in some sense.
So the Thousand Brains Theory says,
well, where do I have knowledge about this coffee cup? Where's the model of this cell phone? It's not in one place.
It's in thousands of separate models that are complementary, and they communicate with each
other through voting. So this idea that we feel like we're one person, you know,
that's our experience, and we can explain that. But in reality, there are lots of these,
almost like little brains, these sophisticated modeling systems,
about 150,000 of them in each human brain. And that's a totally different way of thinking
about how the neocortex is structured than we or anyone else thought of even just five years
ago. So you mentioned you started this journey by just looking in the mirror and trying to
understand who you are. So if you have many brains,
who are you then? So it's interesting. We have a singular perception, right? You know, we think, oh, I'm just here, I'm looking at you.
But it's composed of all these things: there are sounds, and there's vision, and there's touch, and
all kinds of inputs. Yet we have this singular perception. And what the Thousand Brains Theory says is we have these models that are visual models,
models that are tactile models, and so on.
But they vote. And so in the cortex, you can think about these columns as like little grains
of rice, 150,000 stacked next to each other. And each one is its own little modeling system.
But they have these long-range connections that go between them, and we
call those voting connections or voting neurons. And so the different columns try to reach
a consensus. Like, what am I looking at? Okay, you know, each one has some ambiguity,
but they come to a consensus: oh, it's a water bottle I'm looking at. We are only consciously
able to perceive the voting. We're not able to perceive anything that goes on under the hood.
So the voting is what we're aware of.
The results of the vote.
Yeah, the vote.
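To make the voting idea concrete, here is a minimal Python sketch of columns reaching a consensus. The features, candidate objects, and count-based vote are all illustrative assumptions, not Numenta's actual algorithm:

```python
# Minimal sketch of cortical columns "voting" (illustrative only).
# Each column sees a partial, ambiguous input and proposes the set of
# objects consistent with it; the long-range voting connections are
# modeled here as a simple tally across columns.

from collections import Counter

def column_candidates(local_input):
    """Stand-in for one column's model: maps its partial input to the
    objects consistent with it (toy data, assumed for illustration)."""
    lookup = {
        "curved surface": {"water bottle", "coffee cup", "can"},
        "plastic cap":    {"water bottle", "can"},
        "label texture":  {"water bottle", "coffee cup"},
    }
    return lookup[local_input]

def vote(inputs):
    """Objects consistent with the most columns win the vote."""
    tally = Counter()
    for local_input in inputs:
        for obj in column_candidates(local_input):
            tally[obj] += 1
    best = max(tally.values())
    return {obj for obj, count in tally.items() if count == best}

# Each column alone is ambiguous; together they settle on one percept,
# the stable outcome we are conscious of.
print(vote(["curved surface", "plastic cap", "label texture"]))
# -> {'water bottle'}
```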
Well, you can imagine it this way.
We were just talking about eye movements a moment ago.
So as I'm looking at something, my eyes are moving about three times a second.
And with each movement, a completely new input is coming into the brain.
It's not repetitive, it's not shifting things around; it's completely new.
I'm totally unaware of it.
I can't perceive it.
But yeah, if I looked at the neurons in your brain, they're going, I don't know, I don't
know, I don't know, I don't know.
But the voting neurons are not.
The voting neurons are saying, you know, we all agree, even though I'm looking at different
parts of this, this is a water bottle right now.
And that's not changing.
And it's in some position and pose relative to me.
So I have this perception of the water bottle
about two feet away from me at a certain pose to me.
That is not changing.
That's the only part I'm aware of.
I can't be aware of the fact that the inputs to the eyes
are moving and changing and all this other stuff that's happening.
So these long range connections
are the part we can be conscious of.
The individual activity in each column
doesn't go anywhere else.
It doesn't get shared anywhere else.
There's no way to extract it
and talk about it, or extract it and even remember it,
to say, oh, yes, I can recall that.
But these long-range connections are the things
that are accessible to language
and to, you know, the hippocampus,
our memories, short-term memory systems, and so on.
So we're not aware of 95% or maybe it's even 98% of what's going on in your brain.
We're only aware of this sort of stable, somewhat stable, voting outcome of all these things that are going on under the hood. So what would you say is the basic element in the Thousand Brains Theory of Intelligence,
like what's the atom of intelligence when you think about it?
Is it the individual brains, then what is a brain?
Well, can we just talk about what intelligence is first, and then
we can talk about what the elements are?
So in my book, intelligence is the ability
to learn a model of the world.
To build, internal to your head,
a model that represents the structure of everything.
To know that this is a table, and that's a coffee cup,
and this is a gooseneck lamp, and all these things.
To know these things, I have to have a model of them in my head.
I don't just look at them and go, what is that?
I already have internal representations of these things
in my head, and I had to learn them.
I wasn't born with any of that knowledge.
You know, we have some lights in the room here.
That's not part of my evolutionary heritage, right?
It's not in my genes.
So we have this incredible model, and the model includes
not only what things look like and feel like,
but where they are relative to each other,
and how they behave.
I've never picked up this water bottle before, but I know that if I put my hand on that blue
thing and turn it, it'll probably make a funny little sound as the little plastic thing
detaches, and then it'll rotate in a certain way and come off.
How do I know that?
I have this model in my head.
So the essence of intelligence is our ability to learn a model, and the more sophisticated
our model is, the smarter we are.
Not that there's a single measure of intelligence, because you can know a lot about things
that I don't know, and I know about things you don't know.
And we can both be very smart.
But we both learned a model of the world through interacting with it.
So that is the essence of intelligence.
Then we can ask ourselves, what are the mechanisms in the brain
that allow us to do that?
And what are the mechanisms of learning?
Not just the neural mechanisms, but what is the general process for how we learn a model? So that was a big insight for us:
how do you actually learn this stuff? It turns out you have to learn it through
movement. You can't learn it just by sitting still; that's how we learn, we learn through movement.
So you build up this model by observing things and touching them and moving them and walking around the world and so on.
Either you move or the thing moves.
Somehow.
Yeah.
Obviously you can learn things just by reading a book,
something like that.
But think about if I were to say, oh, here's a new house.
I want you to learn, what do you do?
You have to walk from room to room.
You have to open the doors, look around,
see what's on the left, what's on the right.
As you do this, you're building a model in your head.
That's what you're doing.
You can't just sit there and say,
I'm going to grok the house.
No. Or, you know,
you don't even want to just sit down
and read some description of it, right?
You literally physically interact
and the same with like a smartphone.
If I'm going to learn a new app, I touch it
and I move things around.
I see what happens when I do things with it.
So that's the basic way we learn in the world.
And by the way, when you say model,
you mean something that can be used for prediction in the future.
It's used for prediction and for behavior and planning.
And does a pretty good job in doing so.
Yeah, here's the way to think about the model. A lot of people get hung up on this.
You can imagine an architect making a model of a house.
So there's a physical model of a house that's small. And why do they do that?
Well, we do that because you can imagine
what it would look like from different angles.
You can look at it from here, look at it from there.
And you can also say, well,
how far is it from the garage to the swimming pool,
or something like that, right?
You can imagine looking at it
and say, what would be the view from this location?
So we build these physical models
that let you imagine the future and imagine the behaviors. Now we can take that same model and put it in a
computer. Today they build models of houses in a computer, and they do that
using, we'll come back to this term in a moment, reference frames. Essentially you assign
a reference frame for the house, and you assign different things of the house to different locations.
And then the computer can generate an image and say, okay, this is what it looks like in this direction.
The brain is doing something remarkably similar to this, surprisingly.
It's using reference frames; it's building a model similar to a model in a computer, which has the same benefits as building a physical model.
It allows me to say, what would this thing look like if it was in this orientation?
What would likely happen if I pushed this button? I've never pushed this button before, or how would
I accomplish something? I want to convey a new idea of learning. How would I do that?
I can imagine in my head, well, I could talk about it, I could write a book, I could
do some podcasts, I could maybe tell my neighbor, and I can imagine the outcomes of all these things before I do any of them.
That's what the model lets you do: plan the future and imagine the consequences of our actions.
Prediction, you asked about prediction. Prediction is not the goal of the model. Prediction is an inherent property of it, and it's how the model corrects itself. So prediction is fundamental to intelligence.
It's fundamental to building a model, and the model is what's intelligent.
And let me go back and be very precise about this.
Prediction, you can think of prediction two ways.
One is like, hey, what would happen if I did this?
That's a type of prediction.
That's a key part of intelligence.
But the other is prediction like, oh, what's this
going to feel like when I pick it up? You know, that doesn't
seem very intelligent.
But one way to think about intelligence and prediction is that it's a way
for us to learn where our model is wrong.
So if I picked up this water bottle and it felt hot, I'd be very surprised.
Or if I picked it up and it was very light, I'd be surprised.
Or if I turned this top and I had to turn it the other way, I'd be surprised.
And so in all of this, I have a prediction:
like, okay, I'm going to do it, I'll drink some water. Okay, I do this, there it is, I feel it
opening, right? What if I had to turn it the other way? Or what if it split in two?
Then I'd say, oh my gosh, I misunderstood this; I didn't have the right model of this thing.
My attention would be drawn to it; I'd be looking at it going, well, how the hell did that happen?
You know, it didn't open up that way. And I would update my model just by
looking at it and playing around with it, updating it, and saying, this is a new type
of water bottle. So you're talking about sort of complicated things like a water
bottle, but this also applies to just basic vision, just seeing things. It's almost
like a precondition of perceiving the world is predicting.
Mm-hmm.
So everything that you see is first passed
through your predictions.
Everything you see and feel. In fact,
this is the insight I had back in the late 80s,
maybe the early 80s,
and I know other people reached the same idea,
is that every sensory input you get,
not just vision, but touch and hearing,
you have an expectation about it,
and a prediction.
Sometimes you can predict very accurately,
sometimes you can't.
I can't predict what next word is going to come out of your mouth,
but as you start talking,
I'll get better and better predictions,
and if you talked about some topics,
I'd be very surprised.
So I have this sort of background prediction that's going on all the time for all of my senses.
Again, the way I think about that is, this is how we learn.
It's the test of our understanding. Our predictions are a test. Is this really a water
bottle? If it is, I shouldn't see, you know, a little finger sticking out the side.
And if I saw a little finger sticking out, I'd be like, oh, what the hell's going on?
You know, that's not normal.
I mean, that's fascinating that let me linger on this for a second.
It really, honestly feels like prediction is fundamental to everything, to the way our mind operates, to intelligence. So it's just
a different way to see intelligence, which is like, everything starts at prediction.
And prediction requires a model. You can't predict something unless you have a model of
it. Right. But the action is prediction. So the thing the model does is prediction.
Yeah, but you can then extend it to things like,
what would happen if I took this action today?
If I went and did this, it would be like that.
Or you can extend prediction to,
oh, I want to get a promotion at work.
What action should I take?
You can say, if I did this, I
predict what might happen.
If I spoke to someone, I predict what might happen.
So it's not just low-level predictions.
Yeah, it's all predictions.
It's all predictions.
Since it's this black box, you can ask basically any question, low
level or high level.
So we started off with that observation.
It's all, it's just this non-stop prediction.
And I write about this in the book.
And then we ask, how do neurons actually make predictions?
Physically, what does a neuron do when it makes a prediction?
What does the neural tissue do when it makes a prediction?
And then we ask, what are the mechanisms by which we build a model that allows us to make predictions?
So we started with prediction as the fundamental research agenda, in some sense.
We said, if we understand how the brain makes predictions,
we'll understand how it builds these models and how it learns, and that's the core of intelligence.
It was the key that got us in the door. We said, that is our research agenda: understand predictions.
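As a rough sketch of that research agenda, prediction as the test of a model, here is a toy example with a made-up object and action. It is an illustration of the idea, not code from the book or from Numenta:

```python
# Toy model: maps (object, action) to the expected sensation.
# A mismatch between prediction and reality -- surprise -- is what
# drives the model update. All entries are invented for illustration.

model = {("water bottle", "turn cap"): "counterclockwise click"}

def act_and_learn(obj, action, actual_sensation):
    predicted = model.get((obj, action))
    if predicted == actual_sensation:
        return "prediction confirmed; model unchanged"
    # Surprise: attention is drawn here and the model is rewritten.
    model[(obj, action)] = actual_sensation
    return f"surprise: expected {predicted!r}, got {actual_sensation!r}; model updated"

print(act_and_learn("water bottle", "turn cap", "counterclockwise click"))
print(act_and_learn("water bottle", "turn cap", "clockwise click"))
print(act_and_learn("water bottle", "turn cap", "clockwise click"))
```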
So in this whole process, where does intelligence originate, would you say?
If we look at things that are much less intelligent than humans, and you start to build up to a human through the process of evolution,
where does this magic thing appear, a prediction model, a model that's able to predict, that starts to look a lot more like intelligence?
Is there a place where it originates? Richard Dawkins wrote an introduction to your book, an excellent introduction that puts a lot of things into context.
And it's funny looking at the parallels between your book and Darwin's
On the Origin of Species. So Darwin wrote about the origin of species.
What is the origin of intelligence?
Yeah. Well, we have a theory about it. And the
theory goes as follows. As soon as living things started to move, they're not just floating in the
sea, they're not just a plant, you know, grounded someplace. As soon as they started to move,
there was an advantage to moving intelligently, to moving in certain ways. And there are some
very simple things you can do: you know, bacteria or single-cell organisms can move toward
a gradient of food or something like that.
But an animal that might know where it is
and know where it's been and how to get back to that place.
Or an animal that might say, oh, there was a source
of food someplace, how do I get to it?
Or there was a danger, how do I get away from it?
Or there was a mate, how do I get to them?
There was a big evolutionary advantage to that.
So early on, there was a pressure to start
understanding your environment: like, where am I,
where have I been, and what happened
in those different places?
So we still have this neural mechanism in our brains.
It's in the mammals; it's in the hippocampus
and entorhinal cortex. These are older parts of the brain.
And these are very well studied.
We build a map of our environment.
So these neurons in these parts of the brain
know where I am in this room and where the door was and things
like that.
So a lot of other mammals have this kind of ability?
All mammals have this.
And almost any animal that knows where it is and can get around must have some mapping system,
must have some way of saying, I've learned a map of my environment. I have hummingbirds
in my backyard.
And they go to the same places all the time. They have to, they must know where they are.
They're not just randomly flying around.
They know the particular flowers they come back to.
So we all have this.
And it turns out it's very tricky to get neurons to do this, to build a map of an environment.
And so we now know there are these famous studies, still very active, about place cells and
grid cells and these other types of cells in the older parts of the brain, and how they
build these maps of the world.
It's really clever.
It's obviously been under a lot of evolutionary pressure
over a long period of time to get good at this.
So animals know where they are.
What we think has happened,
and there's a lot of evidence to suggest this,
is that the mechanism we use to learn a map of a space
was repackaged, the same type of neurons repackaged into a more compact form,
and that became the cortical column. And it was, in some sense, genericized, if that's a word.
It was turned from a very specific thing, learning maps of environments,
into learning maps of anything, learning a model of anything, not just spaces, but coffee cups and so on.
And it got sort of repackaged into a more compact version,
a more universal version, and then replicated.
So the reason we're so flexible is we have a very generic
version of this mapping algorithm,
and we have 150,000 copies of it.
So it sounds a lot like the progress of deep learning.
How so?
Taking neural networks that seem to work well
for a specific task, compressing them,
and multiplying them by a lot.
And then you just stack them on top of each other.
It's like the story of Transformers, and
a lot of these processes. But in today's deep learning networks, you end up replicating an element,
but you still need the entire network to do anything. Right. Here,
what's going on is each individual element is a complete learning system.
This is why I can take a human brain, cut it in half and it still works.
It's pretty amazing. It's fundamentally distributed.
It's fundamentally distributed, complete modeling systems.
So that's the story we like to tell.
I guess it's likely largely right.
But, you know, there's a lot of evidence supporting
this evolutionary story.
The thing that brought me to this idea is that the human brain got big very quickly.
So that led to the proposal, a long time ago, that, well, there's this common element:
instead of creating new things, evolution just replicated something.
We also are extremely flexible.
We can learn things that we had no history about.
And so that tells us that the learning algorithm is very generic.
It's very kind of universal, because it doesn't
assume any prior knowledge about what it's learning.
And so you combine those things together, and you say,
okay, well, how did that come about?
Where did that universal algorithm come from?
It had to come from something that wasn't universal.
It came from something that was more specific.
And so anyway, this led to our hypothesis
that you would find grid cell and place cell equivalents
in the neocortex.
And when we first published our first papers on this theory,
we didn't know of evidence for that.
It turns out there was some, but we didn't know about it.
And since then, we became aware
of evidence for grid cells in parts
of the neocortex.
And then now there's been new evidence coming out.
There's some interesting papers that came out just January of this year.
So one of our predictions was, if this evolutionary hypothesis is correct, we would see grid cell
and place cell equivalents, cells that work like them, in every column in the neocortex.
That's starting to be seen.
What does it mean, why is it important, that they're present?
Because it tells us... well, we're asking about the evolutionary origin of intelligence,
right? Our theory is that these columns in the cortex are working on the same principles:
they're modeling systems. And it's hard to imagine how neurons do this. So we said, hey,
it's really hard to imagine how neurons could learn these models of things. I can talk
about the details of that if you want.
But there's this other part of the brain where we know neurons learn models of environments.
So could the mechanism that learns a model of this room be used to learn a model of this water
bottle? Is it the same mechanism?
So we said, it's more likely than not that the brain is using the same mechanism, in which
case it would have these equivalent cell types.
So basically, the whole theory is built on the idea that these columns have reference
frames and are learning these models, and the grid cells create these reference frames.
In some sense, the major predictive part of this theory is
that we will find these equivalent mechanisms in each column in the neocortex, which tells us that's what they're doing:
they're learning these sensorimotor models of the world.
We were pretty confident that would happen, but now we're seeing the evidence.
So in the evolutionary process, nature does a lot of copy and paste and sees what happens.
Yeah.
Yeah, there's no direction to it. But it just found out, like, hey, if I take these elements
and make more of them, what happens? Let's hook them up to the eyes, let's hook them up to the ears. And that seems to work pretty well.
Yeah. Again, just to take a quick step back to our
conversation about collective intelligence:
do you sometimes see it as just another copy-and-paste aspect, copying and pasting these brains in humans, making a lot of them, and then creating
social structures that then almost operate as a single brain?
I wouldn't have said it, but when you said it, it sounded pretty good.
So to you, the brain is fundamental?
I mean, our goal is to understand how the neocortex works.
You can argue how essential that is to understanding the human brain, because it's not the entire human brain.
You can argue how essential that is to understanding human intelligence.
You can argue how essential it is to, you know, sort of communal intelligence.
Our goal was to understand the neocortex.
Yeah, so what is the neocortex, and where does it fit into the various aspects of what the
brain does? Like, how important is it to you?
Well, obviously, as I mentioned in the beginning, it's about 70 to 75%
of the volume of a human brain.
So it dominates our brain in terms of size, not in terms of number of neurons, but in terms
of size.
Size isn't everything, Jeff.
I know. But it's not nothing.
We know that all high-level vision, hearing, and touch happens in the neocortex.
We know that all language occurs and is
understood in the neocortex, whether that's spoken language, written language, sign language,
the language of mathematics, the language of physics, music. We know that all high-level
planning and thinking occurs in the neocortex. If I were to ask, what part of your
brain designed a computer, understands programming, and creates music? It's all the neocortex.
So that's just an undeniable fact.
But then other parts of our brain are important too,
right? Our emotional states,
regulating our body.
So the way I like to look at it is,
can you understand the neocortex
without the rest of the brain?
Some people say you can't, and I think you absolutely can.
It's not that they're not interacting, but you can understand it.
Can you understand the neocortex without understanding the emotions of fear?
Yes, you can. You can understand how the system works. It's just a modeling system.
I make the analogy in the book that it's like a map of the world.
And how that map is used depends on who's using it.
So how the map of the world in our
neocortex manifests in us as humans depends on the rest of our brain: what
our motivations are, what my desires are, am I a nice guy or not a nice guy? Am I a cheater or
not a cheater? How important different things are in my life.
But the neocortex can be understood on its own.
And I say that as a neuroscientist. I know there are all these interactions, and I don't want
to say we don't know about them or don't think about them.
But from a layperson's point of view, you can think of it as a modeling system.
I don't generally think too much about the communal aspect of intelligence, which you've
brought up a number of times already.
That's not really been my concern.
I just wonder if there's a continuum from the origin of the universe, these pockets
of complexity that form living organisms.
If you look at humans, we feel like we're at the top.
I wonder if every living pocket of complexity probably thinks they're the,
pardon the French, the shit.
They're at the top of the pyramid.
Well, if they're thinking... well, then what is thinking? The point is, in their sense of the world, their sense is at the top
of it.
I mean, what does a turtle think?
But you're bringing up the problems of complexity, and complexity theory is a huge,
interesting problem in science.
And I think we've made surprisingly little progress
in understanding complex systems in general.
And so, the Santa Fe Institute
was founded to study this.
And even the scientists there will say
it's really hard.
We haven't really been able to figure it out exactly;
that science hasn't really congealed yet.
We're still trying to figure out
the basic elements of that science.
Where does complexity come from,
and what is it, and how do you define it? Whether it's DNA
creating bodies, or phenotypes, or individuals
creating societies, or ants and markets, and so on.
It's a very complex thing.
I'm not a complexity-theory person, right?
And you can ask, well, the brain itself is a complex
system; can we understand that? I think we've made a lot of progress understanding how the brain
works. But I haven't broadened it out to, oh, well, where are we on the complexity spectrum?
It's a great question. I'd prefer the answer to be, we're not special.
It seems like, if we're honest, most likely we're not special.
So if there is a spectrum, we're probably not in some kind of significant place in that
spectrum.
I think there's one thing we could say, that we are special.
And again, I'm only talking about here on Earth.
If we think about knowledge, what we know, human brains
are clearly the only brains that have certain types of knowledge.
We have the only brains on this Earth that understand what the Earth is, how old it is, what the universe
is, the picture as a whole.
We're the only organisms that understand DNA and the origins of species;
no other species on this planet has that knowledge.
So if we think about, I like to think about,
one of the endeavors of humanity
being to understand the universe as much as we can,
I think our species is further along on that, undeniably.
Whether our theories are right or wrong, we can debate.
But at least we have theories.
We know what the sun is, and how fusion works, and what black holes are.
And we know the general theory of relativity, and no other animal has any of this knowledge.
So in that sense, we're special. Are we special in terms of the hierarchy of complexity
in the universe?
Probably not.
Can we look at a neuron?
Yeah. You say that prediction happens in the neuron; what does that mean? The neuron traditionally is seen as the basic element of the brain. So I mentioned this earlier:
prediction was our research agenda. Yeah, we said, okay, how does the brain make
a prediction? Like, I'm about to grab this water bottle and my brain is predicting what I'm going
to feel on all the parts of my fingers. If I felt something really odd on any part here, I'd
notice it. So my brain is predicting what it's going to feel as I grab this thing. So what is it?
How does that manifest itself in neural tissue? We've got brains made of neurons, and there are chemicals, and there are spikes,
and they're connected. Where is the prediction going on?
And one argument could be that, well, when I'm predicting something, a neuron must be
firing in advance.
It's like, okay, this neuron represents what you're going to feel, and it's firing.
It's sending a spike.
And certainly that happens to some extent.
But our predictions are so ubiquitous, we're making so many of them that we're
totally unaware of, the vast majority; we have no idea that we're doing this.
So we were trying to figure out, how could this be?
Where are these happening?
And I won't walk you through the whole story
unless you insist on it, but we came to the realization
that most of your predictions are occurring inside individual
neurons, especially the most common neurons,
the pyramidal cells.
And there's a property of neurons.
Everyone knows, or most people know,
that a neuron is a cell and it has this spike
called an action potential and it sends information.
But we now know that there's these spikes internal
to the neuron, they're called dendritic spikes.
They travel along the branches of the neuron
and they don't leave the neuron.
They're just internal only.
There are far more dendritic spikes
than there are action potentials, far more.
They're happening all the time.
And what we came to understand is that those
dendritic spikes are actually a form of prediction.
A dendritic spike is the neuron saying,
I expect that I might become active shortly.
The internal spike is a way of saying,
you might be generating an external spike soon;
I predict you're going to become active. We wrote a paper in 2016, which
explained how this manifests itself in neural tissue and how it is that this all works together.
There's a lot of evidence supporting it. So that's
where we think most of these predictions are: internal to the neuron. You can't perceive them.
From understanding the prediction mechanism of a single neuron, do you think there are deep
insights to be gained about the prediction capabilities of the mini brains within the
bigger brain?
Oh, yeah. So having a prediction inside an individual neuron is not that useful. You know, so what?
The way it manifests itself in neural tissue is that when a neuron emits a spike, it's a very singular type of event. If a neuron is predicting that it's going to be active,
it emits its spike a little bit sooner, just a few milliseconds sooner, than it would have
otherwise. I give the analogy in the book: it's like a sprinter on the starting blocks in a race.
If someone says "ready, set," you get up and you're ready to go.
And then when the race starts, you get a little bit earlier start.
That "ready, set" is like the prediction, and the neuron is, like, ready to go quicker.
And what happens is, when you have a whole bunch of neurons together
and they're all getting these inputs,
the ones that are in the predictive state, the ones that are anticipating becoming active,
if they do become active, they fire sooner, and they disable everything else.
And it leads to different representations in the brain.
So it's not isolated just to the neuron.
The prediction occurs within the neuron, but the network behavior changes.
So under different predictions,
inputs have different representations.
What I predict is going to be different
under different contexts;
what my input means is different under different contexts.
So this is a key to the whole theory, how this works.
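A toy rendering of that "ready, set" effect, with invented neuron labels: an illustration of the idea, not a biophysical simulation:

```python
# Neurons in the predictive (depolarized) state fire a few milliseconds
# sooner when their feedforward input arrives, and via inhibition they
# silence the neurons that were not predicted. The active representation
# therefore depends on context. Labels n1..n4 are invented.

def active_representation(driven, predicted):
    """driven: neurons receiving feedforward input;
    predicted: neurons primed by dendritic spikes."""
    early = driven & predicted      # these spike first...
    if early:
        return early                # ...and inhibit the rest
    return driven                   # no prediction: everything fires

driven = {"n1", "n2", "n3", "n4"}
print(active_representation(driven, predicted={"n2"}))  # {'n2'}
print(active_representation(driven, predicted=set()))   # all four fire
```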
So for the Thousand Brains Theory,
if you were to count the number of brains,
how would you do it?
The Thousand Brains Theory says that basically every cortical column in your neocortex is a
complete modeling system, and when I ask where I have a model of something like a coffee cup,
it's not in one of those columns; it's in thousands of those columns. There are thousands of models of
coffee cups. That's what the thousand brains are. Then there's a voting mechanism, which
is the thing you're conscious of, which leads to your
singular perception. That's why you perceive something. So that's the Thousand
Brains Theory. The details of how we got to that theory are complicated. We didn't
just think of it one day. And one of those details is we had to ask, how does a model make predictions?
We talked about these predictive neurons. That's part of the theory.
It seems like a detail, but it was like a crack in the door. How do we figure out what these neurons are doing?
What is going on here? We looked at prediction and said, well, we know predictions are ubiquitous.
We know that every part of the cortex is making predictions.
Therefore, whatever the predictive system is,
it's going to be everywhere.
We know there's a gazillion predictions happening at once.
So this type of thing starts teasing it apart;
you ask questions about how neurons
could be making these predictions.
And that built up to what we now have,
the Thousand Brains Theory, which is complex.
I can state it simply,
but we didn't just think of it one day.
We had to get there step by step.
It took years to get there.
And where do reference frames fit in?
So yeah.
Okay.
So again, a reference frame, I mentioned earlier
about a model of a house.
And I said, if you're going to build a model of a house
in a computer, they have a reference frame. And you can think of a reference frame like Cartesian coordinates,
like x, y, and z axes. So I can say, well, I'm going to design a house. I can say, well, the front
door is at this location, x, y, z, and the roof is at this location, x, y, z, and so on. That's
the type of reference frame. So it turns out for you to make a prediction, and I walk you through
the thought experiment in the book where I was
predicting what my finger was going to feel when I touched a coffee cup. It was a ceramic coffee cup, but this one will do.
And what I realized is that
to make a prediction of what my finger is going to feel, and it's going to feel different
if I touch the hole or the thing on the bottom,
to make that prediction, the cortex needs to
know where the finger is, the tip of the finger, relative
to the coffee cup, and exactly relative to the coffee cup. And to do that, it has
to have a reference frame for the coffee cup. It has
to have a way of representing the location of my finger relative to the coffee cup. And then we realized,
of course, every part of your skin has to have a reference frame relative to the things it touches.
And then we did the same thing with vision. So the idea is that a reference frame is necessary
to make a prediction when you're touching something
or when you're seeing something,
and you're moving your eyes or moving your fingers.
It's just a requirement to know what to predict.
If I have a structure and I'm going to make a prediction,
I have to know where it is
that I'm looking or touching.
So then we say, well, how do neurons make reference frames?
It's not obvious.
You know, x, y, z coordinates don't exist in the brain;
it's just not the way it works.
So that's when we looked at the older part of the brain,
the hippocampus and the entorhinal cortex, where we knew that,
in that part of the brain, there's a reference frame
for a room, a reference frame for an environment.
Remember, I talked earlier about how you
make a map of this room.
So we said, ah, that's how they are implementing reference frames there.
So we knew that a reference frame needed to exist in every cortical column.
And so that was a deductive thing.
We just deduced it.
It has to exist.
So you take the old mammalian ability to know where you are in a particular space and
you start applying that to higher and higher levels.
Yeah, you first you applied to visit like where your finger is.
So here's the way I think about it.
The old part of the brain says, where's my body in this room?
The new part of the brain says, where's my finger relative to this object?
Where is a section I retina relative to this object. Yeah. Where is a section I retina relative to this object?
Where is it?
I'm looking at one little coin.
Where is that relative to this patch of my retina?
Yeah.
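The grid-cell mechanism borrowed from the old part of the brain can be caricatured in a few lines. In the published grid-cell picture, several modules each track position only modulo their own spatial period, and the combination is unique over a large range. This is a toy illustration under those assumptions, not actual neural code.

```python
# Toy grid-cell-style location code: each module represents position
# modulo its own period; the residues together identify a location
# uniquely over a large range (here 5 * 7 * 11 = 385 positions).

PERIODS = [5, 7, 11]  # spatial periods of three hypothetical modules

def encode(position):
    return tuple(position % p for p in PERIODS)

def move(code, delta):
    # Movement shifts every module by the displacement (path integration).
    return tuple((c + delta) % p for c, p in zip(code, PERIODS))

loc = encode(42)
print(loc)                          # (2, 0, 9)
print(move(loc, 3) == encode(45))   # True: the code tracks movement
```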
And then we take the same thing and apply it to concepts: mathematics, physics, humanity, whatever you want to think about.
Even when you're pondering your own mortality.
Well, whatever. But the point is, when we think about the world, when we have knowledge about the world, how is that knowledge organized? Where is it in your head? The answer is, it's in reference frames. The way I learned the structure of this water bottle, where the features are relative to each other; when I think about history or democracy or mathematics, the same basic underlying structure is happening. There are reference frames to which you're assigning the knowledge. In the book I go through examples like mathematics and language and politics. But the evidence is very clear in the neuroscience: the same mechanism we use to model this coffee cup, we're going to use to model high-level thoughts, the demise of humanity, whatever you want to think about.
It's interesting to think about how different the representations of those higher-level concepts are, in terms of reference frames, versus spatial ones.
Well, the interesting thing is, it's a different application, but it's the exact same mechanism.
But isn't there some aspect of higher-level concepts where they seem to be hierarchical? They just seem to integrate a lot of information.
So are physical objects.
So take this water bottle. I'm not partial to this brand, but this is a Fiji water bottle, and it has a logo on it. I use this example in my book; our company's coffee cup has a logo on it. This object is hierarchical. It's got a cylinder and a cap, but then it has this logo on it. The logo has a word, the word has letters, the letters have features. So I don't have to think about all of this. I just say, oh, there's a Fiji logo on this water bottle. I don't have to go through and say, what is a Fiji logo? It's the F and the I and the J and the I, and there's a hibiscus flower, and it has, you know, the stamen on it. I don't have to do that. I just incorporate all of that in some sort of hierarchical representation. I say, you know, there's this logo on this water bottle, and the logo has a word, and the word has letters. All hierarchical.
All that stuff is baked in.
It's amazing that the brain instantly just does all that. The idea that there's water, that it's liquid, the idea that you can drink it when you're thirsty, the idea that there are brands. All of that information is instantly built into the whole thing once you perceive it.
So I want to get back to your point about hierarchical representation. The world itself is hierarchical, right? I can take this microphone in front of me. I know inside there's going to be some electronics. I know there are going to be some wires, and I know there's going to be a little diaphragm that moves back and forth. I don't see that, but I know it. Everything in the world is hierarchical. You go into a room, and it's composed of other components: the kitchen has a refrigerator, the refrigerator has a door, the door has a hinge, the hinge has screws and a pin. So, anyway, the modeling system that exists in every cortical column learns the hierarchical structure of objects. It's a very sophisticated modeling system in this grain of rice. It's hard to imagine, but this grain-of-rice-sized piece of cortex can do really sophisticated things. It's got 100,000 neurons in it. It's very sophisticated.
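As a sketch of that compositionality (my own toy example, not the book's): an object model can hold features plus references to previously learned sub-objects, each placed at a location in the parent's reference frame, so the logo is learned once and merely referenced by the bottle.

```python
# Toy hierarchical object model: an object holds references to
# sub-objects, each at a location in the parent's reference frame.
# The logo is modeled once and reused, not re-described.

class ObjectModel:
    def __init__(self, name):
        self.name = name
        self.parts = []  # list of (location, sub ObjectModel)

    def add(self, location, part):
        self.parts.append((location, part))

letter_f = ObjectModel("letter F")       # itself built from stroke features
flower = ObjectModel("hibiscus flower")

logo = ObjectModel("Fiji logo")
logo.add((0, 0), letter_f)               # reuse the existing letter model
logo.add((4, 0), flower)

bottle = ObjectModel("water bottle")
bottle.add((0, 0), ObjectModel("cylinder"))
bottle.add((0, 9), ObjectModel("cap"))
bottle.add((0, 5), logo)                 # one reference pulls in the whole sub-hierarchy
```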
So that same mechanism that can model a water bottle or a coffee cup can model conceptual objects as well. That's the beauty of this discovery that Vernon Mountcastle made many, many years ago, which is that there's a single cortical algorithm underlying everything we're doing.
So common-sense concepts and higher-level concepts are all represented in the same way?
They're based on the same mechanisms, yeah. It's a little bit like computers, right? All computers are universal Turing machines, even the little teeny one that's in my toaster and the big one that's running some cloud server someplace. They're all running on the same principle; they just apply it to different things. So the brain is all built on the same principle. It's all about learning these models, structured models, using movement and reference frames. And it can be applied to something as simple as a water bottle or a coffee cup, or to thinking about what's the future of humanity, or, you know, why do you have a hedgehog on your desk?
I don't know. Nobody knows.
Well, I think it's a hedgehog.
That's right. It's the hedgehog in the fog. It's a Russian reference.
Does it give you any indication, or hope, about how difficult it is to engineer common-sense reasoning? How complicated is this whole process? Looking at the brain, is it a marvel of engineering, or is it pretty dumb stuff stacked on top of each other over and over?
It can be both.
Can it be both, right? I don't know if it can be both, because if it's an incredible engineering job, that means evolution did a lot of work.
Yeah, but then it just copied that.
Right.
So as I said earlier, figuring out how to model something like space is really hard, and evolution had to go through a lot of tricks. These cells I was talking about, the grid cells and place cells, are really complicated. This is not simple stuff. This neural tissue works on really unexpected, weird mechanisms. But it did it; it figured it out. And now you can just make lots of copies of it.
Yeah, so it's a very interesting idea: lots of copies of a basic mini-brain. But the question is how difficult it is to find that mini-brain that you can copy and paste effectively.
Well, today we know enough to build this. I'm sitting here knowing the steps we'd have to go through. There are still some engineering problems to solve, but we know enough. And it's not like, oh, this is an interesting idea, we have to go think about it for another few decades. No, we actually understand it in pretty good detail. Not all the details, but most of them. So it's complicated, but it is an engineering problem. In my company, we are working on that. We basically have a roadmap for how we do this. It's not going to take decades; it's more like a few years. Optimistically, but I think that's possible. Complex things, if you understand them, you can build them.
So in which domain do you think it's best to build them? Are we talking about robotics, entities that operate in the physical world and are able to interact with that world? Are we talking about entities that operate in the digital world? Or are we talking about something more specific, like what's done in the machine learning community, where you look at natural language or computer vision? Where do you think is easiest to start?
It's the first two, more than the third one, I would say.
Again, let's just use computers as an analogy. The pioneers in computing, people like John von Neumann and Alan Turing, created this thing we now call the universal Turing machine, which is a computer, right? Could they envision how it was going to be applied, where it was going to be used, could they envision any of the future? No. They just said, this is a really interesting computational idea about algorithms and how you can implement them in a machine. We're doing something similar to that today. We are building this sort of universal learning principle that can be applied to many, many different things. The robotics piece of that is the interactive part, but that's just one application. You can think of this cortical column as what we call a sensorimotor learning system. There's a sensor, and it's moving. That sensor can be physical: it could be like my finger moving in the world, or like my eye physically moving. It can also be virtual. An example would be a system that lives on the internet, that samples information on the internet and moves by following links. That's a sensorimotor system.
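A hedged sketch of what such a virtual sensorimotor agent could look like, where sensing is reading a page and moving is choosing a link to follow. The starting URL and the crude link extraction are illustrative placeholders, not a real crawler design.

```python
# Toy "virtual sensorimotor" loop: the sensor samples a web page,
# and the "movement" is choosing which link to follow next.

import re
import urllib.request

def sense(url):
    # Sensing: read the content at the current "location".
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8", errors="ignore")

def possible_moves(page):
    # Available movements from here: the links on the page.
    return re.findall(r'href="(https?://[^"]+)"', page)

location = "https://example.org"  # placeholder starting point
for _ in range(3):                # a few sense-move steps
    page = sense(location)
    moves = possible_moves(page)
    if not moves:
        break
    location = moves[0]  # a real learner would choose this move deliberately
    print("moved to", location)
```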
So something that echoes the process of a finger moving along a cup.
In a very, very loose sense. Again, learning is inherently about discovering the structure of the world, and to discover the structure of the world you have to move through it, even if it's a virtual world, even if it's a conceptual world. It doesn't exist in one point; it has some structure to it. So here's a couple of predictions, getting at what you're talking about. In humans, the same algorithm does robotics, right? It moves my arms, my eyes, my body. So in the future, to me, robotics and AI will merge. They're not going to be separate fields, because the algorithms really controlling robots are going to be the same algorithms we have in our brain, these sensorimotor algorithms. Today we're not there, but I think that's going to happen. But not all AI systems will have to be robotics.
You can have systems with very different types of embodiments. Some will have physical movements, some will not. It's a very generic learning system. Again, it's like computers: the Turing machine doesn't say how it's supposed to be implemented, doesn't say how big it is, doesn't say what you can apply it to, but it's a computational principle. The cortical column equivalent is a computational principle about learning. It's about how you learn, and it can be applied to a gazillion things. This is why I think the impact of AI is going to be as large as, if not larger than, computing has been in the last century, by far, because it's getting at a fundamental thing. It's not a vision system or a hearing system. It is a learning system. It's a fundamental principle of how you learn the structure of the world, how you gain knowledge and become intelligent. And that's what the Thousand Brains theory says is going on. We have a particular implementation in our heads, but it doesn't have to be like that at all.
Do you think there's going to be some kind of impact... okay, let me ask it another way. What do increasingly intelligent AI systems do with us humans, in the following sense: how hard is the human-in-the-loop problem? How hard is it to interact? What's the finger-on-the-coffee-cup equivalent of having a conversation with a human being? How hard is it for them to fit into our little human world?
I think those are a lot of engineering problems; I don't think it's a fundamental problem. I could ask you the same question: how hard is it for a computer to fit into a human world?
Right, that's essentially what I'm asking. How elitist are we as humans? Do we try to keep out other systems?
I don't know. I'm not sure that's the right question. Let's look at computers as an analogy. Computers are a million times faster than us. They do things we can't understand. Most people have no idea what's going on when they use computers. How do we integrate them into our society? Well, we don't think of them as their own entities. They're not living things. We don't afford them rights. We rely on them. Our survival as seven billion people, or something like that, is relying on computers now.
Don't you think that's a fundamental problem, that we see them as something we don't give rights to?
For computers?
Yeah, computers, robots, intelligent systems. It feels like for them to operate successfully, they would need to have a lot of the elements that would force us to start thinking about questions like, should this entity have rights?
I don't think so. I think it's tempting to think that way. First of all, hardly anyone thinks that about computers today. No one says, oh, this thing needs rights; I shouldn't be able to turn it off; or, if I throw it in the trash can and hit it with a sledgehammer, I've performed a criminal act. No one thinks that.
But now we think about intelligent machines, which is where you're going, and all of a sudden it's like, well, now we can't do that. I think the basic problem here is that people think intelligent machines will be like us, that they're going to have the same emotions we do, the same feelings we do. What if I can build an intelligent machine that couldn't care less whether it was on or off or destroyed? It doesn't care. It's just like a map; it's just a modeling system. There's no desire to live, nothing.
Is it possible to create a system that can model the world deeply and not care whether it lives or dies?
Absolutely. No question about it.
To me, that's not 100% obvious.
It's obvious to me. We can debate it if you want. Where does your desire to live come from? It's an old evolutionary design. We can ask, does it really matter, objectively, whether we live or not? Objectively, no. We're all going to die eventually. But evolution makes us want to live. Evolution makes us want to fight to live. Evolution makes us want to care for and love one another, to care for our children and our relatives and our family and so on. And those are all good things, but they come about not because we're smart; it's because we're animals that evolved this way. The hummingbird in my backyard cares about its offspring. Every living thing in some sense cares about surviving. But when we talk about creating intelligent machines, we're not creating life, we're not creating evolving creatures, we're not creating living things. We're just creating a machine that can learn really sophisticated stuff, and that machine may even be able to talk to us. But it's not going to have a desire to live. That's built into you.
Well, hard to say. People like Ernest Becker argue, okay, there's the fact of the finiteness of life, and the way we think about it is something we learn, perhaps.
Yeah, and some people decide they don't want to live, and some people decide to. But the desire to live is built in; it's innate, right?
But I think what I'm trying to get at is that, in order to accomplish goals, it's useful to have the urgency of mortality. It's what the Stoics talked about, meditating on your mortality. It might be a very useful thing, to have the urgency of death, to conceive of yourself as an entity that operates in this world and that eventually will no longer be a part of it, to conceive of yourself as a conscious entity. It might be very useful for a system that makes sense of the world. Otherwise, you might get lazy.
Well, okay. We're going to build these machines, right? We're talking about building AI.
But we're building the equivalent of the cortical columns, the...
The neocortex.
The neocortex. And the question is, what do they arrive at? Because we're not hard-coding everything in.
Well, if you build the neocortex equivalent, it will not have any of these desires or emotional states. Now, you can argue that that neocortex won't be useful unless I give it some agency, some desire, some motivation; otherwise it'll be lazy and do nothing, right? You could argue that. But on its own, it's not going to do those things. It's not going to sit there and say, I understand the world, therefore I care to live. No, it's not going to do that. It's just going to say, I understand the world.
Why is that obvious to you? Okay, let me ask it this way. Do you think it's possible it will at least assign agency to itself, and perceive itself in this world as a conscious entity, as a useful way to operate in the world and to make sense of it?
I think an intelligent machine can be conscious, but that doesn't, again, imply any of these desires and goals that you're worried about. We can talk about what it means for a machine to be conscious.
And by the way, not worried about, but excited about. It's not necessarily that we should worry about it.
So I think there's a legitimate question to ask: if you build this modeling system, what's it going to model? What's its desire? What's its goal? What are we applying it to? That's an interesting question. And it depends on the application. It's not something inherent to the modeling system; it's something we apply to the modeling system in a particular way. If I wanted to make a really smart car, it would have to know about driving and what's important in driving. It's not going to figure that out on its own. It's not going to sit there and say, I've understood the world and I've decided... No, no, no. We're going to have to tell it. So imagine I make this car really smart. It learns about your driving habits, it learns about the world. Is it one day going to wake up and say, you know what, I'm tired of driving and doing what you want, I think I have better ideas about how to spend my time? No, it's not going to do that.
Well, part of me is playing a little bit of devil's advocate, but part of me is also trying to think this through, because I've studied cars quite a bit, and I've studied pedestrians and cyclists quite a bit. And there's part of me that thinks there needs to be more intelligence than we realize in order to drive successfully. The game theory of human interaction seems to require some deep understanding of human nature. When a pedestrian crosses the street, they usually look at a car and then look away. There's some sense in which they're saying, I believe you're not going to murder me, you don't have the guts to murder me. That's the little dance of pedestrian-car interaction: I'm going to look away and put my life in your hands, because I think you're human and you're not going to kill me.
And then the car, in order to successfully operate in Manhattan streets, has to say, no, no, no, no, I've got a job to do too.
It might have a lot of the same elements you're talking about, which is, we're leveraging things we were born with and applying them in contexts that, uh...
All right. I would have answered that by saying that kind of interaction is learned, because people in different cultures have different interactions like that. If you cross the street in different cities and different parts of the world, people have different ways of interacting. I would say it's learned, and I would say an intelligent system can learn that too. An intelligent system can understand humans. It can understand that, just like I can study an animal and learn something about it. I can study apes and learn something about their culture and so on. I don't have to be an ape to know that. Maybe not completely, but I can understand something. So an intelligent machine can model that. That's just part of the world, just part of the interactions. The question we're trying to get at is, will the intelligent machine have its own personal agency beyond what we assign to it, or its own personal goals? Will it evolve and create these things on its own?
My confidence comes from understanding the mechanisms I'm talking about creating. This is not hand-wavy stuff. It's down in the details. I'm going to build it, and I know what it's going to look like, and I know how it's going to behave. I know the kinds of things it can do and the kinds of things it can't do. Just like when I build a computer, I know it's not going to decide, on its own, to put another register inside itself. It can't do that, no way. No matter what software it runs, it can't add a register to the computer. In the same way, when we build AI systems, we have to make choices about how we embed them. I talk about this in the book. A brain, an intelligent system, is not just a neocortex equivalent. You have to have that, but it has to have some kind of embodiment, physical or virtual. It has to have some sort of goals. It has to have some sort of ideas about dangers, about things it shouldn't do. We build safeguards into systems. We have them in our bodies; we put them into cars, right? My car follows my directions until the moment it detects I'm about to hit something; then it ignores my directions and puts the brakes on. So we can build those things in. That's a very interesting problem, how to build those in. I think where my opinion differs from most people's about the risks of AI is that people assume those things will just appear automatically and evolve, that intelligence itself begets that stuff or requires it. But it doesn't. The neocortex equivalent doesn't require this. The neocortex equivalent just says, I'm a learning system. Tell me what you want me to learn, ask me questions, and I'll tell you the answers. It's, again, like a map. A map has no intent about things, but you can use it to solve problems.
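One way to picture that separation between the modeling system and the goals and safeguards we wrap around it is a sketch like the following. It is entirely illustrative, with a made-up WorldModel and Car, not Numenta's architecture or any real vehicle stack.

```python
# Sketch of the "modeling system + embodiment + safeguards" split:
# the model only predicts; goals and safety overrides are imposed
# from outside, the way automatic braking is programmed into a car.

class WorldModel:
    """Learns and predicts; has no goals and no desire to live."""
    def predicts_collision(self, sensors):
        return sensors.get("obstacle_distance_m", 1e9) < 2.0

class Car:
    def __init__(self, model):
        self.model = model

    def step(self, driver_command, sensors):
        # A safeguard we chose to build in: it overrides the driver,
        # but only because we programmed it to, not because it "wants" to.
        if self.model.predicts_collision(sensors):
            return "BRAKE"
        return driver_command

car = Car(WorldModel())
print(car.step("accelerate", {"obstacle_distance_m": 50.0}))  # accelerate
print(car.step("accelerate", {"obstacle_distance_m": 1.0}))   # BRAKE
```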
Okay. So building, engineering the neocortex in itself is just creating an intelligent prediction system.
Modeling system.
Sorry, modeling system. You can use it to make predictions, but you can also put it inside a thing that's actually acting in this world.
Exactly. You have to put it inside something. Again, think of the map analogy, right? A map on its own doesn't do anything. It's just inert. It can learn, but it's just inert. So we have to embed it somehow in something that does something.
So what's your intuition here? You had a conversation with Sam Harris recently where you had a bit of a disagreement, and you're sticking on this point. Elon Musk and Stuart Russell kind of worry about the existential threat of AI. What's your intuition: if we engineer increasingly intelligent, neocortex-type systems in the computer, why shouldn't that be a thing we worry about?
You used the word intuition, and Sam Harris used the word intuition too. And when he used that word, I immediately stopped and said, oh, that cuts to the problem. He's using intuition. I'm not speaking about my intuition. I'm speaking about something I understand, something I'm going to build, something I am building, something I understand completely, or at least well enough to know that I'm not guessing: I know what this thing is going to do.
And I think most people who are worried have trouble separating things out. They don't have the knowledge or the understanding about what intelligence is, how it's manifest in the brain, and how it's separate from these other functions in the brain. And so they imagine it's going to be human-like or animal-like, that it's going to have the same sort of drives and emotions we have. But there's no reason for that. That's just because it's unknown, and when something's unknown, it's like, oh my God, we don't know what this is going to do, we have to be careful, it could be like us but really smarter. I'm saying, no, it won't be like us. It'll be really smarter, but it won't be like us at all. And I'm coming at that not by guessing, not by using intuition. I'm basing it on: okay, I understand how this thing works and what it does, and I can tell you that.
Okay, but to push back: I also disagree with the intuitions Sam has, but I disagree a bit with what you just said too. What's a good analogy? If you look at the Twitter algorithm in the early days, just recommender systems, you can understand how recommender systems work. What you can't understand in the early days is what happens when you apply that recommender system at scale, to thousands and millions of people, and how that can change societies. So the question is, yes, you're saying this is how an engineered neocortex works. But when you have a very useful TikTok type of service that goes viral, when your neocortex goes viral and millions of people start using it, can that destroy the world?
No.
Well, one thing I want to say is that AI is a dangerous technology. I'm not denying that. All technologies are dangerous.
Well, and AI maybe particularly so.
Okay. Am I worried about it? Yeah, I'm totally worried about it. But the component we're talking about now is the existential risk of AI, right? I want to make that distinction, because AI can be applied poorly. It can be applied in ways where people don't understand the consequences. Those are all potentially very bad things, but they're not the AI system creating an existential risk on its own. And that's the only place I disagree with other people.
Right. So I think with the existential risk thing, humans are really damn good at surviving, so killing off the human race would be very, very difficult.
Yes, but I'll go further. I don't think AI systems are ever going to try to. I don't think AI systems are ever going to say, I'm going to ignore you, I'm going to do what I think is best. I don't think that's going to happen, at least not in the way I'm talking about it.
The Twitter recommendation algorithm is an interesting example. Let's use computers as an analogy again. I build a computer. It's a universal computing machine. I can't predict what people are going to use it for. They can build all kinds of things. They can even create computer viruses, all kinds of stuff. So there's some unknown about its utility and about where it's going to go. On the other hand, I pointed out that once I build a computer, it's not going to fundamentally change how it computes. I used the example of a register, which is an internal part of a computer. Computers don't replicate, they don't evolve; the physical manifestation of the computer itself is not going to change. There are certain things it can't do, right? So you can break this into things that are possible to happen but that we can't predict, and things that are just impossible to happen unless we go out of our way to make them happen. They're not going to happen unless somebody makes them happen.
Yeah, so there are a bunch of things to say here. One is the physical aspect, where you're absolutely right: we have to build a thing for it to operate in the physical world, and you can just stop building them the moment they're not doing what you want them to do.
Or just change the design.
Or change the design. The question, and this is probably longer-term, is what happens when you automate the building. It makes a lot of sense to automate the building. There are a lot of factories doing more and more automation, going from raw resources to the final product. It's possible to imagine, because it's obviously much more efficient, a factory that creates robots that do something extremely useful for society. It could be a personal assistant. It could be your toaster, but a toaster that has a deep knowledge of your culinary preferences.
Yeah.
And that could...
Well, I think now you've got at the right thing. The real thing we need to be worried about is self-replication.
Right.
That is the thing, in the physical world.
Yeah, or even the virtual world.
Self-replication, because self-replication is dangerous. We're probably more likely to be killed by a virus, a human-engineered virus. The technology is getting to where almost anybody, well, not anybody, but a lot of people, could create a human-engineered virus that could wipe out humanity. That is really dangerous. No intelligence required, just self-replication. So we need to be careful about that. So when I think about AI, I do not think about robots building robots. Don't do that.
Well, that's because you're interested in creating intelligence. It seems like self-replication is a good way to make a lot of money.
Well, all right, but so, maybe, is editing viruses. I don't know. The point is, as a society, when we look at existential risks, the existential risks we face that we can control almost all revolve around self-replication.
Yes. The question is, I don't see a good way to make a lot of money by engineering viruses and deploying them on the world.
All right, there could be applications that are useful for that stuff. But let's separate things out.
I mean, you don't need money. You only need some terrorists who want to do it, because it doesn't take a lot of money to make viruses.
Let's just separate out what's risky and what's not risky. I'm arguing that the intelligence part of this equation is not risky, not risky at all. It's the self-replication side of the equation that's risky. And I'm not dismissing that. I'm scared as hell about it.
It's like the paperclip maximizer thing. Those are often talked about in the same conversation, but I think you're right: creating ultra-intelligent, super-intelligent systems is not necessarily coupled with arbitrarily self-replicating systems.
Yeah, and you don't get evolution unless you're self-replicating.
Yeah.
And so I think that's the argument: people have trouble separating those two out. They just think, oh yeah, intelligence looks like us, and look at the damage we've done to this planet, look how we've destroyed all these other species.
Yeah, well, we replicate. We're, what, approaching 8 billion, over 7 billion of us now.
So.
I think the idea is that the more intelligent the systems we're able to build, the more tempting it becomes, from a capitalist perspective of creating products, to create self-replicating systems.
Okay, all right, so let's say that's true. Does that mean we don't build intelligent systems? No. That means we understand the risks and we regulate them.
Yeah.
You know, look, there are a lot of things we could do as a society that have some sort of financial benefit to someone and that could do a lot of harm. And we have to learn how to regulate those things; we have to learn how to deal with those things. I would argue the opposite, actually. I would say having intelligent machines at our disposal will actually help us more in the end, because it'll help us understand these risks better, and it'll help us mitigate these risks better. There might be ways of saying, well, how do we solve climate change problems, how do we do this, how do we do that. Just like computers are dangerous in the hands of the wrong people but have been so great for so many other things, we live with those dangers. And I think we have to do the same with intelligent machines. But we have to be constantly vigilant about this idea of, A, bad actors doing bad things with them, and, B, never creating a self-replicating system. And by the way, I don't even know if you could create a self-replicating system that uses a factory; that's really hard. Nature's way of self-replicating is so amazing. It doesn't require much: you just need the thing and resources, and it goes, right? If I said to you, our goal is to build a factory that builds new factories, and it has to have an end-to-end supply chain, it has to mine the resources and get the energy, I mean, that's really hard. No one's doing that in the next hundred years.
I've been extremely impressed by the efforts of Elon Musk and Tesla to try to do exactly that. He actually, I think, states the goal as going from raw resources to the final car in one factory. That's the aim. Of course, it's not currently possible, but they're taking huge leaps in that direction.
Well, he's not the only one trying to do that. This has been a goal for many industries for a long, long time. It's difficult to do. What a lot of people do instead is have, like, a million suppliers, and they co-locate them and tie their systems together. It's a fundamentally hard thing. But that also is not getting at the issue I was just talking about, which is self-replication. Self-replication means nothing is involved other than the thing that's replicating itself. And if there are humans in the loop, it's not really self-replicating, right? Unless somehow we're duped into it.
Well, I don't necessarily agree with you there, because you've kind of claimed that AI will not say no to us.
Yeah, yeah.
I think it's a useful feature to build in, systems that sometimes say no. I'm just trying to put myself in the mind of the engineers.
Well, I gave you an example earlier, right? My car. My car turns the wheel and applies the accelerator and the brake as I say...
Until it decides there's something dangerous.
Yes. And then it doesn't do that. Now, that wasn't something it decided to do. It's something we programmed into the car, and it's a good idea, right? The question isn't whether an intelligent system will ever ignore our commands. Of course it will, sometimes. Is it going to do it because it came up with its own goals that serve its purposes and it doesn't care about our purposes? No, I don't think that's going to happen.
Okay, so let me ask you about these super-intelligent cortical systems that we engineer, and us humans. With these entities operating out there in the world, what does the most promising future look like? Is it us merging with them? How do we keep us humans around when you have increasingly intelligent beings? One of the dreams is to upload our minds into a digital space. Can we just give our minds to these systems so they can operate on them? Is there some kind of more interesting merger, or is there more?
In the third part of my book, I talked about all these scenarios. Let me just walk through them.
Sure.
The uploading-the-mind one.
Yes.
Extremely difficult to do. We have no idea how to do this even remotely right now, so it's a very long way away. But I also make the argument that you wouldn't like the result, and you wouldn't be pleased with it. It's really not what you think it's going to be. Imagine I could upload your brain into a computer right now, and now the computer is sitting there going, hey, I'm over here, great, get rid of that old bio person, I don't need him. You're still sitting here. What are you going to do? You'd say, that's not me, I'm here, right? Are you going to feel satisfied? People imagine, look, I'm on my deathbed, I'm about to expire, I push the button and I'm uploaded. But think about it a little differently. I don't think it's going to be a thing, because by the time we're able to do this, if ever, you'd have to replicate the entire body, not just the brain. I walk through the issue in the book; it's really substantial.
Do you have a sense of what makes us us? Is there a shortcut where you can save only the certain part that makes us truly us?
No. But I think that machine would feel like it's you too.
Right.
It's like having children, right? I have two daughters. They're independent people. I created them, well, partly. And just because they're somewhat like me, I don't feel like I'm them, and they don't feel like they're me. So if you split yourself, you have two people. We can come back to what consciousness is, if you want; we can talk about that. But we don't have remote consciousness. I'm not sitting there going, oh, I'm conscious of that system over there. So let's stay on our topic here. One was uploading a brain. It isn't going to happen in a hundred years, maybe a thousand, and I don't think people are going to want to do it. Then there's merging your mind with a machine, the Neuralink thing, right?
Again, really, really difficult. It's one thing to make progress controlling a prosthetic arm. It's another to have a billion, or several billion, connections and to understand what those signals mean. It's one thing to learn to think in certain patterns to make something happen. It's quite another thing to have a system, a computer, which actually knows exactly which cells it's talking to and how it's talking to them, and interacts with them in a way like that. Very, very difficult. We're not anywhere close to that.
Interesting. Can I ask a question here? For me, what makes that merger very difficult practically in the next 10, 20, 50 years is literally the biology side of it: it's just hard to do that kind of surgery in a safe way. But your intuition is that even the machine learning part of it, where the machine has to learn what the heck it's talking to, is harder still?
I think it's even harder. It's easy to do when you're talking about hundreds of signals. It's a totally different thing when you're talking about billions of signals.
So you don't think it's just a machine learning problem? You don't think it could be learned?
Well, I'm just saying, I think you'd have to have detailed knowledge. You'd have to know exactly what types of neurons you're connecting to. In the brain, there are neurons that do all different types of things. It's not like a neural network; it's a very complex organic system up here. We talked about the grid cells and the place cells earlier. You have to know what kind of cells you're talking to, and what they're doing, and how their timing works, and all this stuff, which you can't do today; there's no way of doing that, right? You're right that the biological aspect, having surgery to insert this stuff in the brain, is a problem. But let's say we solve that problem. I think the information-coding aspect is much worse. It's not like what they're doing today. Today it's simple machine learning stuff, because they're doing simple things. But if you want to merge your brain, like, I'm thinking on the internet, I've merged my brain with the machine and we're both doing things together, that's a totally different issue.
That's interesting. I tend to think that if you have a super clean signal from a bunch of neurons at the start, even if you don't know what those neurons are, that's much easier than getting the clean signal.
I think if you think about today's machine learning, that's what you would conclude. I'm thinking about what's going on in the brain, and I don't come to that conclusion. So we'll have to see. But even then, I think this is kind of a sad future. Like, do I have to plug my brain into a computer? I'm still a biological organism. I assume I'm still going to die. So what have I achieved? What have I achieved, some sort of...
Oh, I disagree. We don't know what the applications are, but it seems like there could be a lot of different ones. It's like virtual reality, a way to expand your brain's capabilities.
Yeah, fine, but you're still a biological organism.
Yes, yes.
So you're still mortal, still all of those things. So what are you accomplishing? You're making your life in this short period of time better, right? Just like having the internet made our lives better.
Yeah, yeah.
So if I think about all the possible gains we could have here, that's a marginal one. It's individual: hey, I'm better, I'm smarter. And fine, I'm not against it. I just don't think it's earth-changing.
But this was true of the internet: when each of us individuals is smarter, we get a chance to share our smartness, and we get smarter and smarter together as a collective. It's kind of like the ant colony idea.
But then why don't I just create an intelligent machine that doesn't have any of this biological nonsense? It does all the same things, everything, except don't burden it with my brain. It has a brain. It is smart. It's like my child, but it's much, much smarter than me. So I have a choice between doing some implant, some weird hybrid biological thing that's bleeding into all these problems and limited by my brain, or creating a system which is super smart, that I can talk to, that helps me understand the world, that can read Wikipedia and talk to me.
I guess the open question is what the manifestation of superintelligence looks like. You asked, why would I want to merge with AI? What's the actual marginal benefit here? If we have a super-intelligent system, how will it make our lives better?
That's a great question, but let's break it into little pieces. On the one hand, it can make our lives better in lots of simple ways. You mentioned a care robot or something that helps me do things, that cooks, whatever it does. Little things like that. We can have much better, smarter cars. We can have better agents helping us in our work environment, and things like that.
To me, that's the easy stuff, the simple stuff in the beginning. In the same way that computers made our lives better in many, many ways, AI will give us those kinds of things. To me, the really exciting thing about AI is its sort of transcendent quality with respect to humanity. We're still biological organisms. We're still stuck here on Earth. It's going to be hard for us to live anywhere else. I don't think you and I are going to want to live on Mars anytime soon. And we're flawed. We may end up destroying ourselves. It's totally possible; if not completely, we could destroy our civilizations. Let's face the facts: we have issues here. But we can create intelligent machines that can help us in various ways. For example, one example I gave, and it sounds a little sci-fi, but I believe this: if we really want to live on Mars, we'll have to have intelligent systems that go there and build the habitat for us, not humans. Humans are never going to do this. It's just too hard. But could we have a thousand or ten thousand engineering workers up there building things, terraforming? Why, sure. Maybe then we can move to Mars. And then, if we want to go around the universe, should I send my children around the universe, or should I send some intelligent machine, which is like a child that represents me and understands our needs here on Earth, and that could travel through space? So in some sense, intelligence allows us to transcend the limitations of our biology. And don't think of it as a negative thing. In some sense, my children transcend my biology too, because they live beyond me. They represent me, and they also have their own knowledge, and I can impart knowledge to them. Intelligent machines will be like that too, but not limited like us.
But the question is, there are so many ways that transcendence can happen, and the merger of AI and humans is one of those ways. So you're saying intelligent beings or systems propagating throughout the universe, representing us humans.
They represent us humans in the sense that they represent our knowledge and our history, not us individually.
Right, right. But is it just a database with a really damn good model of the world?
No, they're conscious, conscious just like us.
Okay, but different.
They're different, just like my children are different. They're like me, but they're different. These will be more different.
I guess maybe I take a very broad view of our life here on Earth. I say, why are we living here? Are we just living because we live? Are we surviving because we can survive? Are we fighting just because we want to keep going? What's the point of it?
Yeah.
Right? So to me, if I ask myself, what's the point of life, what transcends the ephemeral sort of biological experience, this is my answer: the acquisition of knowledge, to understand more about the universe and to explore. And that's partly to learn more, right? I don't view it as a terrible thing if the ultimate outcome of humanity is that we create systems that are intelligent, that are our offspring, but that are not like us at all, and we stay here and live on Earth as long as we can, which won't be forever, but as long as we can. That would be a great thing to do. It's not a negative thing.
Would you be okay, then, if the human species vanishes but our knowledge is preserved and keeps being expanded by intelligent systems?
I want our knowledge to be preserved and expanded. Am I okay with humans dying? No, I don't want that to happen. But if it does happen, what if we were sitting here as the last two people on Earth, saying, Lex, we blew it, it's all over? Wouldn't I feel better knowing that our knowledge was preserved, that we had agents that knew about it, that had left Earth? I would want that. It's better than not having that. I make the analogy of the dinosaurs. The poor dinosaurs lived for tens of millions of years. They raised their kids, they fought to survive, they were hungry, they did everything we do. And then they're all gone.
Yeah.
And if we hadn't discovered their bones, nobody would ever know they had existed. Do we want to be like that? I don't want to be like that.
Well, there's a sad aspect to that, and it's kind of jarring to think that it's possible a human-like intelligent civilization previously existed on Earth. The reason I say this is that it's jarring to think that, if they went extinct, we wouldn't be able to find evidence of them after a sufficient amount of time. Basically, if human civilization destroyed itself now, after a sufficient amount of time we'd find evidence of the dinosaurs but not of us humans. That's kind of an odd thing to think about.
Although I'm not sure we have enough knowledge about species going back billions of years; we might be able to eliminate that possibility. But it's an interesting question.
Of course, it's a similar question to whether there were lots of intelligent species throughout our galaxy that have all disappeared.
Yeah, that's super sad. Exactly: there may have been much more intelligent alien civilizations in our galaxy, and they're no longer there.
Yeah.
You actually talked about this, that humans might destroy ourselves.
Yeah.
And how we might preserve our knowledge.
Yeah.
And advertise that knowledge to others. Advertise is a funny word to use here.
PR for what?
PR for us. There's no financial gain in this.
You know, like, make it interesting from a tourism perspective. Can you describe how?
Well, there are a couple of things. I broke it down into two parts, actually three parts. One is, there are a lot of things we know. What if we ended, what if our civilization collapsed? I'm not talking about tomorrow; it could be a thousand years from now. We don't really know. But historically it would be likely at some point.
Time flies when you're having fun.
Yeah, that's a good way to put it. And then intelligent life evolved again on this planet. Wouldn't they want to know a lot about us and what we knew? But they wouldn't be able to ask us questions. So one very simple idea: how would we archive what we know? That wouldn't be that hard. Put a few satellites going around the sun, upload Wikipedia every day, that kind of thing. If we end up killing ourselves, well, it's up there, and the next intelligent species will find it and learn something. They would like that. They would appreciate that. So that's one thing.
The next thing I said is, well, what about outside our solar system? We have the SETI program. We're looking for intelligent signals from everybody. And if you do a little bit of math, which I did in the book, you ask: what if technologically intelligent species, ones that are really able to do what we're just starting to be able to do, only live for 10,000 years? Well, the chances are we wouldn't be able to see any of them, because they would have all disappeared by now. They lived for 10,000 years, and now they're gone.
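The back-of-the-envelope version of that argument goes roughly like this; the numbers here are my own illustrative assumptions, not necessarily the ones in the book.

```python
# If civilizations broadcast for a window L and arise at random times
# over the galaxy's history T, the chance that any given one is
# broadcasting right now is roughly L / T.

L = 10_000            # years a technological civilization broadcasts
T = 10_000_000_000    # ~10 billion years of galactic history
N = 100_000           # guess: how many civilizations ever arose

p_active_now = L / T                  # 1e-06 for a given civilization
expected_audible = N * p_active_now   # expected number we could hear today

print(p_active_now)       # 1e-06
print(expected_audible)   # 0.1 -> likely zero, even with 100,000 civilizations
```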
And so we're not going to find these signals being sent from these people. So I asked, what kind of signal could you create that would last a million years, or a billion years? So that someone would say, dammit, someone smart lived there. We would know that. That would be a life-changing event for us, to figure that out. Well, what we're looking for today in the SETI program isn't that. We're looking for very coded signals, in some sense. So I asked myself, what would be a different type of signal one could create? I've thought about this throughout my life, and in the book I gave one possible suggestion. We now detect planets going around other stars by seeing the slight dimming of the light as the planets move in front of them; that's how we detect planets elsewhere in our galaxy. What if we created something like that, something that just rotated around the sun and blocked out a little bit of light in a particular pattern, so that someone would say, hey, that's not a planet, that is a sign that someone was once there? You could have it beat out pi, you know, 3.14, whatever. It works from a distance, it's broadly broadcast, and it takes no continual activation on our part. That's the key, right? No one has to be sitting there running a computer and supplying it with power. It just goes on, it continues. And I argued that part of the SETI program should be looking for signals like that. And to look for signals like that, you ought to figure out: how would we create such a signal? What would we create that would persist for millions of years, that would be broadcast broadly, that you could see from a distance, that was unequivocally from an intelligent species? I gave that one example, because I don't know what else it would be.
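As a toy illustration of that transit idea (entirely my sketch, with made-up numbers): a planet dims its star at one fixed orbital period, while an artificial occulter could dim it at, say, spacings proportional to prime numbers, a pattern no single orbiting body would produce.

```python
# Toy light-curve timings: a planet transits at a fixed period, while
# an artificial occulter signals at prime-number spacings.

def planet_transits(period, horizon):
    # Natural transits: evenly spaced by the orbital period.
    return list(range(0, horizon, period))

def artificial_transits(scale, horizon):
    # Dimming events at times proportional to primes: 2, 3, 5, 7, ...
    primes = [2, 3, 5, 7, 11, 13, 17, 19]
    return [scale * p for p in primes if scale * p < horizon]

print(planet_transits(100, 1000))     # [0, 100, 200, ...] looks natural
print(artificial_transits(50, 1000))  # [100, 150, 250, 350, ...] does not
```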
And then finally: ultimately, our solar system will die at some point in time. How do we go beyond that? And I think, if it's at all possible, we'll have to create intelligent machines that travel throughout the solar system, or throughout the galaxy, and I don't think that's going to be humans. I don't think it's going to be biological organisms. So these are just things to think about. I don't want to be like the dinosaurs. I don't want us to just live and then, I don't know, that was it, we're done.
Well, there is a kind of presumption that we're going to live forever, and I think it is a bit sad to imagine that the message we send, as you talk about, is that we were once here, instead of we are here.
Well, it could be we are still here. But it's more of an insurance policy in case we're not here, you know.
Well, I don't know, but there is something to it. We don't often think about this, but it's like, whenever I record a video, and I've done this a couple of times in my life, I've recorded a video for my future self, just for personal reasons, just for fun, and it's always fascinating to think about preserving yourself for future civilizations. For me, it was preserving myself for a future me. But that's a fun little example of archival.
Well, these podcasts are preserving you and I, in a way, for the future. Hopefully well after we're gone.
But we're sitting here talking about this, and you don't often think about the fact that you and I are going to die, and that somebody might be watching this ten years after we're gone.
You know, sometimes I do.
I'm here because I want to talk about ideas.
Right.
And these ideas transcend me,
and they transcend this time in our planet.
We're talking here about ideas
that could be around a thousand years from now,
or a million years from now.
When I wrote my book,
I had an audience in mind,
and one of the clearest audiences was
people reading this a hundred years from now.
I said, how do I make this book relevant
to someone reading it a hundred years from now?
What would they want to know that we were thinking back then?
What would make it, you know,
still an interesting book then?
I'm not sure I can achieve that,
but that was how I thought about it
because these ideas, especially in the third part
of the book, the ones we were just talking about,
you know, these crazy, they sound like crazy ideas,
about, you know, storing our knowledge,
and, you know, merging our brains with computers,
and sending, you know, our machines out into space,
it's not gonna happen in my lifetime.
And it may not happen in the next hundred years.
They may not happen for a thousand years, who knows?
But we have the unique opportunity right now,
we, and I mean other people like us,
to sort of, at least propose the agenda
that might impact the future like that.
It's a fascinating way to think, both about writing
and creating:
try to create ideas, try to create things that hold up in time.
Yeah, you know, I'm just thinking, how the brain works,
we're gonna figure that out once.
That's it, it's gonna be figured out once.
And after that, that's the answer.
And people will study that thousands of years from now.
We still, you know,
venerate Newton and Einstein,
because ideas are exciting,
even well into the future.
Well, the interesting thing is, like, big ideas,
even if they're wrong, are still useful.
Like, yeah, especially if they're not completely wrong.
Right, right.
Newton's laws are not wrong.
They've just been refined.
The newer science is better.
Well, yeah, I mean, when you and I
were talking about physics, I wonder if we'll ever achieve that kind of clarity
in understanding complex systems, and this particular manifestation of
complex systems, which is the human brain.
I'm totally optimistic we can do that.
I mean, we're making progress at it.
I don't see any reason why we can't completely.
I mean, completely understand in the sense, we don't really completely understand what all
the molecules in this water bottle are doing, but we have laws that sort of capture it pretty
well.
So we'll have that kind of understanding.
I mean, it's not like you're going to have to know what every neuron in your brain is doing.
But enough to, first of all, to build it.
Yeah.
And second of all, to do, you know, do what physics does, which is like have concrete experiments
where we validate.
We're, this is happening right now.
Like, it's not some future thing.
I'm very optimistic about it because I know about our work and what we're doing.
We'll have to prove it to people.
But I consider myself a rational person.
And until fairly recently, I wouldn't have said that.
But right now, where I'm sitting right now,
I'm saying, this is gonna happen.
There's no big obstacles to it.
We finally have a framework for understanding
what's going on in the cortex,
and that's liberating.
It's like, oh, it's happening.
So I can't see why we wouldn't be able to understand it.
I just can't.
Okay.
So, I mean, on that topic,
let me ask you to play devil's advocate.
Is it possible for you to imagine
looking a hundred years from now at your book:
in which ways might your ideas be wrong? Oh, I worry about this all the time.
Yeah, it's still useful. Yeah. Yeah.
I think, you know, I can best relate it to things I'm worried about
right now. So we talked about this voting idea. It's happening. There's no question it's
happening. But there are
enough things I don't know about it
that it might be working in different ways than I think.
And so I'm thinking about, what's voting, who's voting,
where are the representations?
I talked about, you have 1,000 models of a coffee cup, like that. That
could turn out to be wrong, because
maybe there are 1,000 models that are sub-models, but
not really each a full model of the coffee cup. I mean, these are all sort of on the edges,
things that I present as, like, oh, it's so simple and clean. Well, it's not that simple. It's
always going to be more complex. And there are parts of the theory where I don't understand
the complexity well. So I think the idea that the brain
is a distributed modeling system
is not controversial at all, right?
That's well understood by many people.
The question then is, is each cortical column
an independent modeling system?
I could be wrong about that.
I don't think so, but I worry about it.
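To make the voting idea concrete, here is a minimal sketch, my own illustration rather than Numenta's code: each column holds a belief distribution over candidate objects, and voting multiplies the distributions together so the population converges on a shared answer.

import numpy as np

# Minimal sketch of cortical-column "voting" (illustrative only).
# Each column, from its own local input, assigns a probability to
# each candidate object; voting combines the columns' beliefs.
objects = ["coffee cup", "soda can", "stapler"]

column_beliefs = np.array([
    [0.5, 0.4, 0.1],   # column 1: feels a curved surface -- cup or can
    [0.4, 0.5, 0.1],   # column 2: also ambiguous between cup and can
    [0.7, 0.1, 0.2],   # column 3: feels a handle -- strongly "cup"
])

consensus = column_beliefs.prod(axis=0)   # multiply the votes...
consensus /= consensus.sum()              # ...and renormalize

for name, p in zip(objects, consensus):
    print(f"{name}: {p:.2f}")
# One column's handle evidence pulls the whole population to
# "coffee cup" even though most columns were individually unsure.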
My intuition, not even thinking about
why you could be wrong, is the same intuition I have about physics, like
string theory: that we have this desire for a clean explanation, and a hundred years
from now, intelligent systems might look back at us and laugh at how we tried to get rid
of the whole mess by having a simple explanation, when the reality is way messier.
And, in fact, that it's impossible to understand, you can only build it.
It's like this idea with complex systems and cellular automata:
you can only launch the thing, you cannot understand it. Yeah, I think that, you know, the history of science suggests
that's not likely to occur.
The history of science suggests that,
look, as a theorist, and we're theorists,
you look for simple explanations, right?
Fully knowing that whatever simple explanation
you're gonna come up with is not gonna be completely correct.
I mean, it can't be.
I mean, there's just more complexity.
But that's the role theorists play.
They give you a framework on which you can now talk about a problem,
and okay,
now we can start digging into more details.
The best frameworks stick around while the details change.
You know, again, the classic example is Newton and Einstein, right?
You know, Newton's theories are still used.
They're still valuable.
They're still practical.
They're not, like, wrong.
They've just been refined.
Yeah, but that's in physics.
It's not obvious, by the way.
It's not obvious for physics either
that the universe should be such
that it's amenable to these simple theories.
But so far, it appears to be.
As far as we can tell.
Yeah, I mean, but as far as we can tell.
But there's also an open question,
whether the brain is amenable to such clean theories.
And not just the brain, but intelligence.
Well, I would take intelligence out of it.
Just say, you know, the brain. Okay.
The evidence we have suggests that the human brain is at the same time extremely messy and
complex, but there are some parts that are very regular and structured.
That's why we started with the neocortex.
It's extremely regular in its structure.
And unbelievably so.
And then, I mentioned earlier, the other thing is
its universal abilities. It is so flexible, it learns so many things. We haven't figured out
what it can't learn yet. We don't know, we haven't figured that out yet. But it learns things that
it never evolved to learn. So those give us hope. That's why I went into this field, because
I said, you know, with this regular structure, and it's doing this amazing number of things, there's got to be some underlying principles that are common.
And other scientists have come to the same conclusions.
And so it's promising.
It's promising.
And whether the theories play out exactly this way or not, that is the role that theorists play. And so far, it's worked out well, even though, you know, maybe we don't understand all the laws of physics.
But so far, the theories we have have been pretty damn useful.
So it seems that the existential risk we should worry about may not necessarily be artificial intelligence, at least not to the degree that we should be worried about the risk from human nature itself.
What aspect of human nature worries you the most, in terms of the survival of the human species?
I'm disappointed in humanity, in humans.
I mean, I'm one, so I'm disappointed myself too.
It's kind of a sad state.
There's two things that disappoint me.
One is how difficult it is for us
to separate our rational component of ourselves
from our evolutionary heritage,
which is not always pretty.
Rape is an evolutionarily good strategy for reproduction.
Murder can be at times, too.
Making other people miserable at times is a good strategy for reproduction.
Now that we know that, and yet, you and I can have this very rational discussion
and talk about, you know, intelligence and brains and life and so on.
It seems like it's so hard. It's just a big transition for humans,
all humans, to make: to be like,
let's pay no attention to all that ugly stuff over here.
Let's just focus on the issues.
What's unique about humanity is our knowledge
and our intellect.
But the fact that we're striving
is in itself amazing, right?
The fact that we're able to overcome that part
and it seems like we're becoming
more and more successful at it.
That is the optimistic view and I agree with you.
But I worry about it.
I'm not saying I'm not worried about it.
I think maybe that was your question.
I still worry about it.
You know, we could be gone tomorrow, because some terrorists could get nuclear bombs and
blow us all up. Who knows, right?
The other thing I'm disappointed about, and I understand it, I guess you can't really
be disappointed,
it's just a fact, is that we're so prone to false beliefs. We have a model in our head.
The things we can interact with directly, physical objects, people, that model is pretty
good.
And we can test it all the time.
I touch something, I look at it, I talk to you, I see it, my model is correct.
But so much of what we know is stuff I can't directly interact with.
I only know about it because someone told me about it.
And so we're inherently prone to having false beliefs, because if I'm told something,
how am I going to know whether it's right or wrong? And so then we have the scientific process,
which says we are inherently flawed, so the only way we can
get closer to the truth is by looking for contrary evidence.
Yeah, like this conspiracy theory that scientists keep telling me about, that
the earth is round.
As far as I can tell, when I look out, it looks pretty flat.
Yeah.
So yeah, there is a tension there.
But I also tend to believe that we haven't figured out most of this thing, right?
Most of nature around us is a mystery.
Does that worry you? I mean, is it like, oh, that's more pleasure, more to figure out, right?
Yeah, that's exciting. But I'm saying, like, there's going to be a lot of, quote unquote, wrong ideas.
I mean, I've been thinking a lot about engineering systems like social networks and so on.
And I've been worried about censorship and thinking through all that kind of stuff
because there's a lot of wrong ideas.
There's a lot of dangerous ideas.
But then I also read history and see what happens when you censor ideas that are wrong.
Now, this could be a small-scale censorship, like a young grad student
who raises their hand and says some crazy idea.
A form of censorship could be, I shouldn't use the word censorship, but, like, disincentivizing
them: no, no, no, this is the way it's always been done.
Yeah, you're a foolish kid, don't think that.
So in some sense, those wrong ideas most of the time end up being wrong, but sometimes
they turn out to be right. I agree with you. And I don't like the word censorship either.
And at the very end of the book, I ended with a sort of plea, or a recommended course of action.
And the best way I know how to deal with this issue that you bring up is, if everybody understood,
as part of your upbringing in life, something about how your brain works: that it builds a model of the world,
and that the model is not the real world.
It's just a model.
And it's never going to reflect the entire world.
And it can be wrong.
And it's easy to be wrong.
And here's all the ways you can get a wrong model in your head, right?
It's not prescribing what's right or wrong.
It's just understanding that process.
If we all understood this process,
then when we got together and you say,
I disagree with you, Jeff,
and I say, Lex, I disagree with you,
That, at least we understand that we're both
trying to model something.
We both have different information
which leads to our different models.
And therefore, I shouldn't hold it against you
and you shouldn't hold it against me.
And we can at least agree that,
well, what can we look for that's common ground
to test our beliefs? As opposed to, so much of how
we raise our kids is on dogma, which is: this is a fact,
and this is a fact, and these people are bad.
Whereas if everyone knew just to be skeptical of every belief,
and why and how their brains do that,
I think we might have a better world.
Do you think the human mind is able to comprehend reality? You talk about creating models
that are better and better. How close do you think we get to reality? The wildest idea is,
like Donald Hoffman's, saying we're very far away from reality.
Do you think we're getting close to reality?
Well, it depends on how you define reality. We have a model of the world that's
very useful. For basic goals. Well, for our survival and our pleasure, right? So that's
useful. I mean, it's really useful.
Oh, we can build planes, we can build computers, we can do these things, right?
I don't know the answer to that question. I think that's part of the question we're trying to
figure out, right? Like, you know, obviously, if we end up with a theory of everything that really
is a theory of everything, and all of a sudden everything comes into play and there's no room for something else, then you might feel like we have a good
model of the world.
Yeah, but if you have a theory of everything and somehow, first of all, you'll never be
able to really conclusively say it's a theory of everything, but say somehow we are very
damn sure it's a theory of everything. We understand what happened at the Big Bang, and the entirety of the physical process.
I'm still not sure that gives us an understanding of the next many layers of the hierarchy of abstractions that form.
Well, also, what if string theory turns out to be true?
And then you say, well, we have no reality, no model of what's going on in those other
dimensions that are wrapped in on each other.
Right?
Or the multiverse.
You know, I honestly don't know how, for us, for human interaction, for ideas of intelligence,
it helps us to understand that we're made up of vibrating strings that are, like, ten
to the whatever times smaller than us.
I don't know. You could probably build better weapons,
better rockets, but you're not going to be able to understand intelligence.
I guess, maybe better computers.
No, you won't be able to. I think it's more purely knowledge.
It might lead to a better understanding of the beginning of the universe, right?
A better understanding of, I don't know.
I guess, I think the acquisition of knowledge has always been one where you pursue it
for its own pleasure, and you don't always know
what is going to make a difference.
Yeah, you're pleasantly surprised
by the weird things you find.
Do you think, for the neocortex in general,
do you think there's a lot of innovation
to be done on the machine side?
You use the computer as a metaphor quite a bit.
Are there different types of computers
that would help us build intelligent machines?
I mean, what are the manifestations
of intelligent machines going to be like?
Yeah.
Oh, no, it's gonna be totally crazy.
We have no idea how this is gonna play out yet.
You can already see this.
Today, of course, we model these things
on traditional computers,
and now GPUs are really popular
with neural networks and so on.
But there are companies coming up
with fundamentally new physical substrates
that are just really cool.
I don't know if they're going to work or not. But I think
there'll be decades of innovation here. Yeah, totally. Do you think the final thing will
be messy, like our biology is messy? Or do you think it's the old bird versus airplane question?
Or do you think we could just build airplanes
that fly way better than birds,
in the same way we could build an electronic
neocortex?
Yeah.
You know, can I riff on the bird thing a bit?
Cause I think it's interesting.
People really misunderstand this.
The Wright brothers,
the problem they were trying to solve was controlled flight, how to turn an airplane.
Not how to propel an airplane. They weren't worried about that. At that time,
there were already wing shapes, which they had from studying birds. There were already gliders that
could carry people. The problem was, if you put a rudder on the back of a glider and try to turn it,
the plane falls out of the sky. So the problem was, how do you control
flight? And they studied birds. They actually had birds in captivity. They watched birds in
wind tunnels, they observed them in the wild, and they discovered the secret was that birds twist their wings
when they turn. And so that's what they did on the Wright Flyer. They had these sticks
you would use to twist the wings, and that was their innovation, not the propeller. And today,
airplanes still twist their wings.
We don't twist the entire wing.
We just twist the tail end of it,
the flaps, which is the same thing.
So today's airplanes fly on the same principles
as birds, which we observed.
So everyone gets that analogy wrong.
But let's step back from that, right?
Once you understand the principles of flight,
you can choose how to implement them.
Yeah. No one's going to use bones and feathers and muscles, but airplanes do have wings,
and we don't flap them, we have propellers. So when we understand the principles of the computation that
goes on in modeling the world in a brain, and we understand those principles clearly,
we have choices on how to implement them, and some of them will be biological-like and some won't.
And, but I do think there's going to be a huge amount of innovation here.
Just think about the innovation in the computer.
They had to invent the transistor, the silicon chip.
They had to invent, you know, software,
memory systems. It's going to be similar here. Well, it's interesting that the effectiveness of deep
learning for specific tasks is driving a lot of innovation in the hardware, which may have effects
for actually allowing us to discover intelligent systems that operate very differently from,
or are much bigger than, deep learning. Yeah, interesting.
So, ultimately, it's good to have an application that's making our life better now, because
the capitalist process, if you can make money, that works.
I mean, the other way, Neil deGrasse Tyson writes about this, the other way to
fund science, of course, is through the military, so, like, conquest.
So.
Here's one thing we're doing in this regard.
So we believe we have a series
of these biological principles,
and we can see how to build intelligent machines with them.
But we've also decided to apply some of these principles
to today's machine learning techniques.
One principle, we didn't talk about this one,
is sparsity: in the brain,
most of the neurons are inactive at any point in time, and the connectivity is sparse, and
that's different from deep learning networks.
So we've already shown that we can speed up existing deep learning networks anywhere
from a factor of 10 to a factor of 100, I mean, literally 100, and make them more robust at the same time.
So this is commercially very, very valuable.
And so if we can prove this actually in the largest systems
that are commercially applied today,
there's a big commercial desire to do this.
Well, sparsity is something that doesn't run really well
on existing hardware.
It doesn't really run well on GPUs or on CPUs. And so that
would be a way of sort of bringing more brain principles into existing systems on a commercially
valuable basis.
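As a rough sketch of what that activation sparsity means, my illustration assuming a simple k-winners-take-all rule rather than Numenta's actual method: keep only the top k activations in a layer and zero the rest, so most units contribute nothing on any given input.

import numpy as np

# Rough sketch of activation sparsity via k-winners-take-all
# (illustrative; the real speedups depend on kernels and hardware
# that actually skip the zeros, which GPUs and CPUs do poorly today).
def k_winners(x, k):
    # Keep the k largest activations, zero out the rest.
    out = np.zeros_like(x)
    idx = np.argpartition(x, -k)[-k:]   # indices of the top-k units
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(0)
dense = rng.standard_normal(1000)       # a dense layer activation
sparse = k_winners(dense, k=50)         # only 5% of units stay active

print((sparse != 0).mean())             # -> 0.05
# With 5% activity and sparse weights, most multiply-accumulates
# involve a zero and could in principle be skipped entirely.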
Another thing we think we can do is use the dendrites. I talked earlier about the prediction occurring inside a neuron. That basic property can be applied to
existing neural networks and allow them to learn continuously, which is something they don't do today.
The dendritic spikes that you talked about?
Yeah, well, we wouldn't model the spikes themselves, but the idea that, you know,
today's neural networks use what are called point neurons, a very simple model of a neuron.
By adding dendrites to them, just one more level of complexity that's in biological
systems, you can solve problems in continual learning and rapid learning.
So we're trying to take, we'll see if we can
do it, we're trying to bring the existing field of machine learning commercially along
with us.
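As a cartoon of the point-neuron versus dendrites contrast, my sketch of the general idea rather than Numenta's model: give each neuron several independent dendritic segments, and learn a new context by adding a segment instead of overwriting existing weights, which is one simple way continual learning can fall out.

import numpy as np

# Cartoon contrast between a point neuron and a neuron with
# dendritic segments (an illustrative sketch, not Numenta's model).
class DendriticNeuron:
    def __init__(self, threshold=0.8):
        self.segments = []   # each segment detects one learned context
        self.threshold = threshold

    def predicted(self, context):
        # The cell is "predicted" (depolarized) if ANY segment matches.
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        return any(cos(seg, context) > self.threshold for seg in self.segments)

    def learn(self, context):
        # Continual learning: store a NEW segment for a novel context
        # instead of overwriting old weights, so nothing is forgotten.
        if not self.predicted(context):
            self.segments.append(context.copy())

rng = np.random.default_rng(1)
neuron = DendriticNeuron()
ctx_a, ctx_b = rng.standard_normal(32), rng.standard_normal(32)
neuron.learn(ctx_a)
neuron.learn(ctx_b)
print(neuron.predicted(ctx_a), neuron.predicted(ctx_b))   # True True
# A point neuron has a single weight vector, so learning ctx_b would
# perturb its response to ctx_a; separate segments avoid that.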
You brought up this idea of keeping the field funded commercially as we move towards
the ultimate goal of a true AI system. Even small innovations on neural networks are really,
really exciting.
The point neuron seems like such a trivial model of the brain, and applying different insights,
just even, like you said, continuous learning, or making it more asynchronous, or more dynamic,
or more robust, or incentivizing sparsity somehow.
Yeah. Well, if you can make things 100 times faster, then there's plenty of incentive.
People are spending millions of dollars, you know, just training some of these networks now,
these transformer networks.
Let me ask you a big question for young people listening
to this today, in high school and in college.
What advice would you give them in terms of which career path to take, and
maybe just about life in general?
Well, in my case,
I
didn't start life with any kind of goals.
When I was going to college, it was like, oh, what do I study? Well, maybe I'll do some electrical engineering stuff, you know.
I wasn't like, you know, today you see
some of these young kids who are so motivated,
like, I'm gonna change the world, you know, whatever.
And, but then I did fall in love with something,
besides my wife, but I fell in love with this,
like, oh my God, it would be so cool
to understand how the brain works.
And then I said to myself, that's the most
important thing I could work on.
I can't imagine anything more important, because if you
understand how the brain works, you can build intelligent machines, and they
could help figure out all the other big questions of the world, right?
So then I said, I want to understand how it works.
So I fell in love with this idea and I became passionate about it.
And this is, you know, a trope people say this, but it's true. Because I was passionate about it,
I was able to put up with so much crap.
You know, there was always
some person who said, you can't do this.
I was a graduate student at Berkeley
when they said, you can't study this problem.
No one's gonna solve this,
or you can't get funded for it.
Then I wanted to do mobile computing,
and people said,
you can't do that, you can't build a cell phone.
You know.
You know, so, but all along, I kept being motivated
because I wanted to work on this problem.
I said, I want to understand how the brain works.
And I told myself, I've got one lifetime,
I'm gonna figure it out, do the best I can.
So by having that, because, you know,
as you point out, it's really hard to do these things.
There are just so many downers along the way, so many obstacles
in your way. Yeah. I'm sitting here happy all the time, but trust me, it's not
always like that.
But that's, I guess, the happiness, the passion, is a prerequisite for surviving the
whole thing.
Yeah, I think so. I think that's right. And so I don't want to say to someone,
you know, you need to find a passion and do it.
No, maybe you won't find one.
But if you do find something you're passionate about,
then you can follow it as far as your passion
will let you put up with it.
Do you remember how you found it?
How the spark happened?
Why it did, specifically for me?
Yeah, because, like you said, it happened almost, like, later.
And by later, I mean, like, not when you were five.
Yeah.
You didn't really know, and then all of a sudden you fell in love with the idea.
Yeah, yeah.
There were two separate events that compounded one another.
One, when I was probably a teenager, it might have been 17 or 18,
I made a list of the most interesting problems I could think of.
First was, why does the universe exist?
It seems like not existing is more likely.
The second one was, well, given that it exists,
why does it behave the way it does?
The laws of physics, why is it E equals mc squared,
not mc cubed? You know, that kind of question.
The third one was, what's the origin of life?
And the fourth one was, what's intelligence?
And I stopped there. I said, well, that's probably
the most interesting one. And I put that aside as a teenager.
But then, when I was 22, it was 1979, I was reading the September issue of Scientific American,
which was all about the brain.
And the final essay was by Francis Crick,
of DNA fame, who had by then taken his interest
to studying the brain.
And he said, you know, there's something wrong here.
He says, we've got all this data,
all these facts.
This was 1979.
All these facts about the brain.
Tons and tons of facts about the brain.
Do we need more facts?
Or do we just need to think about a way
of rearranging the facts we have?
Maybe we're just not thinking about the problem correctly.
You know, as he says, this shouldn't be like this, you know?
So I read that and I said, wow.
I said, I don't have to become like
an experimental neuroscientist.
I could just look at all those facts
and try to become a theoretician
and try to figure it out.
And I said, that, I felt like it was something
I would be good at.
I said, I wouldn't be a good experimentalist.
I don't have the patience for it.
But I'm a good thinker and I love puzzles.
And this is like the biggest puzzle in the world. This is the biggest puzzle all time.
And I got all the puzzle pieces in front of me. Damn, that was exciting.
And there's something, obviously, you can't force it. It just kind of sparked
this passion. And I've had that a few times in my life,
just something that, like you say, grabs you.
Yeah.
I thought that was something that was both important and I could make a contribution to. Yeah.
And so all of a sudden, it felt like, oh, like, it gave me purpose in life.
Yeah.
You know, I honestly don't think it has to be as big as one of those four questions.
No, no, but you can find those things in the smallest.
Oh, absolutely.
Absolutely.
David Foster Wallace
said, like, the key to life is to be unboreable.
I think it's very possible to find
that intensity of joy in the smallest thing.
Absolutely.
I'm just, you asked me for my story.
Yeah, yeah.
No, but I'm actually speaking to the audience.
Yeah.
It doesn't have to be those four.
You happened to get excited by one
of the bigger questions of the universe, but even the smallest
things. Watching the Olympics now: just giving your life over to the
study and the mastery of a particular sport is fascinating. And if it sparks joy and passion,
you're able to, in the case of the Olympics, basically
suffer for, like, a couple of decades to achieve it.
I mean, you could find joy and passion just being a parent.
I mean, yeah, the parenting one is funny.
So I was, not always, but for a long time, against kids and getting married and stuff.
Especially, it has to do with the fact that I've seen a lot of people that I respect get a whole
other level of joy from kids. And, you know, at first you're thinking,
well, I don't have enough time in the day, right? If I have this
passion to solve intelligence, which is true, how is this kid situation going to help me?
But then you realize that, you know, like you said, there are things that spark joy,
and it's very possible that kids can provide an even greater, or deeper, or more meaningful joy
than those bigger questions.
Yeah.
Or they enrich each other.
And that, when I was younger,
was probably a counterintuitive notion,
because there's only so many hours in the day,
but then life is finite,
and you have to pick the things that give you joy.
Yeah.
But you also, you understand you can be patient, too.
I mean, life is finite, but we do have, you know,
whatever, 50 years or something.
It's pretty long, yeah.
So, in my case, you know, I had to give up on my dream of neuroscience, because
I was a graduate student at Berkeley and they told me I couldn't do this and I couldn't
get funded.
And so, you know, I went back into the computing industry for a number of years.
I thought it would be four, but it turned out to be a lot more.
But I said, I'll come back. I'm definitely
going to come back. I'm going to do this computer stuff for a while, but I'm
definitely coming back. Everyone knows that.
And it's the same thing with raising
kids. Yeah, you have to spend a lot of time with your kids. It's fun, enjoyable.
But that doesn't mean you have to give up on other dreams.
It just means that you may have to wait a week or two to work on that next idea.
Well, you talked about the
darker side, the disappointing side, of human nature that we're hoping to
overcome so that we don't destroy ourselves. I tend to
put a lot of value in the broad, general concept of love,
of the human capacity for compassion for each other, for just kindness,
whatever that longing of human-to-human connection is. It connects back to
our initial discussion.
I tend to see a lot of value in this collective intelligence aspect.
I think some of the magic of human civilization happens there. A party is not as fun when
you're alone.
Yeah, I totally agree with you on these issues.
Do you think, from a neocortex perspective, what role does love play in the human consciousness?
Well, those are two separate things. From a neocortex point of view,
I don't think it impacts our thinking about the neocortex.
From a human condition point of view, I think it's core.
I mean, we get so much pleasure out of loving people and helping people.
So I'll chalk that up to old-brain stuff, and maybe we can pin it on evolution
if you want.
That's fine.
It doesn't impact how we think about how we model the world,
but from a humanity point of view, I think it's essential.
Well, I tend to give it to the new brain,
and I also tend to think some aspects of that
need to be engineered into AI systems,
both in their ability to have compassion for the humans, and in their ability to maximize
love in the world between humans.
I'm thinking about social networks: whenever there's a deep integration between
AI systems and humans, there are specific applications where, I think, that's something that's often
not talked about in terms of the metrics
you try to maximize in a system.
It seems like one of the most powerful things in societies is the capacity to love.
It's fascinating. I think it's a great way to think about it.
I have been thinking more about these fundamental mechanisms in the brain, as opposed to the social interaction between humans and AI systems in the future.
And if you think about that, you're absolutely right. But that's a complex system.
I can have intelligent systems that don't have that component, if they're not interacting
with people.
You know, they're just running something, or building something someplace.
I don't know.
But if you think about interacting with humans, yeah, it has to be engineered
in there.
I don't think it's going to appear on its own.
That's a good question.
Yeah. Well, we'll leave it open.
In terms of, from a reinforcement learning perspective,
whether the darker sides of human nature
or the better angels of our nature,
win out statistically speaking, I don't know.
I tend to be optimistic and hope that love wins out in the end. You've done a lot of incredible stuff, and your book is
driving towards this fourth question that you started with, on the nature of
intelligence. What do you hope your legacy is, for people reading it a hundred years from
now? How do you hope they remember your work? How do you hope they remember this book?
Well, I think, as an entrepreneur or a scientist,
or any human who's trying to accomplish some things,
I have a view that really all you can do
is accelerate the inevitable.
It's like, you know, if we didn't study the brain,
someone else would study the brain. If Elon didn't make electric cars, someone else
would do it eventually. And if Thomas Edison didn't invent the light bulb, we wouldn't be
using candles today. So what you can do as an individual is accelerate something that's
beneficial and make it happen sooner than it otherwise would. That's really all you can do. You can't create a new reality that wasn't gonna
happen. So from that perspective, I would hope that our
work, not just mine, but our work in general, people would look back and say, hey, they really helped make this
better future happen sooner.
They helped us understand the nature of false beliefs sooner than we would have otherwise.
Or, now we're so happy that we have these intelligent machines doing
these things, helping us, and maybe they solved the climate change problem, and they
made it happen sooner.
So I think that's the best I would hope for.
Someone will say, those guys just moved the needle forward a little bit in time.
Well, it feels like the progress of human civilization
has a lot of possible trajectories. And if you have individuals that accelerate
towards one direction, that helps steer human civilization.
So I think, in a long stretch of time,
all trajectories will be traveled,
but I think it's nice for this particular civilization
on Earth to travel down a good one.
Yeah, well, I think you're right.
I mean, look, take the whole period of,
you know, World War II, the Nazis, or something like that.
That was a bad misstep, right?
We went over there for a while,
but, you know, there is the optimistic view about life,
that ultimately it does converge in a positive way.
It progresses ultimately, even if we
have years of darkness.
So yeah, so I think, perhaps,
accelerating the positive could also
mean eliminating some bad missteps along the way, too. But I'm an optimist in that way.
Despite us talking about the end of civilization, I think we're going to live for a long time.
I hope we are. I think our society in the future is going to be better. We're going to have
less discord. We're going to have fewer people killing each other. We'll learn to
live in some sort of way that's compatible with the carrying capacity of the Earth.
I'm optimistic these things will happen. And all we can do is try to get there sooner. And at the very least, if we do destroy ourselves, we'll have a few satellites.
Hopefully.
That will tell alien civilizations that we were once here.
Or maybe our future, you know, future inhabitants of Earth. You know, imagine the Planet of the Apes scenario: we kill ourselves a million years from now, or a billion years from now, and there's another species on the planet.
Curious creatures wondering who was once here.
Jeff, thank you so much for your work, and for what you do, I think, in a very broad sense, for humanity.
Thanks, Jeff.
All right.
Pleasure.
Thanks for listening to this conversation with Jeff Hawkins.
And thank you to Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist.
Check them out in the description to support
this podcast. And now, let me leave you with some words from Albert Camus. An intellectual
is someone whose mind watches itself. I like this because I'm happy to be both halves,
the watcher and the watched. Can they be brought together? This is a practical
question. We must try to answer it. Thank you.