Lex Fridman Podcast - Noam Chomsky: Language, Cognition, and Deep Learning
Episode Date: November 29, 2019

Noam Chomsky is one of the greatest minds of our time and is one of the most cited scholars in history. He is a linguist, philosopher, cognitive scientist, historian, social critic, and political activist. He has spent over 60 years at MIT and recently also joined the University of Arizona. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

00:00 - Introduction
03:59 - Common language with an alien species
05:46 - Structure of language
07:18 - Roots of language in our brain
08:51 - Language and thought
09:44 - The limit of human cognition
16:48 - Neuralink
19:32 - Deepest property of language
22:13 - Limits of deep learning
28:01 - Good and evil
29:52 - Memorable experiences
33:29 - Mortality
34:23 - Meaning of life
Transcript
The following is a conversation with Noam Chomsky.
He's truly one of the great minds of our time.
And he's one of the most cited scholars
in the history of our civilization.
He has spent over 60 years at MIT
and recently also joined the University of Arizona,
where we met for this conversation.
But it was at MIT about four and a half years ago
when I first met Noam.
In my first two days there, I remember
getting into an elevator at the Stata Center, pressing the button for whatever floor, looking
up and realizing it was just me and Noam Chomsky riding the elevator. Just me and one of
the seminal figures of linguistics, cognitive science, philosophy, and political thought
in the past century, if not ever. I tell that silly story because I
think life is made up of funny little defining moments that you never forget for reasons
that may be too poetic to try and explain. That was one of mine.
Noam has been an inspiration to me and millions of others. It was truly an honor for me to
sit down with him in Arizona. I traveled there
just for this conversation. And in a rare, heartbreaking moment, after everything was set
up and tested, the camera was moved and accidentally the recording button was pressed, stopping
the recording. So I have good audio of both of us, but no video of
Noam. Just the video of me and my sleep-deprived but excited face that
I get to keep as a reminder of my failures. Most people just listen to this audio version
for the podcast as opposed to watching it on YouTube, but still it's heartbreaking for
me. I hope you understand and still enjoy this conversation as much as I did. The depth
of intellect that Noam showed, and his willingness to truly listen to me, a silly-looking
Russian in a suit, was humbling and something I'm deeply grateful for.
As some of you know, this podcast is a side project for me. My main journey and dream
is to build AI systems that do some good for the world.
This latter effort takes up most of my time, but for the moment has been mostly private.
But the former, the podcast, is something I put my heart and soul into, and I hope you
feel that, even when I screw things up.
I recently started doing ads at the end of the introduction.
I'll do one or two minutes after introducing the episode and never any ads in the middle that break the flow of the conversation. I hope
that works for you and it doesn't hurt the listening experience.
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it
5 stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter,
@lexfridman, spelled F-R-I-D-M-A-N.
This show is presented by CashApp, the number one finance app in the App Store.
I personally use CashApp to send money to friends, but you can also use it to buy, sell,
and deposit Bitcoin in just seconds.
CashApp also has a new investing feature.
You can buy fractions of a stock, say $1
worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing,
a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support
one of my favorite organizations, called FIRST, best known for their FIRST Robotics and
LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries
and have a perfect rating on Charity Navigator, which means the donated money is
used to maximum effectiveness.
When you get Cash App from the App Store or Google Play and use code LexPodcast,
you'll get $10, and Cash App will also donate $10 to FIRST, which
again is an organization that I've personally seen inspire girls and boys to dream of
engineering a better world.
And now, here's my conversation with Noam Chomsky.
I apologize for the absurd philosophical question, but if an alien species were to visit
Earth, do you think we would be able to find a common
language or protocol of communication with them?
There are arguments to the effect that we could.
In fact, one of them was Marvin Minsky's.
Back about 20 or 30 years ago, he carried out a brief experiment with a student of his, Dan Bobrow.
They essentially ran the simplest possible Turing machines, just freely, to see what would happen.
And most of them crashed, either got into an infinite loop or just stopped. The few that persisted essentially
gave something like arithmetic.
And his conclusion from that was that if some alien species
developed higher intelligence, they would at least have
arithmetic.
They would at least have what the simplest computer would do. And in fact, he didn't know that at the time, but the core
principles of natural language are based on operations which yield something like arithmetic in the limiting case and the minimal case.
So it's conceivable that a mode of communication could be established based on the core properties of human language
and the core properties of arithmetic, which may be universally shared. So it's conceivable.
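To make Minsky's experiment concrete, here is a minimal sketch of the idea, assuming a two-state, two-symbol setup; this is my own reconstruction for illustration, not Minsky's actual experiment, and the parameters (state count, tape size, step bound) are arbitrary choices.

```python
# A minimal sketch: enumerate every tiny two-state Turing machine,
# run each from a blank tape, and tally how they end up.
from itertools import product

STATES = [0, 1]          # non-halting states; HALT is a pseudo-state
SYMBOLS = [0, 1]
MOVES = [-1, 1]          # move head left or right
HALT = -1

def run(machine, max_steps=100, tape_len=32):
    """Run one machine from a blank tape and report how it ends."""
    tape = [0] * tape_len
    pos, state = tape_len // 2, 0
    seen = set()
    for _ in range(max_steps):
        config = (state, pos, tuple(tape))
        if config in seen:
            return "looped"            # exact configuration repeated
        seen.add(config)
        write, move, nxt = machine[(state, tape[pos])]
        tape[pos] = write
        pos += move
        if not 0 <= pos < tape_len:
            return "ran off tape"      # artifact of our finite tape
        if nxt == HALT:
            return "halted"
        state = nxt
    return "still running"

# Each machine maps every (state, symbol) pair to (write, move, next_state).
keys = [(s, c) for s in STATES for c in SYMBOLS]
actions = list(product(SYMBOLS, MOVES, STATES + [HALT]))
tally = {}
for choice in product(actions, repeat=len(keys)):
    outcome = run(dict(zip(keys, choice)))
    tally[outcome] = tally.get(outcome, 0) + 1
print(tally)  # most machines loop or die; only a minority do anything at all
```

The survivors that neither crash nor loop are the ones whose behavior amounts to counting-like operations, which is the observation Chomsky attributes to Minsky.

What is the structure of that language, of language as an internal system inside our mind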
It's not an alternative; it's two different concepts of language.
It's a simple fact that there's something about you, a trait of yours, part of
the organism, you, that determines that you're talking English and not Tagalog, let's say.
So there is an inner system.
It determines the sound and meaning of the infinite number of expressions of your language.
It's localized. It's not in your foot, obviously; it's in your brain. If you look more closely,
it's in specific configurations of your brain. And that's essentially like the internal structure
of your laptop, whatever programs it has are in there. Now, one of the things you can do with language, the marginal thing, in fact,
is use it to externalize what's in your head. Actually most of your use of language is thought,
internal thought, but you can do what you and I are now doing. We can externalize it. Well,
the set of things that we're externalizing is an external system; they're noises in the atmosphere.
And you can call that language in some other sense of the word, but it's not a set of alternatives,
these are just different concepts.
So how deep do the roots of language go in our brain, our mind? Is it yet another feature, like vision, or is it something more fundamental from which everything else springs in the human mind?
Well, in a way, it's like vision. You know, there's something about our genetic endowment that determines that we have a
mammalian rather than an insect visual system. And there's something in our
genetic endowment that determines that we
have a human language faculty. No other organism has anything remotely similar. So in that
sense, it's internal. Now, there is a long tradition, which I think is valid, going back
centuries to the early scientific revolution, at least, that holds that language is the core of human cognitive nature.
It's the mode for constructing thoughts and expressing them.
That is what forms thought.
And it's got fundamental creative capacities.
It's free, independent, unbounded, and so on.
And it's undoubtedly, I think, the basis for the creative capacities and the other remarkable human
capacities that lead to the unique achievements, and not-so-great achievements, of the species.
The capacity to think and reason. Do you think that's deeply linked with language?
Do you think the internal language system is essentially the mechanism by which we also reason internally?
It is undoubtedly the mechanism by which we reason. There are undoubtedly other faculties involved in reasoning as well.
We have a kind of scientific faculty, nobody knows what it is, but whatever it is that
enables us to pursue certain lines of endeavor and inquiry and to decide what makes sense and doesn't make sense and
to achieve a certain degree of understanding of the world.
That uses language, but goes beyond it.
Just as using our capacity for arithmetic is not the same as having the capacity.
The idea of capacity. Our biology, evolution, you've talked about them defining, in a sense,
our capacity, our limit, and our scope. Can you try to define what limit and scope
are? And the bigger question, do you think it's possible to find the limit of human cognition?
Well, that's an interesting question.
It's commonly believed, most scientists believe, that human intelligence can answer any
question in principle.
I think that's a very strange belief.
If we're biological organisms, which are not angels, then our capacities ought to have scope and limits,
which are interrelated.
Can you define those two terms?
Well, let's take a concrete example.
Your genetic endowment determines that you can have a mammalian visual system, arms and
legs, and so on, and therefore become a rich, complex organism.
But if you look at that same genetic endowment, it prevents you from developing in other
directions.
There's no kind of experience which would lead the embryo to develop an insect visual
system or to develop wings instead of arms.
So the very endowment that confers richness and complexity also sets bounds on what can
be attained.
Now, I assume that our cognitive capacities are part of the organic world, therefore they
should have the same properties.
If they had no built-in capacity to develop a rich and complex structure, we would
understand nothing.
Just as, if your genetic endowment did not compel you to develop arms and legs,
you would just be some kind of random amoeboid creature with no structure at all.
So I think it's plausible to assume that there are limits.
And I think we even have some evidence as to what they are.
So for example, there's a classic moment in the history of science.
At the time of Newton: from Galileo to Newton, modern science developed on a fundamental
assumption, which Newton also accepted, namely that the world, the entire universe, is a
mechanical object.
And by mechanical, they meant something like
the kinds of artifacts that were being developed by skilled artisans all over Europe,
the gears, the levers and so on. And their belief was, well, the world is just a more complex
variant of this. Newton, to his astonishment and distress, proved that there are no machines, that there's
interaction without contact.
His contemporaries, like Leibniz and Huygens, just dismissed this as returning to the mysticism
of the neo-scholastics.
Newton agreed.
As he said, it is totally absurd. No person of any scientific intelligence
could ever accept this for a moment. In fact, he spent the rest of his life trying to get around
it somehow, as did many other scientists. That was the very criterion of intelligibility
for, say, Galileo or Newton: a theory did not produce an intelligible world unless you could duplicate
it in a machine. He showed that you can't; there are no machines.
Finally, after a long struggle, it took a long time, scientists just accepted this as common
sense.
But that's a significant moment. That means they abandoned the search for an intelligible world.
And the great philosophers of the time
understood that very well.
So for example, David Hume, in his encomium to Newton, wrote that he was the greatest thinker ever,
and so on. He said that he unveiled many of the secrets of nature, but by showing the imperfections
of the mechanical philosophy, mechanical science, he showed that there are mysteries which ever will remain.
And science just changed its goals. It abandoned the mysteries. It can't solve them, so it puts them
aside; we only look for intelligible theories. Newton's theories were intelligible.
It's just that what they described wasn't. Locke said the same thing. I think
they're basically right. And if so, that says something about the limits of human cognition.
We cannot attain the goal of understanding the world, of finding an intelligible world.
This mechanical philosophy, Galileo to Newton, a good case can be made that that's our
instinctive conception of how things work.
So if infants are tested with things where this moves and then that moves, they kind of invent
something invisible in between them that's making the move.
Yeah, we like physical contact.
Something about our brain makes us want a world like that, just like it wants a world that has
regular geometric figures. So, for example, Descartes pointed this out: that if
you have an infant who's never seen a triangle before, and you draw a triangle, the infant will see a distorted triangle, not whatever
crazy figure it actually is. Three lines not coming quite together, one of them a little bit curved
and so on. We just impose a conception of the world in terms of perfect geometric objects. It's now been shown that it goes
way beyond that. If you show, on a tachistoscope, let's say, a couple of lights shining, and you do it
three or four times in a row, what people actually see is a rigid object in motion, not whatever's
there. We all know that from a television set, basically.
So that gives us hints of potential limits to our cognition?
I think it does, but it's a very contested view.
If you do a poll among scientists, they'd say it's impossible, we can understand anything.
Let me ask, and give me a chance with this. So I just
spent a day at a company called Neuralink, and what they do is try to design what's called
a brain-machine, brain-computer interface. So they try to do thousands of readings in the brain,
be able to read what the neurons are firing, and then stimulate back, so, two-way.
Their dream is to expand the capacity of the brain to attain information, sort of increase the
bandwidth at which we can search Google, that kind of thing. Do you think our cognitive capacity might be
expanded, our linguistic capacity, our ability to reason,
might be expanded by adding a machine into the picture?
It can be expanded in a certain sense, but a sense that was known thousands of years
ago, a book expands your cognitive capacity.
Okay, so this could expand it too.
But it's not a fundamental expansion.
It's not that totally new things could be understood.
Well, nothing that goes beyond our native cognitive capacities, just like you can't turn
the visual system into an insect system.
Well, I mean, the thought is, perhaps you can't do it directly, but you can map.
Well, we know that without this experiment. You can map what a bee sees and
present it in a form so that we could follow it. In fact, every bee scientist does that.
But you don't think there's something greater than bees that we can map,
and then all of a sudden discover something, be able to
understand the quantum world, quantum mechanics, be able to start to make sense of it?
Students can study and understand quantum mechanics.
But they always reduce it to the intuitive, the physical.
I mean, they don't really understand.
That may be another area where there's just a limit to understanding.
We understand the theories, but the world that they describe doesn't make any sense. So,
you know, the experiment, Schrodinger's cat, for example: you can understand the theory,
but as Schrodinger pointed out, it's an unintelligible world. That's one of the reasons
why Einstein was always very skeptical about quantum theory. He described himself as a
classical realist, in wanting intelligibility. He has something in common with infants in that way.
So, back to linguistics, if you could humor me: what are the most beautiful or fascinating aspects of language, or ideas
in linguistics or cognitive science, that you've seen in a lifetime of studying language
and studying the human mind?
Well, I think the deepest property of language and puzzling property that's been discovered
is what is sometimes called structure dependence.
We now understand it pretty well, but it was puzzling for a long time.
I'll give you a concrete example.
So suppose you say the guy who fixed the car carefully packed his tools.
It's ambiguous.
He could fix the car carefully or carefully pack his tools.
Suppose you put carefully in front, carefully the guy who fixed the car packed his tools.
Then it's carefully packed, not carefully fixed. And in fact,
you do that even if it makes no sense. So suppose you say, carefully, the guy who fixed
the car is tall. You have to interpret it as carefully as tall, even though that doesn't
make any sense. And notice that that's a very puzzling fact, because you're relating carefully not to the
linearly closest firm, but to the linear, more remote firm. A linear, a procloseness is an
easy computation, but here you're doing a much more, what looks like a more complex computation.
You're doing something that's taking you essentially
to the more remote thing. Now, if you look at the actual structure of the sentence, where
the phrases are and so on, turns out you're picking out the structurally closest thing,
but the linearly more remote thing. But notice that what's linear is a hundred percent
of what you hear. You never hear structure; you can't. So what you're doing, and incidentally
this is universal, all constructions, all languages, what we're compelled to do is
carry out what looks like the more complex computation on material that
we never hear, and we ignore a hundred percent of what we hear and the simplest computation.
By now there's even a neural basis for this that's somewhat understood, and there's good
theories by now that explain why it's true. That's a deep insight into the surprising
nature of language, with many consequences.
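To see the two strategies side by side, here is a toy sketch, entirely my own construction with a hand-coded parse rather than a real parser: a "linear" strategy attaches carefully to the first verb it finds in the string, while a "structural" strategy walks the tree and picks the shallowest verb, the verb of the main clause.

```python
# Toy sketch of the two attachment strategies for
# "carefully the guy who fixed the car packed his tools".
sentence = ["carefully", "the", "guy", "who", "fixed",
            "the", "car", "packed", "his", "tools"]
VERBS = {"fixed", "packed"}

# Linear strategy: attach to the first verb heard after the adverb.
def linearly_closest_verb(words):
    return next(w for w in words if w in VERBS)

# Hand-coded phrase structure: the relative clause "who fixed the car"
# is buried inside the subject NP, so "fixed" sits deeper than "packed".
tree = ("S",
        ("Adv", "carefully"),
        ("NP", "the", "guy",
         ("RelClause", "who", ("V", "fixed"), "the", "car")),
        ("VP", ("V", "packed"), "his", "tools"))

# Structural strategy: attach to the verb at the shallowest tree depth.
def structurally_closest_verb(node, depth=0):
    if isinstance(node, str):
        return None
    if node[0] == "V":
        return (depth, node[1])
    hits = [structurally_closest_verb(kid, depth + 1) for kid in node[1:]]
    hits = [h for h in hits if h is not None]
    return min(hits) if hits else None

print(linearly_closest_verb(sentence))     # -> 'fixed'  (what no language does)
print(structurally_closest_verb(tree)[1])  # -> 'packed' (what speakers do)
```

The linear rule is the computationally simpler one over the string we actually hear, yet speakers invariably use the structural one, which is exactly the puzzle Chomsky describes.

Let me ask you about a field of machine learning, deep learning. There's been a lot of progress in neural-network-based machine learning in the recent decade. Of course, neural network research goes back many decades.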
What do you think are the limits of deep learning, of neural-network-based machine learning?
Well, to give a real answer to that, you'd have to understand the exact processes that are
taking place. And those are pretty opaque. So it's pretty hard to prove a theorem about
what can be done and what can't be done.
But I think it's reasonably clear.
I mean, putting technicalities aside, what deep learning is doing
is taking huge numbers of examples
and finding some patterns. Okay, that could be interesting; in some areas it is. But we have to ask a certain question.
Is it engineering or is it science?
Engineering in the sense of just trying to build something that's useful or science in
the sense that it's trying to understand something about elements of the world.
So take a Google parser.
We can ask that question.
Is it useful?
Yeah, it's pretty useful.
I use Google Translate.
So on engineering grounds, it's kind of worth having, like a bulldozer.
Does it tell you anything about human language? Zero. Nothing.
And in fact, it's very striking. From the very beginning, it's just totally remote from science.
So what is a Google parser doing? It's taking an enormous text, let's say the Wall Street Journal corpus, and asking,
how close can we come to getting the right description of every sentence in the
corpus? Well, every sentence in the corpus is essentially an experiment. Each
sentence that you produce is an experiment, namely: am I a grammatical sentence?
The answer is usually yes.
So most of the stuff in the corpus is grammatical sentences.
But now ask yourself, is there any science
which takes random experiments,
which are carried out for no reason whatsoever
and tries to find out something from them?
Like if you're a, say, a chemistry PhD student,
you want to get a thesis, can you say,
well, I'm just going to mix a lot of things together,
no purpose, and maybe I'll find something.
You'd be laughed out of the department.
Science tries to find critical experiments,
ones that answer some theoretical question,
doesn't care about coverage of millions of experiments.
So it just begins by being very remote from science, and it continues like that.
So the usual question that's asked about the Google Parser is how well does it do,
or some parser, how well does it do on a corpus? But there's another question that's never asked.
How well does it do on something that violates all the rules of language?
So for example, take the structure dependence case that I mentioned.
Suppose there was a language which used linear proximity as the mode of interpretation.
Deep learning would work very easily on that,
in fact, much more easily than on an actual language.
Is that a success? No, that's a failure.
From a scientific point of view, it's a failure.
It shows that we're not discovering the nature of the system
at all, because it does just as well, or even better, on things that violate the structure of the system.
And it goes on from there.
It's not an argument against doing it.
It is useful to have devices like this.
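As a toy illustration of that point, entirely my own construction with two made-up miniature languages, not anything from the conversation: a simple bigram learner fits an "impossible" language, one whose rule is stated over linear position, just as readily as a nesting, structure-dependent one, so goodness of fit on a corpus by itself tells us nothing about the human system.

```python
# Toy sketch: a bigram learner is indifferent to whether a language's
# rule is structural or linear. Both miniature languages are invented.
import math
import random
from collections import Counter

random.seed(0)

def structural_language(max_k=5):
    # toy structure-dependent pattern: matched nesting, a^k b^k
    k = random.randint(1, max_k)
    return ["a"] * k + ["b"] * k

def impossible_language(length=8):
    # toy "impossible" linear rule: marker 'b' always sits at a fixed
    # linear position (third from the end), regardless of structure
    words = [random.choice("ac") for _ in range(length)]
    words[-3] = "b"
    return words

def avg_logprob(corpus):
    """Average per-token log-probability under a fitted bigram model."""
    pair, ctx = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        for prev, cur in zip(toks, toks[1:]):
            pair[(prev, cur)] += 1
            ctx[prev] += 1
    total, n = 0.0, 0
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        for prev, cur in zip(toks, toks[1:]):
            total += math.log(pair[(prev, cur)] / ctx[prev])
            n += 1
    return total / n

real = [structural_language() for _ in range(2000)]
fake = [impossible_language() for _ in range(2000)]
print("structural:", round(avg_logprob(real), 3))
print("impossible:", round(avg_logprob(fake), 3))
# The learner fits both; nothing in the numbers says that one of these
# could never be a human language, which is Chomsky's objection.
```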
So, yes, neural networks are kind of approximators, where, look, there's echoes of the behaviorist debates, right? Behaviorism.
More than echoes. Many of the people in deep learning say they've vindicated, Terry Sejnowski, for example, in his recent book, says this vindicates Skinnerian behaviorism. It doesn't have anything to do with it.
Yes, but I think there's something actually fundamentally different when the data set is huge. But your point is extremely well taken.
But do you think we can learn, approximate, that interesting complex structure of language
with neural networks that will somehow help us understand the science?
It's possible.
I mean, you might find patterns that you hadn't noticed, let's say. Could be. In fact,
it's very much like a kind of linguistics that's done, what's called corpus linguistics.
Suppose you have some language where all the speakers have died out, but you have
records. So you just look at the records and see what you can figure out from that.
It's much better to have actual speakers, where you can do critical experiments.
But if they're all dead, you can't do them. So you have to try to see what you can find out
from just looking at the data that's around. You can learn things. Actually, paleoanthropology is
very much like that.
You can't do a critical experiment on what happened two million years ago.
So you kind of force just to take what data is around and see what you can figure out
from it.
Okay, that's a serious study.
So let me venture into another whole body of work, and a philosophical question. You've said that evil in society arises from
institutions, not inherently from our nature. Do you think most human beings are good,
they have good intent, or do most have the capacity for intentional evil that depends on
their upbringing, depends on their environment, on the context?
I wouldn't say that they don't arise from our nature.
Anything we do arises from our nature.
And the fact that we have certain institutions, not others,
is one mode in which human nature has expressed itself.
But as far as we know, human nature could yield many different kinds of institutions.
The particular ones that have developed have to do with historical contingency, who conquered
whom, and that sort of thing.
They're not rooted in our nature in the sense that they're essential to our nature. So it's commonly argued
these days that something like market systems is just part of our nature, but we know from
a huge amount of evidence that that's not true. There's all kinds of other structures. It's a
particular fact about a moment of modern history. Others have argued, the roots of classical liberalism actually
argue, that what's sometimes called an instinct for freedom,
an instinct to be free of domination by illegitimate authority,
is the core of our nature. That would be the opposite of this.
And we don't know. We just know that human nature can accommodate both kinds.
If you look back at your life,
is there a moment in your intellectual life or life in general
that jumps from memory that brought you happiness,
that you would love to relive again?
Sure.
Falling in love, having children.
What about, so you have put forward into the world a lot of incredible ideas in linguistics and cognitive science. In terms of ideas that just excited you when they first came to you, would you love to relive those moments?
Well, I mean, when you make a discovery about something, that's exciting,
like, say, even the observation of structure dependence and, going on from that,
the explanation for it. But the major things just seem like common sense. So if you go back to, take your
question about external and internal language, if you go back to, say, the 1950s, language was
regarded almost entirely as an externalized object, something outside the mind. It just seemed obvious that that can't be true. Like I
said, there's something about you that determines you're talking
English, not Swahili or something. But that's not really a discovery. That's just
an observation that's transparent. You might say it's kind of like the 17th century, the beginnings of modern science.
They came from being willing to be puzzled about things that seemed obvious.
So it seems obvious that a heavy ball of lead will fall faster than a light ball of lead.
But Galileo was not impressed by the fact that it seemed obvious,
so he wanted to know if it was true.
He carried out experiments, actually thought experiments,
he never actually carried them out, which
showed that it can't be true.
And out of things like that, observations of that kind: why does a ball fall to
the ground instead of rising? It seems obvious, till you start thinking about it, because why does
steam rise, let's say? And I think the beginnings of modern linguistics, roughly in the 50s, are kind of like that,
just being willing to be puzzled about phenomena that look from some point of view obvious.
For example, a kind of doctrine, almost official doctrine of structural linguistics in the 50s was that languages can differ from one another in arbitrary
ways, and each one has to be studied on its own without any presuppositions.
In fact, there were similar views among biologists about the nature of organisms: that they're
so different when you look at them that almost anything could be almost anything.
Well, in both domains, it's been learned that that's very far from true.
There are very narrow constraints on what could be an organism or what could be a language.
But that's just the nature of inquiry, science in general, yeah, inquiry.
So one of the peculiar things about us human beings is our mortality.
Ernest Becker explored it in general.
Do you ponder the value of mortality?
Do you think about your own mortality?
I used to, when I was about 12 years old.
I wondered, I didn't care much about my own mortality, but I was worried about the fact that
if my consciousness disappeared, would the entire universe disappear?
That was frightening.
Did you ever find an answer to that question?
No.
Nobody's ever found an answer, but I stopped being bothered.
It's kind of like Woody Allen in one of his films, you may recall: he goes to a shrink when he's a child,
and the shrink asks, what's your problem?
He says, I just learned that the universe is expanding.
I can't handle that.
And another absurd question is, what do you think is the meaning of our existence here,
our life on earth?
Our brief little moment of time.
That's something we answer by our own activities.
There's no general answer.
We determine what the meaning of it is.
The action, determining meaning. Meaning in the sense of significance,
not meaning in the sense that chair means this, you know, but the significance of your life
is something you create.
Noam, thank you so much for talking today. It was a huge honor. Thank you so much.
Thanks for listening to this conversation with Noam Chomsky, and thank you to our presenting sponsor, Cash App.
Download it, use code LexPodcast, and you'll get $10, and $10 will go to FIRST, a STEM education
nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering
our future.
If you enjoyed this podcast, subscribe on YouTube, give it 5 stars on Apple Podcasts, support
it on Patreon, or connect with me on Twitter.
Thank you.