Lex Fridman Podcast - Tomaso Poggio: Brains, Minds, and Machines

Episode Date: January 19, 2019

Tomaso Poggio is a professor at MIT and is the director of the Center for Brains, Minds, and Machines. Cited over 100,000 times, his work has had a profound impact on our understanding of the nature of intelligence, in both biological neural networks and artificial ones. He has been an advisor to many highly-impactful researchers and entrepreneurs in AI, including Demis Hassabis of DeepMind, Amnon Shashua of Mobileye, and Christof Koch of the Allen Institute for Brain Science. Video version is available on YouTube. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube where you can watch the video versions of these conversations.

Transcript
Starting point is 00:00:00 The following is a conversation with Tomaso Poggio. He's a professor at MIT and is the director of the Center for Brains, Minds, and Machines. Cited over 100,000 times, his work has had a profound impact on our understanding of the nature of intelligence in both biological and artificial neural networks. He has been an advisor to many highly impactful researchers
Starting point is 00:00:23 and entrepreneurs in AI, including Demis Hassabis of DeepMind, Amnon Shashua of Mobileye, and Christof Koch of the Allen Institute for Brain Science. This conversation is part of the MIT course on Artificial General Intelligence and the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D. And now, here's my conversation with Tomaso Poggio. You've mentioned that in your childhood you developed a fascination with physics, especially the theory of relativity, and that Einstein was also a childhood hero to you.
Starting point is 00:01:21 What aspect of Einstein's genius, the nature of his genius, do you think was essential for discovering the theory of relativity? You know, Einstein was a hero to me, and I'm sure to many people, because he was able to make, of course, a major, major contribution to physics with, simplifying a bit, just a gedanken experiment, a thought experiment. Imagining
Starting point is 00:01:54 communication with light between a stationary observer and somebody on a train. And I thought the fact that just with the force of his thought, of his thinking, of his mind, he could get to something so deep in terms of physical reality, how time depends on space and speed.
Starting point is 00:02:18 It was something absolutely fascinating. It was the power of intelligence, the power of the mind. Do you think the ability to imagine, to visualize as he did, as a lot of great physicists do, is something all of us human beings have, that in principle we could make similar breakthroughs? There is a lesson to be learned from Einstein. He was one of five PhD students at ETH, the Eidgenössische Technische Hochschule, in Zurich, in physics. And he was the worst of the five.
Starting point is 00:03:08 The only one who did not get an academic position when he graduated, when he finished his PhD, and he went to work, as everybody knows, for the patent office. So it's not so much that he worked for the patent office, but the fact that obviously [inaudible] trying to do the opposite, or something quite different from what other people are doing. That's certainly true for the stock market. Never buy if everybody's buying. And also true for science. Yes. So you've also mentioned, staying on the theme of physics, that you were excited at a young age by the mysteries of the universe that physics could uncover. Such as, as I saw mentioned, the possibility of time travel. So the most out of the
Starting point is 00:04:15 box question I think I'll get to ask today: do you think time travel is possible? It would be nice if it were possible right now. In science, you never say no. But your understanding of the nature of time? It's very likely that it's not possible to travel in time. We may be able to travel forward in time if we can, for instance, freeze ourselves or go on some spacecraft traveling close to the speed of light, but in terms of actively traveling, for instance, back in time, I find it probably very unlikely. So do you still hold the underlying dream of engineering intelligence, that we will build systems that are able to do such
Starting point is 00:05:12 huge leaps, like discovering the kind of mechanism that would be required to travel through time? Do you still hold that dream, or echoes of it? Yeah, I don't know whether there are certain problems that probably cannot be solved, depending on what you believe about physical reality. Like maybe it's totally impossible to create energy from nothing, or to travel back in time. But about making machines that can think as well as we do, or better, or, more likely, especially in the short and mid term, help us think better, which is in essence happening already with the computers we have, and it will happen more and more: that I certainly believe. And I don't see in principle why computers at some point could not become more intelligent than we are, although the word intelligence is a tricky one, and one should discuss what we mean by that. And intelligence, consciousness, words like love, all of these need to be disentangled.
Starting point is 00:06:33 So you've mentioned also that you believe the problem of intelligence is the greatest problem in science, greater than the origin of life and the origin of the universe. You've also, in a talk I've listened to, said that you're open to arguments against you. So what do you think is the most captivating aspect of this problem of understanding the nature of intelligence? Why does it captivate you as it does? Well, originally, I think one of the motivations that I had as a teenager, when I was infatuated with the theory of relativity, was really that I found that there was the problem of time and space and general relativity, but there were so many other problems of the same level of difficulty and importance that, even if I were Einstein, it was
Starting point is 00:07:34 difficult to hope to solve all of them. So what about solving a problem whose solution would allow me to solve all the other problems? And this was: what if we could find the key to an intelligence, you know, ten times better or faster than Einstein? So that's sort of seeing artificial intelligence as a tool to expand our capabilities. But is there just an inherent curiosity in you, in just understanding what it is in here that makes it all work? Yes, absolutely. You're right. So I started saying this was the motivation when I was a teenager, but, you know, soon after, I think the problem of human intelligence became a real focus of my science and my research,
Starting point is 00:08:32 because I think for me the most interesting problem is really asking who we are, right? It is asking not only a question about science, but even about the very tool we are using to do science, which is our brain. How does our brain work? Where does it come from? What are its limitations? Can we make it better? In many ways, it's the ultimate question that underlies this whole effort of science. So you've made significant contributions in both the science of intelligence and the engineering of intelligence.
Starting point is 00:09:18 In a hypothetical way, let me ask: how far do you think we can get in creating intelligent systems without understanding the biological side, without understanding how the human brain creates intelligence? Put another way, do you think we can build a strong AI system without really getting at the core, the functioning, of the brain? Well, this is a really difficult question. You know, we did solve problems like flying without really using too much of our knowledge about how birds fly. It was important, I guess, to know that you could have things heavier than air being able to fly, like birds. But beyond that, probably we did not learn very much. You know, the Wright brothers did learn a lot from observation of birds in designing their aircraft. But you can argue we did not use much of biology in that particular case. Now, in the case of intelligence, I think it's a bit of a bet right now.
Starting point is 00:10:45 If you ask, okay, we all agree we'll get at some point, maybe soon, maybe later, to a machine that is indistinguishable from my secretary, say, in terms of what I can ask the machine to do. I think we'll get there. And now the question is, you can ask people: do you think we'll get there without any knowledge about the human brain, or is the best way to get there to understand the human brain better?
Starting point is 00:11:19 Okay, this is, I think, an educated bet that different people with different backgrounds will decide in different ways. The recent history of progress in AI in the last, I would say, five or ten years, the main recent breakthroughs, really started from neuroscience. One is reinforcement learning, which is at the core of AlphaGo, the system that beat the Go champion Lee Sedol two, three years ago in Seoul. That's one, and that started really with the work of Pavlov
Starting point is 00:12:16 1900, Marvin Minsky, the 60s, many other neuroscientists later on. is many other neuroscientists later on. And deep learning started, which is the core gain of alpha-go and systems like autonomous driving systems for cars, like the systems at Mobile Eye, which is a company started by one of my ex-posts, Doc Hamnon-Sashua. So that is the core of those things. company started by one of my ex-postdoc, Amnon Shashua. That is a core of those things. And deep learning really the initial ideas in terms of the architecture of this layered hierarchical networks
Starting point is 00:12:56 started with work of Torsten Viesel and David Hubel at Harvard up the river in the 60s. So recent history suggests the neuroscience played And David Huber at Harvard, up the river in the 60s. So recent history suggests that neuroscience played a big role in this breakthroughs. My personal bet is that there is a good chance they continue to play a big role, maybe not in all the future breakthroughs, but in some of them. At least in inspiration. At least in inspiration. So at least in an inspiration. Absolutely. Yes. So you studied both
Starting point is 00:13:27 artificial and biological neural networks. You've seen the mechanisms that underlie deep learning and reinforcement learning. But there are nevertheless significant differences between biological and artificial neural networks as they stand now. So between the two, what do you find the most interesting, mysterious, maybe even beautiful difference, as it currently stands in our understanding? I must confess that until recently I found the artificial networks too simplistic relative to real neural networks. But recently I've started to think that, yes, they are a very big simplification of what you find in the brain, but on the other hand they are much closer in terms of architecture to the brain than other models that we had, that computer
Starting point is 00:14:27 science used as models of thinking, which were mathematical logic, you know, Lisp, Prolog, and those kinds of things. So in comparison to those, they're much closer to the brain. You have networks of neurons, which is what the brain is about. The artificial neurons in the models are, as I said, a caricature of the biological neurons, but they are still neurons: single units communicating with other units, something that is absent in the traditional computer-type models of mathematics, reasoning, and so on. So what aspects would you like to see in artificial neural networks added over time, as we try to figure out ways to improve them?
Starting point is 00:15:17 So one of the main differences and problems, in terms of deep learning today, and it's not only deep learning, versus the brain, is the need for deep learning techniques to have a lot of labeled examples. For example, for ImageNet you have a training set of one million images, each one labeled by some human in terms of which object is there. And it's clear that in biology, a baby may be able to see millions of images in the first years of life, but will not have millions of labels given to him or her by parents or caretakers. So how do you solve that? I think this is the interesting challenge: today, deep learning and related techniques are all about big data,
Starting point is 00:16:27 meaning a lot of examples labeled by humans. So this big data is n going to infinity, that's the best case, you know, meaning labeled data. But I think the biological world is more n going to one: a child can learn, in a beautiful way, from a very small number of labeled examples. Like you tell a child, this is a car.
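The n going to infinity versus n going to one contrast above can be sketched with a toy example: with a single labeled example per class, a nearest-neighbor rule can already classify new inputs. The feature vectors and labels below are invented for illustration and are not from the episode.

```python
# Toy sketch of "n going to one": one labeled example per class is enough
# for a nearest-neighbor rule to start classifying. The two-dimensional
# feature vectors and the class labels are invented for illustration.

def nearest_label(x, labeled_examples):
    """Classify x by the label of the single closest labeled example (1-NN)."""
    def squared_distance(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(labeled_examples,
               key=lambda example: squared_distance(x, example[0]))[1]

# "This is a car" said once, "this is not a car" said once.
examples = [([1.0, 0.0], "car"), ([0.0, 1.0], "not car")]
print(nearest_label([0.9, 0.2], examples))  # closest to the "car" example
```

The n going to infinity regime of deep learning would instead fit millions of such labeled pairs; the point of the sketch is only that learning from very few labels is not computationally absurd.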
Starting point is 00:17:02 You don't need to say, like in ImageNet, you know, this is a car, this is a car, this is not a car, one million times. And of course with AlphaGo, or at least the AlphaZero variants, because the world of Go is so simplistic, you can actually learn by yourself through self-play; you can play against yourself. In the real world, I mean, the visual system that you've studied extensively is a lot more complicated than the game of Go. On the comment about children, who are fascinatingly good at learning new stuff:
Starting point is 00:17:40 how much of it do you think is hardware, and how much of it is software? That's a good, deep question. In a sense, it is the old question of nurture and nature, how much is in the genes and how much is in the experience of an individual. Obviously, it's both that play a role, and I believe that the way evolution puts in prior information, so to speak hardwired, it's not really hardwired, but that's essentially a hypothesis. I think what's going on is that evolution, almost necessarily, if you believe in Darwin, is very opportunistic. And think about our DNA and the DNA of Drosophila.
Starting point is 00:18:41 Our DNA does not have many more genes than Drosophila. The fly? The fly, the fruit fly. Now, we know that the fruit fly does not learn very much during its individual existence. It looks like one of these machines that is really mostly, not 100%, but, you know, 95% hard-coded by the genes. But since we don't have many more genes than Drosophila, evolution seems to have coded in us a kind of general learning machinery, and then given very weak priors. Like, for instance, and let me give a specific
Starting point is 00:19:31 example, which is recent work by a member of our Center for Brains, Minds and Machines. We know, because of the work of other people in our group and other groups, that there are cells in a part of our brain, neurons, that are tuned to faces. They seem to be involved in face recognition. Now, this face area seems to be present in young children and adults. And one question is: is it there from the beginning, hardwired by evolution? Or is it somehow learned, very quickly? So what's your, by the way, for a lot of the questions I'm asking, the answer is we don't really know,
Starting point is 00:20:18 but as a person who has contributed some profound ideas in these fields, you're a good person to guess at some of these. So, of course, there's a caveat before a lot of the stuff we talk about. But what is your hunch? The part of the brain that seems to be concentrated on face recognition: are you born with that? Or are you just designed to learn that quickly, like the face of the mother? My hunch, my bias, was the second one: learned very quickly.
Starting point is 00:20:49 And it turns out that Margaret Livingstone at Harvard has done some amazing experiments in which she raised baby monkeys, depriving them of faces during the first weeks of life. So they see technicians, but the technicians wear a mask. And so when they looked at the area in the brain of these monkeys where faces are usually found, they found no face preference. So my guess is that what evolution does in this case is that there is an area which
Starting point is 00:21:36 is plastic, which is kind of predetermined to be imprinted very easily, but the command from the genes is not a detailed circuitry for a face template. It could be, but that would probably require a lot of bits; you would have to specify a lot of connections or a lot of neurons. Instead, the command from the genes is something like: imprint, memorize what you see most often in the first two weeks of life, especially in connection with food. And maybe nipples.
Starting point is 00:22:10 I don't know. Right, well, a source of food. And so that area is very plastic at first, and a little less so afterwards. It'd be interesting if a variant of that experiment showed a different kind of pattern associated with food instead of a face pattern. There are indications that during that experiment, what the monkeys saw quite often were the blue gloves of the technicians that were giving the baby monkeys the milk. And some of the cells, instead of being face-sensitive in that area,
Starting point is 00:22:47 are hand-sensitive. That's fascinating. Can you talk about the different parts of the brain, in your view, sort of loosely, and how they contribute to intelligence? Do you see the brain as a bunch of different modules that together, in the human brain, create intelligence? Or is it all one mush of the same kind of fundamental architecture? Yeah, that's an important question. There was a time when it was believed the brain was quite uniform, as if you could cut out a piece and nothing special happened, apart from a little bit less performance. There was a scientist, Lashley, who did a lot of experiments of this type with mice and rats and concluded that every part of the brain was essentially equivalent to any other one. It turns out that that's really not true.
Starting point is 00:24:12 There are very specific modules in the brain, as you said, and people may lose the ability to speak if they have a stroke in a certain region. There is some plasticity, and other parts of the brain can kind of take over functions from one part of the brain to the other, but really there are specific modules. So the answer we know from this old work, which was basically based on lesions, either in animals or, very often, from a mine of very interesting data coming from the war, from different types of injuries that soldiers had to the brain.
More recently, functional MRI, which allows you to check which parts of the brain are active when you are doing different tasks, can, as you know, replace some of this. You can see that certain parts of the brain are involved, are active, in certain tasks, like language. Yeah, that's right.
Starting point is 00:25:49 But sort of taking a step back to that part of the brain that discovers and specializes in faces, and how that might be learned: what's your intuition behind it? Is it possible that, from a physicist's perspective, when you get lower and lower, it's all the same stuff, and when you're born it's plastic and quickly figures out: this part is going to be about vision, this is going to be about language, this is about common-sense reasoning? Do you have any intuition whether that kind of learning is going on really quickly, or is it really kind of solidified in hardware?
Starting point is 00:26:26 That's a great question. So there are parts of the brain, like the cerebellum or the hippocampus, that are quite different from each other. They clearly have different anatomy, different connectivity. Then there is the cortex, which is the most developed part of the brain in humans. And in the cortex, you have different regions of the cortex that are responsible for vision, for audition, for motor control, for language. Now, one of the big puzzles of this is that in the cortex,
Starting point is 00:27:10 it looks like it is the same in terms of hardware, in terms of the type of neurons and connectivity, across these different modalities. So whether the same cortical hardware, the same learning algorithm, is used for vision, for language, for learning and so on is, I think, rather open. And, you know, I find it very interesting, for this reason, to think about an architecture, a computer architecture, that is good for vision and at the same time is good for language. They seem to be such different problem areas that you have to solve. But the underlying mechanism might be the same.
Starting point is 00:28:08 And that's really instructive for maybe artificial neural networks. So you've done a lot of great work in vision, in human vision, in computer vision. And you've mentioned that the problem of human vision is really as difficult as the problem of general intelligence. And maybe that connects to the cortex discussion. Can you describe the human visual cortex, and how humans begin to understand the world
Starting point is 00:28:37 through the raw sensory information? For folks who aren't familiar, especially on the computer vision side, we don't often actually take a step back, except to say in a sentence that one is inspired by the other. What is it that we know about the human visual cortex? That's interesting. So we know quite a bit; at the same time, we don't know a lot. But the bit we know, in a sense: we know a lot of the details, and many we don't know. We know a lot at the top level, the answers to some top-level questions, but we don't know some basic ones, even in terms of general neuroscience, forgetting vision. You know, why do we sleep?
Starting point is 00:29:26 It's such a basic question. And we really don't have an answer to that. So, taking a step back on that: sleep, for example, is fascinating. Do you think that's a neuroscience question? Or, if we talk about levels of abstraction, what do you think is an interesting way to study intelligence, or
Starting point is 00:29:45 the most effective level of abstraction? The chemical, the biological, the electrophysiological, the mathematical, as you've done a lot of work on that side, or psychology? Sort of, at which level of abstraction do you think? Well, in terms of levels of abstraction, I think we need all of them. It's like if you ask me, what does it mean to understand a computer? That's much simpler. For a computer, I could say, well, I understand how to use PowerPoint. That's my level of understanding a computer. It's reasonable, you know; it gives me some power to produce slides, beautiful slides. And now you can ask
Starting point is 00:30:33 somebody else, and he says, well, I know how the transistors work inside the computer. I can write the equations for, you know, transistors and diodes and circuits, logical circuits. And if I ask this guy, do you know how to operate PowerPoint? No idea. So do you think, if we discovered computers walking amongst us, full of these transistors, that are also running Windows and have PowerPoint, digging in a little bit more: how useful is it to understand the transistor in order to be able to understand PowerPoint and these higher-level intellectual processes? Very good. So I think in the case of
Starting point is 00:31:19 computers, because they were made by engineers, by us, these different levels of understanding are rather separate on purpose. You know, they are separate modules, so that the engineer that designs the circuits for the chips does not need to know what is inside PowerPoint. And somebody can write the software translating from one level to the other. So in that case, I don't think understanding the transistor helps you understand PowerPoint, or very little. If you want to understand the computer, I would say, you have to understand it at different levels if you really want to build one. But for the brain, I think these levels of understanding, so the algorithms, which kind of
Starting point is 00:32:16 computation, the equivalent of PowerPoint, and the circuits, the transistors, I think are much more intertwined with each other. There is not a neat level of the software separate from the hardware. And so that's why I think in the case of the brain the problem is more difficult, more than for computers, and requires the interaction, the collaboration, between different types of expertise. So the brain is a big hierarchical mess? You can't just disentangle the levels? I think you can, but it's much more difficult, and it's not completely obvious. And, personally, I think it's the greatest problem in science. So I think it's fair that it's difficult.
Starting point is 00:33:08 That's a difficult one. That said, you do talk about compositionality and why it might be useful. And when you discuss why these neural networks, in an artificial or biological sense, can learn anything, you talk about compositionality: there's a sense that nature can be disentangled, or that all aspects of our cognition could be disentangled to some degree. So, first of all, how do you see compositionality, and why do you think it exists at all in nature? I spoke about, I used the term compositionality when we looked at deep neural networks, multilayers,
Starting point is 00:34:02 and trying to understand when and why they are more powerful than more classical one-layer networks, like linear classifiers or so-called kernel machines. And what we found is that, in terms of approximating or learning or representing a function, a mapping from an input to an output, like from an image to the label of the image, if this function has a particular structure, then deep networks are much more powerful than shallow networks at approximating the underlying function. And the particular structure is a structure of compositionality: the function is made up of functions of functions, so that when you
Starting point is 00:35:00 are interpreting an image, classifying an image, you don't need to look at all the pixels at once, but you can compute something from small groups of pixels, and then you can compute something on the output of this local computation, and so on. It's similar to what you do when you read a sentence. You don't need to read the first and the last letter, but you can read syllables, combine them into words, combine the
Starting point is 00:35:33 words into sentences. So this is the kind of structure. So that's part of a discussion of why deep neural networks may be more effective than shallow methods. And is your sense that, for most things we can use neural networks for, those problems are going to be compositional in nature, like language, like vision? How far can we get in this kind of way? Right. So here it's almost philosophy. Well, let's go there.
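The compositional structure described here, a function made up of functions computed on small local groups of inputs, can be sketched in code. The local operation (a max over a pair) is an arbitrary stand-in chosen for illustration, not one of the models discussed in the episode.

```python
# A "function of functions": each level computes only on small local groups
# (here, pairs) of its inputs, and higher levels combine the lower-level
# outputs -- the compositional structure deep networks can exploit.
# The local operation (max) is an arbitrary stand-in for illustration.

def local_pair(a, b):
    """Compute something from just two neighboring inputs."""
    return max(a, b)

def compose_level(values):
    """One layer: apply the local function to adjacent pairs."""
    out = [local_pair(values[i], values[i + 1])
           for i in range(0, len(values) - 1, 2)]
    if len(values) % 2:          # carry an unpaired last element up a level
        out.append(values[-1])
    return out

def deep_compositional(values):
    """Stack layers until a single output remains: f(f(f(...)))."""
    while len(values) > 1:
        values = compose_level(values)
    return values[0]

pixels = [3, 1, 4, 1, 5, 9, 2, 6]
print(deep_compositional(pixels))  # no level ever looks at all pixels at once
```

A shallow one-layer analogue would have to read all eight inputs in a single step; the deep version only ever combines two values at a time, which is exactly the point about small local groups of pixels.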
Starting point is 00:36:10 Yeah, let's go there. So a friend of mine, Max Tegmark, who is a physicist at MIT. I've talked to him on this podcast. Yeah, and he disagrees with you, right? A little bit. Yeah, we agree on most of it, but the conclusion is a bit different. His conclusion is that, for images for instance, the compositional structure of these functions that we have to learn to solve these problems comes from physics, comes from the fact that you have local interactions in physics
Starting point is 00:36:48 between atoms and other atoms, between particles of matter and other particles, between planets and other planets, between stars and others. It's all local. And that's true, but you could push this argument a bit further; not this argument, actually. You could argue that maybe that's part of the truth, but maybe what happens is kind of the opposite: our brain is wired up as a deep network. So you can learn, understand, solve problems that have this compositional structure, and you cannot solve problems that don't have it. So the problems we are accustomed to, that we think about, that we test our algorithms on, have this compositional structure because our brain is made up that way.
Starting point is 00:37:59 And that's, in a sense, an evolutionary perspective: the ones that weren't dealing with the compositional nature of reality died off. Yes, but it also could be that the reason why we have this local connectivity in the brain, like simple cells in visual cortex, each one of them looking at a small part of the image, and then other cells looking at a small number of these simple cells, and so on; the reason for this may be purely that it was difficult to grow long-range connectivity. So suppose that for biology it's possible to grow short-range connectivity but not long-range, because there is a limited number of long-range connections you can have,
Starting point is 00:38:56 and so you have this limitation from the biology, and this means you build a deep convolutional network. And this is great for solving certain classes of problems. These are the ones we find easy and important for our life. And yes, they were enough for us to survive. And you can start a successful business on solving those problems, right? Like with Mobileye; driving is a compositional problem.
Starting point is 00:39:34 So on the learning task, I mean, we don't know much about how the brain learns in terms of optimization. But stochastic gradient descent is what artificial neural networks use, for the most part, to adjust the parameters in such a way that, based on the labeled data, they're able to solve the problem.
Starting point is 00:39:59 So what's your intuition about why it works at all, how hard a problem it is to optimize an artificial neural network, whether there are alternatives; just in general, your intuition behind this very simplistic algorithm that seems to do surprisingly well. Yes. So in neuroscience, the architecture of cortex is really similar to the architecture of deep networks. So there is a nice correspondence there
Starting point is 00:40:37 between the biology and this kind of local connectivity, hierarchical architecture. Stochastic gradient descent, as you said, is a very simple technique. It seems pretty unlikely that biology could do that, from what we know right now about cortex and neurons and synapses. So it's a big open question whether there are other optimization learning algorithms that can replace stochastic gradient descent. My guess is yes, but nobody has found a real answer yet.
Starting point is 00:41:29 I mean, people are still trying, and there are some interesting ideas. The fact that stochastic gradient descent is so successful, this has become clear, is not so mysterious. And the reason is an interesting fact; it's a change, in a sense, in how people think about statistics. And this is the following. Typically, when you had data and, say, a model with parameters, you were trying to fit the model to the data, to fit the parameters.
Starting point is 00:42:13 Typically, the crowd-wisdom idea was: you should have at least twice the number of data points as the number of parameters. Maybe ten times is better. Now, the way you train neural networks these days, they have ten or a hundred times more parameters than data. Exactly the opposite. And this has been one of the puzzles about neural networks: how can you get something that really works when you have so much freedom? From that little data, you can generalize somehow. Right. Exactly. Do you think the stochastic nature of it, the randomness, is essential?
Starting point is 00:43:05 So I think we have some initial understanding of why this happens. But one nice side effect of having this overparameterization, more parameters than data, is that when you look for the minima of the loss function, as stochastic gradient descent is doing, you find that there are many of them.
Starting point is 00:43:35 And you can get an estimate of how many from classical results like Bezout's theorem, which bounds the
Starting point is 00:43:44 number of solutions of a system of polynomial equations. Anyway, the bottom line is that there are probably more minima for a typical deep network than atoms in the universe. Just to say, there are a lot, because of the overparameterization. And a lot of them are global minima, zero-loss minima, good minima. So it's not one global minimum. Yeah, a lot of them. So you have a lot of solutions, so it's not so surprising that you can find them relatively easily. This is because of the overparameterization.
Starting point is 00:44:21 The overparameterization sprinkles that entire space with solutions that are pretty good. Yeah, and so you're not so surprised, right? It's like if you have a system of linear equations and you have more unknowns than equations, then we know you have an infinite number of solutions, and the question is how to pick one. Same story: you have an infinite number of solutions, a lot of values of your unknowns that satisfy the equations. But it's possible that a lot of those solutions aren't very good. What's surprising is that many of them are; why, that's a very good question deserving a separate answer. Yeah. One theorem that people like to talk about, that kind of inspires imagination about the power of neural networks, is the universal approximation theorem: that you can approximate any continuous function with a finite number of neurons in a single hidden layer.
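The single-hidden-layer approximation idea can be seen in a few lines with a random-feature sketch. The target function, widths, and random weights below are arbitrary choices for illustration, not the constructive proof of the theorem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of one-hidden-layer approximation: random tanh units,
# output weights fit by least squares, target sin(x) on [-3, 3].
x = np.linspace(-3.0, 3.0, 200)[:, None]
target = np.sin(x).ravel()

def max_error(width):
    W = rng.standard_normal((1, width))        # random input weights
    b = rng.uniform(-3.0, 3.0, width)          # random biases
    H = np.tanh(x @ W + b)                     # hidden-layer activations
    c, *_ = np.linalg.lstsq(H, target, rcond=None)
    return float(np.max(np.abs(H @ c - target)))

errs = {w: max_error(w) for w in (5, 50, 500)}
# The worst-case error shrinks as the hidden layer grows.
```

With enough units the fit becomes essentially exact, which is the theorem's promise; the interesting question, as discussed next, is how fast the required width grows.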
Starting point is 00:45:21 Do you find this theorem surprising? Do you find it useful, interesting, inspiring? No, this one, you know, I never found very surprising. It was known since the 80s, since I entered the field, because it's basically the same as the Weierstrass theorem, which says that I can approximate any continuous function with a polynomial with a sufficient number of terms, monomials. It's basically the same, and the proofs are very similar. So your intuition was that there was never any doubt that neural networks, in theory, could be very strong approximators. The interesting question is: this theorem says you can approximate, fine, but when you ask how many neurons, for instance, or in the
Starting point is 00:46:20 case of polynomials, how many monomials I need to get a good approximation. Then it turns out that this depends on the dimensionality of your function, how many variables you have, and it depends on it in a bad way. For instance, suppose you want an error which is no worse than 10% in your approximation; you come up with a network that approximates your function within 10%. Then it turns out that the number of units you need is on the order of 10 to the dimensionality, the number of variables.
Starting point is 00:47:07 So if you have, you know, two variables, it's 10 to the 2: you have 100 units, and okay. But if you have, say, 200 by 200 pixel images, now the dimensionality is 40,000 or whatever. And we can go past the size of the universe pretty quickly. Exactly: 10 to the 40,000 or something. And so this is called the curse of dimensionality, quite appropriately. And the hope is that with the extra layers you can remove the curse.
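The arithmetic here can be made concrete. This is a rough order-of-magnitude sketch; the compositional rate d·(1/ε)² below is in the spirit of Poggio's published estimates, not a quote from the conversation:

```python
import math

# Order-of-magnitude sketch of the curse of dimensionality discussed above.
# Shallow rate (1/eps)**d versus a compositional rate of roughly
# d * (1/eps)**2, worked in log10 so the numbers stay printable.
def log10_units_shallow(d, eps=0.1):
    return d * math.log10(1.0 / eps)           # log10 of (1/eps)**d

def log10_units_compositional(d, eps=0.1):
    return math.log10(d) + 2.0 * math.log10(1.0 / eps)

d_image = 200 * 200                            # a 200x200-pixel image
shallow = log10_units_shallow(d_image)         # ~40,000: i.e. 10**40000 units
deep = log10_units_compositional(d_image)      # under 7: roughly 10**6.6 units
```

The gap between 10^40000 and about 10^6.6 units is the sense in which hierarchy, for compositional functions, "removes the curse."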
Starting point is 00:47:45 What we proved is that if you have deep layers, a hierarchical architecture with local connectivity of the convolutional deep learning type, and if you're dealing with a function that has this kind of hierarchical structure, then you avoid the curse completely. You've spoken a lot about supervised deep learning. What are your thoughts, hopes, views on the challenges of unsupervised learning, with GANs, with generative adversarial networks? Do you see the power of GANs as distinct from supervised methods in neural networks, or are they really all in the same representation
Starting point is 00:48:32 ballpark? GANs are one way to get estimations of probability densities, which is a somewhat new way that people had not done before. I don't know whether this will really play an important role in intelligence. It's interesting; I'm less enthusiastic about it than many people in the field. I have the feeling that many people in the field are really impressed by the ability to produce realistic-looking images in this generative way.
Starting point is 00:49:33 in terms of a number of labeled points, and we really have to figure out how to go to N to 1. And you're thinking GANs might help, but they might not be the right. I don't think for that problem, which I really think is important. I think they may help, they might not be the right. I don't think for that problem, which I really think is important. I think they may help, they certainly have applications, for instance, in computer graphics. I did work a little bit similar in terms of saying, je, nače, nače, prezendim imaj, i je, nače, in svoj imaj, i in svoj imaj, i svoj imaj, i svoj imaj,
Starting point is 00:50:14 nače, svoj imaj, je svoj, je svoj, je svoj, je svoj, je svoj, je svoj, je svoj, je svoj, je svoj,
Starting point is 00:50:23 je svoj, je svoj, je svoj, je svoj, je svoj, je svoj, What about having a network that I train with the same data set, but now I'm very important output. Now the input is the pause or the expression number, set the numbers, and the output is the image, and I train it. And we did pretty good interesting results in terms of producing very realistic looking images. You know, less sophisticated mechanism, but the output was pretty less than GANs, but the output was pretty much of the same quality.
Starting point is 00:50:56 So I think for computer-graphics-type applications, definitely, GANs can be quite useful, and not only for that. But for helping, for instance, on this unsupervised problem of reducing the number of labeled examples, I think people think they can get out more than they put in. You know, there's no free lunch. Yeah. Right.
Starting point is 00:51:31 So what do you think, what's your intuition? How can we slow the growth of N to infinity in supervised learning? So, for example, Mobileye has very successfully, I mean, essentially annotated large amounts of data to be able to drive a car. Now one thought is: we're trying to teach machines, a school for AI, so how can we become better teachers, maybe? That's one way. You know, I like that, because one caricature of the history of computer science, you could say, is: it begins with programmers, expensive. Yeah. It continues with labelers, cheap. Yeah.
Starting point is 00:52:26 And the future would be schools, like we have for kids. Yeah. Currently, with the labeling methods, we're not selective about which examples we teach networks with. So I think the focus of making networks that learn much faster, one-shot learning, is often on the architecture side. But how can we pick better examples with which to learn? Do you have intuitions about that? Well, that's part of the problem. But the other one is, you know, if we look at biology, a reasonable assumption, I think, is, in the
Starting point is 00:53:13 same spirit that I said, that evolution is opportunistic and has weak priors. The way I think the intelligence of a child, a baby, may develop is by bootstrapping weak priors from evolution. For instance, you can assume that most organisms, including human babies, have built in some basic machinery to detect motion and relative motion. And in fact, we know all animals, from fruit flies on, have this, even in the retina, in the very peripheral part of the visual system. It's very conserved across species, something that evolution discovered early. It may be the reason why babies tend to look, in the first few days, at moving objects, and not at non-moving objects. Now moving objects means, okay, they're
Starting point is 00:54:27 attracted by motion, but motion also gives automatic segmentation from the background, because of motion boundaries: either the object is moving, or the eye of the baby is tracking the moving object and the background is moving, right? Yeah, so just purely the visual characteristics of the scene, that seems to be the most useful. Right, so it's like looking at an object without a background; it's ideal for learning the object. Otherwise it's really difficult, because you have so much stuff. So, suppose you do this
Starting point is 00:55:12 at the beginning, the first weeks; then after that you can recognize the objects. Now they are imprinted, a number of them, even in a background, even without motion. So on that, by the way, I just want to ask about the object recognition problem. So there is this being responsive to movement, and edge detection, essentially. What's the gap between effectively visually recognizing stuff, detecting what it is, and understanding the scene? Is this a huge gap of many layers, or is it close?
Starting point is 00:55:52 I think present algorithm with all this success that we have and the fact that are a lot very useful. I think we are in a golden age for applications of low-level vision and low-level speech recognition and so on, you know, Alexa and so on. There are many more things, similar level to be done, including medical diagnosis and so on, but we are far from what we call understanding of a scene,
Starting point is 00:56:23 of language, of actions, of people. Despite the claims, I think we are very far from that. We're a little bit off. So in popular culture and among many researchers, some of whom I've spoken with, like Stuart Russell and Elon Musk, in and out of the AI field, there's a concern about the existential threat of AI. Yeah. How do you think about this concern? And is it valuable to think about large-scale, long-term unintended consequences of
Starting point is 00:57:06 intelligent systems we try to build? I always think it's better to worry first, you know, early rather than late. So worry is good. Yeah, I'm not against worrying at all. Personally, I think it will take a long time before there is real reason to be worried. But, as I said, I think it's good to put in place and think about possible safeguards. What I find a bit misleading are things that have been said by people I know,
Starting point is 00:57:47 like Elon Musk, and Bostrom in particular, what is his first name, Nick Bostrom, and a couple of other people saying, for instance, that AI is more dangerous than nuclear weapons. I think that's really wrong. That can be misleading, because in terms of priority we should still be more worried about nuclear weapons, and what people are doing about them, than about AI. And you've spoken about Demis Hassabis, and yourself, saying that you think it'll be about 100 years out before we have a general intelligence system that's on par with a human being.
Starting point is 00:58:37 Do you have any updates for those predictions? Well, I think he said 20. He said 20? Right. This was a couple of years ago; I have not asked him again. Should I? Your own prediction: what's your prediction about when you'll be truly surprised, and what's the confidence interval
Starting point is 00:58:58 on that? It's so difficult to predict the future, and even the present is pretty hard to predict. But I would say, as I said, I would be more like Rod Brooks; I think he says about 200 years. 200 years. When we have this kind of AGI system, artificial general intelligence system, and you're sitting in a room with her, him, it.
Starting point is 00:59:29 Do you think the underlying design of such a system is something we'll be able to understand? Will it be simple? Do you think it'll be explainable, understandable by us? Your intuition; again, we're in the realm of philosophy a little bit. Well, probably not, but again, it depends what you really mean by understanding. We don't understand how deep networks work; I think we are beginning to have a theory now. But in the case of deep networks, or even in the case of simpler kernel machines or linear classifiers, we don't really understand the individual units either. But we understand, you know, the computation and the limitations and the properties it has.
Starting point is 01:00:37 It's similar to many things. You know, what does it mean to understand how a fusion bomb works? Many of us understand the basic principles, and some of us may understand deeper details. In that sense, understanding is: as a community, as a civilization, can we build another copy of it? Okay. And in that sense, do you think there'll need to be some evolutionary component, where it runs away from our understanding? Or do you think it could be engineered from the ground up, the same way you go from the transistor to PowerPoint?
Starting point is 01:01:19 Right. Right. So, many years ago, this was actually 40, 41 years ago, I wrote a paper with David Marr describing a framework of levels of understanding, which is related to the question we discussed earlier about understanding PowerPoint, or understanding transistors, and so on. And in that kind of framework, we had the level of the hardware and the top level of the algorithms. We did not have learning. Recently, I updated it, adding levels.
And one of the levels is learning. At the level of learning, you may understand the learning algorithm, and be able to build a machine that learns to do certain things, while being unable to describe in detail what the learning machine will discover.
Starting point is 01:02:28 Right? Now that would still be a powerful understanding, if I can build a learning machine, even if I don't understand in detail every time it learns something. Just like our children: if they start listening to a certain type of music, I don't know, Miley Cyrus or something, you don't understand why they came to that particular preference, but you understand the learning process. That's very interesting. So on learning, for systems to be part of our world, one of the challenging things that you've spoken about is learning ethics,
Starting point is 01:03:14 learning, yeah, morals. How hard do you think is that problem? First of all, humans understanding our ethics: what is the origin, at the neural, low level, of ethics? What is it at the higher level? Is it something that's learnable by machines, in your intuition? I think, yeah, ethics is one of these problems where understanding the neuroscience of ethics matters. You know, people discuss: there is an ethics of neuroscience, how a neuroscientist should or should not behave. You can think of a neurosurgeon and the ethics
Starting point is 01:04:07 rules he has to follow. But I'm more interested in the neuroscience of ethics. In my mind, right now, the neuroscience of ethics is very meta. And I think that would be important to understand, also for being able to design machines that are ethical machines in our sense of ethics. And you think there is something in neuroscience, there are patterns, tools in neuroscience that
Starting point is 01:04:38 can help us shed some light on ethics, or is it more at the higher level of psychology and sociology? No, there is psychology, but in the meantime there is also evidence of specific areas of the brain that are involved in certain ethical judgments. And not only this: you can stimulate those areas with magnetic fields and change the ethical decisions. Wow. So that's work by a colleague of mine, Rebecca Saxe, and there are other researchers doing similar work. And, you know, this is the beginning, but ideally at some point we'll have an understanding of how this works and why it evolved, right? The big why question. Yeah, it must have some purpose. Yeah, obviously it has some social purpose, probably. If neuroscience holds the key to at least illuminating some aspect of ethics, that means it could be a learnable problem.
Starting point is 01:05:54 Yeah, exactly. And as we're getting into harder and harder questions, let's go to the hard problem of consciousness. Yeah. Is this an important problem for us to think about and solve, on the engineering-of-intelligence side of your work, of our dream? You know, it's unclear. So, again, this is a deep problem, partly because it's very difficult to define consciousness.
Starting point is 01:06:22 consciousness. And there is a debate among neuroscientists about whether consciousness and philosophers, of course, whether consciousness is something that requires flesh and blood, so to speak, or could be, you know, that we could have silicon devices that are conscious, or up to statement like everything, has some degree of consciousness and some more than others. This is like Giulio Toneone and Fee. We just recently talked to Christav Kog. Okay.
Starting point is 01:07:14 So Christof was my first graduate student. Yeah. Do you think it's important to illuminate aspects of consciousness in order to engineer intelligent systems? Do you think an intelligent system would ultimately have consciousness? Are the two interlinked? You know, most of the people working in artificial intelligence, I think, would answer: we don't strictly need consciousness to have an intelligent system. That's sort of the easier question, because it's a very engineering answer to the question.
Starting point is 01:07:52 Yes. To pass a Turing test, we don't need consciousness. But if you were to go deeper, do you think it's possible that we need to have that kind of self-awareness? We may, yes. So, for instance, I personally think that when we test a machine or a person in a Turing test, in an extended Turing test, consciousness is part of what we require in that test, implicitly, to say that this is intelligent. Christof disagrees.
Starting point is 01:08:34 So he does. Despite many other romantic notions he holds, he disagrees with that one. Yes, that's right. So, you know, we'll see. As a quick question: Ernest Becker, the fear of death. Do you think mortality and those kinds of things are important for consciousness and for intelligence, the finiteness of life, the finiteness of existence? Or is that just a side effect of evolution?
Starting point is 01:09:13 An evolutionary side effect that is useful for natural selection. Do you think this kind of thing, that this interview is going to run out of time soon, that our lives will run out of time soon, do you think that's needed to make this conversation good, and life good? I never thought about it. It's a very interesting question. I think Steve Jobs, in his commencement speech
Starting point is 01:09:38 at Stanford, argued that having a finite life was important for stimulating achievement. It was a different argument. Yeah, live every day like it's your last, right? Yeah. So, rationally, I don't think you strictly need mortality for consciousness, but they seem to go together in our biological systems. Yeah. You've mentioned before, with the students you're associated with, AlphaGo and Mobileye as the
Starting point is 01:10:16 big recent success stories in AI, and I think it's captivated the entire world as to what AI can do. So what do you think will be the next breakthrough? What's your intuition about the next breakthrough? Of course, I don't know where the next breakthrough is. I think there is a good chance, as I said before, that the next breakthrough will also be inspired by neuroscience. But which one, I don't know. And so MIT has this Quest for Intelligence, and there are a few moonshots. In that spirit, which ones are you excited about?
Starting point is 01:10:58 Which projects, kind of? Well, of course, I'm excited about one of the moonshots, which is our Center for Brains, Minds and Machines, the one which is fully funded by NSF, and it is about visual intelligence. That one is particularly about understanding visual cortex and visual intelligence, in the sense of how we look around ourselves and understand the world around ourselves: meaning what is going on, how we could go from here to there without hitting obstacles, whether there are other agents, people, in the environment. These are all things that we perceive very quickly. And it's something actually quite close to being conscious, not quite. But there is this interesting experiment that was run at Google X, which is, in a sense,
Starting point is 01:12:11 just a virtual reality experiment, but in which the subjects sat in a chair with goggles, like an Oculus and so on, and earphones, and they were seeing through the eyes of a robot nearby: two cameras for seeing, microphones for hearing. So their sensory system was there. And the impression of all the subjects, very strong, they could not shake it off, was that they were where the robot was. They could look at themselves from the robot and still feel they were where the robot was. They were looking at their own body; their self had moved.
Starting point is 01:13:08 So some aspect of scene understanding has to have the ability to place yourself, to have a self-awareness about your position in the world and what the world is. So we may have to solve the hard problem of consciousness along the way. Yes, it's quite a moonshot. So you've been an advisor to some incredible minds, including Demis Hassabis, Christof Koch, Amnon Shashua, who, like you said, all went on to become seminal figures in their respective fields. From your own success as
Starting point is 01:13:40 a researcher and from perspective as a mentor of these researchers, having guided them in the way of advice, what does it take to be successful in science and engineering careers? Whether you're talking to somebody in their teens, 20s and 30s. What does that path look like? It's curiosity and having fun. And I think it's important also having fun without the curious minds. It's the people who are out with too. So yeah, fun and curiosity. Is there? the people is around with too. So yeah, fun and curiosity. Is there mention Steve Jobs? Is there also an underlying ambition that's unique that you saw or is it really does blow down to insatiable curiosity and fun? Well, of course, you know, it's been active and ambitious way, yes, and definitely. But I think sometime in science, there are friends of mine who are like this.
Starting point is 01:14:56 there are scientists who like to work by themselves and kind of communicate only when they complete their work or discover something. I always found the actual process of discovering something more fun if it's together with other intelligent and curious and fun people. So if you see the fun in that process, the side effect will be that you actually discover some interesting things. So, as you've led many incredible efforts here, what's the secret to being a good advisor, mentor, leader in a research setting? Is it a similar spirit? Or, yeah, what advice could you give to people, young faculty and so on? It's partly repeating what I said about an environment that should be friendly and fun and ambitious. And, you know, I think I learned a lot from some of my advisors and friends, and
Starting point is 01:16:10 some of them physicists. And there was, for instance, this behavior that was encouraged: when somebody comes up with a new idea in the group, unless it's really stupid, you are always enthusiastic. And you are enthusiastic for a few minutes, for a few hours. Then you start asking a few critical questions, testing the basis of the idea.
Starting point is 01:16:40 But, you know, this is a process that is, I think it's very, very good. You have to be enthusiastic. Sometimes people are very critical from the beginning. That's not... Yes, you have to give it a chance. Yes. That's easy to grow.
Starting point is 01:16:56 That said, with some of your ideas, which are quite revolutionary, as we've seen, especially on the human vision side and the neuroscience side, there could be some pretty heated arguments. Do you enjoy these? Is that a part of science and academic pursuits that you enjoy? Yeah. Is that something that happens in your group as well? Yeah, absolutely. I also spent some time in Germany. There is this tradition in which people are more forthright, less kind than here. So, you know, in the US, when you write a bad letter, you still say, this guy is nice, you
Starting point is 01:17:39 know. Yes, yes. So, here in America it's degrees of nice. Yes, it's all just degrees of nice. Right, yes. So, yeah, here in America, it's degrees of nice. Yes, it's all just degrees of nice. Right, right. So as long as this does not become personal, and it's really like, you know, a football game with his rules, that's great.
Starting point is 01:18:03 So if you somehow found yourself in a position to ask one question of an oracle, like a genie, maybe a god, and you guarantee to get a clear answer, what kind of question would you ask? What would be the question you would ask? In the spirit of our discussion, it could be, how could I become ten times more intelligent? And so, but see, you only get a clear short answer. So, do you think there's a clear short answer to that? No. And that's the answer you'll get. So, you've mentioned flowers of Elgarnon. Oh, yeah. There's a story that inspired you in your childhood.
Starting point is 01:18:55 As this story of a mouse and human achieving genius level intelligence and then understanding what was happening, whilst slowly becoming not intelligent again in this strategy of gaining intelligence and losing intelligence. Do you think in that spirit, in that story, do you think intelligence is a gift or curse? From the perspective of happiness and meaning of life, you try to create intelligence system that understands the universe, but on an individual level, the meaning of life. Do you think intelligence is a gift? It's a good question.
Starting point is 01:19:34 I don't know. As one of the, as one people consider the smartest people in the world, and some, in some dimension, at the very least, what do you think? I don't know, he may be invariant to intelligence, let it be a glee of happiness, it would be nice if it were. That's the hope. Yeah, you could be smart and happy and cool, so hope. Yeah. You could be smart and happy and cool as I'm happy. Yeah.
Starting point is 01:20:08 As always, on the discussion of the meaning of life, it's probably a good place to end. Tomas, I'll thank you so much for talking today. Thank you. This was great. Thank you.
