Lex Fridman Podcast - #416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Episode Date: March 7, 2024

Yann LeCun is the Chief AI Scientist at Meta, professor at NYU, Turing Award winner, and one of the most influential researchers in the history of AI. Please support this podcast by checking out our sponsors:
- HiddenLayer: https://hiddenlayer.com/lex
- LMNT: https://drinkLMNT.com/lex to get free sample pack
- Shopify: https://shopify.com/lex to get $1 per month trial
- AG1: https://drinkag1.com/lex to get 1 month supply of fish oil
EPISODE LINKS:
Yann's Twitter: https://twitter.com/ylecun
Yann's Facebook: https://facebook.com/yann.lecun
Meta AI: https://ai.meta.com/
PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips
SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman
OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(09:10) - Limits of LLMs
(20:47) - Bilingualism and thinking
(24:39) - Video prediction
(31:59) - JEPA (Joint-Embedding Predictive Architecture)
(35:08) - JEPA vs LLMs
(44:24) - DINO and I-JEPA
(45:44) - V-JEPA
(51:15) - Hierarchical planning
(57:33) - Autoregressive LLMs
(1:12:59) - AI hallucination
(1:18:23) - Reasoning in AI
(1:35:55) - Reinforcement learning
(1:41:02) - Woke AI
(1:50:41) - Open source
(1:54:19) - AI and ideology
(1:56:50) - Marc Andreessen
(2:04:49) - Llama 3
(2:11:13) - AGI
(2:15:41) - AI doomers
(2:31:31) - Joscha Bach
(2:35:44) - Humanoid robots
(2:44:52) - Hope for the future

Transcript
Starting point is 00:00:00 The following is a conversation with Yann LeCun, his third time on the podcast. He is the chief AI scientist at Meta, professor at NYU, Turing Award winner, and one of the seminal figures in the history of artificial intelligence. He and Meta AI have been big proponents of open sourcing AI development and have been walking the walk by open sourcing many of their biggest models, including Llama 2 and eventually Llama 3. Also, Yann has been an outspoken critic of those people in the AI community who warn about the looming danger and existential threat of AGI. He believes that AGI will be created one day, but it will be good. It will not escape human control, nor will it dominate and kill all humans. At this moment of rapid
Starting point is 00:00:55 AI development, this happens to be somewhat a controversial position. And so it's been fun seeing Yon get into a lot of intense and fascinating discussions online as we do in this very conversation. And now a quick few second mention of each sponsor. Check them out in the description. It's the best way to support this podcast. We've got hidden layer for securing your AI models, element for electrolytes, Shopify for shopping for stuff online, and AG1 for delicious health. Choose wisely my friends. Also if you want to get in touch with me for whatever reason, maybe to work with our amazing
Starting point is 00:01:39 team, go to LexVeeman.com slash contact. And now onto the full ad reads reads never any ads in the middle I try to make these interesting I don't know what I'm talking like this but I am there's a staccato nature to it speaking of staccato I've been playing a bit of piano anyway if you skip these ads please do check out the sponsors we love them I love them. I love them. I enjoy their stuff. Maybe you will too. This episode is brought to you by an un-themed in-context. See what I did there? Sponsor. Since this is Yanlacun, artificial intelligence, machine learning, one of the seminal figures in the field. So of course you're gonna have a sponsor
Starting point is 00:02:26 that's related to artificial intelligence, hidden layer. They provide a platform that keeps your machine learning models secure. The ways to attack machine learning models, large language models, all the stuff we talk about with Yian, there's a lot of really fascinating work, not just large language models. All the stuff we talk about with Yon, there's a lot of really fascinating work, not just large language models, but the same for video, video prediction, tokenization where the tokens are in the space of concept versus the space of literally letters, symbols, japa, vjapa,
Starting point is 00:02:59 all of that stuff that they're open sourcing, all the stuff they're publishing on, it's just really incredible. But that said, all of those models have security holes in ways that we can't even anticipate or imagine at this time. And so you want good people to be trying to find those security holes, trying to be one step ahead of the people that are trying to attack. So if you're especially a company that's relying on these models, you need to have a person who's in charge of saying, yeah, this model that you got from this place has been tested, has been secured. Whether that place is a hugging face
Starting point is 00:03:37 or any other kind of stuff, or any other kind of repository or model zoo kind of place. I think the more and more we rely on large language models or just AI systems in general, the more the security threats that are always going to be there become dangerous and impactful. So protect your models.
Starting point is 00:04:00 Visit hiddenlayer.com slash lex to learn more about how Hidden Layer can accelerate your AI adoption in a secure way. This episode is also brought to you by Element. A thing I drink throughout the day I'm drinking now. When I'm on a podcast you'll sometimes see me with a mug and clear liquid in there that looks like water. In fact, it is not simply water. It is water mixed with element. Watermelon salt, cold.
Starting point is 00:04:34 What I do is I take one of them Powerade 28 fluid ounces bottles, fill it up with water, one packet of watermelon salt, shake it up, put it in the fridge, that's it. I reuse the bottles and drink from a mug, or sometimes from the bottle. Either way, delicious, good for you, especially if you're doing fasting, especially if you're doing low carb kinds of diets, which I do. You can get a sample pack for free with any purchase.
Starting point is 00:05:05 Try to drinkelement.com slash Lex. This episode is brought to you by Shopify as I take a drink of element. It is a platform designed for anyone, even me, to sell stuff anywhere on a great looking store. I use a basic one, like a really minimalist one. you can check it out if you go to Lex Ruhman comm slash store There's a few shirts on there if that's your thing. It was so easy to set up. I Imagine there's like a million features they have that can make it look better and All kinds of extra stuff you can do with the store, but I use the basic thing and the basic thing is pretty damn good I like basic I like minimalism and they integrate with a lot of third-party apps Including what I use which is
Starting point is 00:05:55 on-demand printing So like you buy the shirt on Shopify, but it gets printed and shipped by another company That I always keep forgetting but I think it's called Printful or Printify or I think it's Printful. I'm not sure. It doesn't matter. I think there's several integrations. You can check it out yourself. For me, it works. I'm using the most popular one. Printful I think it's called. Anyway, I look forward to your letters correcting me on my pronunciation. Shopify is great. I'm a big fan of the good side of the machinery of capitalism, selling
Starting point is 00:06:34 stuff on the internet, connecting people to the thing that they want, or rather the thing that would make their life better, both in advertisement and e-commerce, shopping in general, I'm a big believer when that's done well, your life legitimately in the long term becomes better. And so whatever system can connect one human to the thing that makes their life better is great. And I believe that Shopify is sort of a platform that enables that kind of system.
Starting point is 00:07:08 You can sign up for a $1 per month trial period at Shopify.com. That's all lowercase. Go to Shopify.com to take your business to the next level today. This episode is also brought to you by AG1. And all in one daily drink to support better health and peak performance. It is delicious. It is nutritious. And I ran out of words that rhyme with those two. Actually, let me use a large language model to figure out what rhymes are delicious. Words that rhyme with delicious include ambitious, auspicious, capricious, fictitious, suspicious. So there you have it. Anyway, I drink it twice a day. Also put it in the fridge. And sometimes in the freezer, like it gets a little bit frozen,
Starting point is 00:08:03 just like a little bit, just a little bit frozen just like a little bit Just a little bit frozen. You got that like slushy consistency. I'll do that too sometimes It's freaking delicious. It's delicious no matter what it's delicious warm. It's delicious cold is delicious slightly frozen All of it is just incredible and of course it covers like the basic multi vitamin foundation of what I think of as a good diet. So it's just a great multivitamin. That's the way I think about it.
Starting point is 00:08:32 So all the crazy stuff I do, the physical challenges, the mental challenges, all of that, at least I got AG1. They'll give you one month supply of fish oil when you sign up at drinkag1.com slash Lex. This is the Lex Fremont podcast. The supported, please check out our sponsors in the description. And now dear friends, here's Yan Lacoon. You've had some strong statements, technical statements about the future of artificial intelligence recently throughout your career actually, but recently as well.
Starting point is 00:09:22 You've said that autoregressive LLMs are not the way we're going to make progress towards superhuman intelligence. These are the large language models like GPT-4 like Lama 2 and 3 soon and so on. How do they work and why are they not going to take us all the way? For a number of reasons. The first is that there is a number of characteristics of intelligent behavior. For example, the capacity to understand the world, understand the physical world, the ability to remember and retrieve things, persistent memory, the ability to reason and the ability to plan. Those are four essential characteristics of intelligent systems or entities, humans,
Starting point is 00:10:11 animals. LNMs can do none of those. Or they can only do them in a very primitive way. And they don't really understand the physical world. They don't really have persistent memory. They can't really reason and they certainly can't plan. And so, if you expect the system to become intelligent just without having the possibility
Starting point is 00:10:35 of doing those things, you're making a mistake. That is not to say that autoregressive elements are not useful. They're certainly useful. That they're not interesting, that we can't build a whole ecosystem of applications around them. Of course, we can. But as a path towards human level intelligence,
Starting point is 00:10:58 they are missing essential components. And then there is another tidbit or fact that I think is very interesting. Those data lines are trained on enormous amounts of text, basically the entirety of all publicly available texts on the internet, right? That's typically on the order of 10 to the 13 tokens. Each token is typically two bytes. So that's two 10 to the 13 bytes as training data. It would take you or me 170,000 years to just read through this at eight hours a day. So it seems like an enormous amount of knowledge, right?
Starting point is 00:11:34 That those systems can accumulate. But then you realize it's really not that much data. If you talk to a developmental psychologist and they tell you a four-year-old has been awake for 16,000 hours in his or her life. And the amount of information that has reached the visual cortex of that child in four years is about 10 to 15 bytes. And you can compute this by estimating that the optical nerve carry about 20 megabytes per second, roughly. And so 10 to the 15 bytes for a four-year-old versus 2 times
Starting point is 00:12:16 10 to the 13 bytes for 170,000 years worth of reading. What that tells you is that through sensory input, we see a lot more information than we do through language. And that despite our intuition, most of what we learn and most of our knowledge is through our observation and interaction with the real world, not through language. Everything that we learn in the first few years of life and certainly everything that animals learn has nothing to do with language. So it'd be good to maybe push against some of the intuition behind what you're saying.
Starting point is 00:12:54 So it is true there's several orders of magnitude, more data coming into the human mind, how much faster and the human mind is able to learn very quickly from that filter able to learn very quickly from that filter of the data very quickly. You know somebody might argue your comparison between sensory data versus language. That language is already very compressed. It already contains a lot more information than the bytes it takes to store them if you compare it to visual data. So there's a lot of wisdom in
Starting point is 00:13:23 language there's words and the way we stitch them together, it already contains a lot of information. So is it possible that language alone already has enough wisdom and knowledge in there to be able to from that language construct a world model and understanding of the world and understanding of the physical world that you're saying all limbs lack. So it's a big debate among philosophers and also cognitive scientists like whether intelligence
Starting point is 00:13:56 needs to be grounded in reality. I'm clearly in the camp that yes, intelligence cannot appear without some grounding in some reality. It doesn't need to be physical reality. It could be simulated, but the environment is just much richer than what you can express in language. Language is a very approximate representation of or percepts and or mental models. I mean, there's a lot of tasks that we accomplish where we manipulate a mental model
Starting point is 00:14:28 of the situation at hand, and that has nothing to do with language. Everything that's physical, mechanical, whatever, when we build something, when we accomplish a task, model a task of grabbing something, et cetera, we plan for action sequences, and we do this by essentially imagining the outcome of sequence of action. So we might imagine and that requires mental models that don't have much to do with language. And that's, I would argue, most of our knowledge is derived from that interaction with the physical world.
Starting point is 00:15:06 So a lot of my colleagues who are more interested in things like computer vision are really on that camp that AI needs to be embodied, essentially. And then other people coming from the NLP side or maybe some other motivation, don't necessarily agree with that. And philosophers are split as well. And the complexity of the world is hard to imagine. It's hard to represent all the complexities that we take completely for granted in the real world that
Starting point is 00:15:46 we don't even imagine require intelligence, right? This is the old Morovaq paradox from the pioneer of robotics, Hans Morovaq. We said, you know, how is it that with computers, it seems to be easy to do high level complex tasks like playing chess and solving integrals and doing things like that? Whereas the thing we take for granted that we do every day, like I don't know, learning to drive a car or grabbing an object, we can't do with computers. And we have LLMs that can pass the bar exam, so they must be smart. But then they can't learn to drive in 20 hours like any 17 year old.
Starting point is 00:16:28 They can't learn to clear up the dinner table and fill up the dishwasher like any 10 year old can learn in one shot. Why is that? Like, what are we missing? What type of learning or reasoning architecture or whatever are we missing that basically prevent us from having level five sort of in cars
Starting point is 00:16:51 and domestic robots? Can a large language model construct a world model that does know how to drive and does know how to fill a dishwasher, but just doesn't know how to deal with visual data at this time. So it can operate in a space of concepts. So yeah, that's what a lot of people are working on.
Starting point is 00:17:12 So the answer, the short answer is no. And the more complex answer is you can use all kinds of tricks to get an LLM to basically digest an LLM to basically digest visual representations of images or video or audio for that matter. And a classical way of doing this is you train a vision system in some way. And we have a number of ways to train vision systems. These are supervised, semi-supervised, self-supervised, all kinds of different ways. That will turn any image into high-level representation. Basically a list of tokens that are really similar to the kind of tokens that typical LLM takes as an input. And then you just feed that to the LLM in addition to the text.
Starting point is 00:18:09 And you just expect the LLM to kind of, during training to kind of be able to use those representations to help make decisions. I mean, it's been working along those lines for quite a long time. And now you see those systems, right? I mean, there are LLMs that have some vision extension. But they're basically hacks in the sense
Starting point is 00:18:30 that those things are not trained end to end to handle, to really understand the world. They're not trained with video, for example. They don't really understand intuitive physics, at least not at the moment. So you don't think there's something special to you about intuitive physics, at least not at the moment. So you don't think there's something special to you about intuitive physics, about sort of common sense reasoning about the physical space, about physical reality.
Starting point is 00:18:51 That to you is a giant leap that LLMs are just not able to do. We're not gonna be able to do this with the type of LLMs that we are working with today. And there's a number of reasons for this, but the main reason is the way LLM are trained is that you take a piece of text, you remove some of the words in that text, you mask them, you replace them by black markers, and you train a genetic neural net to predict the
Starting point is 00:19:17 words that are missing. And if you build this neural net in a particular way, so that it can only look at words that are to the left of the one it's trying to predict, then what you have is a system that basically is trying to predict the next word in a text. So then you can feed it a text, a prompt, and you can ask it to predict the next word. It can never predict the next word exactly. And so what it's going to do is produce a probability distribution over all the possible
Starting point is 00:19:46 words in your dictionary. In fact, it doesn't predict words, it predicts tokens that are kind of subword units. And so it's easy to handle the uncertainty in the prediction there because there's only a finite number of possible words in the dictionary and you can just compute a distribution over them. Then what the system does is that it picks a word from that distribution. Of course, there's a higher chance of picking words that have a higher probability within that distribution.
Starting point is 00:20:14 So you sample from that distribution to actually produce a word. And then you shift that word into the input. And so that allows the system not to predict the second word. And once you do this, you to predict the second word, right? And once you do this, you shift it into the input, etc. That's called autoregressive prediction, which is why those LLMs should be called autoregressive LLMs. But we just call them LLMs.
Starting point is 00:20:39 And there is a difference between this kind of process and a process by which before producing a word, when you talk, when you and I talk, you and I are bilingual, we think about what we're gonna say and it's relatively independent of the language in which we're gonna say it. When we talk about like, I don't know, let's say a mathematical concept or something, the kind of thinking that we're doing and the answer that we're planning to produce is not linked to whether we're gonna see it in French, Russian or English. Chomsky just rolled his eyes, but I understand.
Starting point is 00:21:14 So you're saying that there's a bigger abstraction that goes before language. Yeah, maps onto language. Right. It's certainly true for a lot of thinking that we do. Is that obvious that we don't? Like you're saying your thinking is same in French as it is in English?
Starting point is 00:21:33 Yeah, pretty much. Pretty much? Or is this like, how flexible are you? Like if there's a probability distribution? Well, it depends what kind of thinking, right? If it's just, if it's like producing puns, I get much better in French than English about that. No, but so we're right.
Starting point is 00:21:50 Much worse. Is there an abstract representation of puns? Like, is your humor an abstract representation? Like when you tweet, and your tweets are sometimes a little bit spicy, what's, is there an abstract representation in your brain of a tweet before it maps onto English? There is an abstract representation of imagining the reaction of a reader to that text. You start with laughter and then figure out how to make that happen? Or a figure out a reaction you want to cause and then figure out how to say it, so that it causes that reaction.
Starting point is 00:22:22 But that's really close to language. But think about like a mathematical concept or, you know, imagining, you know, something you want to build out of wood or something like this, right? The kind of thinking you're doing is absolutely nothing to do with language really. Like it's not like you have necessarily like an internal monologue in any particular language, you're, you know, imagining mental models of the thing, right? If I ask you to imagine what this water bottle will look like if I rotate it 90 degrees, that has nothing to do with language. And so clearly there is a more abstract level of representation in which we do most of our thinking and we plan what we're going to say if the output is uttered words as opposed to an output being muscle actions.
Starting point is 00:23:19 We plan our answer before we produce it. And LLMs don't do that. They just produce one word after the other instinctively if you want. It's a bit like the you know subconscious actions where you don't like you're distracted, you're doing something, you're completely concentrated and someone comes to you and you know ask you a question and you kind of answer the question. You don't have time to think about the answer, but the answer is easy so you don't need to pay attention. You sort of respond automatically.
Starting point is 00:23:48 That's kind of what the NLLM does. It doesn't think about its answer really. It retrieves it because it's accumulated a lot of knowledge, so it can retrieve some things, but it's going to just spit out one token after the other without planning the answer. But you're making it sound just one token after the other, one token at a time, the most likely thing it generates is a sequence of tokens is going to be a deeply profound thing. Okay, but then that assumes that those systems actually possess an internal world model.
Starting point is 00:24:37 So really goes to the, I think the fundamental question is can you build a really complete world model, not complete, but one that has a deep understanding of the world? Yeah. So, can you build this, first of all, by prediction? Right. And the answer is probably yes. Can you predict, can you build it by predicting words? And the answer is most probably no, because language is very poor in terms of weak or low bandwidth, if you want. There's just not enough information
Starting point is 00:25:13 there. So building world models means observing the world and understanding why the world is evolving the way it is. And then the extra component of a world model is something that can predict how the world is going to evolve as a consequence of an action you might take. So what model really is, here is my idea of the state of the world at time t, here is an action I might take. What is the predicted state of the world at time t. Here is an action I might take. What is the predicted state of the world at time t plus 1?
Starting point is 00:25:48 Now, that state of the world does not need to represent everything about the world. It just needs to represent an off that's relevant for this planning of the action, but not necessarily all the details. Now, here is the problem. You're not going to be able to do this with generative models. So a generative model has trained on video, and we've tried to do this for 10 years.
Starting point is 00:26:11 You take a video, show a system a piece of video, and then ask it to predict the reminder of the video. Basically, predict what's going to happen. One frame at a time. Do the same thing as sort of the auto aggressive LLMs do, but for video. Right. Either one from your time or a group of friends at a time. But yeah, a large video model if you want. The idea of doing this has been floating around for a long time and at at fair, some of my colleagues and I have been trying to do this for about 10 years. And you can't really do the same trick as with LLMs,
Starting point is 00:26:51 because LLMs, as I said, you can't predict exactly which word is going to follow a sequence of words, but you can predict the distribution over words. Now, if you go to video, what you would have to do is predict the distribution over all possible frames in a video. And we don't really know how to do that properly. We do not know how to represent distributions
Starting point is 00:27:15 over high-dimensional continuous spaces in ways that are useful. And that's there lies the main issue. And the reason we can do this is because the world is incredibly more complicated and richer in terms of information than text. Text is discrete. Video is highly dimensional and continuous. A lot of details in this.
Starting point is 00:27:40 So if I take a video of this room, and the video is, you know, camera panning around. There is no way I can predict everything that's going to be in the room as I pan around. The system cannot predict what's going to be in the room as the camera is panning. Maybe it's going to predict this is, this is a room where there's a light and there is a wall and things like that. It can't predict what the painting on the wall looks like or what the texture of the couch looks like.
Starting point is 00:28:06 Certainly not the texture of the carpet. So there's no way I can predict all those details. So the way to handle this is one way to possibly to handle this, which we've been working for a long time, is to have a model that has what's called a latent variable. And the latent variable is fed to a neural net, and it's supposed to represent all the information
Starting point is 00:28:27 about the world that you don't perceive yet, and that you need to augment the system for the prediction to do a good job at predicting pixels, including the fine texture of the carpet and the couch and the painting on the wall. That has been a complete failure, essentially. And we've tried lots of things. We tried just straight neural nets, we tried GANs,
Starting point is 00:28:57 we tried VAEs, all kinds of regularized auto encoders, we tried many things. We also tried those kind of methods to learn good representations of images or video that could then be used as input to, for example, an image classification system. And that also has basically failed. Like all the systems that attempt to predict
Starting point is 00:29:25 missing parts of an image or video from a corrupted version of it basically. So I take an image or a video, corrupt it or transform it in some way, and then try to reconstruct the complete video or image from the corrupted version. And then hope that internally the system will develop a good representations of images
Starting point is 00:29:47 that you can use for object recognition, segmentation, whatever it is. That has been essentially a complete failure. And it works really well for text. That's the principle that is used for LNMs, right? So where's the failure exactly? Is that it's very difficult to form a good representation of an image, like a good embedding of all the important
Starting point is 00:30:11 information in the image? Is it in terms of the consistency of image to image to image to image that forms the video? Like what does the video highlight real of all the ways you failed? What does that look like? OK, so the reason this doesn't work is, first of all, I have to tell you exactly what doesn't work because there is something else that does work. So the thing that does not
Starting point is 00:30:34 work is training the system to learn representations of images by training it to reconstruct a good image from a corrupted version of it. Okay, that's what doesn't work. And we have a whole slew of techniques for this that are, you know, a variant of denoising autoencoders, something called MAE developed by some of my colleagues at FAIR, Max.Autoencoder.
Starting point is 00:30:59 So it's basically like the, you know, LLMs or things like this where you train the system by corrupting text except you corrupt images, you remove patches from it and you train a gigantic neural net to reconstruct. The features you get are not good. And you know they're not good because if you now train the same architecture, but you try to supervise with label data, with textualual descriptions of images, etc.
Starting point is 00:31:26 You do get good representations and the performance on recognition tasks is much better than if you do this self-supervised free training. So the architecture is good? The architecture is good. The architecture of the encoder is good. Okay, but the fact that you train the system to reconstruct images does not lead it to produce to learn good generic features of images. When you train in a self-supervised way.
Starting point is 00:31:51 Self-supervised by reconstruction. Yeah, by reconstruction. Okay, so what's the alternative? The alternative is joint embedding. What is joint embedding? What are these architectures that you're so excited about? Okay, so now instead of training a system to encode the image and then training it to reconstruct the full image from a corrupted version, you take the full image, you take the
Starting point is 00:32:16 corrupted or transformed version, you run them both through encoders, which in general are identical but not necessarily. And then you train a predictor on top of those encoders to predict the representation of the full input from the representation of the corrupted one. Okay, so joint embedding, because you're taking the full inputs and the corrupted version or transform version, run them both through encoders, you get a joint embedding.
Starting point is 00:32:50 And then you're saying, can I predict the representation of the full one from the representation of the corrupted one? Okay. And I call this a JEPA, so that means joint embedding predictive architecture because there's joint embedding and there is this predictor that predicts the representation of the good guy from the bad guy. And the big question is how do you train something like this? And until five years ago or six years ago, we didn't have particularly good answers for how you train those things, except for one called Contrastive Learning, where, and the idea of Contrastive Learning,
Starting point is 00:33:29 where, and the idea of Contrastive Learning is you take a pair of images that are, again, an image and a corrupted version or degraded version somehow, or transformed version of the original one, and you train the predicted representation to be the same as that. If you only do this, the system collapses. It basically completely ignores the input
Starting point is 00:33:48 and produces representations that are constant. So the contrastive methods avoid this. And those things have been around since the early 90s. I had a paper on this in 1993. Is you also show pairs of images that you know are different. And then you push away the representations from each other. So you say, not only do representations of things that we know are the same, should be the same or should be similar,
Starting point is 00:34:16 but representation of things that we know are different should be different. And that prevents the collapse, but it has some limitation. And there's a whole bunch of techniques that have appeared over the last six, seven years that can revive this type of method. Some of them from fair, some of them from Google and other places. But there are limitations to those contrasting method. What has changed in the last three, four years is now we have methods that are non-contrastive.
Starting point is 00:34:47 So they don't require those negative, contrastive samples of images that we know are different. You turn them only with images that are different versions or different views of the same thing. And you rely on some other tweaks to prevent the system from collapsing. And we have half a dozen different methods for this now. So what is the fundamental difference between joint embedding architectures and LLMs? So can JAP take us to AGI? Whether we should say that you don't like the term AGI, and we'll probably argue. I think every single time I've talked to you
Starting point is 00:35:27 with argued about the G in AGI, I get it, I get it. I get it. We'll probably continue to argue about it. It's great. Because you like French, and Amis is, I guess, friend in French. Yes. And AMI stands for Advanced Machine Intelligence.
Starting point is 00:35:48 Right. But either way, can JAPA take us to that, towards that advanced machine intelligence? Well, so it's a first step. Okay. So first of all, what's the difference with generative architectures like LLMs. So LLMs or vision systems that are trained by reconstruction generate the inputs, right, that generate the original input that is non-corrupted, non-transformed, right. So you have to predict all the pixels. And there is a huge amount of resources spent in the system to actually predict all those pixels predict all the pixels. And there is a huge amount of resources
Starting point is 00:36:25 spent in the system to actually predict all those pixels, all the details. In a JEPA, you're not trying to predict all the pixels. You are only trying to predict an abstract representation of the inputs. And that's much easier in many ways. So what the JEPA system, when it's being trained, is trying to do is extract as much information
Starting point is 00:36:46 as possible from the input, but yet only extract information that is relatively easily predictable. Okay, so there's a lot of things in the world that we cannot predict. For example, if you have a self-driving car driving down the street or road, there may be trees around the road.
Starting point is 00:37:06 And it could be a windy day. So the leaves on the tree are kind of moving in kind of semi-chaotic random ways that you can't predict and you don't care. You don't want to predict. So what you want is your encoder to basically eliminate all those details. We'll tell you there's moving leaves,
Starting point is 00:37:21 but it's not going to keep the details of exactly what's going on. And so when you do the prediction in representation space, you're not going to have to predict every single pixel of every leaf. And that not only is a lot simpler, but also it allows the system to essentially learn and abstract representation of the world
Starting point is 00:37:42 where what can be modeled and predicted is preserved and the rest is viewed as noise and eliminated by the encoder. So it kind of lifts the level of abstraction of the representation. If you think about this, this is something we do absolutely all the time. Whenever we describe a phenomenon, we describe it at a particular level of abstraction. And we don't always describe every natural phenomenon in terms of quantum field theory, right? That would be impossible, right? So we have multiple levels of abstraction
Starting point is 00:38:12 to describe what happens in the world, you know, starting from quantum field theory to like atomic theory and molecules, you know, and chemistry, materials, and, you know, all the way up to, you know, kind of concrete objects in the real world and things like that. So we can't just only model everything at the lowest level. And that's what the idea of JEPA is really about, learn abstract representation in a self-supervised manner, and you can do it
Starting point is 00:38:43 hierarchically as well. So that I think is an essential component of an intelligent system. And in language, we can get away without doing this, because language is already, to some level, abstract and already has eliminated a lot of information that is not predictable. And so we can get away without doing the joint embedding, without lifting the abstraction level and by directly predicting words. So joint embedding, it's still generative, but it's generative in this abstract representation space. And you're saying language, we were lazy with language
Starting point is 00:39:20 because we already got the abstract representation for free. And now we have to zoom out, actually think about generally intelligent systems. We have to deal with the full mess of physical reality of reality and you can't you do have to do this step of jumping from the full rich detailed reality to a abstract representation of that reality based on what you can then reason and all that kind of stuff. Right. And the thing is, those cell supervised algorithms that learn by prediction, even in representation space, they learn more concept if the input
Starting point is 00:40:03 data you feed them is more redundant. the more redundancy there is in the data, the more they're able to capture some internal structure of it. And so there there is way more redundancy and structure in perceptual inputs, sensory input like vision than there is in text, which is not nearly as redundant. This is back to the question you were asking a few minutes ago. Language might represent more information really because it's already compressed. You're right about that, but that means it's also less redundant. And so self-supervised learning will not work as well. Is it possible to join the self-supervised training on visual data and self-supervised training on language data. There is a huge amount of knowledge,
Starting point is 00:40:49 even though you talk down about those 10 to the 13 tokens. Those 10 to the 13 tokens represent the entirety, a large fraction of what us humans have figured out. Both the shit talk on Reddit and the contents of all the books and the articles and the full spectrum of human intellectual creations. So is it possible to join those two together? Well, eventually, yes.
Starting point is 00:41:16 But I think if we do this too early, we run the risk of being tempted to cheat. And in fact, that's what people are doing at the moment with the Vision Language model. We're basically cheating. We're using language as a crutch to help the deficiencies of our vision systems to kind of learn good representations from images and video. And the problem with this is that we might improve our Vision language system a bit.
Starting point is 00:41:46 I mean, our language models by feeding them images. But we're not going to get to the level of even the intelligence or level of understanding of the world of a cat or dog, which doesn't have language. They don't have language and they understand the world much better than any LLM. They can plan really complex actions and sort of imagine the result of a bunch of actions. How do we get machines to learn that before we combine that with language? Obviously, if we combine this with language, this is going to be a winner. But before that, we have to focus on like how do we get systems to learn how the world works?
Starting point is 00:42:25 So this kind of joint embedding Predictive architecture for you that's going to be able to learn something that common sense something like what a cat uses to predict How to mess with its owner most optimally by knocking over a thing? That's that's the In fact, the techniques we're using are non-contrastive. So not only is the architecture non-generative, the learning procedures we're using are non-contrastive. So we have two sets of techniques. One set is based on distillation and there's a number of methods that use this principle. One by DeepMind could be way well. methods that use this principle. One by DeepMind called B-YOL, a couple by Fair,
Starting point is 00:43:07 one called Vicreg, and another one called IJEPA. And Vicreg, I should say, is not a distillation method, actually, but IJEPA and B-YOL certainly are. And there's another one also called Dino or Dino, also produced from Fair. And the idea of those things is that you take the full input, let's say an image. You run it through an encoder, produces a representation.
Starting point is 00:43:34 And then you corrupt that input or transform it, run it through essentially what amounts to the same encoder with some other differences. And then train a predictor. Sometimes the predictor is very simple. Sometimes it doesn't exist. But train a predictor. Sometimes the predictor is very simple. Sometimes it doesn't exist, but train a predictor to predict a representation of the first uncorrupted input from the corrupted input.
Starting point is 00:43:54 But you only train the second branch. You only train the part of the network that is fed with the corrupted input. The other network you don't train, but it seems to share the same weight. When you modify the first one, it also modifies the second one. And with various tricks, you can prevent the system from collapsing with the collapse of the type I was explaining before,
Starting point is 00:44:17 with the system basically ignores the input. So that works very well. The two techniques we developed at FAIR, Dino and IJEPA work really well for that. So what kind of data are we talking about here? So this several scenario, one scenario is you take an image, you corrupt it by changing the cropping, for example, changing the size a little bit,
Starting point is 00:44:46 maybe changing the orientation, blurring it, changing the colors, doing all kinds of horrible things to it. But basic horrible things. Basic horrible things that sort of degrade the quality a little bit and change the framing, you know, crop the image. And in some cases, in the case of IJEPA, you don't need to do any of this. You just mask some parts of it, right? You just basically remove some regions, like a big block, essentially.
Starting point is 00:45:14 And then run through the encoders and train the entire system, encoder and predictor, to predict the representation of the good one from the representation of the corrupted one. So that's the IJEPA. Doesn't need to know that it's an image, for example, because the only thing you need to know is how to do this masking.
Starting point is 00:45:35 Whereas with Dino, you need to know it's an image because you need to do things like, you know, geometry transformation and blurring and things like that that are really image specific. A more recent version of this that we have is called VJPA. So it's basically the same idea as IJPA except it's applied to video. So now you take a whole video and you mask a whole chunk of it. And what we mask is actually kind of a temporal tube.
Starting point is 00:45:57 So an all, like a whole segment of each frame in the video over the entire video. And that tube was like statically positioned throughout the frames? Right, throughout the tube. The tube, yeah, typically is 16 frames or something, and we mask the same region over the entire 16 frames. It's a different one for every video, obviously. And then again, train that system so as to predict the representation of the full video from the partially masked
Starting point is 00:46:25 video. That works really well. It's the first system that we have that learns good representations of video so that when you feed those representations to a supervised classifier head, it can tell you what action is taking place in the video with pretty good accuracy. So that's the first time we get something of that quality. So that's a good test, that a good representation is formed. That means there's something to this. Yeah. We also preliminary result that seemed to indicate that the representation allows our system
Starting point is 00:47:00 to tell whether the video is physically possible or completely impossible because some object disappeared or an object you know suddenly jumped from one location to another or or change shape or something. So it's able to capture some physical, some physics-based constraints about the reality represented in the video. Yeah. About the appearance and the disappearance of objects? Yeah, that's really new. Okay, but can this actually get us to this kind of world model that understands enough about the world
Starting point is 00:47:39 to be able to drive a car? Possibly, this is gonna take a while before we get to that point. But there are systems already, you know, robotic systems that are based on this idea. And what you need for this is a slightly modified version of this where imagine that you have a video and a complete video. And what you're doing to this video
Starting point is 00:48:06 is that you're either translating it in time towards the future. So you only see the beginning of the video, but you don't see the latter part of it that is in the original one. Or you just mask the second half of the video, for example. And then you train a JEPA system of the type I describe to predict the representation of
Starting point is 00:48:26 the full video from the shifted one. But you also feed the predictor with an action. For example, the wheel is turned 10 degrees to the right or something. So if it's a dash cam in a car and you know the angle of the wheel, you should be able to predict to some extent what's going to happen to what you see. You're not going to be able to predict all the details of objects that appear in the view, obviously, but at an abstract representation level, you can probably predict what's going to happen. So now what you have is a internal model that says, here is my idea of state of the world at time t, here is an action I'm taking, here is a prediction of the state of the world at time t plus one, t plus delta t, t plus two seconds, whatever it is. If you have a model
Starting point is 00:49:17 of this type, you can use it for planning. So now you can do what LLMs cannot do, which is planning what you're going to do. So as you arrive at a particular outcome or satisfy a particular objective. So you can have a number of objectives. If I can predict that if I have an object like this and I open my hand, it's going to fall, right? And if I push it with a particular force on the table, it's going to move. If I push the table itself, it's probably not going to move with the same force. So we have this internal model of the world in our mind, which allows us to plan sequences of actions to arrive at a particular goal. And so now if you have this world model, we can imagine a sequence of actions, predict
Starting point is 00:50:14 what the outcome of the sequence of action is going to be, measure to what extent the final state satisfies a particular objective, like, you know, moving the bottle to the left of the table. And then plan a sequence of actions that will minimize this objective at runtime. We're not talking about learning. We're talking about inference time. Right. So this is planning, really. And in optimal control, this is a very classical thing. It's called model predictive control. You have a model of the system you want to control that can predict the sequence of states corresponding to a sequence of commands. And you're planning a sequence of commands
Starting point is 00:50:55 so that according to your world model, the end state of the system will satisfy an objective that you fix. This is the way rocket trajectories have been planned since computers have been around, so since the early 60s essentially. So yes, for model predictive control, but you also often talk about hierarchical planning.
Starting point is 00:51:18 Yeah. Can hierarchical planning emerge from this somehow? Well, so no, you will have to build a specific architecture to allow for hierarchical planning. So hierarchical planning is absolutely necessary if you want to plan complex actions. If I want to go from, let's say, from New York to Paris, this is the example I use all the time, and I'm sitting in my office at NYU, my objective that I need to minimize is my distance to Paris. At a high level, a very abstract representation of my location, I would have to decompose this into two sub-goals.
Starting point is 00:51:52 First one is go to the airport. Second one is catch a plane to Paris. Okay, so my sub-goal is now going to the airport. My objective function is my distance to the airport. going to the airport. My objective function is my distance to the airport. How do I go to the airport? Where I have to go in the street and hail the taxi, which you can do in New York. Okay, now I have another sub goal, go down on the street. What that means, going to the elevator,
Starting point is 00:52:20 going down the elevator, walk out the street. How do I go to the elevator? I have to stand up for my chair, open the door in my office, go to the elevator, push the button. How do I get up from my chair? You can imagine going down all the way down to basically what amounts to millisecond by millisecond muscle control. And obviously you're not going to plan your entire trip from New York to Paris in terms of millisecond by millisecond muscle control. First that would be incredibly expensive, but it will also be completely impossible because you don't know all the conditions of what's going to happen, you know, how long
Starting point is 00:53:01 it's going to take to catch a taxi or to go to the airport with traffic. I mean, you would have to know exactly the condition of everything to be able to do this planning. And you don't have the information. So you have to do this hierarchical planning so that you can start acting and then sort of replanning as you go. And nobody really knows how to do this in AI.
Starting point is 00:53:25 Nobody knows how to train a system to learn the appropriate multiple levels of representation so that hierarchical planning works. There's something like that already emerge. So like, can you use an LLM, state of the art LLM, to get you from New York to Paris by doing exactly the kind of detailed set of questions that you just did, which is can you give me a list of 10 steps I need to do to get from New York to Paris? And then for each of those steps, can you give me a list of 10 steps? How I make that step happen? And for each of those steps, can you give me a list of 10 steps to make each one of those until you're moving your individual muscles? Maybe not. Whatever you can actually act upon using your mind.
Starting point is 00:54:14 Right, so there's a lot of questions that are sort of implied by this, right? So the first thing is, LNMs will be able to answer some of those questions down to some level of abstraction under the condition that they've been trained with similar scenarios in their training set. They would be able to answer all of those questions, but some of them may be hallucinated, meaning nonfactual.
Starting point is 00:54:37 Yeah, true. I mean, they will probably produce some answer, except they're not going to be able to really kind of produce millisecond by millisecond more so control of how you stand up from your chair. Right? So, but down to some level of abstraction where you can describe things by words, they might be able to give you a plan, but only under the condition that they've been trained to produce those kind of plans. Right? They're not going to be able to plan for situations where that they never encountered before. They basically are going to have to regurgitate the template that they've been trained on. But where, like just for the example of New York to. They basically are going to have to regurgitate the template that they've been trained on.
Starting point is 00:55:05 But where, just for the example of New York to Paris, is it gonna start getting into trouble? Like at which layer of abstraction do you think you'll start? Cause like I can imagine almost every single part of that and I'll be able to answer somewhat accurately, especially when you're talking about New York and Paris, major cities. So I mean, certainly, and I would be able to solve that problem if you fine-tune it for it.
Starting point is 00:55:28 You know, just... And so, I can't say that NNM cannot do this. It can do this if you train it for it. There's no question. Down to a certain level where things can be formulated in terms of words. But like, if you want wanna go down to like how you, you know, climb down the stairs or just stand up from your chair in terms of words, like you can't do it. You need, that's one of the reasons you need experience of the physical world,
Starting point is 00:55:59 which is much higher bandwidth than what you can express in words, in human language. So everything we've been talking about on the joint embedding space, is it possible that that's what we need for like the interaction with physical reality for on the robotics front and then just the LLMs are the thing that sits on top of it for the bigger reasoning about like the fact that I need to book a plane ticket and I need to know I
Starting point is 00:56:24 know how to go to the websites and so on. Sure. And a lot of plans that people know about that are relatively high level are actually learned. Most people don't invent the plans by themselves. We have some ability to do this, of course, obviously, but most plans that people use are plans that they've been trained on. Like they've seen other people use those plans or they've been told how to do things, right? Like you can't invent how you, like take a person who's never heard of airplanes and tell them like, how do you go from New York to Paris? And they're probably not going to be able to kind of, you know, deconstruct the whole plan and as I've seen examples of that before. So certainly, LMS are going to be able to do this.
Starting point is 00:57:13 But then how you link this from the low level of actions, that needs to be done with things like JEPA that basically lift the abstraction level of the representation without attempting to reconstruct the detail of the situation. That's why we need JPAs for. I would love to sort of linger on your skepticism around autogressive LLMs. So one way I would like to test that skepticism is everything you say makes a lot of sense.
Starting point is 00:57:47 But if I apply everything you said today and in general to like, I don't know, 10 years ago, maybe a little bit less, no, let's say three years ago, I wouldn't be able to predict the success of LLMS. So, does it make sense to you that autoregressive LLMS are able to be so damn good? Yes. Can you explain your intuition? Because if I were to take your wisdom and intuition at face value, I would say there's no way autoregressive LLMS,Ms one token at a time would be able to do the kind of things they're doing. No, there's one thing that Autoregressive LLMs or that LLMs in general, not just the Autoregressive one, but including the bird style by directional ones are
Starting point is 00:58:40 exploiting and it's self supervised learning. And I've been a very, very strong advocate of self supervised learning for many years. So those things are an incredibly impressive demonstration that self-supervised learning actually works. The idea that, you know, started, it didn't start with Bert, but it was really kind of a good demonstration with this. So the idea that, you know, you take a piece of text, you corrupt it, and then you train some gigantic neural net to reconstruct the parts that are missing.
Starting point is 00:59:11 That has produced an enormous amount of benefits. It allowed us to create systems that understand language, systems that can translate hundreds of languages in any direction, systems that are multilingual, a single system that can be trained to understand hundreds of languages and translate in any direction, and produce summaries and then answer questions and produce text. And then there's a special case of it, which is the autoregressive trick, where you constrain the system to not elaborate the
Starting point is 00:59:53 representation of the text from looking at the entire text, but only predicting a word from the words that come before. And you do this by constraining the architecture of the network, and that's what you can build an autoregressive LLM from. So there was a surprise many years ago with what's called decoder-only LLMs, so systems of this type that are just trying to produce words from the previous ones. And the fact that when you scale them up, they tend to really kind of understand more about language. When you train them on lots of data, you make them really big. That was kind of a surprise, and that surprise occurred quite a while back, with work from Google, Meta, OpenAI, et cetera, going back to the GPT line of work, generative pretrained
Starting point is 01:00:48 transformers. You mean, like, GPT-2? Like there's a certain place where you start to realize scaling might actually keep giving us an emergent benefit. Yeah. I mean, there was work from various places, but if you want to kind of place it in the GPT timeline, that would be around GPT-2. Well, I just, because you said it, you're so charismatic.
Starting point is 01:01:14 You said so many words, but self-supervised learning, yes. But again, the same intuition you're applying to saying that autoregressive LLMs cannot have a deep understanding of the world, if we just apply that same intuition, does it make sense to you that they're able to form enough of a representation in the world to be damn convincing, essentially passing the original Turing test with flying colors? Well, we're fooled by their fluency, right? We just assume that if a system is fluent in manipulating language,
Starting point is 01:01:50 then it has all the characteristics of human intelligence, but that impression is false. We're really fooled by it. What do you think Alan Turing would say, without understanding anything, just hanging out with it? Alan Turing would decide that his Turing test is a really bad test. Okay. This is what the AI community has decided many years ago, that the Turing test was a really bad test of intelligence.
Starting point is 01:02:14 What would Hans Moravec say about the large language models? Hans Moravec would say that the Moravec paradox still applies. Okay, okay. We can pass. You don't think he would be really impressed?
Starting point is 01:02:32 It's the question of knowing what the limit of those systems can do. Like there, again, they are impressive. They can do a lot of useful things. There's a whole industry that is being built around them. They're going to make progress. But there's a lot of things they cannot do, and we have to realize what they cannot do, and then figure out how we get there.
Starting point is 01:02:52 And I'm seeing this from basically 10 years of research on the idea of self-supervised learning. Actually, that's going back more than 10 years, but the idea of self-supervised learning, so basically capturing the internal structure of a set of inputs without training the system for any particular task, right?
Starting point is 01:03:16 Learning representations. You know, the conference I co-founded 14 years ago is called International Conference on Learning Representations. That's the entire issue that deep learning is dealing with, right? And it's been my obsession for almost 40 years now. So learning representation is really the thing. For the longest time, we could only do this with supervised learning.
Starting point is 01:03:38 And then we started working on what we used to call unsupervised learning, and sort of revived the idea of unsupervised learning in the early 2000s with Yoshua Bengio and Geoff Hinton, then discovered that supervised learning actually works pretty well if you can collect enough data. And so the whole idea of unsupervised or self-supervised learning kind of took a backseat for a bit
Starting point is 01:04:03 and then I kind of tried to revive it in a big way, starting in 2014, basically, when we started FAIR, and really pushing for finding new methods to do self-supervised learning, both for text and for images and for video and audio. And some of that work has been incredibly successful. I mean, the reason why we have multilingual translation systems, you know,
Starting point is 01:04:30 things to do content moderation on Meta, for example, on Facebook, that are multilingual, to understand whether a piece of text is hate speech or not or something, is due to that progress using self-supervised learning for NLP, combining this with, you know, transformer architectures and blah, blah, blah.
Starting point is 01:04:46 But that's the big success of self-supervised learning. We had similar success in speech recognition, a system called wav2vec, which is also a joint embedding architecture, by the way, trained with contrastive learning. And that system can also produce speech recognition systems that are multilingual with mostly unlabeled data, and only need a few minutes of labeled data to actually do speech recognition. That's amazing. We have systems
Starting point is 01:05:12 now based on those combination of ideas that can do real-time translation of hundreds of languages into each other. Speech to speech. Speech to speech, even including just fascinating languages that don't have written forms. That's right. Just spoken only. That's right. We don't go through text. It goes directly from speech to speech using an internal representation of speech units
Starting point is 01:05:34 that are discrete. It's called textless NLP. We used to call it this way. But yeah, so, I mean, incredible success there. And then, you know, for 10 years, we tried to apply this idea to learning representations of images by training a system to predict videos, learning intuitive physics by training a system to predict what's going to happen in the video, and tried and tried and failed and failed with generative models, with models that predict pixels.
Starting point is 01:06:03 We could not get them to learn good representations of images. We could not get them to learn good representations of videos. And we tried many times. We published lots of papers on it. You know, they kind of sort of work, but not really great. They started working when we abandoned this idea of predicting every pixel and basically just doing the joint embedding
Starting point is 01:06:23 and predicting in representation space. That works. So there's ample evidence that we're not going to be able to learn good representations of the real world using generative model. So I'm telling people, everybody's talking about generative AI. If you're really interested in human level AI, abandon the idea of generative AI. Okay, but you really think it's possible to get far with the joint embedding representation.
Starting point is 01:06:50 So, like, there's common sense reasoning and then there's high-level reasoning. I feel like those are two... the kind of reasoning that LLMs are able to do... okay, let me not use the word reasoning, but the kind of stuff that LLMs are able to do seems fundamentally different than the common sense reasoning we use to navigate the world.
Starting point is 01:07:13 It seems like we're gonna need both. Would you be able to get, with the joint embedding, with the JEPA type of approach looking at video, would you be able to learn, let's see, well, how to get from New York to Paris, or how to understand the state of politics in the world today? Right? These are things where various humans generate a lot of language and opinions on, in the space of language, but don't visually represent that in any clearly compressible way.
Starting point is 01:07:48 Right. Well, there's a lot of situations that might be difficult for a purely language-based system to know. Like, okay, you can probably learn, from reading texts, the entirety of the publicly available texts in the world, that I cannot get from New York to Paris by snapping my fingers. That's not going to work. But there's probably more complex scenarios
Starting point is 01:08:14 of this type, which an LLM may never have encountered and may not be able to determine whether it's possible or not. So that link from the low level to the high level... the thing is that the high level that language expresses is based on the common experience of the low level, which LLMs currently do not have. When we talk to each other, we know we have a common experience of the world. A lot of it is similar. And LLMs don't have that. But see, it's present. You and I have a common experience of the world in terms of the physics of how gravity works and stuff like this. And that common knowledge of the world, I feel like, is there in the language.
Starting point is 01:09:08 We don't explicitly express it, but if you have a huge amount of text, you're going to get this stuff that's between the lines. In order to form a consistent world model, you're going to have to understand how gravity works, even if you don't have an explicit explanation of gravity. So even though in the case of gravity there are explicit explanations of gravity in Wikipedia. But the stuff that we think of as common sense reasoning, I feel like, to generate language correctly, you're going to have to
Starting point is 01:09:43 figure that out. Now you could say as you have, there's not enough text, sorry. Okay, so what? You don't think so? No, I agree with what you just said, which is that to be able to do high level common sense, to have high level common sense, you need to have the low level common sense to build on top of. Yeah. But that's not there.
Starting point is 01:10:02 And that's not there in LLMs. LLMs are purely trained from text. So then the other statement you made, I would not agree with the fact that implicit in all languages in the world is the underlying reality. There's a lot about underlying reality, which is not expressed in language. Is that obvious to you? Yeah, totally. Yeah, totally. So like all the conversations we have, okay, there's the dark web, meaning whatever, the private conversations like DMs and stuff like this, which is much, much larger probably
Starting point is 01:10:36 than what's available, what LLMs are trained on. You don't need to communicate the stuff that is common. But the humor, all of it. No, you do. Like when you... you don't need to, but it comes through. Like, if I accidentally knock this over, you'll probably make fun of me. And in the content of you making fun of me will be an explanation of the fact that cups fall and, you know, gravity works in this way.
Starting point is 01:11:02 And then you'll have some very vague information about what kind of things explode when they hit the ground. And then maybe you'll make a joke about entropy or something like this that you will never be able to reconstruct this again. Like, okay, you'll make a little joke like this and there'll be trillion of other jokes. And from the jokes, you can piece together the fact
Starting point is 01:11:22 that gravity works and mugs can break and all this kind of stuff. You don't need to see... it'll be very inefficient. It's easier to, like, knock the thing over. But I feel like it would be there if you have enough of that data. I just think that most of the information of this type that we have accumulated when we were babies is just not present in text, in any description, essentially. And the sensory data is a much richer source for getting that kind of understanding. I mean, that's the 16,000 hours of wake time of a four-year
Starting point is 01:12:00 old, and 10 to the 15 bytes going through vision, just vision. There is a similar bandwidth of touch and a little less through audio. And then language doesn't come in until a year in life. And by the time you are nine months old, you've learned about gravity. You know about inertia, gravity, stability, about the distinction between animate and inanimate objects. By 18 months, you know about why people want to do things, and you help them if they can't.
Starting point is 01:12:38 I mean, there's a lot of things that you learn mostly by observation, really, not even through interaction. In the first few months of life, babies don't really have any influence on the world. They can only observe, right? And you accumulate like a gigantic amount of knowledge just from that. So that's what we're missing from current AI systems.
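A quick back-of-the-envelope check of that 10 to the 15 figure, assuming roughly 16,000 hours of wake time by age four and on the order of 20 MB/s of visual data through the optic nerves; the bandwidth number is an assumed estimate, not something from the conversation:

16{,}000\ \text{h} \times 3600\ \tfrac{\text{s}}{\text{h}} \approx 5.8 \times 10^{7}\ \text{s}, \qquad 5.8 \times 10^{7}\ \text{s} \times 2 \times 10^{7}\ \tfrac{\text{bytes}}{\text{s}} \approx 1.2 \times 10^{15}\ \text{bytes}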
Starting point is 01:13:00 I think in one of your slides, you have this nice plot that is one of the ways you show that LLMs are limited. I wonder if you could talk about hallucinations from your perspective: why hallucinations happen from large language models, and to what degree is that a fundamental flaw of large language models? Right, so because of the autoregressive prediction, every time an LLM produces a token or a word, there is some level of probability for that word
Starting point is 01:13:33 to take you out of the set of reasonable answers. And if you assume, which is a very strong assumption, that those errors are independent across a sequence of tokens being produced, what that means is that every time you produce a token, the probability that you stay within the set of correct answers decreases, and it decreases exponentially. So there's a strong, like you said, assumption there that if there's a non-zero probability of making a mistake, which there appears to be, then there's going to be a kind of drift. Yeah.
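To state that drift argument in one line: if each token independently has an assumed constant probability \varepsilon of stepping outside the set of reasonable answers, then

P(\text{still reasonable after } n \text{ tokens}) = (1 - \varepsilon)^{n} = e^{\,n \ln(1 - \varepsilon)} \approx e^{-\varepsilon n},

which decays exponentially in the answer length n; the independence of the errors is the strong assumption flagged above.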
Starting point is 01:14:12 And that drift is exponential. It's like errors accumulate. Right? So the probability that an answer would be nonsensical increases exponentially with the number of tokens. Is that obvious too, by the way? Like, well, so mathematically speaking, maybe, but like, isn't there a kind of gravitational pull towards the truth?
Starting point is 01:14:33 Because on an average, hopefully, the truth is well represented in the training set? No, it's basically a struggle against the curse of dimensionality. So the way you can correct for this is that you fine tune the system by having it produce answers for all kinds of questions that people might come up with. And people are people. So they, a lot of the questions that they have are very similar to each other. So you can probably cover, you know, 80% or whatever of questions that people will ask by you know collecting data and then
Starting point is 01:15:14 you fine-tune the system to produce good answers for all of those things, and it's probably going to be able to learn that, because it's got a lot of capacity to learn. But then there is, you know, the enormous set of prompts that you have not covered during training. And that set is enormous. Within the set of all possible prompts, the proportion of prompts that have been used for training is absolutely tiny, a tiny, tiny, tiny subset of all possible prompts. And so the system will behave properly on the prompts that it has been either trained, pre-trained, or fine-tuned on. But then there is an entire space of things that it cannot possibly have been trained on, because the number is just gigantic. So whatever training the system has been subject to,
Starting point is 01:16:08 to produce appropriate answers, you can break it by finding a prompt that will be outside of the set of prompts it has been trained on, or things that are similar. And then it will just spew complete nonsense. When you say prompt, do you mean that exact prompt, or do you mean a prompt that's, like, in many parts very different than... Is it that easy to ask a question or to say a thing that hasn't been said before on the internet? I mean, people have come up with things where you put essentially a random sequence of characters in the prompt. That's enough to throw the system into a mode where it's going to answer something completely different than it would have answered without this.
Starting point is 01:16:56 So that's a way to jailbreak the system, basically get it to go outside of its conditioning. That's a very clear demonstration of it, but of course, you know, that goes outside of what it is designed to do, right? If you actually stitch together reasonably grammatical sentences, is it that easy to break it? Yeah, some people have done things like, you write a sentence in English, or you ask a question in English, and it produces a perfectly fine answer. And then you just substitute a few words
Starting point is 01:17:32 by the same word in another language, and all of a sudden the answer is complete nonsense. Yes. So I guess what I'm saying is, like, which fraction of prompts that humans are likely to generate are going to break the system? So the problem is that there is a long tail. Yes. This is an issue that a lot of people have realized in social networks and stuff like that, which is there's a very, very long tail of things that people will ask. And you can fine-tune the system for the 80% or whatever of the things that most people
Starting point is 01:18:07 will ask. And then this long tail is so large that you're not going to be able to fine-tune the system for all the conditions. And in the end, the system ends up being kind of a giant lookup table, essentially, which is not really what you want. You want systems that can reason, certainly that can plan. So the type of reasoning that takes place in an LLM is very, very primitive. And the reason you can tell it's primitive is because the amount of computation that is spent per token produced is constant. So if you ask a question, and that question has an answer in a given number of tokens, the amount of computation devoted to computing that answer can be exactly estimated. It's, you know, the size of the prediction network, with its 36 layers or 92 layers or whatever it is, multiplied by the number
Starting point is 01:18:57 of tokens. That's it. And so essentially, it doesn't matter if the question being asked is simple to answer, complicated to answer, or impossible to answer because it's undecidable or something. The amount of computation the system will be able to devote to the answer is constant, or is proportional to the number of tokens produced in the answer. This is not the way we work. The way we reason is that when we're faced with a complex problem or complex question, we spend more time trying to solve it and answer it, right? Because it's more difficult. There's a prediction element. There's an iterative element where you're, like, adjusting your understanding of a thing
Starting point is 01:19:45 by going over and over and over. There's a hierarchical element, so on. Does this mean that it's a fundamental flaw of LLMs? Or does it mean that there's more part to that question? Now you're just behaving like an LLM. Immediately answering. No, that is just the low level world model on top of which we can then build some of these kinds
Starting point is 01:20:10 of mechanisms, like you said, persistent long-term memory or reasoning, so on. But we need that world model that comes from language. Is it, maybe it is not so difficult to build this kind of reasoning system on top of a well-constructed world model? Okay. Whether it's difficult or not, the near future will say because a lot of people are working
Starting point is 01:20:35 on reasoning and planning abilities for dialogue systems. I mean, even if we restrict ourselves to language, just having the ability to plan your answer before you answer in terms that are not necessarily linked with the language you're going to use to produce the answer, right? So this idea of this mental model that allows you to plan what you're going to say before you say it. That is very important. I think there's going to be a lot of systems over the next few years that are going to
Starting point is 01:21:08 have this capability. But the blueprint of those systems would be extremely different from autoregressive LLMs. So it's the same difference as the difference between what psychologists call System 1 and System 2 in humans. So system one is the type of task that you can accomplish without deliberately, consciously think about how you do them. You've done them enough that you can just do it subconsciously without thinking about them. If you're an experienced driver, you can drive without really thinking about it and you can talk to someone at the same time or listen to the radio, right? If you are a very experienced
Starting point is 01:21:50 chess player, you can play against a non-experienced chess player without really thinking either. You just recognize the pattern that you play, right? That's the system one. So all the things that you do instinctively without really having to deliberately plan and think about it. And then there is all the tasks where you need to plan. So if you are not too experienced chess player or you are experienced when you play against another experienced chess player, you think about all kinds of options, right? You think about it for a while, right?
Starting point is 01:22:20 And you're much better if you have time to think about it than you are if you play Blitz with limited time. So this type of deliberate planning which uses your internal world model, that system too, this is what LLMs currently cannot do. So how do we get them to do this? How do we build a system that can do this kind of planning or reasoning that devotes more resources to complex problems than to simple problems? And it's not going to be autoregressive prediction of tokens. It's going to be more something
Starting point is 01:22:58 akin to inference of latent variables in what used to be called probabilistic models or graphical models and things of that type. So basically, the principle is like this. The prompt is like observed variables. And what the model does is that it's basically a measure of... it can measure to what extent an answer is a good answer for a prompt. So think of it as some gigantic neural net, but it's got only one output, and that output is a scalar number, which is, let's say, zero
Starting point is 01:23:39 if the answer is a good answer for the question, and a large number if the answer is not a good answer for the question. Imagine you had this model. If you had such a model, you could use it to produce good answers. The way you would do it is you produce the prompt and then search through the space of possible answers for one that minimizes that number. That's called an energy-based model. But that energy-based model would need the model constructed by the LLM? Well, so really what you need to do would be to not search over possible strings of text
Starting point is 01:24:17 that minimize that energy. But what you would do is do this in abstract representation space. So in sort of the space of abstract thoughts, you would elaborate a thought using this process of minimizing the output of your model, which is just a scalar. It's an optimization process. So now the way the system produces its answer is through optimization by minimizing an objective function, basically.
Starting point is 01:24:47 And this is... we're talking about inference, we're not talking about training. The system has been trained already. So now we have an abstract representation of the thought of the answer. We feed that to basically a decoder, which can be very simple, that turns this into a
Starting point is 01:25:05 text that expresses this thought. So that, in my opinion, is the blueprint of future dialogue systems. They will think about their answer, plan their answer by optimization before turning it into text. And that is Turing complete. Can you explain exactly what the optimization problem there is? Like, what's the objective function? Just linger on it, you kind of briefly described it.
Starting point is 01:25:33 But over what space are you optimizing? The space of representations. It goes up. Abstract representation. Abstract representation. So you have an abstract representation inside the system. You have a prompt. The prompt goes through an encoder,
Starting point is 01:25:46 produces a representation, perhaps goes through a predictor that predicts a representation of the answer, of the proper answer. But that representation may not be a good answer because there might be some complicated reasoning you need to do, right? So then you have another process
Starting point is 01:26:04 that takes the representation of the answers and modifies it so as to minimize a cost function that measures to what extent the answer is a good answer for the question. Now, we sort of ignore the fact for, I mean, the issue for a moment of how you train that system to measure whether an answer is a good answer for a question. Sure. But suppose such a system could be created. Right.
Starting point is 01:26:32 But what's the process, this kind of search-like process? It's an optimization process. You can do this if the entire system is differentiable, that scalar output is the result of running the representation of the answer through some neural net. Then by gradient descent, by backpropagating gradients, you can figure out how to modify the representation of the answer,
Starting point is 01:26:56 so as to minimize that. So that's still gradient-based? It's gradient-based inference. So now you have a representation of the answer in abstract space. Now you can turn it into text. Right. And the cool thing about this is that the representation now can be optimized through gradient descent, but it is also independent of the language in which you're going to express the answer.
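A minimal sketch of that inference-by-optimization loop, in PyTorch. Everything here is a stand-in made up for illustration: the encoder, the predictor that proposes an initial answer representation, the scalar energy network, and the decoder are small untrained modules, and a real system would of course train them first. The point is only the mechanics: hold the weights fixed, and run gradient descent on the latent representation z itself.

import torch
import torch.nn as nn

DIM = 64
encode_prompt = nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU(), nn.Linear(DIM, DIM))
propose_answer = nn.Linear(DIM, DIM)          # initial guess of the answer representation
energy = nn.Sequential(nn.Linear(2 * DIM, DIM), nn.ReLU(), nn.Linear(DIM, 1))  # scalar badness score
decode = nn.Linear(DIM, DIM)                  # stand-in for turning the thought into text

x = encode_prompt(torch.randn(1, DIM)).detach()   # representation of the prompt
z = propose_answer(x).detach().requires_grad_()   # latent answer representation to refine

opt = torch.optim.SGD([z], lr=0.1)                # optimize the representation, not the weights
for _ in range(50):
    opt.zero_grad()
    e = energy(torch.cat([x, z], dim=-1)).sum()   # how bad is this answer for this prompt?
    e.backward()
    opt.step()                                    # gradient step on z only

answer_representation = decode(z)                 # hand the refined "thought" to a decoder

This is gradient-based inference rather than training: the same energy network is reused for every prompt, and only z changes per query.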
Starting point is 01:27:19 Right. So you're operating in the abstract representation. I mean, this goes back to the joint embedding, right? That it's better to work in the space of, I don't know, to romanticize the notion, the space of concepts, versus, yeah, the space of concrete sensory information. Right. Okay, but can LLMs do something like reasoning, which is what we're talking about? Well, not really. Only in a very simple way. I mean, basically, you can think of LLMs as doing the kind of optimization I was talking about, except they optimize in the discrete space, which is the space of possible sequences of tokens. And they do this optimization in a horribly inefficient way, which is generate a lot of hypotheses and then select the best ones. And that's incredibly
Starting point is 01:28:08 wasteful in terms of computation, because you basically have to run your LLM for every possible generated sequence, and it's incredibly wasteful. So it's much better to do an optimization in continuous space, where you can do gradient descent, as opposed to, like, generating tons of things and then selecting the best. You just iteratively refine your answer to go towards the best, right?
Starting point is 01:28:35 That's much more efficient. But you can only do this in continuous spaces with differentiable functions. You're talking about the reasoning, like ability to think deeply or to reason deeply. How do you know what is an answer that's better or worse based on deep reasoning? Right. So then we're asking the question of conceptually, how do you train an energy-based model? So an energy-based model is a function with a scalar output, just a number. You give it two inputs, x and y, and it tells you whether y is compatible with x or not. x you observe, let's say it's a prompt, an image, a video, whatever. And y is a
Starting point is 01:29:18 proposal for an answer, a continuation of the video, whatever. And it tells you whether y is compatible with x. And the way it tells you that y is compatible with x is that the output of that function will be zero if y is compatible with x, and it will be a positive number, non-zero, if y is not compatible with x. Okay, how do you train a system like this? At a completely general level, you show it pairs of x and y that are compatible, a question and a corresponding answer, and you train the parameters of the big neural net inside to produce zero. Okay, now that doesn't completely work, because the system might decide, well, I'm just going to say zero for everything.
Starting point is 01:30:04 So now you have to have a process to make sure that, for a wrong y, the energy would be larger than zero. And there you have two options. One is contrastive methods. So a contrastive method is: you show an x and a bad y, and you tell the system, well, give a high energy to this, like, push up the energy, right? Change the weights in the neural net that compute the energy so that it goes up. So that's contrastive methods.
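A minimal sketch of that contrastive recipe. This is a toy illustration, not any particular published method: the "compatible" and "incompatible" pairs are random stand-ins, and the hinge-style loss with a margin is just one common choice for pushing up the energy of bad pairs.

import torch
import torch.nn as nn

DIM = 32
energy = nn.Sequential(nn.Linear(2 * DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(energy.parameters(), lr=1e-3)
margin = 1.0

x = torch.randn(16, DIM)                    # observed input (prompt, image, ...)
y_good = x + 0.1 * torch.randn(16, DIM)     # compatible y (toy stand-in for real pairs)
y_bad = torch.randn(16, DIM)                # incompatible y, the contrastive samples

e_good = energy(torch.cat([x, y_good], dim=-1))
e_bad = energy(torch.cat([x, y_bad], dim=-1))

# push the energy of compatible pairs toward zero, and push the energy of
# incompatible pairs up until it clears the margin
loss = e_good.pow(2).mean() + torch.relu(margin - e_bad).pow(2).mean()
opt.zero_grad()
loss.backward()
opt.step()

The scaling problem raised next is visible here: in a large space of y, you would need an enormous number of such bad samples to push the energy up everywhere it should be high.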
Starting point is 01:30:30 The problem with this is, if the space of y is large, the number of such contrastive samples you're going to have to show is gigantic. But people do this. They do this when you train a system with RLHF: basically, what you're training is what's called a reward model, which is basically an objective function that tells you whether an answer is good or bad. And that's basically exactly what this is. So we already do this to some extent. We're just not using it for inference.
Starting point is 01:31:02 We're just using it for training. There is another set of methods, which are non-contrastive, and I prefer those. And those non-contrastive methods basically say, okay, the energy function needs to have low energy on pairs of X, Ys that are compatible that come from your training set. How do you make sure that the energy is going to be higher everywhere else?
Starting point is 01:31:30 And the way you do this is by having a regularizer, a criterion, a term in your cost function that basically minimizes the volume of space that can take low energy. And there are all kinds of different specific ways to do this, depending on the architecture, but that's the basic principle. So if you push down the energy function for particular regions in the x, y space, it will automatically go up in other places, because there is only a limited volume of space that can take low energy. Okay. By
Starting point is 01:32:04 the construction of the system or by the regularizing function. We've been talking very generally about what is a good X and a good Y, what is a good representation of X and Y, because we've been talking about language and if you just take language directly, that presumably is not good. So there has to be some kind of abstract representation of ideas. Yeah, so you can do this with language directly
Starting point is 01:32:32 by just, you know, x is a text and y is a continuation of that text. Yes. Or x is a question, y is an answer. But you're saying that's not going to take it. I mean, that's going to do what LLMs are doing. Well, no, it depends on how the internal structure of the system is built. If the internal structure of the system is built in such a way that inside of the system,
Starting point is 01:32:55 there is a latent variable, let's call it z, that you can manipulate so as to minimize the output energy, then that z can be viewed as a representation of a good answer that you can translate into a y that is a good answer. So this kind of system could be trained in a very similar way? Very similar way, but you have to have this way of preventing collapse, of ensuring that, you know,
Starting point is 01:33:23 there is high energy for things you don't train it on. And currently it's very implicit in LLMs. It's done in a way that people don't realize it's being done, but it is being done. It's due to the fact that when you give a high probability to a word, automatically you give low probability to other words, because you only have a finite amount of probability to go around; they have to sum to one. So when you minimize the cross-entropy or whatever, when you train your LLM to predict the next word, you're increasing the probability your system will give to the correct word, but you're also decreasing the probability it will give to the incorrect words. Now, indirectly, that gives a high probability
Starting point is 01:34:10 to sequences of words that are good and a low probability to sequences of words that are bad, but it's very indirect. And it's not obvious why this actually works at all, because you're not doing it on the joint probability of all the symbols in a sequence; you're just kind of factorizing that probability in terms of conditional probabilities over successive tokens. So how do you do this for visual data? So we've been doing this with JEPA architectures, basically. The joint embedding. JEPA.
Starting point is 01:34:40 So there the compatibility between two things is, you know, here's an image or a video, and here's a corrupted, shifted, or transformed version of that image or video, or a masked version. Okay. And then the energy of the system is the prediction error of the representation: the predicted representation of the good thing versus the actual representation of the good thing. So you run the corrupted image through the system, predict the representation of the good, uncorrupted input, and then compute the prediction error. That's the energy of the system. So this system will tell you: this is a good image and this is a corrupted version. It will give you zero energy if one of them is
Starting point is 01:35:34 effectively a corrupted version of the other. It will give you a high energy if the two images are completely different. And hopefully that whole process gives you a really nice compressed representation of reality, of visual reality. And we know it does, because then we use those representations as input to a classification system, and that classification system works really nicely.
Starting point is 01:36:03 Okay. Well, so to summarize, you recommend, in a spicy way that only Yann LeCun can, that we abandon generative models in favor of joint embedding architectures? Yes. Abandon autoregressive generation? Yes. Abandon... this feels like court testimony. Abandon probabilistic models in favor of energy-based models,
Starting point is 01:36:17 as we talked about, abandon contrastive methods in favor of regularized methods. And let me ask you about this. You've been for a while a critic of reinforcement learning. Yes. So the last recommendation is that we abandon RL in favor of model predictive control, as you were talking about, and only use RL when planning doesn't yield the predicted outcome, and we use RL in that case to adjust the world model or the critic. So you mentioned RLHF, reinforcement learning with human feedback. Why do you still hate reinforcement learning? I don't hate reinforcement learning. And I think
Starting point is 01:37:01 it should not be abandoned completely, but I think its use should be minimized, because it's incredibly inefficient in terms of samples. And so the proper way to train a system is to first have it learn good representations of the world, and world models, from mostly observation, maybe a little bit of interaction. And then steered based on that... if the representation is good, then the adjustments should be minimal.
Starting point is 01:37:31 Yeah. And now there's two things you can use. If you've learned a world model, you can use the world model to plan a sequence of actions to arrive at a particular objective. You don't need RL, unless the way you measure whether you succeed might be inexact. Your idea of, you know, whether you'll fall from your bike might be wrong, or whether the person you're fighting with in MMA was going to do something and then did something else.
Starting point is 01:38:04 So there's two ways you can be wrong. Either your objective function does not reflect the actual objective function you want to optimize, or your world model is inaccurate. So the prediction you were making about what was going to happen in the world is inaccurate. So if you want to adjust your world model while you are operating the world or your objective function, that is basically in the realm of RL. This is what RL deals with, to some extent. So adjust your world model. And the way to adjust your world model, even in advance, is to explore parts of the space where you know that your world model is inaccurate. That's called curiosity, basically, or play, right? When you play, you kind of explore parts of
Starting point is 01:38:50 the state space that, you know, you don't want to do in for real, because it might be dangerous, but you can adjust your world model without killing yourself, basically. So that's what you want to use RL for. When it comes time to learning a particular task, you already have all the good representations, you already have your world model, but you need to adjust it for the situation at hand. That's when you use RL. What do you think RLHF works so well?
Starting point is 01:39:22 Why do you think RLHF works so well? This reinforcement learning with human feedback, why did it have such a transformational effect on large language models? What had the transformational effect is human feedback. There are many ways to use it, and some of it is just purely supervised, actually. It's not really reinforcement learning. So it's the HF. It's the HF. And then there are various ways to use human feedback, right? So you can ask humans to rate answers, multiple answers that are produced by the model. And then what you do is you train an objective function
Starting point is 01:39:58 to predict that rating. And then you can use that objective function to predict whether an answer is good, and you can backpropagate gradients through this to fine-tune your system so that it only produces highly rated answers. Okay, so that's one way. So that's, like, in RL, that means training what's called a
Starting point is 01:40:20 reward model, right? So something that is basically a small neural net that estimates to what extent an answer is good. It's very similar to the objective I was talking about earlier for planning, except now it's not used for planning, it's used for fine-tuning your system. I think it would be much more efficient to use it for planning, but currently it's used to fine-tune the parameters of the system.
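A minimal sketch of such a reward model. This assumes the common pairwise-preference setup, where humans pick the better of two answers and the model is trained so the preferred one gets the higher scalar score; the feature vectors standing in for (prompt, answer) pairs are placeholders for whatever an LLM encoder would produce.

import torch
import torch.nn as nn

DIM = 256
reward_model = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

chosen = torch.randn(32, DIM)     # features of the answers humans rated higher
rejected = torch.randn(32, DIM)   # features of the answers humans rated lower

r_chosen = reward_model(chosen)
r_rejected = reward_model(rejected)

# Bradley-Terry style objective: the chosen answer should receive the higher score
loss = -nn.functional.logsigmoid(r_chosen - r_rejected).mean()
opt.zero_grad()
loss.backward()
opt.step()

Once trained, that scalar score can either be backpropagated through to fine-tune the generator, as described above, or, in the planning picture LeCun prefers, used at inference time as the objective an answer is optimized against.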
Starting point is 01:40:50 Now there are several ways to do this; some of them are supervised. You just ask a human person, like, what is a good answer for this, and then they just type the answer. I mean, there's lots of ways that those systems are being adjusted. Now, a lot of people have been very critical of the recently released Google's Gemini 1.5 for being, essentially, in my words, super woke, woke in the negative connotation of that word. There are some almost hilariously absurd things that it does, like it modifies history, like generating images of a Black George Washington, or, perhaps more seriously, something that you commented on on Twitter, which is refusing to comment on or generate images of
Starting point is 01:41:16 or even descriptions of Tiananmen Square or the Tank Man, one of the most sort of legendary protest images in history. Of course, these images are highly censored by the Chinese government, and therefore everybody started asking questions of, what is the process of designing these LLMs? What is the role of censorship, and all that kind of stuff?
Starting point is 01:42:11 So you commented on Twitter saying that open source is the answer. Yeah. Essentially. So can you explain? I actually made that comment on just about every social network I can. I've made that point multiple times in various forums. Here's my point of view on this. People can complain that AI systems are biased, and they generally are biased by the distribution of the training data that they've been trained on
Starting point is 01:42:47 that reflects biases in society. And that is potentially offensive to some people, or potentially not. And some techniques to de-bias then become offensive to some people because of historical incorrectness and things like that. And so you can ask the question... you can ask two questions. The first question is: is it possible to produce an AI system that is not biased? And the answer is absolutely not.
Starting point is 01:43:26 And it's not because of technological challenges, although there are technological challenges to that. It's because bias is in the eye of the beholder. Different people may have different ideas about what constitutes bias, you know, for a lot of things. I mean, there are facts that are indisputable, but there are a lot of opinions or things that can be expressed in different ways. And so you cannot have an unbiased system. That's just an impossibility. And so what's the answer to this? And the answer is the same answer that we found
Starting point is 01:44:09 in liberal democracy about the press. The press needs to be free and diverse. We have free speech for a good reason, is because we don't want all of our information to come from a unique source, because that's opposite to the whole idea of democracy and progress of ideas and even science. In science, people have to argue for different opinions and science makes progress when people disagree and they come up with an answer
Starting point is 01:44:45 and, you know, a consensus forms, right? And it's true in all democracies around the world. So there is a future, which is already happening, where every single one of our interactions with the digital world will be mediated by AI systems, right? We're going to have smart glasses. You can already buy them from Meta, the Ray-Ban Meta, where you can talk to them. And they are connected with an LLM. And you can get answers on any question you have.
Starting point is 01:45:18 Or you can be looking at a monument. And there is a camera in the system that in the glasses, you can ask it like, what can you tell me about this building or this monument? You can be looking at a menu in a foreign language and I think we'll translate it for you or you can do real-time translation if you speak different languages. So a lot of our interactions with the digital world are going to be mediated by those systems in the near future. You know, increasingly the search engines that we're going to use are not going to be search engines. They're going to be data systems that we just ask a question and it will answer and then
Starting point is 01:45:58 point you to perhaps appropriate reference for it. But here is the thing, we cannot afford those systems to come from a handful of companies on the west coast of the US. Because those systems will constitute the repository of all human knowledge. And we cannot have that be controlled by a small number of people. Right? It has to be diverse. For the same reason, the press has to be diverse. It has to be diverse. For the same reason, the press has to be diverse. So how do we get a diverse set of AI assistance? It's very expensive and difficult to train a base model, a base LLM at the moment, in the future, it might be something different. But at the moment, that's an LLM.
Starting point is 01:46:40 So only a few companies can do this properly. And if some of those systems are open source, anybody can use them, anybody can fine-tune them. If we put in place some system that allows any group of people, whether they are individual citizens, groups of citizens, government organizations, NGOs, companies, whatever, to take those open source AI systems and fine-tune them for their own purpose on their own data, then we're going to have a very large diversity of different AI systems that are specialized for all of those things, right? So I tell you, I talked to the French government quite a bit, and the French government will
Starting point is 01:47:32 not accept that the digital diet of all their citizens be controlled by three companies on the west coast of the US. That's just not acceptable. It's a danger to democracy, regardless of how well-intentioned those companies are. And it's also a danger to local culture, to values, to language. I was talking with the founder of Infosys in India. He's funding a project to fine-tune Llama 2, the open source model produced by Meta, so that Llama 2 speaks all 22 official languages in India. It's very important for people in India.
Starting point is 01:48:16 I was talking to a former colleague of mine, Moustapha Cissé, who used to be a scientist at FAIR, and then moved back to Africa, created a research lab for Google in Africa, and now has a new startup called Kera. And what he's trying to do is basically have an LLM that speaks the local languages in Senegal,
Starting point is 01:48:35 so that people can have access to medical information, because they don't have access to doctors. It's a very small number of doctors per capita in Senegal. I mean, you cannot have any of this unless you have open source platforms. So with open source platforms, you can have AI systems that are not only diverse in terms of political opinions or things of that type, but in terms of language, culture, value systems, political opinions, technical abilities in various domains. And you can have an industry, an ecosystem of companies that fine-tune those open source systems for vertical applications in industry, right?
Starting point is 01:49:20 You have, I don't know, a publisher that has thousands of books, and they want to build a system that allows a customer to just ask a question about the content of any of their books. You need to train on their proprietary data, right? You have a company... we have one within Meta, it's called MetaMate, and it's basically an LLM that can answer any question about internal stuff
Starting point is 01:49:43 about the company. Very useful. A lot of companies want this, right? A lot of companies want this not just for their employees, but also for their customers, to take care of their customers. So the only way you're going to have an AI industry, the only way you're going to have AI systems that are not uniquely biased, is if you have open source platforms on top of which any group can build specialized systems. So the inevitable direction of history is that the vast majority of AI systems will be built on top of open source platforms.
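To make the "anybody can fine-tune them on their own data" point concrete, here is a hedged sketch of what that can look like with the Hugging Face transformers, datasets, and peft libraries, using LoRA so the whole thing fits on modest hardware. The model id, the file my_domain_corpus.txt, and the hyperparameters are illustrative assumptions, not a recipe from anyone in this conversation.

from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"                 # assumed base model; any open model works
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# your own data: one document per line in a plain text file
data = load_dataset("text", data_files="my_domain_corpus.txt")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512), batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()

The specialized systems described here, the publisher's book assistant or a company's internal MetaMate-style helper, are variations of this pattern applied to proprietary data.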
Starting point is 01:50:21 So that's a beautiful vision. So meaning, like, a company like Meta or Google or so on should take only minimal fine-tuning steps after building the foundation pre-trained model, as few steps as possible? Basically. Can Meta afford to do that? No. So, I don't know if you know this, but companies are supposed to make money somehow, and open source is
Starting point is 01:50:51 giving it away. I don't know, Mark made a video, Mark Zuckerberg, very sexy video, talking about 350,000 Nvidia H100s. The math of that, just for the GPUs, that's 100 billion, plus the infrastructure for training everything. So I'm no business guy, but how do you make money on that? So the vision you paint is a really powerful one, but how is it possible to make money?
Starting point is 01:51:25 Okay, so you have several business models, right? The business model that Meta is built around is: you offer a service, and the financing of that service is either through ads or through business customers. So for example, if you have an LLM that can help a mom-and-pop pizza place by talking to their customers through WhatsApp, so the customers can just order a pizza and the system will just, you know, ask them, like, what topping do you want, or what size, blah, blah, blah, the business will pay for that. OK, that's a model. And otherwise, you know, if it's a system that
Starting point is 01:52:14 is on the more kind of classical services, it can be ad-supported, or, you know, there's several models. But the point is, if you have a big enough potential customer base and you need to build a system anyway for them, it doesn't hurt you to actually distribute it in open source. Again, I'm no business guy, but if you release the open source model, then other people can do the same kind of task and compete on it. Basically provide fine-tuned models for businesses. Is the bet that Meta is making, by the way, I'm a huge fan of all this, but is the bet
Starting point is 01:52:55 that Meta is making is like, we'll do a better job of it. Well, no, the bet is more we already have a huge user base and customer base. Ah, right. So it's gonna be useful to them. Whatever we offer them is gonna be useful and there is a way to derive revenue from this. And it doesn't hurt that we provide that system, or the base model, the foundation model,
Starting point is 01:53:24 in open source for others to build applications on top of it too. If those applications turn out to be useful for our customers, we can just buy it from them. It could be that they will improve the platform. In fact, we see this already. I mean, there are literally millions of downloads of Llama 2 and thousands of people who have, you know,
Starting point is 01:53:45 provided ideas about how to make it better. So, you know, this clearly accelerates progress, to make the system available to a sort of wide community of people. And there's literally thousands of businesses who are building applications with it. So our ability, Meta's ability, to derive revenue from this technology is not impaired by the distribution of base models in open source. The fundamental criticism that Gemini is getting is that, as you pointed out, it's on the West Coast. Just to clarify, we're currently in the East Coast, where, I would suppose, Meta AI headquarters would be. So those are strong words about the West Coast. But I guess the issue that happens is, I think it's fair to say that most tech people have a
Starting point is 01:54:44 political affiliation with the left wing. They lean left. The problem that people are criticizing Gemini with is that, in that de-biasing process that you mentioned, their ideological lean becomes obvious. Is this something that could be escaped? You're saying open source is the only way. Have you witnessed this kind of ideological lean that makes engineering difficult? No, I don't think the issue has to do with the political leaning of the people designing those systems. It has to do with the acceptability or political leanings of their customer base or audience, right? So a big company cannot afford to offend too many people. So they're going to make sure that whatever product they put out is safe, whatever that means.
Starting point is 01:55:49 And it's very possible to overdo it, and it's also, really, impossible to do it properly for everyone. You're not going to satisfy everyone. So that's what I said before: you cannot have a system that is unbiased and that is perceived as unbiased by everyone. You push it one way, one set of people are going to see it as biased, and then you push it the other way, and another set of people is going to see it as biased. And then, in addition to this, there's the issue of, if you push the system perhaps too far in one direction, it's going to be nonfactual, right? You're going to have, you know, Black Nazi soldiers in the...
Starting point is 01:56:37 so, you know, it's going to be impossible to kind of produce systems that are unbiased for everyone. So the only solution that I see is diversity. And diversity in full meaning of that word diversity in every possible way. Yeah. Mark Andreessen just tweeted today, let me do a TLDR. The conclusion is only startups and open source can avoid the issue that he's highlighting with Big Tech. He's asking, can Big Tech actually field generative AI products? One, ever escalating demands from internal activists, employee mobs, crazed executives, broken boards, pressure groups, extremist regulators, government agencies, the press,
Starting point is 01:57:21 in quotes, experts, etc., corrupting the output. Two, constant risk of generating a bad answer or drawing a bad picture or rendering a bad video. Who knows what it's going to say or do at any moment. Three, legal exposure, product liability, slander, election law, many other things, and so on, anything that makes Congress mad. Four, continuous attempts to tighten the grip on acceptable output degrade the model, like how good it actually is, in terms of usable and pleasant to use and effective and all that kind of stuff. And five, publicity of bad text, images, video actually puts those examples into the training data for the next version, and so on. So he just highlights how difficult
Starting point is 01:58:10 this is, from all kinds of people being unhappy. He said you can't create a system that makes everybody happy. So if you're going to do the fine-tuning yourself and keep it closed source, essentially the problem there is then trying to minimize the number of people who are going to be unhappy. Yeah. And you're saying that's almost impossible to do right, and the better way is to do open source. Basically, yeah.
Starting point is 01:58:39 I mean, Mark is right about a number of things that he lists that indeed scare large companies. Certainly, congressional investigations is one of them, legal liability, making things that get people to hurt themselves or hurt others. Big companies are really careful about not producing things of this type, because they don't want to hurt anyone, first of all, and then second, they want to preserve their business. So it's essentially impossible for systems like this... they will inevitably formulate political opinions, and opinions about various things that may be political or not, but that people may disagree about, moral issues and questions about religion
Starting point is 01:59:36 and things like that, right? Or cultural issues that people from different communities would disagree with in the first place. So there's only kind of a relatively small number of things that people will sort of agree on, you know, basic principles. But beyond that, if you want those systems to be useful, they will necessarily have to offend a number of people, inevitably. And so open source is just better. And then diversity is better, right? And open source enables diversity. That's right, open source enables diversity. That's going to be a fascinating world, where, if it's true that the open source world, if Meta leads the way and creates this kind of open source foundation model world, there's going to be, like, governments will have a fine-tuned model, and... Yeah. And then potentially, you know, people that vote left and right will have their own model and preference to be able to choose, and it will
Starting point is 02:00:35 Potentially divide us even more, but that's on us humans. We get to figure out basically the technology enables humans to human more effectively. All the difficult ethical questions that humans raise will just leave it up to us to figure it out. Yeah. I mean, there are some limits to what, you know, the same way there are limits to free speech, there has to be some limit to the kind of stuff that those systems might be authorized to limit to the kind of stuff that those systems might be authorized to produce, you know, some guardrails. So I mean, that's one thing I've been interested in, which is in the type of architecture that
Starting point is 02:01:13 we were discussing before, where the output of a system is a result of an inference to satisfy an objective. That objective can include guardrails. And we can put guardrails in open source systems. I mean, if we eventually have systems that are built with this blueprint, we can put guardrails in those systems. I guarantee that there is sort of a minimum set of guardrails that make the system non-dangerous and non-toxic, et cetera.
Starting point is 02:01:43 You know, basic things that everybody would agree on. And then the fine-tuning that people will add or the additional guardrails that people will add will cater to their community, whatever it is. And the fine-tuning will be more about the gray areas of what is hate speech, what is dangerous and all that kind of stuff. What different value systems? the gray areas of what is hate speech, what is dangerous and all that kind of stuff. I mean, you've- With different value systems. Still value systems.
Starting point is 02:02:07 I mean, like, but still even with the objectives of how to build a bio weapon, for example, I think something you've commented on or at least there's a paper where a collection of researchers is trying to understand the social impacts of these LLMs. And I guess one threshold is nice is like, does the LLM make it any easier than a search would, like a Google search would? Right. So the increasing number of studies on this seems to point to the fact that it doesn't help. So having an LLM doesn't help you design or build a bio weapon or a chemical weapon if you already have access to such engine in a library.
Starting point is 02:02:55 And so the sort of increased information you get or the ease with which you get it doesn't really help you. That's the first thing. The second thing is it's one thing to have a list of instructions of how to make a chemical weapon, for example, a bio-weapon. It's another thing to actually build it.
Starting point is 02:03:12 And it's much harder than you might think, and then NLNM will not help you with that. In fact, nobody in the world, not even countries, use bio-weapons because most of the times they have no idea how to protect their own populations against it. So it's too dangerous actually to ever use. And it's in fact banned by international treaties. Chemical weapons is different. It's also banned by treaties, but it's the same problem.
Starting point is 02:03:43 It's difficult to use in situations that doesn't turn against the perpetrators. But we could ask Elon Musk, like I can give you a very precise list of instructions of how you build a rocket engine. And even if you have a team of 15 engineers that are re-experienced building it, you're still going to have to blow up a dozen of them before you get one that works. And it's the same with chemical weapons or bioweapons or things like this. It requires expertise in the real world that the Netherlands is not going to help you with.
Starting point is 02:04:18 And it requires even the common sense expertise that we've been talking about, which is how to take language-based instructions and materialize them in the physical world requires a lot of knowledge that's not in the instructions. Yeah, exactly. A lot of biologists have posted on this actually in response to those things saying like, you realize how hard it is to actually do the lab work? And I can know this is not trivial. Yeah. it is to actually do the lab work and I can know this is not trivial. Yeah and that's Hans
Starting point is 02:04:46 Morovic comes comes to light once again. Just the linger on Lama, you know Mark announced that Lama 3 is coming out eventually. I don't think there's a release date but what are you most excited about? First of all Lama 2 that's already out there and maybe maybe the future Alama 3, 4, 5, 6, 10, just the future of the open source under meta. Well, a number of things. So there's going to be like various versions of Lama that are improvements of previous Lamas, bigger, better, multimodal, things like that. And then in future generations, systems that are capable of
Starting point is 02:05:26 planning that really understand how the world works, maybe are trained from video, so they have some world model, maybe, you know, capable of the type of reasoning and planning I was talking about earlier. Like, how long is that going to take? Like, when is the research that is going in that direction going to sort of feed into the product line, if you want of Lama. I don't know, I can tell you. And there's a few breakthroughs that we have to basically go through before we can get there. But you'll be able to monitor our progress because we publish our research, right? So, you know, last week we published the VJPA work, which is sort of a first step towards
Starting point is 02:06:06 training systems for video. And then the next step is going to be world models based on this type of idea, training from video. There's similar work at DeepMind also, and taking place people, and also at UC Berkeley on world models from video. A lot of people are working on this. I think a lot of good ideas are appearing. My bet is that those systems are going to be JEPALITE, they're not going to be generative models. And we'll see what the future will tell. There's really good work at a gentleman called Daniel Jarhafner,
Starting point is 02:06:48 who is not a deep mind, who's worked on models of this type that learn representations and then use them for planning or learning tasks by reinforcement learning. And a lot of work at Berkeley by Peter Ibele, Saag Yalevin, a bunch of other people of that type. I'm collaborating with actually in the context of some grants with my NYU hat. And then collaborations also through Meta, because the lab at Berkeley is associated with Meta in some way, so with fair. So I think it's very exciting. You know, I think I'm super excited about,
Starting point is 02:07:26 I haven't been that excited about the direction of machine learning and AI since 10 years ago when FAIR was started. Before that, 30 years ago, we were working on, what's it, 35 on convolutional nets and the early days of neural nets. So I'm super excited because I see a path towards potentially human level intelligence with, you know, systems that can understand the world, remember, plan, reason. There is some set of ideas to make progress there that might have a chance of working. And I'm really excited about this. What I like is that, you know, it's somewhat, we get onto like a good direction and perhaps
Starting point is 02:08:14 succeed before my brain turns to a white sauce or before I need to retire. Yeah. Yeah. You're also excited by, are you, is it beautiful to you just the amount of GPUs involved? So the whole training process on this much compute is just zooming out, just looking at Earth and humans together have built these computing devices and are able to train this one brain, then we then open source. Like giving birth to this open source brain trained on this gigantic compute system. There's just the details of how to train on that, how to build the infrastructure and the hardware,
Starting point is 02:09:02 the cooling, all of this kind of stuff. Or are you just still the most of your excitement is in the theory aspect of it? Meaning like the software. Well, I used to be a hardware guy many years ago. Yes, yes, that's right. Decades ago. Hardware has improved a little bit. Changed a little bit. Yeah. I mean, certainly scale is necessary, but not sufficient.
Starting point is 02:09:24 Absolutely. So we certainly need computation. I mean, we're still far in terms of complete power from what we would need to match the complete power of the human brain. This may occur in the next couple of decades, but we're still some ways away. And certainly in terms of power efficiency we're really far. So there's a lot of progress to make in hardware. Right now a lot of progress is not, I mean there's a bit coming from Silicon technology but a lot of it coming from architectural innovation and quite a bit coming from more
Starting point is 02:10:02 efficient ways of implementing the architectures that have become popular, basically, combination of transformers and components. So there's still some ways to go until we're going to saturate. We're going to have to come up with new principles, new fabrication technology, new basic components, perhaps based on different principles than the classical digital CMOS. Interesting. So you think in order to build AMI, we potentially might need some hardware innovation too.
Starting point is 02:10:50 Well, if we want to make it ubiquitous, yeah, certainly, because we're going to have to reduce the power consumption. A GPU today is half a kilowatt to a kilowatt. Human brain is about 25 watts. And a GPU is way below the power of human brain. You need something like 100,000 or a million to match it. So we are off by a huge factor here. You often say that AGI is not coming soon, meaning like not this year, not the next few years, potentially farther away. What's your basic intuition behind that? So first of all, it's not going to be an event. Right.
Starting point is 02:11:31 The idea somehow, which, you know, is popularized by science fiction and Hollywood that, you know, somehow somebody is going to discover the secret, the secret to a GI or human level AI or AMI, whatever you want to call it. And then, you know, turn on a machine and then we have a GI. That's just not going to happen. It's not going to be an event. It's going to be gradual progress.
Starting point is 02:11:56 Are we going to have systems that can learn from video how the world works and learn good world presentations? Yeah. Before we get them to the scale and performance that we observe in humans, it's going to take quite a while. It's not going to happen in one day. Are we going to get systems that can have large amount of associative memory so they can remember stuff? Yeah, but same, it's not going to happen tomorrow. I mean, there is some basic techniques that need to be developed. We have a lot of them, but like, you know, to get this to work together with full system
Starting point is 02:12:28 is another story. Are we going to have systems that can reason and plan perhaps along the lines of objective driven AI architectures that I described before? Yeah, but like before we get this to work, you know, properly, it's going to take a while. So, and before we get all those things to work together, and then on top of this have systems that can learn like hierarchical planning, hierarchical representations, systems that can be configured
Starting point is 02:12:51 for a lot of different situation at hands, the way the human brain can. You know, all of this is going to take, you know, at least a decade and probably much more because there are a lot of problems that we're not seeing right now that we have not encountered. And so we don't know if there is an easy solution within this framework. So it's not just around the corner.
Starting point is 02:13:16 I mean, I've been hearing people for the last 12, 15 years claiming that AGI is just around the corner and being systematically wrong. And I knew they were wrong when they were saying it. I called their bullshit. Why do you think people have been calling, first of all, I mean, from the beginning, from the birth of the term artificial intelligence, there has been eternal optimism. That's perhaps unlike other technologies. Is it a Marvellous paradox? Is the explanation for why people are so optimistic about AGI?
Starting point is 02:13:49 I don't think it's just Morovox paradox. Morovox paradox is a consequence of realizing that the world is not as easy as we think. So first of all, intelligence is not a linear thing that you can measure with a scalar, with a single number. Can you say that humans are smarter than orangutans? In some ways, yes. But in some ways, orangutans are smarter than humans in a lot of domains that allows them to survive in the forest, for example.
Starting point is 02:14:19 So IQ is a very limited measure of intelligence. Do you intelligence is bigger than what IQ, for example, measures? Well, IQ can measure, you know, approximately something for humans. But because humans kind of, you know, come in relatively kind of uniform form, right? But it only measures one type of ability that, you know, maybe relevant for
Starting point is 02:14:48 some tasks, but not others. And, but then if you're talking about other intelligent entities for which the, you know, the basic things that are easy to them is very different than it doesn't mean anything. So intelligence is a collection of skills and an ability to acquire new skills efficiently. Right. And the collection of skills that an intelligent, particular intelligent entity possess or is capable of learning quickly is different from the collection of skills of another one. And because it's a multi-dimensional thing, the set of skills is high dimensional space, you can't measure, you can compare, you cannot compare two
Starting point is 02:15:35 things as to whether one is more intelligent than the other. It's multi-dimensional. So you push back against what are called AI doomers a lot. Can you explain their perspective and why you think they're wrong? Okay, so AI doomers imagine all kinds of catastrophes and arios of how AI could escape or control and basically kill us all. And that relies on a whole bunch of assumptions that are mostly false. So the first assumption is that the emergence of
Starting point is 02:16:12 superintelligence is going to be an event. That at some point we're going to have, we're going to figure out the secret and we'll turn on a machine that is super intelligent. And because we've never done it before, it's going to take over the world in Kilosol. That is false. It's not going to be an event. We're going to have systems that are like as smart as a cat, have all the characteristics of human-level intelligence, but their level of
Starting point is 02:16:38 intelligence would be like a cat or a parrot, maybe, or something. And then we're going to walk our way up to kind of make those things more intelligent. And as we make them more intelligent, we're also going to put some guardrails in them and learn how to kind of put some guardrails so they behave properly. And we're not going to do this with just one, it's not going to be one effort that is going to be lots of different people doing this. And some of them are going to succeed at making intelligent systems that are controllable and safe and have the right guardrails. And if some other goes rogue, then we can use the good ones to go against the rogue ones.
Starting point is 02:17:13 So it's going to be my smart AI police against your rogue AI. So it's not going to be like, you know, we're going to be exposed to like a single rogue AI that's going to kill us all. That's just not happening. Now there is another fallacy, which is the fact that because the system is intelligent, it necessarily wants to take over. And there is several arguments that make people scare of this, which I think are completely false as well. So one of them is, in nature, it seems to be that the more intelligent species are the one that end up dominating the other. And even, you know, distinguishing the others
Starting point is 02:17:56 sometimes by design, sometimes just by mistake. And so, you know, there is sort of thinking by which you say, well, if AI systems are more intelligent than us, surely they're going to eliminate us if not by design, simply because they don't care about us. And that's just preposterous for a number of reasons. First reason is they're not going to be a species. They're not going to be a species that competes with us. They're not going to have the desire to dominate because the desire to dominate is something that has to be hardwired into an intelligent system.
Starting point is 02:18:35 It is hardwired in humans. It is hardwired in baboons, in chimpanzees, in wolves, not in orangutans. The species in which this desire to dominate or submit or attain status in other ways is specific to social species. Non-social species like orangutans don't have it. And they are as smart as we are, almost. And to you, there's not significant incentive for humans to encode that into the AI systems. And to the degree they do, there'll be other AIs that sort of punish them for it. I'll compete them over.
Starting point is 02:19:15 Well, there's all kinds of incentive to make AI systems submissive to humans. Right? I mean, this is the way we're gonna build them, right? And so then people say, oh, but look at LLMs. LLMs are not controllable. And they're right.
Starting point is 02:19:27 LLMs are not controllable. But Objective Driven AI, so systems that derive their answers by optimization of an objective means they have to optimize its objective. And that objective can include guardrails. One guardrail is obey humans. Another guardrail is don't obey humans if it's hurting other humans. I've heard that before somewhere. I don't remember.
Starting point is 02:19:52 Yes. Maybe in a book. Speaking of that book, could there be unintended consequences also from all of this? No, of course. So this is not a simple problem, right? I mean, uh, designing those guard rails so that the system behaves properly, it's not going to be a simple, um, issue that for which there is a silver bullet for which you have a mathematical proof that the system can be safe. It's going to be very progressive iterative design system where we put those guard rails in such a way that the system behave
Starting point is 02:20:25 properly. And sometimes they're going to do something that was unexpected because the guardrail wasn't right and we're going to correct them so that they do it right. The idea somehow that we can't get it slightly wrong because if we get it slightly wrong, we all die is ridiculous. We're just going to go progressively. And it's just going to be the analogy I've used many times is turbojet design. How did we figure out how to make turbojets so unbelievably reliable? Right? I mean, those are like incredibly complex pieces of hardware that run at really high temperatures for 20 hours
Starting point is 02:21:08 at a time sometimes. And we can fly halfway around the world with a two-engine jetliner at near the speed of sound. How incredible is this? It is just unbelievable. And did we do this because we invented a general principle of how to make turbojet safe? No, we, it took decades to kind of fine tune the design of those systems so that they were safe. Is there a separate group within a general electric or snack mall or whatever that is specialized in turbojet safety.
Starting point is 02:21:47 No, it's the design is all about safety because a better turbojet is also a safer turbojet. So a more reliable one is the same for AI. Like, do you need, you know, specific provisions to make AI safe? No, you need to make better AI systems and they will be safe because they are designed to be more useful and more controllable. So let's imagine a system, AI system that's able to be incredibly convincing and can convince you of anything. I can at least imagine such a system and I can see such a system be weapon like, because it can control people's minds, we're pretty gullible. We want to believe a thing, you can have any system that controls it. And you could see governments using that as a weapon. So do you think if you imagine
Starting point is 02:22:39 such a system, there's any parallel to something like nuclear weapons? No. So is it why? Why, why is that technology different? So you're saying there's going to be gradual development. Yeah. It's going to be, I mean, it might be rapid, but there'll be iterative and then we'll be able to kind of respond and so on.
Starting point is 02:23:01 So that AI system designed by Vladimir Putin or whatever, or his minions, is going to be trying to talk to every American to convince them to vote for whoever pleases Putin pieces put in or whatever or riled people up against each other as they've been trying to do. They're not going to be talking to you, they're going to be talking to your AI assistant, which is going to be as smart as they are. That AI, because as I said in the future, every single one of your interactions with the digital world will be mediated by your AI assistant. So the first thing you're going to ask is, is this a scam? Like, is this thing like turning me to a truth?
Starting point is 02:23:53 Like, it's not even going to be able to get to you because it's only going to talk to your AI assistant. Your AI assistant is not even going to, it's going to be like a spam filter, right? You're not even seeing the email, the spam email, right? It's automatically put in a folder that you never see. Um, it's going to be the same thing. That AI system that tries to convince you of something is going to be talking to your assistant, which is going to be at least as smart as it.
Starting point is 02:24:19 And it's going to say, this is spam, you know, um, it's not even going to bring it to your attention. So to you, it's very difficult for any one AI system to take such a big leap ahead to where you can convince even the other AI systems. So like it, there's always going to be this kind of race where nobody's way ahead. That's the history of the world. History of the world is, you know, whenever there is a progress someplace, there is a countermeasure. And, you know, it's a cat and mouse game. This is why mostly yes, but this is why nuclear weapons are so interesting, because that was such
Starting point is 02:24:56 a powerful weapon that it matters who got it first. That, you know, you could imagine that you could imagine Hitler, Stalin, Mao getting the weapon first and that having a different kind of impact on the world than the United States getting the weapon first. To you, nuclear weapons, you don't imagine a breakthrough discovery and then Manhattan Project like effort for AI. No, as I said, it's not going to be an event. It's going to be continuous progress. And whenever one breakthrough occurs, it's going to be widely disseminated really quickly. Probably first within industry. I mean, this is not a domain where
Starting point is 02:25:46 government or military organizations are particularly innovative and they're in fact way behind. And so this is going to come from industry and this kind of information disseminates extremely quickly. We've seen this over the last few years, right, where you have a new, like, you know, even take AlphaGo, this was reproduced within three months, even without, like, particularly detailed information, right? Yeah, this is an industry that's not good at secrecy. No, but even if there is just the fact that you know that something is possible, yeah, makes you like realize that it's worth investing the time to actually do it. You may be the second person to do it, but you'll do it. And say for all the innovations of self-supervisioning transformers,
Starting point is 02:26:36 decoder-only architecture, LLMs, I mean, those things, you don't need to know exactly the details of how they worked. You know that it's possible because it's deployed and then it's getting reproduced. And then people who work for those companies move. They go from one company to another. And the information disseminates. What makes the success of the US tech industry and Silicon Valley in particular is exactly that is because information circulates really really quickly and You know disseminates very quickly. And so, you know, the whole region sort of
Starting point is 02:27:13 Is ahead because of that circulation of information So maybe I just to linger on the psychology of AI doomers you give In the classic Gyanlokun way a pretty good example of just when a new technology comes to be. You say engineer says I invented this new thing I call it a ball pen and then the Twitter sphere responds OMG people could write horrible things with it like misinformation propaganda hate, ban it now. Then writing doomers come in. I can do the AI doomers. Imagine if everyone can get a ball pen. This could destroy society. There should be a law against using ball pen to write hate speech. Regular ball pens now. And then the pencil industry mogul says, yeah, ball pens are very dangerous.
Starting point is 02:28:05 Unlike pencil writing, which is erasable, ball pen writing stays forever. Government should require a license for a pen manufacturer. I mean, this does seem to be part of human psychology when it comes up against new technology. So what deep insights can you speak to about this? Well, there is a natural fear of new technology and the impact it can have on society, and people have kind of instinctive reaction to the world they know being threatened by major transformations that are either cultural phenomena or technological revolutions. And they fear for their culture, they fear for
Starting point is 02:28:57 their job, they fear for their future of their children and their way of life. So any change is feared. And you see this a long history, like any technological revolution or cultural phenomenon was always accompanied by groups or reaction in the media that basically attributed all the problems, the current problems of society to that particular change, right? Electricity was going to kill everyone at some point.
Starting point is 02:29:38 The train was going to be a horrible thing because you can't breathe past 50 kilometers an hour. And so there's a wonderful website called a pessimist archive, which has all those newspaper clips of all the horrible things people imagine would would arrive because of either technological or a cultural phenomenon. You know, there is this wonderful examples of, you know, jazz or comic books being blamed for unemployment or, you know, young people not wanting to work anymore and things like that, right? And that has existed for four centuries. And it's, And it's knee-jerk reactions. The question is, do we embrace change or do we resist it? And what are the real dangers as opposed to the imagined ones. So people worry about, I think one thing they worry about with big tech,
Starting point is 02:30:48 something we've been talking about over and over, but I think worth mentioning again, they worry about how powerful AI will be, and they worry about it being in the hands of one centralized power of just our handful of central control. And so that's the skepticism with big tech. You can make, these companies can make a huge amount of money and control this technology.
Starting point is 02:31:14 And by so doing, take advantage, abuse the little guy in society. Well, that's exactly why we need open source platforms. Yeah, I just wanted to nail the point home more and more. Yes. So let me ask you on your, like I said, you do get a little bit flavorful on the internet. Yoshibak tweeted something that you LOLed at in reference
Starting point is 02:31:44 to how 9000. Quote, I appreciate your argument and I fully understand your frustration, but whether the pod bay doors should be opened or closed is a complex and nuanced issue. So you're the head of MetaAI. You know, this is something that really worries me that AI, our AI overlords
Starting point is 02:32:07 Will speak down to us with corporate speak Of this nature and you sort of resist that with your way of being Is this something you can just comment on sort of working at a big company? how you can avoid the overfearing, I suppose, through caution, create harm. Yeah. Again, I think the answer to this is open source platforms and then enabling a widely diverse set of people to build AI assistance that represent the diversity of cultures, opinions, languages, and value systems across the world. So that you're not bound to
Starting point is 02:32:54 just be brainwashed by a particular way of thinking because of a single AI entity. So I think it's's really, really important question for society. And the problem I'm seeing is that, which is why I've been so vocal and sometimes a little sardonic about it. Never stop, never stop, yeah. We love it.
Starting point is 02:33:22 It's because I see the danger of this concentration of power through proprietary AI systems as a much bigger danger than everything else. That if we really want diversity of opinion AI systems that in the future that we'll all be interacting through AI systems. We need those to be diverse for the preservation of diversity of ideas and creeds and political opinions and whatever. And the preservation of everybody. Because it could be used by terrorists or something. That would lead to potentially a very bad future in which all of our information diet is controlled by a small number of companies who proprietary systems.
Starting point is 02:34:37 Do you trust humans with this technology to build systems that are on the whole good for humanity. Isn't that what democracy and free speech is all about? I think so. Do you trust institutions to do the right thing? Do you trust people to do the right thing? And yeah, there's bad people who are gonna do bad things, but they're not going to have superior technology to the good people.
Starting point is 02:35:01 So then it's gonna be my good AI against your bad AI, right? I mean, it's the examples that we were just talking about of, you know, maybe some rogue country will build, you know, some AI system that's going to try to convince everybody to go into a civil war or something or elect favorable ruler. And, but then they will have to go past our AI systems. An AI system with a strong Russian accent will be trying to convince us. And doesn't put any articles in their sentences. Well, it'll be at the very least absurdly comedic. Okay, so since we talked about sort of the physical reality, I'd love to ask your vision of the future
Starting point is 02:35:50 with robots in this physical reality. So many of the kinds of intelligence you've been speaking about would empower robots to be more effective collaborators with us humans. So since Tesla's Optimus team has been showing us some progress on humanoid robots, I think it really reinvigorated the whole industry. I think Boston Dynamics has been leading for a very, very long time. So now there's all kinds of companies, Vigor AI, obviously Boston Dynamics, Unity Tree. Unity Tree.
Starting point is 02:36:25 But there's a lot of them. It's great. It's great. I mean, I love it. So do you think there'll be millions of human robots walking around soon? Not soon, but it's going to happen. Like the next decade, I think, is going
Starting point is 02:36:40 to be really interesting in robots. The emergence of the robotics industry has been in the waiting for 10, 20 years without really emerging other than for pre-programmed behavior and stuff like that. And the main issue is, again, the Moravec paradox, like how do we get the systems to understand how the world works and kind of, you know, plan actions? And so we can do it for really specialized tasks. And the way Boston Dynamics goes about it is, you know, basically with a lot of handcrafted dynamical models and careful planning in advance, which is very classical robotics with a lot
Starting point is 02:37:23 of innovation, a little bit of perception. But it's still not like they can't build a domestic robot. And, you know, we're still some distance away from completely autonomous level five driving. And we're certainly very far away from having, you know, level five autonomous driving by a system that can train itself by driving 20 hours like any 17 year old. So until we have, again, world models, systems that can train themselves to understand how the world works, we're not going to have significant progress in robotics. So a lot of the people working on robotic hardware at the moment are betting or banking on the fact that
Starting point is 02:38:17 AI is going to make sufficient progress towards that. And they're hoping to discover a product in it too. Before you have a really strong world model, there'll be an almost strong world model. What's that? And they're hoping to discover a product in it too. Before you have a really strong world model, there'll be an almost strong world model. People are trying to find a product in a clumsy robot, I suppose. Not a perfectly efficient robot. So there's the factory setting where human robots can help automate some aspects of the factory.
Starting point is 02:38:44 I think that's a crazy difficult task because of all the safety required and all this kind of stuff. I think in the home is more interesting, but then you start to think, I think you mentioned loading the dishwasher, right? Yeah. Like I suppose that's one of the main problems you're working on. I mean, there's, you know, cleaning up up the house, clearing up the table after a meal, washing the dishes, all those tasks, cooking. All the tasks that in principle could be automated,
Starting point is 02:39:16 but are actually incredibly sophisticated, really complicated. But even just basic navigation around an unspaceful of uncertainty. That sort of uncertainty. That sort of works. Like you can sort of do this now. Navigation is fine. Well, navigation in a way that's compelling to us humans is a different thing. Yeah, it's not going to be necessarily.
Starting point is 02:39:37 I mean, we have demos actually because there is a so-called embodied AI group at fair. And they've been not building their own robots, but using commercial robots. And you can tell a robot dog go to the fridge. And they can actually open the fridge, and they can probably pick up a can in the fridge and stuff like that and bring it to you. So it can navigate. It can grab objects
Starting point is 02:40:05 as long as it's been trained to recognize them, which vision systems work pretty well nowadays. But it's not like a completely general robot that would be sophisticated enough to do things like clearing up the dinner table. Yeah, to me, that's an exciting future of getting human robots, robots and genre on the whole, more and more, because it gets humans to really directly interact with AI systems in the physical space and so doing it allows us to philosophically, psychologically explore our relationships with robots.
Starting point is 02:40:41 It can be really, really, really interesting. So I hope you make progress on the whole JAPA thing soon. Well, I mean, I hope things kind of work as planned. I mean, again, we've been kind of working on this idea of self-supervised learning from video for 10 years and, you know, only made significant progress in the last two or three. And actually, you've mentioned that there's a lot of interesting breakers that can happen without having access to a lot of compute. So if you're interested in doing a PhD and this kind of stuff, there's a lot of possibilities
Starting point is 02:41:15 still to do innovative work. So like, what advice would you give to a undergrad that's looking to go to grad school and do a PhD. So basically I've listed them already, this idea of how do you train a world model by observation. And you don't have to train necessarily on gigantic datasets or, I mean, you could turn that to be necessary
Starting point is 02:41:39 to actually train on large datasets, to have emergent properties like we have with LLMs. But I think there's a lot of good ideas that can be done without necessarily scaling up. Then there is how do you do planning with a learn world model? If the world the system evolves in is not the physical world, but it's the world of
Starting point is 02:41:58 this at the internet or some sort of world where an action consists in doing a search in a search engine, or interrogating a database, or running a simulation, or calling a calculator, or solving a differential equation. How do you get a system to actually plan a sequence of actions to, you know, give the solution to a problem? And so the question of planning is not just a question of planning physical actions, it could be planning actions to use tools for a dialogue system or for any kind of intelligent system. And there's some work on this, but not a huge amount. Some work at FAIR, one called Toolformer, which was a couple years ago, and some more recent
Starting point is 02:42:46 work on planning. But I don't think we have a good solution for any of that. Then there is the question of hierarchical planning. So the example I mentioned of planning a trip from New York to Paris, that's hierarchical, but almost every action that we take involves hierarchical planning in some in some sense. And we really have absolutely no idea how to do this, like this zero demonstration of hierarchical planning in AI, where the various levels of representations that are necessary have been learned. We can do two-level hierarchical planning when we design the two levels. For example, you have a dog-like robot.
Starting point is 02:43:37 You want it to go from the living room to the kitchen, you can plan a path that avoids the obstacle, and then you can send this to a lower level planner that figures out how to move the legs to kind of follow that trajectory. So that works. But that two level planning is designed by hand. We specify what the proper levels of abstraction, the representation at each level of abstraction has to be. How do you learn this? How do you learn that hierarchical representation of action plans? Right. With cognize and deep learning, we can train the system to learn hierarchical
Starting point is 02:44:16 representations of perceps. What is the equivalent when what you're trying to represent are action plans? For action plans. Yeah. So you want basically a robot dog or humanoid robot that turns on and travels from New York to Paris all by itself. For example, all right, they might have some trouble at the at the TSA, but yeah. No, but even doing something fairly simple like a household task. Sure. Like, you know, cooking or something. Yeah, there's a lot involved. It's a super complex task. And once again, we take it for granted. What hope do you have for the future of humanity? We're talking about so many exciting technologies, so many exciting possibilities. What gives you hope when you look out over
Starting point is 02:45:05 the next 10, 20, 50, 100 years? If you look at social media, there's a lot of, there's wars going on. There's division. There's hatred, all this kind of stuff. That's also part of humanity. But amidst all that, what gives you hope? I love that question. We can make humanity smarter with AI. Okay. I mean, AI basically will amplify human intelligence. It's as if every one of us will have a staff of smart AI assistants. They might be smarter than us.
Starting point is 02:45:47 They'll do our bidding, perhaps execute a task in ways that are much better than we could do ourselves because they'd be smarter than us. And so it's like everyone would be the boss of a staff of super smart virtual people. So we shouldn't feel threatened by this any more than we should feel threatened by being the manager of a group of people, some of whom are more intelligent than us. I certainly have a lot of experience with this of, you know, having people working with me who are smarter than me. That's actually a wonderful thing.
Starting point is 02:46:29 So having machines that are smarter than us that assist us in our all of our tasks or daily lives, whether it's professional or personal, I think would be an absolutely wonderful thing. Because intelligence is the most is the commodity that is most in demand. That's really what, I mean, all the mistakes that humanity makes is because of lack of intelligence really, or lack of knowledge which is, you know, related. So making people smarter, which can only be better. I mean, for the same reason that, you know, public education is a good thing. And books are a good thing. And the internet is also a good thing intrinsically. And even social networks are a good thing if you run them properly.
Starting point is 02:47:14 It's difficult, but you know, you can. Because you know, it helps the communication of information and knowledge and the transmission of knowledge. So AI is going to make humanity smarter. And the analogy I've been using is the fact that perhaps an equivalent event in the history of humanity to what might be provided by journalization of AI assistant is the invention of the printing press. It made everybody smarter. The fact that people could have access to books. Books were a lot cheaper than they were before. And so a lot more people had an incentive to learn to read, which wasn't the case before. And people became smarter. It enabled the Enlightenment. There wouldn't be an Enlightenment without the printing press. It enabled philosophy, rationalism, escape from religious doctrine, democracy, science. And certainly,
Starting point is 02:48:34 without this, it wouldn't have been the American revolution, the French revolution. And so, it would still be under a feudal regime, perhaps. And so, it completely transformed the world because people became smarter and kind of learned about things. Now, it also created 200 years of essentially religious conflicts in Europe, right? Because the first thing that people read was the Bible and realized that perhaps there was a different interpretation of the Bible than what the priests were telling them. And so that created the Protestant movement and created the rift. And in fact, the Catholic Church didn't like the idea of the printing press, but they had no choice. And so it had some bad
Starting point is 02:49:23 effects and some good effects. I don't think anyone today would say that the invention of the printing press, but they had no choice. And so it had some bad effects and some some good effects. I don't think anyone today would say that the invention of the printing press had an overall negative effect, despite the fact that it created 200 years of religious conflicts in Europe. Now, compare this. And I thought I was very proud of myself to come up with this analogy, but realized someone else came up with the same idea before me. Compare this with what happened in the Ottoman Empire. The Ottoman Empire banned the printing press for 200 years. And it didn't ban it for all languages, only for Arabic. You could actually print books in Latin or Hebrew or whatever,
Starting point is 02:50:09 in the Ottoman Empire, just not in Arabic. And I thought it was because the rulers just wanted to preserve the control over the population and the dogma, religious dogma and everything. But after talking with the UAE minister of AI, Omar Al-Alama, he told me, no, there was another reason. And the other reason was that it was to preserve the corporation of calligraphers. Right? it was to preserve the corporation of categorographers. Right?
Starting point is 02:50:46 There's like an art form, which is, you know, writing those beautiful, you know, Arabic poems or whatever religious text in this thing. And it was a very powerful corporation of scribes, basically that kind of, you know, run a big chunk of the empire and it couldn't put them out of business. So they banned the printing press in part to protect that business. Now, what's the analogy for AI today? Who are we protecting by banning AI?
Starting point is 02:51:18 Who are the people who are asking that AI be regulated to protect their jobs. And of course, it's a real question of what is going to be the effect of technological transformation like AI on the job market and the labor market. And the economies too are much more expert at this than I am. But when I talk to them, they tell us, you know, we're not gonna run out of job. This is not gonna cause mass unemployment. This is just gonna be gradual shift of different professions. The professions are gonna be hot 10 or 15 years from now.
Starting point is 02:51:59 We have no idea today what they're gonna be. The same way if we go about 20 years in the past, like who could have thought 20 years ago that like the hottest job even like five 10 years ago was mobile app developer, like smartphones weren't invented. Most of the jobs of the future might be in the metaverse. Well, it could be. Yeah. But the point is you can't possibly predict. But you're right, I mean, you made a lot of strong points and I believe that people are fundamentally good.
Starting point is 02:52:31 And so if AI, especially open source AI, can make them smarter, it just empowers the goodness in humans. So I share that feeling, okay? I think people are fundamentally good. And in fact, a lot of do-mers are do-mers because they don't think that people are fundamentally good. And they either don't trust people
Starting point is 02:52:57 or they don't trust the institution to do the right thing so that people behave properly. Well, I think both you and I believe in humanity. And I think I speak for a lot of people in saying, thank you for pushing the open source movement, pushing to making both research and AI open source, making it available to people and also the models themselves, making it open source.
Starting point is 02:53:21 So thank you for that. And thank you for speaking your mind in such colorful and beautiful ways on the internet. I hope you never stop. You're one of the most fun people I know and get to be a fan of. So yeah, thank you for speaking to me once again, and thank you for being you. Thank you. Thanks. Thanks for listening to this conversation with Yann LeCun. To support this podcast, please check out our sponsors in the description. down the code to support the spot gas please check out our sponsors in the description and now let me leave you with some words from Arthur C. Clarke. The only way to discover the limits of the possible is to go beyond them and to the impossible. Thank you for listening and hope to see you next time.
