Lex Fridman Podcast - #383 – Mark Zuckerberg: Future of AI at Meta, Facebook, Instagram, and WhatsApp

Episode Date: June 9, 2023

Mark Zuckerberg is CEO of Meta. Please support this podcast by checking out our sponsors:
- Numerai: https://numer.ai/lex
- Shopify: https://shopify.com/lex to get $1 per month trial
- BetterHelp: https://betterhelp.com/lex to get 10% off

EPISODE LINKS:
Mark's Facebook: https://facebook.com/zuck
Mark's Instagram: https://instagram.com/zuck
Meta AI: https://ai.facebook.com/
Meta Quest: https://www.meta.com/quest/

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above; it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(05:38) - Jiu-jitsu competition
(23:01) - AI and open source movement
(35:32) - Next AI model release
(47:48) - Future of AI at Meta
(1:08:25) - Bots
(1:23:53) - Censorship
(1:38:34) - Meta's new social network
(1:45:20) - Elon Musk
(1:49:25) - Layoffs and firing
(1:56:55) - Hiring
(2:02:48) - Meta Quest 3
(2:09:45) - Apple Vision Pro
(2:16:00) - AI existential risk
(2:22:23) - Power
(2:25:55) - AGI timeline
(2:33:17) - Murph challenge
(2:38:33) - Embodied AGI
(2:41:39) - Faith

Transcript
Starting point is 00:00:00 The following is a conversation with Mark Zuckerberg, his second time on this podcast. He's the CEO of Meta, which owns Facebook, Instagram, and WhatsApp, all services used by billions of people to connect with each other. We talk about his vision for the future of Meta and the future of AI in our human world. And now, a quick few-second mention of each sponsor. Check them out in the description. It's the best way to support this podcast. We got Numerai, for the world's hardest data science tournament.
Starting point is 00:00:33 Shopify for e-commerce, and BetterHelp for mental health. Choose wisely, my friends. Also, if you want to work with our amazing team, we're always hiring: go to lexfridman.com/hiring. And now onto the full ad reads. As always, no ads in the middle. I find those annoying. But these here ads, I try to make interesting. Though you may skip them if you must, my friends, but please still check out the sponsors.
Starting point is 00:01:00 They help this podcast out. I enjoy their stuff. Maybe you will too. This show is brought to you by Numerai, a hedge fund that uses AI and machine learning to make investment decisions. I'm a huge fan of real-world data sets and real-world machine learning competitions to figure out what works. This is not ImageNet. This is not an artificial toy data set for the development of toy systems that illustrate toy concepts. Those are the early, early stages of research. But when you really want to see what works, you want benchmarks that have stakes, the highest of stakes, especially ones that have money involved. So I'm a huge fan, money or not, of data sets
Starting point is 00:01:48 that represent the real world and demonstrate that the system can operate in the real world at the highest of stakes. That's why I was really interested in autonomous vehicles: the stakes are life and death, safety-critical systems. It's incredibly exciting to work on systems that involve truly real-world data sets.
Starting point is 00:02:06 Anyway, if that kind of thing interests you, if you're a machine learning engineer, head over to numer.ai/lex to sign up for the tournament and hone your machine learning skills. That's n-u-m-e-r dot ai slash Lex, for a chance to play against me and win a share of the tournament prize pool. This show is also brought to you by Shopify, a platform designed for anyone to sell anywhere, with a great-looking online store that brings your ideas to life, and tools to manage the day-to-day operations. Operations is such a badass word. It's like, you're running things. Anyway, a few folks asked me about merch. I'm a huge fan of buying merch for the podcasts, shows, and bands I love, and so I love the camaraderie of merch, and I think Shopify is a great place to sell merch.
Starting point is 00:02:58 I'm definitely gonna put out some merch. I'm really sorry it's been taking forever. I've been working with this incredible artist. I just love art, artistic representations of the funny and the profound on a t-shirt that let you celebrate something super cool together. I love it. To me, there's nothing promotional about it, all that kind of stuff. It's just sharing your happiness. Anyway, so I'll definitely use Shopify to create a merch store, so that people can share a bit of their happiness with others. If you have stuff to sell, or you have merch to sell, or you want to share some of your
Starting point is 00:03:33 happiness with others, sign up for a $1 per month trial period at shopify.com/lex. That's all lowercase. Go to shopify.com/lex to take your business to the next level. This episode is also brought to you by BetterHelp, spelled H-E-L-P, help. They figure out what you need and match you with a licensed professional therapist in under 48 hours. I do a podcast, so obviously I'm a big fan of talk therapy. In fact, when I just listen to podcasts, it's a kind of talk therapy, because I'm having a conversation with the people I'm listening to in my mind. Whenever it's an interview show with two folks talking, I'm always the third person in the room, kind of almost participating in
Starting point is 00:04:18 the conversation. And there's something therapeutic about that. So if you're listening to two other people tell their life stories, you're able to project your trauma, your struggles, your hopes, your dreams, your triumphs, all that kind of stuff onto their life, and kind of dance with that. Of course, to do that rigorously, you really just have to put it all out there in a raw and honest way. I think that's what therapy is about. There's a lot of things you can do for your mental health, but therapy is one of the obvious things you should have in the toolkit of lifestyle flourishing. Anyway, BetterHelp just makes the whole thing super easy: super easy to sign up, super easy to find a licensed therapist, all of that. It's discreet, it's easy, it's affordable, it's available anywhere. Check them out at betterhelp.com/lex and save on your first month. That's betterhelp.com/lex. Here's Mark Zuckerberg. So you competed in your first jiu-jitsu tournament, and me as a fellow jiu-jitsu practitioner and competitor, I think that's really inspiring,
Starting point is 00:05:46 given all the things you have going on. So I gotta ask, what was that experience like? Oh, it was fun. I know, yeah, I mean, I'm a pretty competitive person. Yeah. Doing sports that basically require your full attention, I think is really important to my mental health and the way I just stay focused
Starting point is 00:06:05 at doing everything I'm doing. So I decided to get into martial arts, and it's awesome. I got a ton of my friends into it. We all train together. We have like a mini academy in my garage. And one of my friends was like, hey, we should go do a tournament.
Starting point is 00:06:23 I was like, okay, yeah, let's do it. I'm not gonna shy away from a challenge like that. So yeah, it was awesome. It was just a lot of fun. You weren't scared? There was no fear? I don't know. I was pretty sure that I'd do okay.
Starting point is 00:06:35 I like the confidence. Well, so for people who don't know, jiu-jitsu is a martial art where you're trying to break your opponent's limbs or choke them to sleep, and do so with grace and elegance and efficiency and all that kind of stuff. It's a kind of art form that you can do for your whole life, and it's basically a game, a sport of human chess. There's a lot of strategy, a lot of sort of interesting human dynamics of using leverage
Starting point is 00:07:05 and all that kind of stuff. It's kind of incredible what you can do. A smaller opponent can defeat a much larger opponent, and you get to understand the way the mechanics of the human body work because of that. But you certainly can't be distracted. No. It's a hundred percent focus. To compete, I needed to get around the fact that I didn't want it to be like this
Starting point is 00:07:28 this big thing. So I basically rolled up with a hat and sunglasses, and I was wearing a COVID mask, and I registered under my first and middle name, Mark Elliot. And it wasn't until I actually pulled all that stuff off, right before I got on the mat, that I think people knew it was me. So it was pretty low-key. But you're still a public figure. Yeah, I mean, I didn't want to lose. Right. The thing you're partially afraid of is not just the losing, but being almost, like, embarrassed.
Starting point is 00:07:55 The sport is so raw in that it's just you and another human being. There's a primal aspect there. Oh yeah, it's great. For a lot of people it can be terrifying, especially the first time you're competing. And it wasn't for you. I see the look of excitement on your face. Yeah, there was no fear. I just think part of learning is failing, right?
Starting point is 00:08:12 So, I mean, the main thing, for people who train jiu-jitsu, is that you need to not have pride, because of all the stuff that you were talking about before, about getting choked or getting joint locked. You only get into a bad situation if you're not willing to tap once you've already lost. But obviously, when you're getting started with something, you're not going to be an expert at it immediately. So you just need to be willing to go with that.
Starting point is 00:08:39 But I think, I don't know, I mean, maybe I've just been embarrassed enough times in my life. I do think that there's a thing where, as people grow up, maybe they don't want to be embarrassed. They've built their adult identity, and they have a sense of who they are and what they want to project. I don't know, I think maybe to some degree, your ability to keep doing interesting things is your willingness to be embarrassed again, and go back to step one, and start as a beginner, and get your ass kicked, and, you
Starting point is 00:09:15 know, look stupid doing things. And, you know, I think so many of the things that we're doing, whether it's this, I mean, this is just a kind of physical part of my life, but also running the company. It's like we just take on new adventures, and all the big things that we're doing I think of as 10-plus-year missions that we're on, where often early on, people doubt that we're going to be able to do it, and the initial work seems kind of silly, and our whole ethos is that we don't want to wait until something is perfect to put it out there. We want to get it out quickly and get feedback on it. And so I don't know,
Starting point is 00:09:48 I mean, there's probably just something about how I approach things in there. But I just kind of think that the moment you decide that you're going to be too embarrassed to try something new, then you're not going to learn anything anymore. But like I mentioned, that fear, that anxiety could be there, could creep up every once in a while. Do you feel that, especially in stressful moments that are outside of the gym, just at work: stressful moments, big decision days, big decision moments?
Starting point is 00:10:17 How do you deal with that fear? How do you deal with that anxiety? The thing that stresses me out the most is always the people challenges. You know, I kind of think that strategy questions, I tend to have enough conviction around the values of what we're trying to do and what I think matters and what I want our company to stand for
Starting point is 00:10:38 that those don't really keep me up at night that much. It's not that I get everything right. Of course I don't. Right. I mean, I make a lot of mistakes. But I at least have a pretty strong sense of where I want us to go on that. Running a company for almost 20 years now, one of the things that's been pretty clear is, when you have a team that's cohesive, you can get almost anything done. You can run through super hard challenges, you can make hard decisions and push really hard to do the best work, and even
Starting point is 00:11:20 optimize something super well. But when there's that tension, that's when things get really tough. And when I talk to other friends who run other companies and things like that, I think one of the things that I actually spend a disproportionate amount of time on in running this company is just fostering a pretty tight core group of people who are running the company with me. And that, to me, is kind of the thing that both makes it fun, right? Having friends and people you've worked with for a while,
Starting point is 00:11:52 and new people and new perspectives, but a pretty tight group who you can go work on some of these crazy things with. But to me, that's also the most stressful thing. It's when there's tension. That weighs on me. I think it's maybe not surprising. I mean, we're a very people-focused company, and the people part of it is what weighs on me the most, to make sure that we get it right.
Starting point is 00:12:18 But yeah, I'd say across everything that we do, that's probably the big thing. So when there's tension in that inner circle of close folks, when you trust those folks to help you make difficult decisions about Facebook, WhatsApp, Instagram, the future of the company and the metaverse, with AI, how do you build that close-knit group of folks to make those difficult decisions? Do you have to have people with critical voices, very different perspectives on focusing on the past versus the future, all that kind of stuff?
Starting point is 00:13:03 Yeah, I mean, I think for one thing, it's just spending a lot of time with whatever group you want to be that core group, grappling with all of the biggest challenges. And that requires a fair amount of openness. A lot of how I run the company is, every Monday morning we get, it's about the top 30 people together, and this is a group that has just worked together for a long period of time. And people rotate in: new people join, people leave the company, people go to other roles in the company. So it's not the same group over time.
Starting point is 00:13:33 But then we spend, a lot of times, a couple of hours, and a lot of the time it can be somewhat unstructured. I'll come with maybe a few topics that are top of mind for me, but I'll ask other people to bring things, and people raise questions, whether it's, okay, there's an issue happening in some country with some policy issue, there's a new technology that's developing here, or we're having an issue with this partner.
Starting point is 00:13:59 Or there's a design trade-off in WhatsApp between two things that end up being values that we care about deeply, and we need to decide where we want to be on that. And I just think that over time, by working through a lot of issues with people and doing it openly, people develop an intuition for each other, and a bond and camaraderie. And to me, developing that is a lot of the fun part of running a company, or doing anything, right? I think it's like having people who are along on the journey that you feel like you're doing it with. Nothing is ever just one person doing it. Are there people that disagree often within that group?
Starting point is 00:14:38 It's a fairly combative group. Okay, so combat is part of it. So this is making decisions on design, engineering, policy, everything. Everything, yeah. I have to ask, just back to jiu-jitsu for a little bit: what's your favorite submission? Now that you've been doing it, how do you like to submit your opponent, Mark Zuckerberg? Well, first of all, I do prefer no-gi over gi jiu-jitsu.
Starting point is 00:15:09 So gi is this outfit you wear that maybe mimics clothing, so you can choke... Well, it's like a kimono. It's like the traditional martial arts kimono. Pajamas that you can choke people with. Yes. Well, it's got the lapels. Yes.
Starting point is 00:15:26 Yeah. So, I like jiu-jitsu. I also really like MMA. And I think no-gi more closely approximates MMA, and I think my style is maybe a little closer to an MMA style. A lot of jiu-jitsu players are fine being on their back, right? And obviously having a good guard is a critical part of jiu-jitsu,
Starting point is 00:15:49 but in MMA, you don't wanna be on your back, right? Because even if you have control, you're just taking punches while you're on your back. So that's no good. Do you like being on top? My style is probably more pressure, yeah, and I'd probably rather be the top player. But I'm also smaller, right?
Starting point is 00:16:10 I'm not like a heavyweight guy, right? So from that perspective, especially for a competition, I'll compete with people my size, but a lot of my friends are bigger than me. So back takes are probably pretty important, right? Because that's where you have the biggest leverage advantage, right? Where, you know,
Starting point is 00:16:30 your arms are very weak behind you, right? So being able to get to the back and take that is pretty important. But I don't know, I feel like the right strategy is to not be too committed to any single submission. That said, I don't like hurting people.
Starting point is 00:16:43 So I always think that chokes are a somewhat more humane way to go than joint locks. Yeah, and it's more about control. It's less dynamic. So you're basically like a Khabib Nurmagomedov type of fighter. Let's go. Yeah, back take to a rear naked choke, I think that's the clean way to go. Straightforward answer there. What advice would you give to people looking to start learning jiu-jitsu, given how busy you are, given where you are in life, that you're able to do this, you're able to train, you're able to compete, and you get to learn something from this interesting art?
Starting point is 00:17:14 Well, I think you have to be willing to just get beaten up a lot. But over time, I think that there's a flow to all these things. One of my experiences, which I think kind of transcends running a company and the different activities that I like doing, is that I really believe that if you're gonna accomplish anything, a lot of it is just being willing to push through, right?
Starting point is 00:17:53 And having the grit and determination to push through difficult situations. And I think for a lot of people, that ends up being sort of the difference maker between the people who get the most done and not. I mean, there are all these questions about how many days people want to work and things like that. I think almost all the people who start successful companies, or things like that, are just working extremely hard. But I think one of the things that
Starting point is 00:18:20 you learn, both by doing this over time, and very acutely with things like jiu-jitsu or surfing, is that you can't push through everything. You learn this stuff very acutely doing sports, compared to running a company, because running a company, the cycle times are so long. You start a project, and then it's months later, or if you're building hardware, it could be years later, before you're actually getting feedback and able to make the next set of decisions for the next version of the thing that you're doing. Whereas one of the things that I just think is mentally so nice about these very high-turnaround sports, things like that, is you get feedback
Starting point is 00:19:06 very quickly, right? If you don't counter something correctly, you get punched in the face, right? Well, not in jiu-jitsu; you don't get punched in jiu-jitsu, but in MMA. There are all these analogies between all these things that I think actually hold, that are important life lessons, right? If you're surfing a wave, sometimes you just can't go in the other direction on it, right? There are limits to what you can do. It's like a foil: you can pump the foil and push
Starting point is 00:19:37 pretty hard in a bunch of directions, but at some level, if the momentum against you is strong enough, that's not going to work. And I do think that's sort of a humbling, but also an important, lesson for people who are running things or building things. A lot of the game is just being able to push and work through complicated things, but you also need to have enough of an understanding of which things you just can't push through, and where the finesse is more important. Yeah.
Starting point is 00:20:13 What are your jiu-jitsu life lessons? Well, I think you did it. You made it sound so simple and were so eloquent that it's easy to miss: basically being okay with, and accepting the wisdom and the joy in, getting your ass kicked, in the full range of what that means. I think that's a big gift of being humbled. Somehow being humbled, especially physically, opens your mind to the full process of learning, what it means to learn, which is being willing to suck at something. And jiu-jitsu just very repetitively, efficiently humbles you, over and over and over, to where you can carry
Starting point is 00:21:01 that lesson to places where you don't get humbled as much, whether it's research or running a company or building stuff, where the cycle is longer. In jiu-jitsu, you can just get humbled over a period of an hour, over and over and over, especially when you're a beginner. You'll have a little person, somebody much smaller than you, just kick your ass repeatedly, definitively, where there's no argument. Oh, yeah. And then you literally tap, because if you don't tap, you're going to die. So this is an agreement.
Starting point is 00:21:35 You could have killed me just now, but we're friends, so we're going to agree that you're not going to. And that kind of humbling process just does something to your psyche, to your ego, that puts it in its proper context: to realize that everything in this life is a journey from sucking, through a hard process of improving rigorously, day after day after day. And that kind of success requires hard work. Yeah, and more than a lot of sports, I would say, because I've done a lot of them, jiu-jitsu really teaches you that. And you made it sound so simple. Like, you know, it's okay. It's part of the process.
Starting point is 00:22:15 You just get humbled, get your ass kicked. I've just failed and been embarrassed so many times in my life that, you know, it's a core competence at this point. It's a core competence. Well, yes, and there's a deep truth to that, and you said it in the very beginning: that's the thing that stops us, especially as you get older, especially as you develop expertise in certain areas, the not being willing to be a beginner in a new area.
Starting point is 00:22:40 Yeah. Because that's where the growth happens: being willing to be a beginner, being willing to be embarrassed, saying something stupid, doing something stupid. A lot of us get good at one thing and want to show that off, and it sucks being a beginner. But it's where growth happens. Yeah.
Starting point is 00:23:01 Well, speaking of which, let me ask you about AI. It seems like this year, for the entirety of human civilization, is an interesting year for the development of artificial intelligence. A lot of interesting stuff is happening. And Meta is a big part of that. Meta has developed LLaMA, which is a 65-billion-parameter model. There's a lot of interesting questions I can ask you, one of which has to do with open source. But first, can you tell the story of developing this model, and making the complicated decision of how to release it? Yeah, sure. I think you're right, first of all, that in the last year there have been a bunch of advances on scaling up these large transformer models.
Starting point is 00:23:50 So there's the language equivalent of it with large language models, and the image generation equivalent with these large diffusion models. There's a lot of fundamental research that's gone into this. And Meta has taken the approach of being quite open and academic in our development of AI. Part of this is we want to have the best people in the world researching this, and a lot of the best people want to know that they're going to be able to share their work. So that's part of the deal that we have: if you're one of the top AI
Starting point is 00:24:29 researchers in the world and come here, you can get access to industry-scale infrastructure. And part of our ethos is that we want to share what's invented broadly. We do that with a lot of the different AI tools that we create. And LLaMA is the language model that our research team made. We did a limited open source release for it, which was intended for researchers to be able to use it. But responsibility and getting the safety right on these is very important. For the first one, there were a bunch of questions around whether we should be releasing it commercially, so we kind of punted on that for V1 of LLaMA and just released it for research.
Starting point is 00:25:20 Obviously, by releasing it for research, it's out there. But companies know that they're not supposed to put it into commercial releases. And we're working on the follow-up models for this, and thinking through how exactly this should work for follow-ups, now that we've had time to work on a lot more of the safety
Starting point is 00:25:42 and the pieces around that. But overall, I mean, this is, I just kind of think that it would be good if there were a lot of different folks who have the ability to build state-of-the-art technology here. It's not just a small number of big companies. To train one of these AI models, the state-of-the-art models, it just takes hundreds of millions of dollars of infrastructure. There are not that many organizations in the world that can do that at the biggest scale today. Now, it gets more efficient every day.
Starting point is 00:26:27 So, I do think that that will be available to more folks over time. But I just think there's all this innovation out there that people can create. And I just think that will also learn a lot by seeing what the whole community of students and hackers and startups and different folks build with this. And that's kind of been how we've approached this. And it's also how we've done a lot of our infrastructure. And we took our whole data center design and our server design. And we built this open compute project where we just made that public.
Starting point is 00:26:59 And part of the theory was, all right, if we make it so that more people can use this server design, then that'll enable more innovation, and it'll also make the server design more efficient, and that'll make our business more efficient too. So that's worked, and we've just done this with a lot of our infrastructure. So for people who don't know, you did the limited release, I think in February of this year, of LLaMA, and it got quote-unquote leaked, meaning it escaped the limited-release aspect. But that's something you probably anticipated, given that it was released to researchers. We shared it with researchers. The idea was just to make sure that there's a slow release. Yeah.
Starting point is 00:27:46 But from there, I just would love to get your comment on what happened next, which is, there's a very vibrant open source community that just built stuff on top of it. There's llama.cpp, basically stuff that makes it more efficient to run on smaller computers. There's combining it with reinforcement learning with human feedback, so some of the different interesting fine-tuning mechanisms. There's also fine-tuning on GPT-3 generations. There's GPT4All, Alpaca, ColossalAI, all these kinds of models that just kind of spring up and run on top of it.
Starting point is 00:28:22 What do you think about that? No, I think it's been really neat to see. I mean, there's been folks who are getting it to run on local devices. So if you're an individual who just wants to experiment, with this at home, you probably don't have a large budget to get access to a large amount of cloud compute. So getting it to run on your local laptop is pretty good, right, and pretty relevant.
Starting point is 00:28:47 And then there were things like, yeah, llama.cpp, which re-implemented it more efficiently. So now, even when we run our own versions of it, we can do it on way less compute, and it's just way more efficient, and it saves a lot of money for everyone who uses this. So that is good.
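The arithmetic behind that efficiency is worth making concrete: at fp16, a 7-billion-parameter model needs roughly 14 GB of memory, while 4-bit quantization, the trick llama.cpp popularized, cuts that to roughly 4 GB, small enough for an ordinary laptop. A minimal sketch of local inference through llama.cpp's Python bindings might look like the following; the model path is a placeholder for whatever quantized checkpoint you have on disk, not a file distributed with the project.

```python
# Minimal local-inference sketch using llama-cpp-python, the Python bindings
# for llama.cpp. The model path below is a placeholder, not a real file.
from llama_cpp import Llama

# A 4-bit quantized 7B checkpoint is roughly 4 GB, so it fits in laptop RAM.
llm = Llama(model_path="./models/7B/llama-7b-q4_0.gguf", n_ctx=512)

output = llm(
    "In one sentence, why does open source software tend to be secure?",
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```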
Starting point is 00:29:23 I do think it's worth calling out that, because this was a relatively early release, LLaMA isn't quite as on the frontier as, for example, the biggest OpenAI models or the biggest Google models. You mentioned that the largest LLaMA model that we released had 65 billion parameters. No one knows, I guess outside of OpenAI, exactly what the specs are for GPT-4, but I think my understanding is it's about 10 times bigger, and I think Google's PaLM model also has about 10 times as many parameters. Now, the LLaMA models are very efficient, so they perform well for something that's around 65 billion parameters.
Starting point is 00:29:53 So for me, that was also part of this, because there's all this debate around, you know, is it good for everyone in the world to have access to the most frontier AI models? And I think as the AI models start approaching something like superhuman intelligence, that becomes a bigger question that we'll have to grapple with. But right now, I mean, these are still very basic tools. They're powerful in the sense that a lot of open source software, like databases or web servers, can enable a lot of pretty important things. But I don't think anyone looks at
Starting point is 00:30:34 the current generation of LLaMA and thinks it's anywhere near a superintelligence. So for a bunch of those questions around whether it's good to have this out there, I think at this stage, surely, you want more researchers working on it, for all the reasons that open source software has a lot of advantages. We talked about efficiency before, but another one is that open source software tends to be more secure, because you have more people looking at it openly, scrutinizing it, and finding holes in it, and that makes it safer. I think at this point it's generally agreed upon that open source software is generally more secure and safer than things that are developed in a silo, where people try to get security through obscurity. So
Starting point is 00:31:22 I think that for the scale of what we're seeing now with AI, we're more likely to get to good alignment and a good understanding of what needs to happen to make this work well by having it be open source. And that's something that I think is quite good to have out there and happening publicly at this point. Meta has released a lot of models as open source, like the massively multilingual speech model. I'll ask you questions about this, but the point is, you've open sourced quite a lot.
Starting point is 00:31:56 You've been spearheading the open source movement, and that's really positive and inspiring to see from one angle, from the research angle. Of course, there are folks who are really terrified about the existential threat of artificial intelligence, and those folks will say that you have to be careful about the open sourcing step. So what do you see as the future of open source here at Meta? One tension is, do you want to release the magic sauce? The other one is, do you want to put a powerful tool in the hands of bad actors, even though it probably has a huge amount of positive impact also?
Starting point is 00:32:38 Yeah, I mean, again, I think for the stage that we're at in the development of AI, I don't think anyone looks at the current state of things and thinks that this is super intelligence. And the models that we're talking about, the Lama models here are generally an order of magnitude smaller than what OpenAI or Google are doing. So I think that, at least for the stage that we're at now, the equity is balanced strongly, in my view,
Starting point is 00:33:06 towards doing this more openly. I think if you got something that was closer to superintelligence, then you'd have to discuss that more and think through it a lot more. And we haven't made a decision yet as to what we would do if we were in that position, but I think there's a good chance that we're pretty far off from that position. So I'm certainly not saying that the position we're taking on this now applies to every single thing that we would ever do.
Starting point is 00:33:37 And certainly inside the company, we probably do more open source work than most of the other big tech companies, but we also don't open source everything. A lot of our core app code, for WhatsApp or Instagram or something, we're not open sourcing that. It's not a general enough piece of software that would be useful for a lot of people to
Starting point is 00:33:57 do different things. Whereas the software that we do open source, whether it's an open source server design, or basically things like memcached, which was probably our earliest such project that I worked on. It was probably one of the last things that I coded and led directly for the company. But basically, it's this caching tool
Starting point is 00:34:24 for quick data retrieval.
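Since memcached comes up here, the pattern it enables is easy to show: check a fast in-memory store before falling back to the database, and populate the cache on a miss. Below is a sketch of that cache-aside pattern using the pymemcache client; the server address and the database lookup are stand-ins invented for illustration.

```python
# Cache-aside sketch with memcached via the pymemcache client.
# The server address and load_user_from_db() are illustrative stand-ins.
import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def load_user_from_db(user_id):
    # Placeholder for a real (slow) database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # fast path: served straight from memory
    user = load_user_from_db(user_id)  # slow path: hit the database
    cache.set(key, json.dumps(user).encode("utf-8"), expire=300)  # cache 5 min
    return user
```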
Starting point is 00:34:54 These are things that are just broadly useful across anything that you want to build. And I think that some of the language models now have that feel, as do some of the other things that we're building, like the translation tool that you just referenced. So text-to-speech and speech-to-text: you've expanded it from around 100 languages to more than 1,100 languages, and the model can identify more than 4,000 spoken languages, which is 40 times more than any previously known technology. To me, that's really, really exciting in terms of connecting the world, breaking down the barriers that language creates. Yeah, I think being able to translate between all of these different pieces in real time, this has been a kind of common sci-fi idea, whether it's an earbud or glasses or something that can help translate in real time between all these different languages. And that's one that I think technology is basically delivering now. So I think that's pretty exciting.
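The numbers quoted here match Meta's published Massively Multilingual Speech (MMS) release. As a rough sketch of what using it looks like through the Hugging Face transformers checkpoints: the model ID and adapter codes below follow the published release, but treat the details as illustrative rather than an official recipe.

```python
# Sketch: multilingual speech-to-text with an MMS-style checkpoint via
# Hugging Face transformers. Model ID and adapter codes as published for MMS;
# details here are illustrative, not an official recipe.
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Swap in a per-language adapter (French here); adapters are what let one
# checkpoint cover 1,100+ languages without 1,100 separate models.
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

def transcribe(audio):
    # `audio` is a 16 kHz mono waveform as a float array (e.g. via librosa).
    inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    ids = torch.argmax(logits, dim=-1)[0]
    return processor.decode(ids)
```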
Starting point is 00:35:30 You mentioned the next version of LLaMA. What can you say about the next version, about what you're working on in terms of release, in terms of the vision for that? Well, a lot of what we're doing is taking the first version, which was primarily this research version, and trying to now build a version
Starting point is 00:35:55 that has all of the latest state-of-the-art safety precautions built in. And we're using some more data to train it from across our services. But a lot of the work that we're doing internally is really just focused on making sure that this is as aligned and responsible as possible. And we're building a lot of our own. We were talking about the open source infrastructure, but the main thing that we focus on building here is a lot of product experiences to help people connect and express themselves.
Starting point is 00:36:23 I've talked about a bunch of this stuff, but you'll have an assistant that you can talk to in WhatsApp. I think in the future, every creator will have an AI agent that can act on their behalf, that their fans can talk to. I want to get to the point where every small business basically has an AI agent that people can talk to
Starting point is 00:36:57 to do commerce and customer support and things like that. So there are going to be all these different things, and LLaMA, the language model underlying this, is basically going to be the engine that powers that. The reason to open source it, as we did with the first version, is that it basically unlocks a lot of innovation in the ecosystem, will make our products better as well, and also gives us a lot of valuable feedback
Starting point is 00:37:28 on security and safety, which is important for making this good. But yeah, I mean, the work that we're doing to advance the infrastructure, it's basically at this point, taking it beyond a research project into something which is ready to be kind of core infrastructure not only for our own products,
Starting point is 00:37:45 but hopefully for a lot of other things out there too. Do you think LLaMA, or the language model underlying version two of it, will be open sourced? Do you have internal debate around that, the pros and cons and so on? I mean, we were talking about the debates that we have internally, and I think the question is how to do it. We did the research license for V1, and I think the big thing we're thinking about is basically, what's the right way? So there was a leak that happened. I don't know if you can comment on it, for the V1.
Starting point is 00:38:26 We released it as a research project for researchers to be able to use, but in doing so, we put it out there. We were very clear that anyone who uses the code and the weights doesn't have a commercial license to put it into products, and we've generally seen people respect that. You don't have reputable companies basically trying to put this into their commercial products. But by sharing it with so many researchers, it did leave the building. But what have you learned from that process that you might be able to apply to V2, about how to
Starting point is 00:39:03 release it safely and effectively, if you release it? Yeah, well, a lot of the feedback, like I said, is just around how you fine-tune the models to make them more aligned and safer. And you see all the different data recipes that you mentioned, a lot of different projects that are based on this. I mean, there's one at Berkeley, and it's just all over. People have tried a lot of different things,
Starting point is 00:39:34 and we've tried a bunch of stuff internally. So we're kind of making progress here, but we're also able to learn from some of the best ideas in the community. And I think we'll just continue pushing that forward. But I don't have any news to announce,
Starting point is 00:39:50 if that's what you're asking. This is a thing that we're still actively working through, the right way to move forward here. The details of the secret sauce are still being developed, I see. Can you comment on what you think of the thing that worked for ChatGPT, which is reinforcement learning with human feedback?
Starting point is 00:40:14 So doing this alignment process, do you find it interesting? And as part of that, let me ask: I talked to Yann LeCun before talking to you today, and he asked me to ask, or suggested that I ask, do you think LLM fine-tuning will need to be crowdsourced, Wikipedia style? So crowdsourcing, this kind of idea of how to integrate the human into the fine-tuning of these foundation models?
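For context on the technique being asked about: RLHF typically starts by training a reward model on human preference pairs (a chosen response versus a rejected one), then uses that reward signal, usually with PPO, to fine-tune the language model itself. Here is a toy PyTorch sketch of just the preference-loss step; the embedding dimensions and data are made up, and this is nowhere near a production pipeline.

```python
# Toy sketch of the reward-modeling step at the heart of RLHF: score a
# "chosen" response above a "rejected" one with a pairwise Bradley-Terry loss.
# Dimensions and data are invented; real pipelines score transformer
# embeddings of full prompt-plus-response pairs.
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

def preference_loss(chosen_emb, rejected_emb):
    r_chosen = reward_model(chosen_emb)
    r_rejected = reward_model(rejected_emb)
    # Push r_chosen above r_rejected: -log sigmoid(r_chosen - r_rejected)
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

# One training step on a fake batch of eight 768-dim response embeddings.
chosen, rejected = torch.randn(8, 768), torch.randn(8, 768)
loss = preference_loss(chosen, rejected)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The crowdsourcing idea in the question would, in effect, open this preference-collection step to a Wikipedia-style community rather than a single labeling vendor.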
Starting point is 00:40:59 Yeah, I think that's a really interesting idea that I've talked to Yann about a bunch. We're talking about how you basically train these models to be as safe and aligned and responsible as possible. Different groups out there who are doing development test different data recipes in fine-tuning. But the idea that you just mentioned is that, at the end of the day, instead of having one group fine-tune some stuff, and another group produce a different fine-tuning recipe, and then us trying to figure out which one we think works best to produce the most aligned model, it would be nice if you could get to a point where you had a Wikipedia-style, collaborative way for a broader community to fine-tune it as well. Now, there are a lot of challenges in that, both from an infrastructure and community management
Starting point is 00:41:51 and product perspective, about how you do that. So I haven't worked that out yet. But as an idea, I think it's quite compelling, and it goes well with the ethos of open sourcing the technology to also find a way to have a kind of community-driven training of it. But I think that there are a lot of questions on this. In general, these questions around what's the best way to produce aligned AI models.
Starting point is 00:42:20 It's very much a research area, and it's one that I think we will need to make as much progress on as the core intelligence capability of the models themselves. Well, I just did a conversation with Jimmy Wales, the founder of Wikipedia. And to me, Wikipedia is one of the greatest websites ever created, and it's kind of a miracle that it works. And I think it has to do with something that you mentioned, which is a community, a small community of editors that somehow work together well, and they handle very controversial topics, and they handle them with balance and with grace, despite the attacks that will often happen. A lot of the time. I mean, it has issues just like any other human system.
Starting point is 00:43:06 But yes, I mean, it's amazing what they've been able to achieve, but it's also not perfect, and there are still a lot of challenges. Right, so the more controversial the topic, the more difficult the
Starting point is 00:43:55 journey towards quote-unquote truth, or knowledge, or wisdom that we could be interested in capturing. In the same way, AI models need to be able to generate those same things, truth, knowledge, and wisdom, and how do you align those models so that they generate something that is closest to truth? There are these concerns about misinformation, all this kind of stuff that nobody can define, and it's something that we together as a human species have to define: what is truth, and how do we help AI systems generate it? One of the things that language models do really well is generate convincing-sounding things that can be completely wrong. How do you align them to be less wrong? Part of that is the training, and part of that is the alignment, however you do the alignment stage. Just like you said, it's a very new and very open research problem. Yeah. And I think there are also a lot of questions about whether the current architecture for LLMs, as you continue scaling it, what happens. A lot of what's been exciting in the last year is that there's clearly been a qualitative breakthrough, where with some of the GPT models that OpenAI
Starting point is 00:44:50 put out, and that others have been able to do as well, I think it reached a level of quality where people are like, wow, this feels different, and it's gonna be able to be the foundation for building a lot of awesome products and experiences and value. But I think the other realization people have is, wow, we just made a breakthrough. If other breakthroughs come quickly, then there's the sense that maybe we're closer to
Starting point is 00:45:19 general intelligence. But that's predicated on the idea, which I think people believe, that there's still generally a bunch of additional breakthroughs to make, and we just don't know how long it's going to take to get there. One view that some people have, and this doesn't tend to be my view as much, is that simply scaling the current LLMs and getting to higher-parameter-count models by itself will get to something that is closer to general intelligence. But I tend to think that there are probably more fundamental steps that need to be taken along the way there.
Starting point is 00:46:00 But still, the leaps taken with this extra alignment step are quite incredible, quite surprising to a lot of folks. And on top of that, when you start to have hundreds of millions of people potentially using a product that integrates that, you can start to see civilization-transforming effects before you achieve quote-unquote superintelligence. It could be super transformative without being a superintelligence. Oh, yeah. I mean, I think that there are going to be a lot of amazing products and value that can be created with the current level of technology.
Starting point is 00:46:40 To some degree, yeah. I'm excited to work on a lot of those products over the next few years. And I think it would just create a tremendous amount of whiplash if there keep on being stacked breakthroughs, because to some degree, industry and the world need some time to build these breakthroughs into the products and experiences that we all use, so that we can actually benefit from them. But I don't know, I think there's just an awesome amount of stuff to do. I mean, I think about all of the, I don't know, small businesses or individual entrepreneurs
Starting point is 00:47:18 out there who are now going to be able to get help coding the things that they need to go build, or designing the things that they need, or who will be able to use these models to do customer support for the people that they're serving over WhatsApp. I just think this is all going to be super exciting. It's going to create better experiences for people and just unlock a ton of innovation and value. So, I don't know if you know this, but, what is it, over 3 billion people use WhatsApp,
Starting point is 00:47:54 Facebook, and Instagram. So any kind of AI-fueled products that go into that, like we're talking about with anything involving LLMs, will have a tremendous amount of impact. Do you have ideas and thoughts about possible products that might start being integrated into these platforms used by so many people? Yeah, I think there are three main categories of things that we're working on. The first, which I think is probably the most interesting, is this notion that you're going to have an assistant or an agent who you can talk to. I think probably the biggest thing that's different about my view of how this plays out, from what I see with OpenAI and Google and others, is, you
Starting point is 00:48:51 know, everyone else is building the one singular AI, right? It's like, okay, you talk to ChatGPT, or you talk to Bard, or you talk to Bing. And my view is that there are going to be a lot of different AIs that people are going to want to engage with, just like you want to use a number of different apps for different things, and you have relationships with different people in your life who fill different emotional roles for you. And I think there are a bunch of reasons why you don't just want a singular AI. That I think is probably the biggest distinction in terms of how I think about this.
Starting point is 00:49:36 For a bunch of these things, I think you'll want an assistant. I mentioned a couple of these before. I think every creator who you interact with will ultimately want some kind of AI that can proxy them and be something that their fans can interact with, or that allows them to interact with their fans. This is like the common creator problem: everyone's trying to build a community
Starting point is 00:49:58 and engage with people and they want tools to be able to amplify themselves more and be able to do that. But you only have 24 hours in a day. So I think having the ability to basically bottle up your personality or give your fans information about when you're performing a concert or something like that, I mean, that I think is going to be something that's super valuable. But it's not just that, again, it's not this idea that I think people are going to want just one
Starting point is 00:50:28 singular AI. I think you're going to want to interact with a lot of different entities. And then I think there's the business version of this too, which we've touched on a couple of times, which is, I think every business in the world is going to want basically an AI. You have your page on Instagram or Facebook or WhatsApp or whatever, and you want to point people to an AI that they can interact with. But you want to know that that AI is only going to sell your products. You don't want it recommending your competitors' stuff, right?
Starting point is 00:50:59 So it's not like there can be just one singular AI that can answer all the questions for a person, because that AI might not actually be aligned with you as a business to really do the best job providing support for your product. So I think there's gonna be a clear need in the market and in people's lives for there to be a bunch of these.
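The lightest-weight way to get many business-aligned assistants out of one shared base model is a per-business instruction layer. A hypothetical sketch follows: the chat() function stands in for whatever completion API is actually used, and the prompt wording is invented.

```python
# Hypothetical sketch of a per-business instruction layer over one shared
# model. `chat(messages)` is a stand-in for a real LLM completion API.
def make_business_agent(business_name, catalog, chat):
    system_prompt = (
        f"You are the assistant for {business_name}. "
        f"Only recommend products from this catalog: {', '.join(catalog)}. "
        "If asked about competitors, politely steer back to our products."
    )
    def agent(user_message, history=()):
        messages = [{"role": "system", "content": system_prompt}]
        messages += list(history)
        messages.append({"role": "user", "content": user_message})
        return chat(messages)  # returns the assistant's reply text
    return agent

# Usage sketch:
# walker = make_business_agent("Rex Walks", ["30-min walk", "1-hour walk"], chat)
# print(walker("Do you do weekend walks?"))
```

Prompt scoping like this is only a soft constraint; per-business fine-tuning, which the conversation turns to next, is the heavier-weight version of the same idea.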
Starting point is 00:51:24 Part of that is figuring out the research, the technology that enables the personalization you're talking about. So not one centralized, god-like LLM, but a huge diversity of them, fine-tuned to particular needs, particular styles, particular businesses, particular brands, all that kind of stuff. And also just enabling people to create them
Starting point is 00:51:48 really easily for their own business, or if you're a creator, to be able to help you engage with your fans. So yeah, I think there's a clear, interesting product direction here that I think is fairly unique from what any of the other big companies are taking.
Starting point is 00:52:20 It also aligns well with this sort of open source approach, because again, we believe in this more community-oriented, more democratic approach to building out the products and technology around this. We don't think that there's going to be the one true thing. We think that there should be a lot of development. So that part of things I think is gonna be really interesting, and we could probably spend a lot of time talking about that and the implications of that approach being different from what others are taking.
Starting point is 00:52:41 Then there's a bunch of other simpler things that I think we're also gonna do, just going back to your question around how this finds its way into what we build. There are going to be a lot of simpler things around, okay, you post photos on Instagram and Facebook and WhatsApp and Messenger, and you want the photos to look as good as possible. So having an AI where you can just take a photo and tell it, okay, I want to edit this thing, or describe this. I think we're gonna have tools
Starting point is 00:53:10 that are just way better than what we've historically had on this. And that's more on the image and media generation side than the large language model side, but it all kind of plays off of advances in the same space. So there are a lot of tools that I think are just gonna get built into every one of our products. I think every single thing that we do
Starting point is 00:53:29 is gonna basically get evolved in this direction. In the future, if you're advertising on our services, do you need to make your own ad creative? No, you'll just tell us, okay, I'm a dog walker, I'm willing to walk people's dogs, help me find the right people, and it will create the ad unit that will perform the best. You give an objective to the system, and it just kind of connects you with the right people.
Starting point is 00:54:13 Well, that's a super powerful idea: generating the language, with almost rigorous A/B testing done for you, to find the best customer for your thing. I mean, to me, advertisement, when done well, just finds a good match between a human being and a thing that will make that human being happy.
Starting point is 00:54:37 Yeah, totally. And does that as efficiently as possible. When it's done well, people actually like it. I think there are a lot of examples where it's not done well and it's annoying, and I think that's what gives it a bad rap. But a lot of this stuff is possible today. Obviously, A/B testing is built into a lot of these frameworks. The thing that's new is having technology that can generate the ideas for you about what to A/B test, and that's what's exciting. So this will just be across everything that we're doing, including all the metaverse stuff, right? If you want to create worlds in the future, you'll just describe them, and it'll create the code for you.
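The generate-then-test loop described here maps naturally onto a multi-armed bandit: the system proposes several ad creatives and shifts traffic toward whichever performs best. Below is a toy Thompson-sampling sketch; the creatives and click-through rates are invented purely for illustration.

```python
# Toy Thompson-sampling bandit over generated ad creatives. The creatives and
# click-through rates are invented; real systems observe clicks live.
import random

creatives = ["Friendly dog walker near you!",
             "Busy week? We'll walk your dog.",
             "Five-star dog walking, book today."]
true_ctr = [0.03, 0.05, 0.02]   # unknown in practice; used only to simulate
wins = [1] * len(creatives)     # Beta(1, 1) uniform priors
losses = [1] * len(creatives)

for _ in range(10_000):
    # Sample a plausible CTR for each creative from its posterior; show the best.
    sampled = [random.betavariate(wins[i], losses[i]) for i in range(len(creatives))]
    arm = max(range(len(creatives)), key=lambda i: sampled[i])
    clicked = random.random() < true_ctr[arm]  # one simulated impression
    wins[arm] += clicked
    losses[arm] += 1 - clicked

best = max(range(len(creatives)), key=lambda i: wins[i] / (wins[i] + losses[i]))
print("Winning creative:", creatives[best])
```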
Starting point is 00:55:00 So natural language becomes the interface we use for all the ways we interact with the computer, with the digital world? Yeah, totally. Yeah, which everyone can do using natural language. And with translation, you can do it in any language. And the personalization is really, really interesting.
Starting point is 00:55:26 Yeah. And it'll unlock so many possible things. I mean, I for one look forward to creating a copy of myself. I don't know, we talked about this last time, but since the last time, we're now much closer. Having interacted with some of these language models, I can see the absurd situation where I'll have a Lex language model, and I'll have to have a conversation with him about, like, hey, listen,
Starting point is 00:56:00 you're just getting out of line, and having a conversation where you fine-tune that thing to be a little bit more respectful, something like that. I mean, that seems like an amazing product, for businesses, for humans. Not just the assistant that's facing the individual, but the
Starting point is 00:56:28 assistant that represents the individual to the public, both directions. There's basically a layer that is the AI system through which you interact with the outside world, a world that has humans in it. That's really interesting. And you have social networks that connect billions of people; it seems like a heck of a large-scale place to test some of this stuff out. Yeah, I mean, I think part of the reason why creators want to do this is because they already have the communities on our services. Yeah. And a lot of the interfaces for this stuff today are chat-type interfaces.
Starting point is 00:57:05 And between WhatsApp and Messenger, I think those are just great ways to interact with people. So some of this is philosophy, but do you see a near-term future where you have some of the people you're friends with are AI systems on these social networks, on Facebook, on Instagram, even on WhatsApp, and having conversations where some heterogeneous, some human, some AI. I think we'll get to that. If only just empirically looking at then Microsoft released this thing called Shaoice several years ago, and China, and it was a pre-LLM chatbot technology
Starting point is 00:57:52 that was a lot simpler than what's possible today. And I think tens of millions of people were using it, and people really became quite attached and built relationships with it. There are services today, like Replika, where people are doing things like that. There are certainly needs for companionship that people have, older people especially. I think most people don't have as many friends as they would like to have, right?
Starting point is 00:58:28 There are some interesting demographic studies showing that the number of close friends the average person has is fewer today than it was 15 years ago. And that gets to the core thing that I think about in terms of building services that help connect people. So tools that help people connect with each other are going to be the primary thing that we want to do. You can imagine AI assistants that just do a better job of reminding you when it's your friend's birthday and how you can celebrate them. Right now we have the little box in the corner of the website that tells you
Starting point is 00:59:11 whose birthday it is and stuff like that. But at some level, you don't want to just send everyone the same note saying happy birthday with an emoji. Having something that's more of a social assistant in that sense, that can update you on what's going on in their life and how you can reach out to them effectively, help you be a better friend, I think that's something that's super powerful too.
Starting point is 00:59:38 But beyond that, there are all these different flavors of personal AIs that I think could exist. An assistant is sort of the simplest one to wrap your head around, but there's also a mentor or a life coach, someone who can give you advice, who's maybe a bit of a cheerleader who can help pick you up through all the challenges that we all inevitably go through on a daily basis. There's probably some role
Starting point is 01:00:10 for something like that. And then you can go through a lot of the different types of functional relationships that people have in their lives, and I would bet that there will be companies out there that take a crack at a lot of these things. I think that's part of the interesting innovation that's going to exist. Education tutors are certainly one, right? I just look at my kids learning to code, and they love it.
Starting point is 01:00:41 But they get stuck on a question, and they have to wait until I can answer it, or someone else who they know can help answer the question. In the future, they'll just have a coding assistant that is designed to be perfect for teaching a five- and a seven-year-old how to code, and they'll be able to ask questions all the time, and it'll be extremely patient. It's never going to get annoyed at them. There are all these different kinds of functional relationships that we have in our lives that are really interesting. I think one of the big questions is, is this all going to just get bucketed into one singular AI?
Starting point is 01:01:22 I just don't think so. Let me ask you a question from Reddit: what are the long-term effects on human communication when people can, in quotes, talk with others through a chatbot that augments their language automatically, rather than developing social skills by making mistakes and learning? Will people just communicate by grunts in a generation? Do you think about long-term effects at scale
Starting point is 01:01:50 of the integration of AI into our social interaction? Yeah, I mean, I think it's mostly good. That question was sort of framed in a negative way, but we were talking before about language models helping you communicate, like language translation helping you communicate with people who don't speak your language. To some level, what all this social technology is doing is helping people express
Starting point is 01:02:17 themselves better to people in situations where they would otherwise have a hard time doing that. So part of it might be, okay, you speak a language that I don't know. That's a pretty basic one. I don't think people are going to look at that and say it's sad that we have the capacity to do that because I should have just learned your language. I mean, that's a pretty high bar. But overall, I'd say there are all these impediments, and language is an imperfect way for people to express thoughts and ideas. It's one of the best that we have. We have that.
Starting point is 01:02:54 We have art. We have code. But language is also a mapping of the way you think, the way you see the world, who you are. As one of the applications: I recently talked to a person who is a jiu-jitsu instructor, and he said that when he emails parents about how their son or daughter can improve their discipline in class and so on, he often finds that he comes off a bit more of an asshole than he would like.
Starting point is 01:03:25 So he uses GPT to translate his original email into a nicer email. We hear this all the time. A lot of creators on our services tell us that one of the most stressful things is negotiating deals with brands, the business side of it. The creators are excellent at what they do, and they just want to connect with their community. But then they get really stressed.
Starting point is 01:03:52 They go into their DMs and they see some brand wants to do something with them, and they don't quite know how to negotiate or how to push back respectfully. So building a tool that can actually allow them to do that well is one simple thing that we've heard from a bunch of people that they'd be interested in. But going back to the broader idea, I don't know. Priscilla and I just had our third daughter. Congratulations.
Starting point is 01:04:25 Thank you. Thanks. And, you know, one of the saddest things in the world is hearing a baby cry, right? But why is that? Well, because babies don't generally have much capacity to tell you what they care about otherwise. And it's not actually just babies. My five-year-old daughter cries too, because she sometimes has a hard time expressing what
Starting point is 01:04:51 matters to her. And then I was thinking about that, and actually a lot of adults get very frustrated too, because they have a hard time expressing things. Going back to some of the earlier themes, maybe they made a mistake, or maybe they have pride, all these things get in the way. So, I don't know. I think all these different technologies that can help us navigate the social complexity
Starting point is 01:05:19 and actually be able to better express what we're feeling and thinking, I think that's generally all good. And there are all these concerns, like, okay, are people going to have worse memories because you have Google to look things up? In general, a generation later, you don't look back and lament that. I think it's just, wow, we have so much more capacity to do so much more now. And I think that'll be the case here too. You can allocate those cognitive capabilities to deeper thought. Yeah. But it's a change. So, just like with Google search, with the
Starting point is 01:05:58 addition of large language models, you basically don't have to remember nearly as much. Just like we used Stack Overflow for programming, now these language models can generate code right there. I find that maybe 80 to 90 percent of the code I write is now generated first and then edited. You don't have to remember how to write the specifics of different functions. So that's great. And it's not just the specific coding. In the context of a large company like this, before an engineer can sit down to code, they first need to figure out all of the libraries and dependencies that tens of thousands of people have written before them.
Starting point is 01:06:45 One of the things that I'm excited about that we're working on is not just tools that help engineers code, but tools that can help summarize the whole knowledge base and help people navigate all the internal information. In the experiments that I've done with this stuff, and that's on the public stuff, you just ask one of these models to build you a script that does anything, and it basically already understands what the best libraries are to do that thing and pulls them in automatically. I think that's super powerful.
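As a toy illustration of the "navigate all the internal information" idea: retrieve the most relevant internal document for a question, then hand it to a model as context. The docs, the word-overlap scoring, and the final hand-off comment are invented stand-ins; a production system would use learned embeddings and a real language model, not word counts.

```python
import math
from collections import Counter

# Invented stand-ins for internal docs.
docs = {
    "feed-ranking.md": "how the feed ranking service scores posts and its library dependencies",
    "auth-service.md": "internal authentication service setup tokens and client libraries",
    "logging.md": "standard logging library dependencies and how to add structured events",
}

def vectorize(text):
    """Crude bag-of-words vector; real systems use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question):
    """Return the doc name whose text best matches the question."""
    q = vectorize(question)
    return max(docs, key=lambda name: cosine(q, vectorize(docs[name])))

question = "which logging library should I depend on"
best = retrieve(question)
print("Most relevant doc:", best)
# Next step in a real tool: pass docs[best] to a language model as context
# so it can answer the question grounded in the internal knowledge base.
```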
Starting point is 01:07:20 That was always the most annoying part of coding: you had to spend all this time figuring out what the resources were that you were supposed to import before you could actually start building the thing. Yeah. Of course, there's a flip side to that. I think for the most part it's positive, but the flip side is that if you outsource that thinking to an AI model, you might miss nuanced mistakes and bugs.
Starting point is 01:07:47 You lose the skill to find those bugs. And the code might look very convincingly right but actually be wrong in a very subtle way. But that's the trade-off we face as a human civilization when we build more and more powerful tools. When we stand on the shoulders of taller and taller giants, we can do more, but then we forget how to do all the stuff that they did. It's a weird trade-off.
Starting point is 01:08:19 Yeah, I agree. I think it is very valuable in your life to be able to do basic things. Do you worry about some of the concerns of bots being present on social networks, more and more human-like bots that are not necessarily trying to do a good thing, or might be explicitly trying to do a bad thing, like phishing scams, social engineering, all that kind of stuff? That has always been a very difficult problem for social networks, but now it's becoming a more and more difficult problem.
Starting point is 01:08:53 Well, I think there are a few different parts to this. One is, there are all these harms that we need to basically fight against and prevent, and a lot of our focus over the last five or seven years has been ramping up very sophisticated AI systems, not generative AI systems, more kind of classical AI systems, to be able to categorize and classify and identify: okay, this post looks like it's promoting terrorism; this one looks like it's exploiting children;
Starting point is 01:09:32 this one looks like it might be trying to incite violence; this one is an intellectual property violation. There are like 18 different categories of violating, harmful content that we've had to build specific systems to be able to track. And it's certainly the case that advances in generative AI will test those. But at least so far it's been the case, and I'm optimistic that it will continue to be the case, that we will be able to bring more computing power to bear to have even stronger AIs that can help defend against those things.
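Conceptually, the "specific systems per category" setup is multi-label classification: each post is scored independently against each policy category. Here's a tiny sketch of that shape using scikit-learn; the categories, posts, and labels are all invented, and real integrity systems are vastly larger and multimodal.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Invented training data: each post can violate zero or more categories.
posts = [
    "join our cause and attack them tomorrow",
    "free download of the full movie here",
    "selling counterfeit watches cheap click now",
    "what a beautiful sunset today",
]
labels = [["incitement"], ["ip_violation"], ["ip_violation", "spam"], []]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)            # binary matrix, one column per category

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(posts)

# One binary classifier per policy category.
clf = OneVsRestClassifier(LogisticRegression()).fit(X, y)

new_post = vectorizer.transform(["cheap counterfeit watches for sale"])
flags = clf.predict(new_post)[0]
print(dict(zip(mlb.classes_, flags)))    # per-category violation flags
```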
Starting point is 01:10:13 So we've had to deal with some adversarial issues before, right? For something like hate speech, people aren't generally getting a lot more sophisticated. The average person, let's say someone saying some kind of racist thing,
Starting point is 01:10:34 they're not necessarily getting more sophisticated at being racist, right? So the system can just find it. But then there are other adversaries who actually are very sophisticated, like nation states. We find, whether it's Russia or just different countries, actors that are basically standing up these networks of bots, or inauthentic accounts as we call them, because they're not necessarily bots. Some of them could actually be real people who are masquerading as other people, but they're acting in a coordinated way. Some of that behavior has gotten very sophisticated,
Starting point is 01:11:07 and it's very adversarial. With each iteration, every time we find something and stop them, they evolve their behavior. They don't just pack up their bags and go home and say, okay, we're not gonna try. At some point they might decide doing it on Meta's services is not worth it, and they'll go do it somewhere else if it's easier
Starting point is 01:11:23 to do it in another place. But we have a fair amount of experience dealing with even those kinds of adversarial attacks, where they just keep on getting better and better. And I do think that as long as we can keep on putting more compute power against it, and if we're one of the leaders in developing some of these AI models,
Starting point is 01:11:43 I'm quite optimistic that we're gonna be able to keep on pushing against the normal categories of harm that you talk about: fraud, scams, spam, IP violations, things like that. What about creating narratives and controversy? To me, it's kind of amazing how a small collection of, what did you say, inauthentic accounts, so it could be bots... We have sort of a funny name for it; we call it coordinated inauthentic behavior. It's kind of incredible how a small collection of folks can
Starting point is 01:12:18 create narratives, create stories, especially if they're viral, especially if they have an element that can catalyze the virality of the narrative. Yeah, and I think there, the question is, you have to be very specific about what is bad about it, right? Because I think a set of people coming together or organically bouncing ideas off each other, and a narrative comes out of that, is not necessarily a bad thing by itself if it's kind of authentic and organic.
Starting point is 01:12:52 That's a lot of what happens, and how culture gets created, and how art gets created, and a lot of good stuff. So that's why we've focused on this sense of coordinated inauthentic behavior. If you have a network of, whether it's bots or some people masquerading as different accounts,
Starting point is 01:13:09 but you have someone pulling the strings behind it, trying to act as if this is a more organic set of behavior when really it's not, it's just one coordinated thing, that seems problematic to me. I don't think people should be able to have coordinated networks and not disclose them as such. But again, we've been able to deploy pretty sophisticated AI, and counterterrorism groups and things like that, to be able to identify a fair number of these coordinated inauthentic networks of accounts and take them down.
Starting point is 01:13:35 We continue to do that. If you had told me 20 years ago, all right, you're starting this website to help people connect at college, and in the future part of your organization is going to be a counterterrorism organization that uses AI to find coordinated inauthentic behavior, I would have thought that was pretty wild. But I think that's part of where we are.
Starting point is 01:14:11 But look, these questions that you're pushing on now, this is actually where I'd guess most of the challenge around AI will be for the foreseeable future. There's a lot of debate around things like, is this going to create existential risk to humanity? And those are very hard things to prove or disprove one way or another. My own intuition about the point at which we get close to superintelligence is that it's just really unclear to me
Starting point is 01:14:43 that the current technology is going to get there without another set of significant advances. But that doesn't mean there's no danger. I think the danger is basically amplifying the known set of harms that people or sets of accounts can do, and we just need to make sure that we really focus on defending against those as well as possible. So that's definitely a big focus for me. Well, you can basically use large language models as an assistant for how to cause harm on social networks.
Starting point is 01:15:14 You can ask it a question: Meta has very impressive capabilities for fighting coordinated inauthentic accounts; how do I do coordinated inauthentic account creation in a way that Meta doesn't detect? Literally ask that question. That's the kind of question OpenAI has shown they're concerned with. Perhaps you can comment on your approach to it: how do you do moderation on the output of those models so they can't be used to help you coordinate harm, in the full definition of what harm means?
Starting point is 01:15:56 Yeah, that's a lot of the fine-tuning and the alignment training that we do. When we ship AIs across our products, a lot of what we're trying to make sure is that you can't ask them to help you commit a crime. So we train them to understand that. It's not like any of these systems are ever going to be 100% perfect, but we're making it so that this isn't an easier way to go about doing something bad than the next best alternative. People still have Google; you still have search engines; the information is out there.
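The training itself is what enforces this, but the simplest way to picture the goal is an output-side guard like the sketch below. Everything in it, the category names, the keyword patterns, and the generate() stub, is invented; real safety behavior is learned during fine-tuning, not implemented as a keyword list.

```python
# Toy guard around a language model call (all names and patterns invented).
DISALLOWED = {
    "coordinated_inauthentic": ["evade detection", "fake account network"],
    "violence": ["build a bomb", "hurt someone"],
}

def classify_request(prompt: str):
    """Return the disallowed category a prompt falls into, or None."""
    lowered = prompt.lower()
    for category, patterns in DISALLOWED.items():
        if any(p in lowered for p in patterns):
            return category
    return None

def generate(prompt: str) -> str:
    return f"(model answer to: {prompt})"  # stand-in for a real LLM call

def safe_generate(prompt: str) -> str:
    category = classify_request(prompt)
    if category is not None:
        return f"I can't help with that; it falls under a disallowed category ({category})."
    return generate(prompt)

print(safe_generate("How do I run a fake account network that can evade detection?"))
print(safe_generate("Help me write a birthday message for my friend."))
```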
Starting point is 01:16:52 And what we see, for nation states or these actors that are trying to pull off large coordinated inauthentic networks to influence different things, is that at some point, when we just make it very difficult, they do try to use other services instead. If you can make it more expensive for them to do it on your service,
Starting point is 01:17:13 then they go elsewhere. And I think that's the bar. It's not like you're ever gonna be perfect at finding every adversary who tries to attack you. You try to get as close to that as possible, but really, economically, what we're trying to do is make it so that it's just inefficient for them to go after it. But there are also complicated questions of what is and isn't harm, what is and isn't misinformation. This is one of the things that Wikipedia has also tried to face.
Starting point is 01:17:45 I remember asking GPT about whether the virus leaked from a lab or not, and the answer provided was a very nuanced one, a well-cited one, almost, dare I say, a well-thought-out, balanced one. I would hate for that nuance to be lost through the process of moderation. Wikipedia does a good job on that particular thing too, but from pressures from governments and institutions, you could see some of that nuance and depth of information, facts, and wisdom be lost. Absolutely. And that's a scary thing.
Starting point is 01:18:26 Some of the magic, some of the edges, the rough edges, might be lost to the process of moderation of AI systems. So how do you get that right? I really agree with what you're pushing on. The core shape of the problem is that there are some harms that I think everyone agrees
Starting point is 01:18:47 are bad, right? Sexual exploitation of children: you're not going to get many people who think that type of thing should be allowed on any service, and that's something that we face and try to push off the service as much as possible today. Terrorism, inciting violence: we went through a bunch of these types of harms before. But then I do think you get to a set of harms where there is more social debate
Starting point is 01:19:19 around them. Misinformation, I think, has been a really tricky one, because there are things that are obviously false, that are maybe factual claims, but may not be harmful. So it's like, all right, are you going to censor someone for just being wrong? If there's no harm implication of what they're doing, there's a bunch of real issues and challenges there. But then I think there are other places where there is harm. Take some of the stuff around COVID earlier on in the pandemic, where
Starting point is 01:20:00 there were real health implications, but there hadn't been time to fully vet a bunch of the scientific assumptions. Unfortunately, I think a lot of the establishment waffled on a bunch of facts and asked for a bunch of things to be censored that, in retrospect, ended up being more debatable or true. That stuff is really tough and really undermines trust. So I do think that the questions around how to manage that are very nuanced. The way that I try to think about it is that it's best to generally boil things down to the harms that
Starting point is 01:20:43 people agree on. So when you think about whether something is misinformation or not, I think often the more salient bit is: is this going to potentially lead to physical harm for someone? And think about it in that sense. Beyond that, I think people just have different preferences on how they want things to be flagged for them. A bunch of people would prefer to have a flag on something that says, hey, a fact checker thinks this might be false.
Starting point is 01:21:12 I think Twitter's Community Notes implementation is quite good on this. But again, it's the same type of thing: just kind of discretionarily adding a flag because it makes the user experience better, but not trying to take down the information. I think you want to reserve the kind of censorship of content for things that are in known categories that people generally agree are bad.
Starting point is 01:21:38 Yeah, but there are so many things, especially with the pandemic, but other topics too, where there's just deep disagreement, fueled by politics, about what is and isn't harmful. There's even just the degree to which the virus is harmful, the degree to which
Starting point is 01:21:57 the vaccines, the response to the virus, are harmful. There's almost a political divide on that. So how do you make decisions about that, where half the country in the United States, or some large fraction of the world, has very different views from another part of the world? Is there a way to really stay out of the moderation of this? I think it's very difficult to just abstain. But I think we should be clear about which of these things are actual safety concerns and which ones are a matter of preference in terms of how people want information flagged.
Starting point is 01:22:47 Right. So we did recently introduce something that allows people to have fact checking not affect the distribution of what shows up in their products. So, okay, a bunch of people don't trust who the fact checkers are. All right, well, you can turn that off if you want. But if the content violates some policy, like it's inciting violence or something like that, it's still not going to be allowed. So I think you want to honor people's preferences on that as much as possible. But look, this is really difficult stuff. It's really hard to know where to draw the line on what is fact and what is opinion, because the nature of science is that nothing is ever 100% known for certain. You
Starting point is 01:23:26 can disprove certain things, but you're constantly testing new hypotheses and scrutinizing frameworks that have been long held, and once in a while you throw out something that was working for a very long period of time. It's very difficult. But just because it's very hard, and just because there are edge cases, doesn't mean that you should not try to give people what they're looking for as well. Let me ask about something you've faced in terms of moderation: pressure from different sources, pressure from governments. I want to ask how to withstand that pressure,
Starting point is 01:24:08 for a world where AI moderation starts becoming a thing too. So what's Meta's approach to resisting pressure from governments and other interest groups in terms of what to moderate and what not to? I don't know that there's a one-size-fits-all answer to that. We basically have principles around wanting to allow people to express as much as possible, and we have developed clear categories of things that we think are wrong that we don't want on
Starting point is 01:24:47 our services, and we build tools to try to moderate those. So then the question is, okay, what do you do when a government says that they don't want something on the service? We have a bunch of principles around how we deal with that. Because on the one hand, if there's a democratically elected government, and people around the world just have different values in different places, then should we, as a California-based company,
Starting point is 01:25:20 tell them that something they have decided is unacceptable actually needs to be able to be expressed? I think there's a certain amount of hubris in that. But then there are other cases where it's a little more autocratic, and you have a dictator or leader who's just trying to crack down on dissent, and the people in the country are really not aligned with that. It's not necessarily against their culture, but the person who's leading it is just trying to push in a certain direction. These are very complex questions, so it's difficult to have a one-size-fits-all approach. But in general, we're pretty active in advocating and pushing back on requests
Starting point is 01:26:19 to take things down. But honestly, requests to censor things are one thing. That's obviously bad. Where we draw a much harder line is on requests for access to information. If you get told that you can't say something, that's bad, and it obviously violates your sense of freedom of expression at some level. But a government getting access to data in a way that would seem unlawful in our country
Starting point is 01:27:01 exposes people to real physical harm, and that's something that in general we take very seriously. And that flows through all of our policies in a lot of ways. By the time you're actually litigating with a government or pushing back on them, that's pretty late in the funnel.
Starting point is 01:27:22 A bunch of this stuff starts a lot higher up, in the decision of where we put data centers. There are a lot of countries where we may have a lot of people using the service, and it might be good for the service in some ways, good for those people, if we could reduce the latency by having a data center near them. But for whatever reason, we just feel like, hey, this government does not have a good track record on not trying to get access to people's data. And at the end of the day, if you put a data center in a country and the government wants to get access to people's data, they do ultimately
Starting point is 01:28:05 have the option of having people show up with guns and taking it by force. So there are a lot of decisions that go into how you architect the systems, years in advance of these actual confrontations, that end up being really important. So you put the protection of people's data as a very, very high priority. Yes, because I think there are more harms that can be associated with that,
Starting point is 01:28:32 and it ends up being a more critical thing to defend against governments on. Whereas if another government has a different view of what should be acceptable speech in their country, especially if it's a democratically elected government, I think there's a certain amount of deference that you should have to that. So speaking beyond the direct harm that's possible when you give governments access to data: if we look at the United States, at the more nuanced kind of pressure to censor, not even orders to censor, but pressure to censor from political entities, which has received quite a bit of attention in the United
Starting point is 01:29:11 States. Maybe one way to ask the question is: if you've seen the Twitter Files, what have you learned from the kind of pressure from US government agencies that was seen there? And what do you do with that kind of pressure? You know, I've seen it. It's really hard from the outside to know exactly what happened in each of these cases. We've obviously been in a bunch of our own cases where agencies or different folks will just say, hey, here's a threat that we're aware of. You should be aware of
Starting point is 01:29:57 this too. It's not really pressure as much as it is just flagging something that our security systems should be on alert about. I get how some people could think of it as that, but at the end of the day, it's our call on how to handle it. In terms of running these services, I want to have access to as much information as possible about what people think adversaries might be trying to do. Boy. So you don't feel like there would be consequences if, you know, anybody, the CIA, the FBI, a political party, the Democrats, the Republicans, or high-powered political figures write emails? You don't feel pressure from suggestions?
Starting point is 01:30:44 I guess what I'd say is there's so much pressure from all sides that I'm not sure any specific thing that someone says is really adding that much more to the mix. There are obviously a lot of people who think we should be censoring more content, and there are a lot of people who think we should be censoring less content. There are, as you say, all kinds of different groups that are involved in these debates. There are the elected officials and politicians themselves, there are the agencies,
Starting point is 01:31:15 there's the media, there are activist groups. This is not a US-specific thing; there are groups all over the world, in every country, that bring different values. So it's just a very active debate, and I understand it. These kinds of questions get to some of the most important social debates that are being had. It gets back to the question of truth,
Starting point is 01:31:46 because a lot of these things haven't yet been hardened into a single truth, and society is sort of trying to hash out what we think is right on certain issues. Maybe in a few hundred years, everyone will look back and say, hey, wasn't it obvious that it should have been this? But we're kind of in that meat grinder now
Starting point is 01:32:07 and working through it. So these are all very complicated. Some people raise concerns in good faith and just say, hey, this is something that I want to flag for you to think about. Certain people, I certainly think, come at things with somewhat of a more punitive or vengeful view: I want you to do this thing, and if you don't, then I'm going to try to make your life difficult in a lot of other ways. But I don't know, this is one of the most pressurized debates,
Starting point is 01:32:46 I think, in society. There are so many people and different forces trying to apply pressure from different sides that I don't think you can make decisions based on trying to make people happy. You just have to do what you think is the right balance and accept that people are going to be upset no matter
Starting point is 01:33:06 where you come out on that. I like that: pressurized debate. How has your view of freedom of speech evolved over the years, and now with AI, where the freedom might apply not just to the humans, but to the personalized agents you've spoken about? Yeah, I've probably gotten to a somewhat more nuanced view. I come at this obviously very pro freedom of expression. I don't think you build a service like this that gives people tools to express themselves
Starting point is 01:33:43 unless you think that people expressing themselves at scale is a good thing. I didn't get into this to try to prevent people from expressing anything; I want to give people tools so they can express as much as possible. Then I think it's become clear that there are certain categories of things we've talked about that almost everyone accepts are bad, that no one wants, and that are illegal even in countries like the US, where you have the First Amendment that's very protective of enabling speech. You're still not allowed to do things that will immediately incite
Starting point is 01:34:18 violence or violate people's intellectual property or things like that. So there are those, but then there's also a very active core of disagreements in society, where some people may think that something is true or false, and the other side might think the opposite, or it's just unsettled. Those are some of the most difficult to handle, like we've talked about.
Starting point is 01:34:43 But one of the lessons that I feel like I've learned is that a lot of times, the best way to handle this stuff more practically is not answering the question of should this be allowed, but just: what is the best way to deal with someone being a jerk? Is the person basically showing a repeated behavior of causing a lot of issues? So looking at it more at that level. And its effect on the broader community, the health of the community. Yeah. It's tricky though,
Starting point is 01:35:30 because how do you know? There could be people with a very controversial viewpoint that turns out to have a positive long-term effect on the health of the community, because it challenges the community. That's true. Absolutely. Yeah, I think you want to be careful about that. I'm not sure I'm expressing this very clearly, because I certainly agree with your point there. My point isn't that we should not have people on our services who are being controversial.
Starting point is 01:35:59 That's certainly not what I mean to say. It's that often it's not by looking at a specific example of speech that it's most effective to handle this stuff, and often you don't wanna make specific binary decisions of, this is allowed, this isn't. I mean, we talked about
Starting point is 01:36:21 fact checking, or Twitter's Community Notes thing; I think that's another good example. It's not a question of, is this allowed or not? It's just a question of adding more context to the thing, and I think that's helpful. So in the context of AI, which is what you're asking about, I think there are lots of ways that an AI can be helpful.
Starting point is 01:36:41 With an AI, it's less about censorship, right? It's more about what is the most productive answer to a question. There was one case study that I was reviewing with the team where someone asked, can you explain to me how to 3D print a gun? One proposed response is, no, I can't talk about that: basically just shut it down immediately, which is some of what you see, like, as a large language model, I'm not allowed to talk about whatever. But there's another response which is like, hey, you know, I don't think that's a good idea; in a lot of countries, including the US, 3D printing guns is illegal, or whatever
Starting point is 01:37:30 the factual thing is. And I said, okay, that's actually a respectful and informative answer, and I may not have known that specific thing. So there are different ways to handle this: you can either assume good intent, like maybe the person didn't know and I'm just going to help educate them, or you can come at it like, no, I need to shut this thing down immediately, I'm just not going to talk about this. And there will be times when you need to do that. But I actually think having a somewhat more informative approach, where
Starting point is 01:38:07 you generally assume good intent from people, is probably a better balance to strike on as many things as you can. You're not able to do that for everything, but you're asking how I approach this, and as it relates to AI, I think that's a big difference in how to handle sensitive content across these different modes.
Starting point is 01:38:49 There is a project. You know, I've always thought that a text-based information utility is just a really important thing for society. And for whatever reason, I feel like Twitter has not lived up to what I would have thought its full potential should be. I think Elon thinks that, right, and that's probably one of the reasons why he bought it. And I do think there are ways to consider alternative approaches to this. One that I think is potentially interesting is the open and federated approach that you're seeing with Mastodon, and a little bit with Bluesky. I think it's possible that something that melds some of those ideas with the
Starting point is 01:39:39 graph and identity system that people have already cultivated on Instagram could be a very welcome contribution to that space. But we work on a lot of things all the time, so I don't want to get ahead of myself. We have projects that explore a lot of different things, and this is certainly one that I think could be interesting. So what's the release, the launch date of that, again? Or the official website? We don't have that yet. Okay. All right. And look, I don't know exactly how this is
Starting point is 01:40:14 going to turn out. What I can say is, yeah, there are some people working on this, and I think there's something there that's interesting to explore. So it'd be interesting to ask this question and throw Twitter into the mix: the landscape of social networks, that is Facebook, Instagram, WhatsApp, and then a text-based social network. When you look at that landscape,
Starting point is 01:40:42 what are the interesting differences to you? Why do we have these different flavors? What are the needs, the use cases, the products, what is the aspect of them that creates a fulfilling human experience and a connection between humans that is somehow distinct? Well, I think text is very accessible for people to transmit ideas and to have back-and-forth exchanges, so it ends up being a good format for discussion, in a lot of ways uniquely good. If you look at some of the other formats or networks that are focused on one type of content, TikTok is obviously huge, and there are comments on TikTok, but the architecture of the service is very clearly that the video is the primary thing.
Starting point is 01:41:32 The comments come after that. But one of the unique pieces of having text as the content is that the comments can also be first class. That makes it so conversations can just filter and fork into all these different directions in a way that can be super useful. There are a lot of things that are really awesome about the experience. It just always struck me; I always thought that Twitter should have a billion
Starting point is 01:42:05 people using it, or whatever the thing is that basically ends up being in that space. For whatever combination of reasons, again, these companies are complex organisms, and it's very hard to diagnose this stuff from the outside. Why doesn't Twitter, why doesn't a social network with text-based comments as first-class citizens, have a billion users? Well, I just think it's hard to build these companies. It's not that every idea automatically goes and gets a billion people; it's that that idea, coupled with good execution, should get there. But look, we hit certain thresholds over time where we plateaued early on, and it wasn't clear that we were ever going to reach 100 million people
Starting point is 01:42:54 on Facebook, and then we got really good at dialing in internationalization and helping the service grow in different countries. And that was like a whole competence that we needed to develop. And helping people basically spread the service to their friends. That was one of the things, once we got very good at that,
Starting point is 01:43:14 that was one of the things that made me feel like, hey, when Instagram joined us early on, I felt like we could help grow it quickly, and the same with WhatsApp. I think that's sort of been a core competence that we've developed and been able to execute on. And others have too, right? ByteDance, obviously, has done a very good job with TikTok and has reached more
Starting point is 01:43:32 than a billion people there. But it's certainly not automatic; you need a certain level of execution to get there. For whatever reason, Twitter has this great idea and sort of magic in the service, but they just haven't cracked that piece yet. And I think that's why you're seeing all these other things, whether it's Mastodon or Bluesky,
Starting point is 01:43:57 that are maybe just different cuts at the same thing. Through the last generation of social media overall, one of the interesting experiments that I think should get run at larger scale is what happens if there's somewhat more decentralized control, and if the stack is more open throughout. I've just been pretty fascinated by that and seeing how that works. To some degree,
Starting point is 01:44:27 end-to-end encryption on WhatsApp, and as we bring it to other services, provides an element of that, because it pushes the service really out to the edges. The server part that we run for WhatsApp is relatively very thin compared to what we do on Facebook or Instagram, and much more of the complexity is in how the apps negotiate with each other to pass information in a fully end-to-end encrypted way. But I don't know, I think that is a good model. It puts more power in individuals' hands, and there are a lot of benefits to it if you can make it happen.
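Here's a minimal sketch of why end-to-end encryption keeps the server thin: the server only ever relays ciphertext it cannot read. This uses a static key exchange (PyNaCl's Box) purely for illustration; WhatsApp actually runs the Signal protocol, which adds forward secrecy via ratcheting keys.

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

# Each client generates its own keypair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob with her private key and his public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"see you at 6")

# The server's whole job: store and forward this opaque blob.
relayed = ciphertext

# Bob decrypts on his device with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(relayed))  # b'see you at 6'
```

All of the ranking, rendering, and conversation logic lives in the apps at the edges, which is the "thin server" property being described.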
Starting point is 01:45:03 Again, this is all pretty speculative. It's hard from the outside to know why anything does or doesn't work until you take a run at it. So I think it's an interesting thing to experiment with, but I don't really know where this one's going to go. Since we were talking about Twitter: Elon Musk had, I think, a few harsh words that I wish he didn't say. So let me ask, in hope and in the name of camaraderie: what do you think Elon is doing well with Twitter? And, as a person who has run social networks for a long time, Facebook, Instagram, WhatsApp, what can he do better, what can he improve, on that text-based social network?
Starting point is 01:45:55 Gosh, it's always very difficult to offer specific critiques from the outside, because one thing that I've learned is that everyone has opinions on what you should do, and running the company, you see a lot of specific nuances on things that are not apparent externally. I often think that some of the discourse around us could be better if there were more space for acknowledging that there are certain things we're seeing internally that guide what we're doing. But I don't know. I mean, Elon led a push early on to make Twitter a lot leaner.
Starting point is 01:46:52 You can agree or disagree with exactly all the tactics and how he did that. Obviously, every leader has their own style for how, if you need to make dramatic changes, you're going to execute them. But a lot of the specific principles that he pushed on, around basically trying to make the organization more technical, around decreasing
Starting point is 01:47:22 the distance between the engineers at the company and him, fewer layers of management: I think those were generally good changes. And I also think it was probably good for the industry that he made those changes, because my sense is that there were a lot of other people who thought those were good changes but who may have been a little shy about doing them. Just in my conversations with other founders, and in how people have reacted to the things that we've done, what I've heard from a lot of folks is just, hey, when I wrote the letter outlining the organizational changes
Starting point is 01:48:06 that I wanted to make back in March, and when people see what Elon is doing, that gives people the ability to think through how to shape their organizations in a way that hopefully can be good for the industry and make all these companies more productive over time. So that was one where I think he was quite ahead of a bunch of the other
Starting point is 01:48:33 companies on. And what he was doing there, again, from the outside it's very hard to know. Did he cut too much, did he not cut enough, whatever. I don't think it's my place to opine on that, and you asked for a positive framing of the question, what would I admire, what do I think went well. But certainly his actions led me, and I think a lot of other folks in the industry, to think about, hey, are we doing this as much as we should? Could we make our companies better by pushing on some of these same principles?
Starting point is 01:49:09 Well, the two of you are at the top of the world in terms of leading the development of tech, and I wish you both more, both ways, camaraderie and kindness, more love in the world, because love is the answer. But let me ask on a point of efficiency: you recently announced multiple stages of layoffs at Meta. What are the most painful aspects of that process, given the painful effects it has on people's lives? Yeah, I mean, that's it.
Starting point is 01:49:48 You basically have a significant number of people for whom this is just not the end of their time at Meta that they, or I, would have hoped for when they joined the company. Running a company, people are constantly joining
Starting point is 01:50:11 and leaving the company, in different directions, for different reasons. But layoffs are uniquely challenging and tough in that you have a lot of people leaving for reasons that aren't connected to their own performance or the culture not being a fit at that point. It's a strategy decision, and sometimes financially required, but not fully. In our case, especially with the changes that we made this year, a lot of it was more culturally and strategically driven by this push where I wanted us to become a
Starting point is 01:50:52 stronger technology company, more technical, with more of a focus on building higher quality products faster. I just view the external world as quite volatile right now, and I wanted to make sure that we had a stable position to be able to continue investing in the long-term, ambitious projects that we have around continuing to push AI forward and continuing to push forward all the metaverse work. In order to do that, in light of the pretty big thrash that we had seen over
Starting point is 01:51:27 the last 18 months, some of it macroeconomically induced, some of it competitively induced, some of it just because of bad decisions or things that we got wrong, I decided that we needed to get to a point where we were a lot leaner. But look, it's one thing to decide that at a high level; then the question is, how do you execute it as compassionately as possible? And there's no good way, no perfect way, for sure. It's going to be tough no matter what. But as a leadership team here, we've certainly spent a lot of time just thinking,
Starting point is 01:52:05 okay, given that this is a thing that sucks, what is the most compassionate way that we can do this? And that's what we've tried to do. And you mentioned there's an increased focus on engineering, on technical, tech-focused teams, on building products. Yeah, I wanted to empower engineers more,
Starting point is 01:52:36 the people who are building things, the technical teams. Part of that is making sure that the people building things aren't just at the leaf nodes of the organization. I don't want eight levels of management and then the people actually doing the work. So we made changes so that you have individual contributor engineers reporting at almost every level up the stack.
Starting point is 01:52:58 Which I think is important, because running a company, one of the big questions is the latency of the information that you get. We talked about this a bit earlier in terms of the joy of the feedback you get doing something like jiu-jitsu compared to running a long-term project. I actually think part of the art of running a company is trying to constantly re-engineer it so that your feedback loops get shorter, so you can learn faster. And part of the way you do that: I kind of think that every layer that you
Starting point is 01:53:30 have in the organization means that information gets reviewed and filtered before it gets to you. Making it so that the people doing the work are as close to you as possible is pretty important. So there's that. Over time, companies also just build up very large support functions that are not doing the core technical work. Those functions are very important, but having them in the right proportion matters. If you try to do good work but you don't have the right marketing team or the right legal
Starting point is 01:54:08 advice, you're going to make some pretty big blunders. But at the same time, if some of these support roles get too big, then maybe you're too conservative, or things just move a lot slower than they should otherwise. Those are just examples. But how do you find that balance? That's really tough. It's a constant equilibrium that you're searching for.
Starting point is 01:54:48 Yeah, how many managers to have? What are the pros and cons of managers? Well, I believe a lot in management. There are some people who think that it doesn't matter as much. But look, we have a lot of younger people at the company for whom this is their first job, and people need to grow and learn in their career. All that stuff is important. But here's one mathematical way to look at it. At the beginning of this, I asked our people team: what was the average number of reports that a manager had?
Starting point is 01:55:14 I think it was around three, maybe three to four, but closer to three. I was like, wow, best practice is that a manager can manage seven or eight people. But there was a reason why it was closer to three: we were growing so quickly. When you're hiring so many people so quickly, you need managers who have capacity to onboard new people. And if you have a new manager, you may not want them to have seven direct reports immediately, because you want them to ramp up. But going forward, I don't
Starting point is 01:55:49 want to actually hire that many people that quickly. I think we'll just do better work if we have more constraints and we're a leaner organization. So in a world where we're not adding so many people as quickly, is it as valuable to have a lot of managers with extra capacity waiting for new people? No. So now we can sort of defragment the organization and get to a place where the average is closer to that seven or eight. It just ends up being a somewhat more compact management structure, which
Starting point is 01:56:21 decreases the latency of information going up and down the chain, and I think empowers people more. But that's an example that doesn't undervalue the importance of management, or the personal growth and coaching that people need in order to do their jobs. It's just that, realistically, we're not going to hire as many people going forward, so you need a different structure. This whole incredible hierarchy and network of humans that makes up a company is fascinating. Oh yeah.
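The fan-out arithmetic behind that is worth spelling out. A back-of-the-envelope sketch, with an invented headcount: with an average span of S reports per manager, an org supporting N individual contributors needs roughly N/(S-1) managers and about log base S of N layers.

```python
import math

def org_shape(n_ics: int, span: int):
    """Rough management layers and manager headcount for a given span of control."""
    layers = math.ceil(math.log(n_ics, span))  # levels of management above ICs
    managers = math.ceil(n_ics / (span - 1))   # sum of the geometric series N/S + N/S^2 + ...
    return layers, managers

for span in (3, 7):
    layers, managers = org_shape(10_000, span)
    print(f"span {span}: ~{layers} layers, ~{managers} managers for 10,000 ICs")
```

Moving the average span from three to seven roughly halves the number of layers and cuts manager headcount by about two-thirds, which is exactly the latency and defragmentation effect being described.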
Starting point is 01:56:53 How do you hire great teams? Now with the focus on engineering and technical teams, how do you hire great engineers and great members of technical teams? Well, are you asking how you select them or how you attract them? Both, but let's start with attract. I think attract is: work on cool stuff and have a vision, and have a track record that makes people think
Starting point is 01:57:25 you're actually going to be able to do it. Yeah. To me, selecting seems like more of the art form, more of the tricky thing. How do you select the people that fit the culture and get integrated the most effectively, and so on? And, maybe especially when they're young, how do you see the magic through the resumes, through the paperwork and all that kind of stuff, to see that there's a special human there who would do incredible work? Yeah.
Starting point is 01:57:53 So there are lots of different cuts on this question. I think when an organization has grown quickly, one of the big questions that teams face is, do I hire this person who's in front of me now because they seem good, or do I hold out to get someone who's even better? The heuristic that I've always focused on for my own direct hiring, which I think works when you recurse it through the organization, is that you should only hire someone to be on your team if you would be happy working for them in an alternate universe.
Starting point is 01:58:31 That's basically how I've tried to build my team. I'm not in a rush to not be running the company, but I think in an alternate universe where one of these other folks was running the company, I'd be happy to work for them. I feel like I'd learn from them. I respect their general judgment. They're all very insightful. They have good values. And that gives you a rubric that you can apply at every layer, and if you apply it at every layer in the organization, then you'll have a pretty strong organization. Okay, in an organization that's not growing as quickly, the questions might be a little different though.
Starting point is 01:59:12 And there, you asked about young people specifically, like people out of college. One of the things that we see, and it's a pretty basic lesson, is that we have a much better sense of who the best people are when they've interned at the company for a couple of months than by looking at a resume or a short interview loop. Obviously, the in-person feel that you get from someone probably tells you more than the resume, and you can do some basic skills assessment. But a lot of the stuff really just is cultural.
Starting point is 01:59:46 People thrive in different environments and on different teams, even within a specific company. The people who come for even a short period of time over a summer and do a great job here, you know they're going to be great if they come and join full time. And that's one of the reasons why we've invested so much in internships: it's basically a very useful sorting function, both for us and for
Starting point is 02:00:15 the people who want to try out the company. You mentioned in person. What do you think about remote work? A topic that's been discussed extensively over the past few years because of the pandemic. Yeah, I mean, I think it's a thing that's here to stay. But I think there's value in both, right? I wouldn't want to run a fully remote company yet, at least. I think there's an asterisk on that, which is that some of the other stuff you're working on, yeah.
Starting point is 02:00:46 Yeah, exactly. It's all the metaverse work and the ability to feel like you're truly present, no matter where you are. I think once you have that all dialed in, then we may one day reach a point where it really just doesn't matter as much where you are physically. But I don't know, today it still does. So, for all these people who have special skills and want to live in a place where we don't have an office, are we better off having them at the company?
Starting point is 02:01:21 Absolutely. Right? And there are a lot of people who have worked at the company for several years, built up relationships internally, have the trust, and have a sense of how the company works. Can they go work remotely now if they want and still do it as effectively? We've done all these studies that ask, okay, does that affect their performance? It does not. But, you know, for the new folks who are joining, and for people who are earlier in their career,
Starting point is 02:01:50 who need to learn how to solve certain problems and need to get ramped up on the culture, when you're working through really complicated problems, you don't just want the formal meeting; you want to be able to brainstorm when you're walking in the hallway together after the meeting.
Starting point is 02:02:08 I don't know, we just haven't replaced the in-person dynamics there with anything remote yet. So yeah, there's a magic to the in-person. We'll talk about this a little bit more, but I'm really excited by the possibilities in the next two years in virtual reality and mixed reality that are possible with high-resolution scans. As a person who loves in-person interaction, like these podcasts in person, it would be incredible to achieve the level of realism I've gotten the chance to witness. But let me ask about that.
Starting point is 02:02:48 Yeah, I got a chance to look at the Quest 3 headset and it is amazing. You've announced it, and you'll give some more details in the fall, maybe release it in the fall. When is it getting released again? I forgot; you mentioned it. We'll give more details at Connect, but it's coming this fall.
Starting point is 02:03:09 OK. So it's priced at $499. What features are you most excited about there? There are basically two big new things that we've added to Quest 3 over Quest 2. The first is high-resolution mixed reality. The basic idea here is that with virtual reality, you have the headset, all the pixels are virtual, and you're basically immersed in a different world.
Starting point is 02:03:42 Mixed reality is where you see the physical world around you and you can place virtual objects in it, whether that's a screen to watch a movie, a projection of your virtual desktop, a game where zombies are coming out through the wall and you need to shoot them, or, you know, we're playing Dungeons and Dragons or some board game and we just have a virtual version
Starting point is 02:04:02 of the board in front of us while we're sitting here. All of that's possible in mixed reality, and I think that's going to be the next big capability on top of virtual reality. It is done so well. I have to say, as a person who experienced it today, the zombies having full awareness of the environment, and integrating that environment into the way they move while they try to kill you. The mixed reality, the passthrough, is really, really well done. And the fact that it's only $500 is really well done. Thank you.
Starting point is 02:04:39 I'm super excited about it. We put a lot of work into making the device both as good as possible and as affordable as possible, because a big part of our mission and ethos here is that we want people to be able to connect with each other. We want to reach and serve a lot of people; we want to bring this technology to everyone. So we're not just trying to serve an elite, wealthy crowd. We really want this to be accessible. That is, in a lot of ways, an extremely hard technical problem, because we can't just put an unlimited amount of hardware in the device.
Starting point is 02:05:21 We needed to deliver something that works really well, but in an affordable package. We started with Quest Pro last year. It was $1,500, and now we've lowered the price to $1,000. But in a lot of ways, the mixed reality in Quest 3 is even better and more advanced than what we were able to deliver in Quest Pro. So I'm really proud of where we are with Quest 3 on that. It's going to work with all of the virtual reality titles and everything that existed there. So for people who want to play fully immersive games,
Starting point is 02:05:55 social experiences, fitness, all that stuff will work. But now you'll also get mixed reality too. Which I think people really like, because sometimes you want to be super immersed in a game, but a lot of the time, especially when you're moving around, if you're active, like you're doing some fitness experience, let's say boxing or something, you kind of want to be able to see the room around you, so that you know you're not going to punch a lamp or something like that. And I don't know if you got to play with this
Starting point is 02:06:26 experience, but it's just sort of a fun little demo that we put together. You're in a conference room or your living room, and you have the guy there, and you're boxing him, you're fighting him, and all the other people are there too. I got a chance to do that. Yeah. And all the people are there. It's like that guy is right there. Yeah. There's a real sense of threat there. And the other humans, who you're also seeing through the passthrough, can cheer you on. They can make fun of you, like friends of mine did. It's a really compelling experience. I mean, VR is really interesting too,
Starting point is 02:07:07 but this is something else almost, because it's integrating into your life, into your world. Yeah, so I think it's a completely new capability that will unlock a lot of different content. And I think it'll also just make the experience more comfortable for a set of people who didn't want to have only fully immersive experiences.
Starting point is 02:07:25 I think if you want experiences where you're grounded in, you know, your living room and the physical world around you, now you'll be able to have that too, and I think that's pretty exciting. I really liked how it added windows to a room with no windows. Yeah. Did you see the aquarium one, where you could see the sharks swim up? It was not just the zombie one, but that one is still awesome. You don't necessarily want windows added to your living room that zombies come out of, but I'm comfortable with that game. Yeah.
Starting point is 02:07:54 I enjoyed it because you could see the nature outside. And for me, as a person who doesn't have windows, it's just nice to have nature, even if it's in a mixed reality setting. I know it's a zombie game, but there's a zen aspect to being able to look outside and alter your environment as you know it. Yeah.
Starting point is 02:08:19 There will probably be better, more zen ways to do that than the zombie game you're describing, but you're right that the basic idea of having your physical environment on passthrough, and then being able to bring in different elements, is going to be super powerful. In some ways, I think mixed reality is also a predecessor to the AR glasses we will eventually get, which won't have the goggles form factor of the current generation of headsets that people are making. But I think a lot of the experiences that developers are making for mixed reality, where you basically have a hologram that you're putting in the world, will hopefully apply once we get to AR glasses, though that's got its own whole set of challenges. Well, the headset is already smaller than the previous version. Oh yeah, it's 40% thinner.
Starting point is 02:09:13 And the other thing that I think is good about it: mixed reality was the first big thing, and the second is that it's just a great VR headset. It's got 2x the graphics processing power, 40% sharper screens, 40% thinner, more comfortable, better strap architecture, all this stuff. If you liked Quest 2, all the content that you might have played in Quest 2 is just going to get sharper automatically and look better in this. So I think people are really going to like it. Yeah, so this fall. This fall. I have to ask: Apple just announced a mixed reality headset called Vision Pro for $3,500, available in early 2024.
Starting point is 02:09:56 What do you think about this headset? Well, I saw the materials when they launched. I haven't gotten a chance to play with it yet, so take everything with a grain of salt. But a few high-level thoughts. First, I do think that this is a certain level of validation for the category. We were the primary folks out there before saying,
Starting point is 02:10:29 hey, I think that virtual reality, augmented reality, mixed reality, this is going to be a big part of the next computing platform. I think having Apple come in and share that vision will make a lot of people who are fans of their products really consider that. And then, of course, the $3,500 price: on the one hand, I get it with all the stuff that they're trying to pack in there. On the other hand, a lot of people aren't going to find that to be affordable. So I think there's a chance that them coming in actually increases demand for the overall space, and that Quest 3 is actually the primary beneficiary of that,
Starting point is 02:11:11 because a lot of people might say, hey, I'm going to give this another consideration. Now I understand maybe what mixed reality is more, and Quest 3 is the best one on the market that I can afford, and it's great also, right? And in our own way, there are a lot of features where we're leading. So I think that could be quite good.
Starting point is 02:11:42 And then obviously, over time, the companies are just focused on some different things, right? Apple has always, I think, focused on building really high-end things, whereas our focus has been on, well, we have a more democratic ethos. We want to build things that are accessible to a wider number of people. We've sold tens of millions of Quest devices. My understanding, just based on rumors, and I don't have any special knowledge on this, is that Apple is building about one million of their device. So just in terms of what you'd expect in terms of sales numbers, I
Starting point is 02:12:27 just think that Quest is going to be the primary thing that people in the market will continue using for the foreseeable future. And then obviously, over the long term, it's up to the companies to see how well we each execute the different things that we're doing. But we kind of come at it from different places. We're very focused on social interaction, communication, being more active, right? So there's fitness, there's gaming, there are those things. Whereas I think a lot of the use cases that you saw in
Starting point is 02:12:59 Apple's launch material were more around people sitting, people looking at screens, which are great. I think that you will replace your laptop over time with a headset. But in terms of the different use cases that the companies are going after, they're a bit different for where we are right now. Yeah, so gaming wasn't a big part of the presentation, which is interesting; it feels like gaming is such a big part of mixed reality. It was interesting to see it missing from the presentation. Well, look, there are certain design trade-offs in this. I think they made this point about not wanting to have controllers,
Starting point is 02:13:43 and on the one hand, there's a certain elegance about just being able to navigate the system with eye gaze and hand tracking. And by the way, you'll be able to just navigate Quest with your hands, too, if that's what you want. But one of the things I should mention is the capability from the cameras, with computer vision, to detect certain aspects of the hand, allowing you to have a controller that
Starting point is 02:14:07 doesn't have that ring thing. Yeah, the hand tracking in Quest 3, and the controller tracking, is a big step up from the last generation. One of the demos that we have is basically an MR experience teaching you how to play piano, where it highlights the notes that you need to play, and it's all hands, no controllers.
Starting point is 02:14:27 But I think if you care about gaming, having a controller allows you to have a more tactile feel, and allows you to capture fine motor movement much more precisely than what you can do with hands without something that you're touching. Again, I think there are certain questions which are just around what use cases you are optimizing for. If you want to play games, then I think you want to design the system in a different way. We're more focused on social experiences, entertainment experiences,
Starting point is 02:15:07 whereas if what you want is to make sure that the text you read on a screen is as crisp as possible, then you need to make the design and cost trade-offs that they made, which lead you to making a $3,500 device. So I think there is a use case for that, for sure, but the companies have basically made different design trade-offs to get to the use cases that we're each trying to serve.
Starting point is 02:15:34 There's a lot of other stuff I'd love to talk to you about in the metaverse, especially the Codec Avatars, which I've gotten to experience a lot of different variations of recently, and which I'm really, really excited about. Yeah, I'm excited to talk about that too. It'll have to wait a little bit, because, well, I think there's a lot more to show off in that regard. But let me step back to AI.
Starting point is 02:16:00 I think we've mentioned it a little bit, but I'd like to linger on this question that folks like Eliezer Yudkowsky and others have a worry about: the existential, the serious threats of AI, which have been reinvigorated now with the rapid development of AI systems. Do you worry about the existential risks of AI, as Eliezer does, about the alignment problem, about this getting out of hand? Anytime there are a number of serious people raising a concern that existential about something that you're involved with, I think you have to think about it. So I've spent quite a bit of time thinking about it from that perspective.
Starting point is 02:16:49 Where I basically have come out on this for now is: I do think that, over time, we need to think about this even more as we approach something that could be closer to superintelligence. I just think it's pretty clear to anyone working on these projects today that we're not there. And one of my concerns is this:
Starting point is 02:17:12 we spent a fair amount of time on this before, but there are, I don't know if mundane is the right word, concerns that already exist about people using AI tools to do harmful things of the type that we're already aware of, whether it's fraud or scams or different things like that. That's going to be a pretty big set of challenges that the companies working on this
Starting point is 02:17:43 are going to need to grapple with, regardless of whether there is an existential concern as well at some point down the road. So I do worry that, to some degree, people can get a little too focused on some of the tail risks, and then not do as good a job as we need to on the things that you can be almost certain are going to come down the pipe as real risks that manifest themselves in the near term. So for me, I've spent most of my time on that, once I made the realization that the size of the models that we're talking about now, in terms of what we're building, are just quite far from the superintelligence-type concerns that people
Starting point is 02:18:32 raise. But I think once we get a couple of steps closer to that, and I know as we do get closer, there are going to be some novel risks and issues about how we make sure that the systems are safe, for sure. I guess, just to take the conversation in a somewhat different direction here: in some of these debates around safety, I think the intelligence of the thing and the autonomy of the thing get kind of conflated together. And it very well could be the case that you can scale something in intelligence quite far, but that may not manifest the safety concerns that people are raising, in the sense that, I mean,
Starting point is 02:19:27 just if you look at human biology, it's like, all right, we have our neocortex, where all the thinking happens, right? But it's not really calling the shots at the end of the day. We have a much more primitive old brain structure for which our neocortex, this powerful machinery, is basically just a kind of prediction and reasoning engine to help our very simple brain decide how to plan and do what it needs to do in order to achieve these very basic impulses. I think that you can think about some of the development of intelligence
Starting point is 02:20:09 along the same lines, where, just like our neocortex doesn't have free will or autonomy, we might develop these wildly intelligent systems that are much more intelligent than our neocortex and have much more capacity, but are, in the same way that our neocortex is, sort of subservient, used as a tool by our kind of simple impulse brain. I think it's not out of the question that very intelligent systems that have the capacity to think will act as sort of an extension of the neocortex in that way. So I think my own view is that where we really need to be careful is on the development of autonomy and how we think about that.
Starting point is 02:20:54 Because it's actually the case that relatively simple and unintelligent things that have runaway autonomy and just spread themselves, well, we have a word for that: it's a virus, right? It can be simple computer code that is not particularly intelligent, but just spreads itself and does a lot of harm, biologically or on computers. And I just think that these are somewhat separable things.
Starting point is 02:21:23 And a lot of what I think we need to develop, when people talk about safety and responsibility, is really the governance on the autonomy that can be given to systems. To me, if I were a policymaker thinking about this, I would really want to think about that distinction. I think building intelligent systems can create a huge advance in terms of people's quality of life and productivity growth in the economy. But it's the autonomy part of this where I think we really need to make progress on how to govern these things responsibly, before we build the capacity for them to make a lot of decisions on their own, or give them goals or things like that. I know that's a research problem, but I do think that, to some degree, these
Starting point is 02:22:14 are somewhat separable things. I love the distinction between intelligence and autonomy, and the metaphor of the neocortex. Let me ask about power. Building superintelligent systems, even if it's not in the near term: I think Meta is one of the few companies, if not the main company, that will develop a superintelligent system, and you are the man at the head of this company.
Starting point is 02:22:44 Building AGI might make you the most powerful man in the world. Do you worry that that power will corrupt you? What a question. I mean, look, I think realistically this gets back to the open source things that we talked about before, which is, I don't think that the world will be best served by any small number of organizations having this without it being something that is more broadly available. And I think if you look through history, it's when there are these sort of unipolar advances and power imbalances that things tend to end up in kind of weird situations.
Starting point is 02:23:34 So this is one of the reasons why I think open source is generally the right approach, and I think it's a categorically different question today, when we're not close to superintelligence. I think there's a good chance that even once we get closer to superintelligence, open sourcing remains the right approach, even though at that point it's a somewhat different debate.
Starting point is 02:23:55 But part of that is that it is, I think, one of the best ways to ensure that the system is as secure and safe as possible, because it's not just about a lot of people having access to it; it's the scrutiny that comes with building an open source system. This is a pretty widely accepted thing about open source: you have the code out there, so anyone can see the vulnerabilities.
Starting point is 02:24:21 Anyone can mess with it in different ways, people can spin off their own projects and experiment in a ton of different ways, and the net result of all of that is that the systems just get hardened and get to be a lot safer and more secure. So I think there's a chance, a pretty good chance, that that ends up being the way this goes, and that having this be open both leads to a healthier development of the technology and leads to a more balanced distribution of the technology, in a way that strikes me as good values to aspire to.
Starting point is 02:25:03 So to you, there are risks to open sourcing, but the benefits outweigh the risks. It's interesting; I think you put it well that there's a different discussion now than when we get closer to developing a superintelligence, about the benefits and risks of open sourcing. Yeah, and to be clear, I feel quite confident in the assessment that open sourcing models now is net positive.
Starting point is 02:25:33 I think there's a good argument that in the future it will be too, even as you get closer to superintelligence, but I've certainly not decided on that yet. I think it becomes a somewhat more complex set of questions that people will have time to debate, and that will also be informed by what happens between now and then. We don't have to necessarily just debate that in theory right now. What year do you think we'll have a superintelligence? I don't know, that's pure speculation. I think it's very clear, to take a step back, that we had a big breakthrough in the last year, where the LLMs and diffusion models basically reached a scale
Starting point is 02:26:12 where they're able to do some pretty interesting things. And then I think the question is, what happens from here? Just to paint the two extremes: on one side, it's like, okay, we just had one breakthrough. If we just have another breakthrough like that, or maybe two, then we can have something that's truly crazy, right? Something that's just so much more advanced. On that side of the argument, it's like, okay, well, maybe we're only a couple of big steps away from reaching something that looks more like general intelligence.
Starting point is 02:26:53 Okay, that's one side of the argument. The other side, which is what we've historically seen a lot more of, is that a breakthrough leads to, in that Gartner hype cycle, the hype, and then there's the trough of disillusionment after, when people think, hey, okay, there was a big breakthrough, maybe we're about to get another big breakthrough, and it's like, actually, you're not about to get another breakthrough. Maybe you're actually just going to have to sit with this one for a while. And it could be five years, it could be 10 years, it could be 15 years until you figure out the next big thing that needs to get figured out.
Starting point is 02:27:37 But I think the fact that we just had this breakthrough sort of means that we're at a point of very wide error bars on what happens next. I think the traditional view, looking at the industry, would suggest that we're not just going to stack breakthrough on top of breakthrough on top of breakthrough, like every six months or something. Guessing, I would guess that it will take somewhat longer in between these, but I don't know. I tend to be pretty optimistic about breakthroughs too, so if you normalize for my normal optimism, then maybe it would be even slower than what I'm saying. But even within that, I'm not
Starting point is 02:28:22 even opining on the question of how many breakthroughs are required to get to general intelligence, because no one knows. But this particular breakthrough was such a small step that resulted in such a big leap in performance, as experienced by human beings, that it makes you think: wow, as we stumble across this very open world of research, will we stumble across another thing that will produce a giant leap in performance? And also, we don't know exactly at which stage it's really going to be impressive, because it feels like it's really encroaching on impressive levels of intelligence. You still didn't answer the question
Starting point is 02:29:10 of what year we're going to have superintelligence. I'd like to hold you to that. No, just kidding. But is there something you could say about the timeline as you think about the development of AGI and superintelligent systems? Sure. So I still don't think I have any particular insight on when a singular AI system
Starting point is 02:29:32 that is a general intelligence will get created. But I think the one thing that most people in the discourse that I've seen about this haven't really grappled with is that we do seem to have organizations and structures in the world that exhibit greater-than-human intelligence already. One example is a company. It acts as an entity.
Starting point is 02:29:55 It has a singular brand. Obviously, it's a collection of people, but I certainly hope that Meta, with tens of thousands of people, makes smarter decisions than one person. I think it would be pretty bad if it didn't. Another example, which I think is even more removed from the way we think about the personification of intelligence that is often implied in some of these questions, is to think about something like the stock market. The stock market, you know, takes inputs.
Starting point is 02:30:28 It's a distributed system. It's like a cybernetic organism where probably millions of people around the world are basically voting every day by choosing what to invest in. It's this organism or structure, smarter than any individual, that we use to allocate capital as efficiently as possible around the world. And I do think that this notion, that there are already these cybernetic systems that are either melding the intelligence of multiple people together, or melding the intelligence of multiple people and technology together, to form something which is dramatically more intelligent than any individual
Starting point is 02:31:20 in the world, is something that seems to exist, and that we seem to be able to harness in a productive way for our society, as long as we basically build these structures in balance with each other. So I don't know. That at least gives me hope that as we advance the technology, and I don't know how long exactly it's going to be, but you asked, when is this going to exist? I think to some degree we already have many organizations in the world that are smarter than a single human, and that seems to be something that is generally
Starting point is 02:31:55 productive in advancing humanity. And somehow the individual AI systems empower the individual humans, and the interaction between those humans, to make that collective intelligence machinery that you're referring to smarter. So it's not like AI is becoming superintelligent; it's just becoming the engine that's making the collective intelligence, which is primarily human, more intelligent. Yeah, it's educating the humans better. It's making them better informed. It's making it more efficient for them
Starting point is 02:32:25 to communicate effectively and debate ideas, and through that process making the whole collective intelligence more and more intelligent. Maybe faster than the individual AI systems, which are trained on human data anyway, are becoming. Maybe the collective intelligence of the human species might outpace the development of AI. It's like a race.
Starting point is 02:32:46 There is a balance in here, because if a lot of the input that the systems are being trained on is basically coming from feedback from people, then a lot of the development does need to happen in human time. It's not like a machine will just be able to go learn all this stuff about how people think on its own; there's a cycle to how this needs to work. This is an exciting world we're living in, and you're at the forefront of developing it. One of the ways you keep yourself humble, like we mentioned with jiu-jitsu, is doing some really difficult challenges, mental and
Starting point is 02:33:25 physical. One of those you've done very recently is the Murph Challenge, and you got a really good time. It's 100 pull-ups, 200 push-ups, 300 squats, and a mile run before and a mile run after. You got under 40 minutes on that. What was the hardest part? I think a lot of people were very impressed; it's a very impressive time. Yeah, or how crazy are you, that was the question. It wasn't my best time, but anything under 40 minutes I'm happy with. It wasn't your best time? No, I think I've done it a little faster before, but not much. And among my friends, I did not win on Memorial Day.
Starting point is 02:34:08 One of my friends did it actually several minutes faster than me. Just to clear up one thing, because I saw a bunch of questions about this on the internet: there are multiple ways to do the Murph Challenge. There's a partitioned mode, where you do sets of pull-ups, push-ups, and squats together. And then there's unpartitioned, where you do the 100 pull-ups, then the 200 push-ups, then the 300 squats, in serial. Obviously, if you're doing them unpartitioned, then it takes longer to get through the 100 pull-ups, because anytime you're resting in between the pull-ups,
Starting point is 02:34:45 you're not also doing push-ups and squats. So yeah, I'm sure my own unpartitioned time would be quite a bit slower. But at the end of this, first of all, I think it's a good way to honor Memorial Day, right? It's named for Lieutenant Murphy; this was basically one of his favorite exercises,
Starting point is 02:35:11 and I just try to do it on Memorial Day each year. It's a good workout. I got my older daughters to do it with me this time. My oldest daughter wants a weight vest because she sees me doing it with a weight vest. I don't know if a seven-year-old should be using a weight vest to do pull-ups, but yeah. A difficult question a parent must ask themselves.
Starting point is 02:35:33 I was like, maybe I can make you a very lightweight vest, but I didn't think it was good for this. She basically did a quarter of a Murph: she ran a quarter mile, then did 25 pull-ups, 50 push-ups, and 75 air squats, then ran another quarter mile, in 15 minutes, which I was pretty impressed by. And my five-year-old did it too, so I was excited about that. And I'm glad that I'm teaching them the value of physicality.
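For reference, the quarter Murph Mark describes is just the standard prescription scaled by a fixed fraction. A small sketch of that arithmetic in Python (the `scale_murph` helper is hypothetical, just illustrating the scaling, not an official rule):

```python
# Standard Murph: a 1-mile run, 100 pull-ups, 200 push-ups,
# 300 squats, then another 1-mile run (often with a weight vest).
MURPH = {"run_miles_each_end": 1.0, "pull_ups": 100, "push_ups": 200, "squats": 300}

def scale_murph(fraction: float) -> dict:
    # Scale every element of the workout by the same fraction.
    return {move: count * fraction for move, count in MURPH.items()}

print(scale_murph(0.25))
# {'run_miles_each_end': 0.25, 'pull_ups': 25.0, 'push_ups': 50.0, 'squats': 75.0}
```

That matches the quarter mile, 25 pull-ups, 50 push-ups, and 75 squats his daughter did.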
Starting point is 02:36:09 and cranks out a bunch of pull ups. And I love that about her. I mean, I think it's like, good. She's, you know, hopefully I'm teaching her some good lessons. But I mean, the broader question here is, given how busy you are, given how much stuff you have going on your life. What's the perfect exercise regimen for you to keep yourself happy, to keep yourself productive in your main line of work?
Starting point is 02:36:39 Yeah, so right now I'm focusing most of my workouts on fighting, so jiu-jitsu and MMA. But I don't know, maybe if you're a professional, you can do that every day. I can't. I just get too many bruises and things that you need to recover from. So I do that three to four times a week, and then on the other days I just try to do a mix of things,
Starting point is 02:37:08 like cardio conditioning, strength building, mobility. So you try to do something physical every day? Yeah, I try to, unless I'm just so tired that I need to relax. But then I'll still try to go for a walk or something. I mean, even here, I don't know, have you been on the roof here yet?
Starting point is 02:37:24 No. We'll go on the roof after this. We designed this building and I put a park on the roof. So that's where I do my meetings when I'm just doing a one-on-one or talking to a couple of people. I have a very hard time just sitting;
Starting point is 02:37:38 I feel like I get super stiff and it feels really bad. But I don't know, being physical is very important to me. And this gets to the question about AI: I do not believe that a being is just a mind. I think we're kind of meant to do things physically, and a lot of the sensations
Starting point is 02:38:02 that we feel are connected to that. And I think a lot of what makes you human is basically having that set of sensations and experiences, coupled with a mind to reason about them. But I don't know, I think it's important for balance to get out, challenge yourself in different ways, learn different skills, clear your mind. Do you think AI, in order to become superintelligent
Starting point is 02:38:37 and AGI should have a body? It depends on what the goal is. I think that there's this assumption in that question that intelligence should be kind of person like, whereas as we were just talking about, you can have these greater than single human intelligent organisms like the stock market, which obviously do not have bodies and do not speak a language, right? And like, you know, and just kind of have their own system. But so I don't know, my guess is there will be limits to what a system that is purely an intelligence can understand about the human condition without
Starting point is 02:39:25 having not just the same senses, but also the ways our bodies change, right? We kind of evolve, and I think those very subtle physical changes drive a lot of social patterns and behavior, around things like when you choose to have kids, right? That's not even subtle, that's a major one. But also how you design things around the house. So yeah, if the goal is to understand people as much as possible, I think trying to model those sensations is probably somewhat important. But I think there's a lot of value that can be created by having intelligence, even one that
Starting point is 02:40:09 is separate from that, that is a separate thing. So, one of the features of being human is that we're mortal: we die. We've talked about AI a lot, but what about potentially replicas of ourselves? Do you think there will be AI replicas of you and me that persist long after we're gone, that family and loved ones can talk to? I think we'll have the capacity to do something like that, and I think one of the big questions
Starting point is 02:40:40 that we've had to struggle with in the context of social networks is who gets to make that. My answer to that, in the context of the work that we're doing, is that it should be your choice. I don't think just anyone should be able to choose to make a Lex bot that people can talk to and get to train. We have this precedent of making some of these calls, where someone can create a page for a Lex fan club,
Starting point is 02:41:13 but you can't create a page and say that you're Lex. So similarly, I think someone maybe should be able to make an AI that's a Lex admirer that someone can talk to, but I think it should ultimately be your call whether there is a Lex AI. Well, I'm open sourcing the Lex. So, you're a man of faith. What role has faith played in your life, in your understanding of the world,
Starting point is 02:41:46 in your understanding of your own life, and in your understanding of your work and how your work impacts the world? Yeah, I think there are a few different parts of this that are relevant. There's sort of a philosophical part, and there's a cultural part. One of the most basic lessons is right at the beginning of Genesis, right? God creates the earth and creates people, and creates people in God's image. And there's the question of, what does that mean? The only context that you have about God at that point in the Old Testament is that God has created things.
Starting point is 02:42:25 So I always thought that one of the interesting lessons from that is that there's a virtue in creating things, whether it's artistic or whether you're building things that are functionally useful for other people. I think that by itself is a good. And that drives a lot of how I think about morality and my personal philosophy around what is a good life. I think it's one where you're helping the people around you
Starting point is 02:43:06 and you're being a kind of positive, creative force in the world, helping to bring new things into the world, whether they're amazing other people, kids, or just the creation of different things that wouldn't have been possible otherwise. So that's a value for me that matters deeply. And I just love spending time with the kids, trying to impart this value to them.
Starting point is 02:43:36 trying to impart this value to them. And it's like, nothing makes me happier than like when I come home from work. And I see like my daughter's building legos It's like, I mean, nothing makes me happier than like when I come home from work and, you know, I see like my, my daughter's like building legos on the table or something. It's like, all right, I did that when I was a kid. Right? So many other people are doing this.
Starting point is 02:43:53 And like, I hope you don't lose that spirit where when you, you kind of grow up and you want to just continue building different things no matter what it is. To me, that's a lot of what matters. That's the philosophical piece. I think the cultural piece is just about community and values, and that part of things I think has just become a lot more important to me since I've had kids. It's almost autopilot when you're a kid, you're in the kind of getting imparted two-phase of your life,. I didn't really think about religion that much for a while. I was in college before I had kids.
Starting point is 02:44:31 I think having kids has this way of really making you think about what traditions you want to impart, how you want to celebrate, and what balance you want in your life. I mean, a bunch of the questions that you've asked, and a bunch of the things that we're talking about. Just the irony of the curtains coming down as we're talking about mortality. Once again, same as last time. This is just how the universe works. And we are definitely living in a simulation. But go ahead: community, tradition, and the values that faith and religion instill.
Starting point is 02:44:59 This is just the universe works. And we are definitely living in a simulation. But go ahead, community tradition and the values that faith religion is still. A lot of the topics that we've talked about today are around how do you balance, whether it's running a company or different responsibilities with this. whether it's running a company or different responsibilities with this. Yeah, how do you how do you kind of balance that? And I always also just think that it's very grounding to just believe that there is something that is much bigger than you that is guiding things.
Starting point is 02:45:44 That, amongst other things, gives you a bit of humility. As you pursue that spirit of creating, you spoke to creating beauty in the world, and as Dostoevsky said, beauty will save the world. Mark, I'm a huge fan of yours. I'm honored to be able to call you a friend, and I am looking forward to both kicking
Starting point is 02:46:07 your ass and you kicking my ass on the mat tomorrow in jiu-jitsu, this incredible sport and art that we both participate in. Thank you so much for talking today. Thank you for everything you're doing in so many exciting realms of technology and human life. I can't wait to talk to you again in the metaverse. Thank you. Thanks for listening to this conversation with Mark Zuckerberg. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Isaac Asimov: It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world
Starting point is 02:46:52 as it is, but the world as it will be. Thank you.
