All-In with Chamath, Jason, Sacks & Friedberg - JD Vance's AI Speech, Techno-Optimists vs Doomers, Tariffs, AI Court Cases with Naval Ravikant

Episode Date: February 15, 2025

(0:00) The Besties intro Naval Ravikant! (9:07) Naval reflects on his thoughtful tweets and reputation (14:17) Unique views on parenting (23:20) Sacks joins to talk AI: JD Vance's speech in Paris, Techno-Optimists vs Doomers (1:11:06) Tariffs and the US economic experiment (1:21:15) Thomson Reuters wins first major AI copyright decision on behalf of rights holders (1:35:35) Chamath's dinner with Bryan Johnson, sleep hacks (1:45:09) Tulsi Gabbard, RFK Jr. confirmed Follow Naval: https://x.com/naval Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg Follow on X: https://x.com/theallinpod Follow on Instagram: https://www.instagram.com/theallinpod Follow on TikTok: https://www.tiktok.com/@theallinpod Follow on LinkedIn: https://www.linkedin.com/company/allinpod Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg Intro Video Credit: https://x.com/TheZachEffect Referenced in the show: https://x.com/naval/status/1002103360646823936 https://x.com/CollinRugg/status/1889349078657716680 https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence https://www.cnn.com/2021/06/09/politics/kamala-harris-foreign-trip/index.html https://www.cnbc.com/2025/02/11/anduril-to-take-over-microsofts-22-billion-us-army-headset-program.html https://x.com/JDVance/status/1889640434793910659 https://www.youtube.com/watch?v=QCNYhuISzxg https://www.wired.com/story/thomson-reuters-ai-copyright-lawsuit https://admin.bakerlaw.com/wp-content/uploads/2023/11/ECF-1-Complaint.pdf https://www.youtube.com/watch?v=7xTGNNLPyMI https://polymarket.com/event/which-trump-picks-will-be-confirmed?tid=1739471077488

Transcript
Discussion (0)
Starting point is 00:00:00 Great job, Naval, you rocked it. Maybe I should have said this on air, but that was literally the most fun podcast I've ever recorded. Oh, that's on air, cut that in. Yeah, put it in the show, put it in the show. I had my theory on why you were number one, but now I have the realization.
Starting point is 00:00:13 What's the actual reason? You know us for a long time. Yeah, what was your theory, what's the reality? My theory was that my problem with going on podcasts is usually the person I'm talking to is not that interesting. They're just asking the same questions and they're dialing it in and they're not that
Starting point is 00:00:25 interesting. It's not like we're having a peer level actual conversation. That's why I wanted to do AirChat and Clubhouse and things like that because you can actually have a conversation. Oh, I see. Right? And what you guys have very uniquely is four people, you know, of whom at least three are intelligent.
Starting point is 00:00:40 I'm kidding. How could you say that Sax isn't here? How did you? Yeah. What? Sax isn't even. How did you? Yeah. Sax isn't even here and you say that in the world? That is so cold. That's the best.
Starting point is 00:00:48 Right. Of the three are intelligent and all of you get along and you can have an ongoing conversation. That's a very high hit rate. Normally in a podcast, you only get one interesting person and now you've got three, maybe four, right? Okay. So that to me was why all of this is successful. Who are you talking to is number four. We don't know. Children remain mysterious forever. Of the four, the problem is if you get people together to talk,
Starting point is 00:01:15 two is a good conversation, three possibly, four is the max. That's why the dinner table at a restaurant, four top. You don't do five or six because then it splits into multiple conversations. So you had four people who were capable of talking, right? That I thought was a secret, but there's another secret. The secret, the other secret is you guys are having fun. You're talking over each other, you're making fun of each other, you're actually having fun. So that's
Starting point is 00:01:39 why I'm saying this is the most fun podcast I've ever been on. That's why you'll be successful. Welcome back anytime, Naval. Thanks, bro. Welcome back. Keep it fun. Absolutely. Keep it fun, guys. Thanks for having me. 1-88-3-Smarcas.
Starting point is 00:01:50 Can't believe you'd say that about Saks. He's not even here to defend himself. Sorry, David. We open source it to the fans and they've just gone crazy. I'll be back. Queen of Kenwa. All right, everybody, welcome back to the number one podcast in the world. We're really excited today. Back again. You're Sultan of
Starting point is 00:02:25 Science, David Friedberg. What do you got going on there? Friedberg? What's in the background? Everybody wants to know. I used to play I used to play a lot of a game called sim earth on my Macintosh LC way back in the day. That tracks. Yeah, that tracks. And of course, with us again, your chairman, did you play growing up?. Cal? Actually, I'm kind of curious. Did you ever play video games? Let's see. Andrea, Alison, Susan. I mean, it was like a lot of cute girls.
Starting point is 00:02:55 I was out dating girls. Freeberg. Yeah. I was not on my Apple Toon playing Civilization. Let me find one of those pictures. Whoa, whoa, don't get me in trouble, man. The 80s were good to me in Brooklyn. Rejection, the video game. Yes, you have three lives, reject it, reject it. It's a numbers game, Chamath, as you know,
Starting point is 00:03:16 as you well know, it is a numbers game. Nick, go ahead, pull up Rico Suave here. Oh no, what is this one? Oh, instead of playing video, here I am. No, that's in the 80s. That's fat Jacob. That's fat Jacob. You're in the game.
Starting point is 00:03:30 Nick, help out your uncle. Yeah, here he is out slaying. Help out your uncle with the Vin J. Calhoun. You know what he was slaying in there? A snack. Yeah. You want pre-azepic and post-azepic, right? Correct, and weightlifting.
Starting point is 00:03:44 Beef jerky. Lace potatoes chips. Go find my Leonardo DiCaprio picture, please, and replace my fat J-How picture with that. Thank you. Oh God, I was fat. Man, plus 40 pounds is a lot heavier than I am. It's no joke.
Starting point is 00:03:58 It's no joke. 40 pounds is a lot. It's no joke. There's so many great emo photos of me. I'm proud of you. Thank you, my man. Thank you. Thank you. If you want a good can you get through the intros please so we can start come on quick. How you doing brother? How you doing? Chairman dictator. You're good. You get to get to get to get to get to get to get
Starting point is 00:04:16 to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get
Starting point is 00:04:24 to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get to get day. Today for the first time on the all in podcast the iron fist of Angel list, the Zen like mage of the early stage. He has such a way with words. He's the Socrates of nerds. Please welcome my guy. Namaste, Naval. How you doing the intros are back. That is the best intro I've ever gotten. I didn't think I didn't I didn't think you could do that. That was amazing. That's your superpower right there. Lock it in. Quit venture capital. Just do that. Absolutely. That's actually you know what, interestingly, number one podcast in the world, like someone said,
Starting point is 00:04:53 I mean, that's what I'm manifesting. It's getting close. We've been in the top 10. So I mean, the weekends are good for all in this. This one will hit number one. This one will go viral. I think it could. If you have some really great pithy insights, we might go right to the top. I just got to do a Sieg Heil and it'll go viral. Oh no. Are you going to send us your heart? My heart goes out to you. My heart, I end here at the heart. I don't send it out.
Starting point is 00:05:21 I keep it right here. I put both hands on the heart and I hold it nice and steady. I hold it out. I keep it right here. I put both hands on the heart and I hold it. Nice and steady. I hold it in. It's sending out to you but just not explicitly. All right. For those of you don't know, Neva was an entrepreneur, he kicked a bit of ass, he got his ass kicked. And then he started venture hacks. And he started emailing folks and saying, you know, 20 15 20 years ago, maybe 15. Here are some deals in Silicon Valley went around, he started writing 50k 100k checks, he hit a bunch of home runs. And he turned venture hacks into angel lists.
Starting point is 00:05:55 And then he has invested in a ton of great startups, maybe give us some of the greatest hits there. Yeah, Twitter, Uber notion, bunch of others, Postmates, Udemy, a lot of unicorns, bunch of coming. It's actually a lot of deals at this point. But honestly, I'm not necessarily proud of being an investor. Investor to me is a side job.
Starting point is 00:06:18 It's a hobby. So I do startups. How do you define yourself? I don't. I mean, I guess these days I would say more like building things. You know, every so-called career is an evolution, right? And all of you guys are independent and you kind of do what you're most interested in, right? That's the point of making money. So you can just do what you want. So these days I'm really into building and
Starting point is 00:06:41 crafting products. So I built one recently called AirChat. It kind of didn't work. I'm still proud of what I built and got to work with an incredible team. And now I'm building a new product. And this time I'm going into hardware. And I'm just building something that I really want. I'm not ready to talk about it yet. And you funded all of yourself, Naval?
Starting point is 00:06:59 Partially, I bring investors along. Last time they got their money back. Previous times they've made money. Next time, hopefully they'll make a lot of money. It's good to bring your friends along. I'll be honest. I love that you said, I love the product, but it didn't work. Not enough people say that. Yeah, I know. I built a product that I loved, that I was proud of, but it didn't catch fire. And it was a social product, so it had to catch fire for it to work. So I found the team great homes. They all got paid. The investors that I brought in got their money back.
Starting point is 00:07:27 And I learned a ton, which I'm leveraging into the new thing. But the new thing is much harder. The new thing is hardware and software and network. What did you learn building in 2024 and 2025 that you didn't know maybe before then? The main thing was actually just the craft, the craft of pixel by pixel designing a software product and launching it. I guess the main thing I took away that was a learning was that I really enjoyed building products and that I wanted to build something even harder and something even more real.
Starting point is 00:08:00 And I think like a lot of us, I'm inspired by Elon and all the incredible work he's done. So I don't want to build things that are easy. I want to build things that are hard and interesting. And I want to take on more technical risk and less market risk. This is the classic VC learning, right? Which is you want to build something that if people get it, if you can deliver it, you know people will want it. And it's just hard to build as opposed to you build it and you don't know if they want it.
Starting point is 00:08:25 So that's a learning. AirChat was a lot of fun. For those of you who don't know, it was kind of like a social media network where you could ask a question and then people could respond. And it was like an audio based Twitter. Would you say that was the best way to describe it? Audio Twitter, asynchronous AI transcripts and all kinds of AI to make it easier for you, translation. Really good way for kind of trying to make podcasting type conversations more accessible to everybody. Because honestly, one of the reasons I don't go on podcasts, I don't like being intermediated,
Starting point is 00:08:58 so to speak, right? Where you sit there and someone interviews you and then you go back and forth and you go through the same old things. I just want to talk to people. I want peer relationships, kind of like you guys have running here. Nival, what happened when you went through that phase, there was a period where it just seemed like
Starting point is 00:09:13 something had gone on in your life and you just knew the answers. You were just so grounded. It's not to say that you're not grounded now, but you're less active posting and writing. But there was this period where I think all of us were like, all right, what does Naval think? Oh, really?
Starting point is 00:09:28 Oh, okay, that's news to me. I would say it would be like the late teens, the early twenties. Jason, you can correct me if I'm getting the dates wrong, but it's in that moment where like these Naval-isms and this sort of philosophy really started to, I think people had a tremendous respect for how you were thinking about things.
Starting point is 00:09:44 I was just curious, like, were you going through something in that moment? Yeah, yeah, yeah. That's right. No, very insightful. I've been on Twitter since 2007 because I was an early investor, but I never really tweeted and get featured. I had no audience. I was just doing the usual techie guy thing talking to each other.
Starting point is 00:10:02 And then I started AngelList in 2010. The original thing about matching investors to startups didn't scale. It was just an email list that exploded early on, but then just didn't scale. So we didn't have a business. And I was trying to figure out the business. And at the same time, I got a letter from the Securities Exchange Commission saying, oh, you're acting as an unlicensed broker dealer. And I'm like, what?
Starting point is 00:10:23 I'm not making any money. I'm not, I'm just making intros. I'm not taking anything. It's just a public service. But even then, they were coming after me. So, I was in it and I'd raised a bunch of money from investors. So, I was in a very high stress period of my life. Now, looking back, it's almost comical that I was stressed over it. But at the time, it all felt very real. The weight of everything was on my shoulders, expectations, people, money, regulators. And I eventually went to DC and got the law changed to legalize what we do, which ironically enabled a whole bunch of other things like ICOs and incubator days and so on, demo days.
Starting point is 00:10:55 But in that process, I was in a very high stress period of my life and I just started tweeting whatever I was going through, whatever realizations I was happening. It's only in stress that you sort of are forced to grow. And so, whatever internal growth I was going through, I just started tweeting it, not thinking much of it. And it was a mix of... There are three things that I kind of always kind of are running through. One is I love science, you know, I'm an amateur, love physics, let's just leave it at that.
Starting point is 00:11:21 I love reading a lot of philosophy and thinking deeply about it. And I like making money, right? Truth, love and money. That's my joke on my Twitter bio. Those are the three things that I keep coming back to. And so I just started tweeting about all of them. And I think before that, the expectation was that someone like me should just be talking about money, stay in your lane, and people had been playing it very safe. And so I think the combination of the three sort of caught people's attentions because every person thinks about everything.
Starting point is 00:11:52 We don't just stay in our lane in real life. We're dealing with our relationships. We're dealing with our relationship with the universe. We're dealing with what we know to be true and with science and how we make decisions and how we figure things out. And we're also dealing with the practical, we make decisions and how we figure things out. And we're also dealing with the practical everyday material
Starting point is 00:12:07 things of how to deal with our spouses or girlfriends or wives or husbands and how to make money and how to deal with our children. So I'm just tweeting about everything. I just got interested in everything I'm tweeting about it. And a lot of it, my best stuff was just notes to self. It's like, hey, don't forget this. Don't forget that.
Starting point is 00:12:24 How to get rich. Remember that one? How to get rich. That was a banger. That was a super banger. That was like one hey, don't forget this. How to get rich. Remember that one? How to get rich. That was just like one of the first threads. And that one went super viral. Yeah, that was a banger. Yeah. Yeah. Yeah, I think that is still the most viral thread ever on Twitter. I like timeless things. I like philosophy. I like things that still apply in the future.
Starting point is 00:12:40 I like compound interest, if you will, in ideas. Obviously, recently, X has become so addictive that we're all checking it every day. And Elon's built the perfect for you. He's built TikTok for nerds. And we're all in it. But normally, I try to ignore the news. Obviously, last year, things got real. We all had to pay a lot of attention to the news.
Starting point is 00:13:01 But I just like to tweet timeless things. I don't know. I mean, people pay attention. Sometimes they like what I write. Sometimes they go non-linear on me. But yeah, the how to get rich feed storm was a big one. Is it problematic when people now meet you because the hype versus the reality, there's like, it's discordant now,
Starting point is 00:13:17 because people, if they absorb this content, they expect to see some- Grew? Causy, dainty, yeah, floating in the air. You know what I mean? Yes. Yeah, like many of you have stopped drinking, to see some causey deity floating in the air. You know what I mean? Yeah, like many of you have stopped drinking, but I used to like have the occasional glass of wine.
Starting point is 00:13:30 And there was a moment there where I went and met with an information reporter back when I used to meet with reporters. And she said, where are we going to meet? So I said, oh, let's meet at the wine merchant and we'll get over the last one. She's like, what you drink? Like it was like a big deal for her.
Starting point is 00:13:44 Oh my god, I'm so disappointed. She's like, what you drink? It was a big deal for her. Oh, no, no, no. I'm so disappointed. I was like, I'm an entrepreneur. Most of them are alcoholics or psychedelics or whatever it takes to manage. Yeah, ketamine. Yeah, ketamine in the hot tub. Yeah.
Starting point is 00:13:56 Right. Yeah, when they say I'm on therapy, you know what that's code for. Yeah. So yes, it is highly distorted. Plant medicine. Yeah, I'm almost reminded of that line in The Matrix where that agent is about to like shoot one of the Matrix characters and says only human, right? So that's
Starting point is 00:14:13 kind of what I want to say to everybody like only human. Yeah, yeah, yeah. You did a recently a podcast with Tim Ferriss on parenting. This one's out there. I love this. I bought the book from this guy. Yeah. Just give a brief overview of this philosophy of parenting. Oh, I didn't listen to this after at this time. This is gonna love this. This spoke to me, but it was a little crazy. Yeah. So I'm a big fan of David Deutsch. David Deutsch, I think is basically the smartest living human. He's a scientist who's very good at quantum computation.
Starting point is 00:14:45 And he's written a couple of great books, but it's about the intersection of the greatest theories that we have today, the theories with the most reach. And those are epistemology, the theory of knowledge, evolution, quantum physics, and computation. This is the beginning of infinity guy. That's the book that you always reference. Correct, yes.
Starting point is 00:15:04 The Fabric of Reality is another book. I've spent a fair bit of time with him, done some podcasts with him, hired and worked with people around him. And I'm just really impressed because it's like the framework that's made me smarter, I feel like, because we're all fighting aging. Our brains are getting slower and we're always trying to have better ideas. So as you age, you should have wisdom. That's your substitute for the raw horsepower of intelligence going down.
Starting point is 00:15:25 And so scientific wisdom I take from David, not take, but I learned from David. And one of the things that he pioneered is called taking children seriously. And it's this idea that you should take your children seriously like adults. You should always give them the same freedom that you would give an adult. If you wouldn't speak that way with your spouse, if you wouldn't force your spouse to do something, don't force a child to do something. And it's only through the latent threat of physical violence, hey, I can control you, I can make you go to your room, I can take your dinner away or whatever, that you intimidate children. And it resonated with me because I grew up very, very free. My father wasn't around when
Starting point is 00:16:04 I was young. My mother didn't have the bandwidth to watch us all the time. She had other things to do. And so, I kind of was making my own decisions from an extremely young age. From the age of five, nobody was telling me what to do. And from the age of nine, I was telling everybody what to do. So, I've used to that. And I've been homeschooling my own kids.
Starting point is 00:16:22 So, the philosophy resonated. And I found this guy, Aaron Stupel, on AirChat. And he was an incredible expository of the philosophy. He lives his life with it 99% as extreme as one can go. So his kids can eat all the ice cream they want and all the Snickers bars they want. They can play on the iPad all they want. They don't have to go to school if they don't feel like it. They dress how they want.
Starting point is 00:16:42 They don't have to do anything they don't want to do. Everything is a discussion, negotiation, explanation, just like you would with a roommate or an adult living in your house. And it's kind of insane and extreme. But I live my own home life in that arc, in that direction. And I'm a very free person. I don't have an office to go to. I try really not to maintain a calendar. If I can't remember it, I don't want to do it. I don't send my kids to go to, I try really not to maintain a calendar. If I can't remember it, I don't want to do it. I don't send my kids to school. I really try not to coerce them.
Starting point is 00:17:09 And so obviously that's the extreme model, but I was sorry, sorry, sorry. Hold on a second. So your kids, if they, if they were like, I want Haagen-Dazs and it's 9 PM. You're like, okay. Two nights ago I did this. I ordered the Haagen-Dazs. It wasn't Haagen-Dazs with a 9 p.m. You're like, okay. Two nights ago I did this. I ordered the Hagen Dazs. It wasn't Hagen Dazs, it was a different brand. I ordered it.
Starting point is 00:17:29 I'm just gonna go through a couple of examples. We do a dash of ice cream at 9 p.m. and we all ate ice cream. Yeah, so they're like, dad, I want- And they're happy. They're happy kids. I wanna be on my iPad. I'm playing Fortnite.
Starting point is 00:17:38 Leave me alone. I'll go to sleep when I want. You're like, okay. My oldest probably plays iPad nine hours a day. Okay, so then your other kid pees in their pants because they're too lazy to walk to the bathroom. They don't do that because they don't like pee in their pants.
Starting point is 00:17:53 No, I understand, but I'm just saying there's a spectrum of all of these things, right? Yeah. And your point of view is 100% of it is allowed and you have no judgements. No, that's not where I am. That's where Aaron is. My rules are a little different.
Starting point is 00:18:06 My rules are they got to do one hour of math or programming plus two hours of reading every single day. And the moment they've done that, they're free creatures. And everything else is a negotiation. We have to persuade them. It's a persuasion, I should say, not even a negotiation. And even the hour of math and two hours of reading, really you get 15 to 30 minutes of math,
Starting point is 00:18:26 maybe an hour if you're lucky, and you get half an hour to two hours of reading. And what do you think the long-term consequences of that are? And then also, what is the long-term consequences, let's say, on health if they're making decisions you know are just not good, like the ice cream thing at 9 p.m.?
Starting point is 00:18:43 How do you manage that in your mind? I think whatever age you're at, whatever part you're at in life, you're still always struggling with your own habits. I think all of us, for example, still eat food and feel guilty or wanna eat something that we shouldn't be eating, and we're still always evolving our diets,
Starting point is 00:18:58 and kids are the same. So my oldest has already, he passed on the ice cream last time, and he said, I wanna eat healthier, because finally I managed to get through to him and persuade him that he should be healthier. My younger kids will eat it, but they'll eat a limited amount. My middle kid will sometimes eat some.
Starting point is 00:19:12 Okay. So you're not, so if they say something, you'll enable it, but then you'll guide, you'll be like, Hey, listen, like, this is not the choice I would make. I don't think, but if you want it, I do it. Yeah. I'll try it, but you also have to be careful where you don't want to intimidate them and you don't want to be so overbearing that then they just view dad as like controlling.
Starting point is 00:19:29 I find this so fascinating. And so what do you think happens to these kids? Like, I'm sure you have a vision of what they'll be like when they're fully formed adults. Like, what is that vision? I try not to, they're going to be who they're going to be. This is kind of how I grew up. I kind of did what I wanted.
Starting point is 00:19:44 to they're gonna be who they're gonna be. This is kind of how I grew up. I kind of did what I wanted. I would rather they have agency than turn out exactly the way I want because agency is the hardest thing right? Having control over your own life, making your own decisions and I want them to be happy. I have a very happy household. What is the Plato, what's Plato's goal? Eudaimonia? Right like the happy one. Like the fulfillment, this constant, is, what's Plato's goal? Eudaimonia, right? Like the- Eudaimonia, yeah, the happy one, Aristotle. Like the fulfillment, this constant, is that what you want for them? I don't really want anything for them.
Starting point is 00:20:13 I just want them to be free and their best selves. God damn. Chamath is worrying about details. He's got like 17 kids now, I don't know if you know, but Chamath has got like a whole bunch of things. I love this interview because the guy made a really interesting point, which was they're going to have to make these decisions at some point. They're going to have to learn the pros and cons, the upside, the downside to all these
Starting point is 00:20:39 things, eating iPad and the quicker you get them to have agency to make these decisions for themselves with knowledge to ask questions, the more security they have. downside to all these things, eating, iPad, and the quicker you get them to have agency to make these decisions for themselves with knowledge to ask questions, the more secure it will be. I found it a fascinating discussion. I like cause and effect, especially in teenagers. Now that I have a teenager, it's really good for them to learn, hey, you know, if you don't do your homework, you have a problem. And then you got to solve that problem. How are we going to solve that problem? So
Starting point is 00:21:05 I like to present it as what's your plan? Anytime they have a problem, eight year old kids, 15, or kids, I just say, what's your plan to solve this? And then I like to hear their plan. And let me know if you want to brainstorm it. But I thought it was a very interesting, super interesting discussion. I would say overall, my kids are very happy. The household is very happy. Everybody household is very happy. Everybody gets along.
Starting point is 00:21:26 Everybody loves each other. Some of them are way ahead of their peers. Nobody is behind in anything that matters. Nobody seems unhealthy in any obvious way. No one has abridged eating habits. I haven't even found really an abridged behavior that's out of line. So it's all good. Self-correcting.
Starting point is 00:21:45 It's like a... I worry a lot about this iPad situation. I see my kids on an iPad and it's almost like, unless they're doing an interactive project, if they end up watching... Says the guy who has a video game theme in the background. Well, that was interactive, right? And who probably grew up playing video games nonstop
Starting point is 00:22:04 and probably spends nine hours a day on his screen, just called a phone. So yeah, it's the same thing, man. Well, I mean, I feel like watch, but do they watch shows, Nivau? No, no, there's a hypocrisy to picking up your phone and then saying to your kid, no, you can't use your iPad. I grew up playing games non-stop and video games
Starting point is 00:22:22 when I was older, and I was an avid gamer until just a few years ago. Well, no, I mean, I'm not criticizing the iPad. I was obviously on a computer since I was four years old, so I totally get it. And I think the question for me is like, but I didn't have the ability to play a 30-minute show and then play the next 30-minute show and the next 30-minute show and then sit there for two hours and just have a show playing the whole time. I was, you know, interacting on the computer and doing stuff and building stuff,
Starting point is 00:22:51 which was a little different for me from a use case perspective. We did used to control their YouTube access, although now we don't do that. The only thing I ask them is that they put on captions when they're watching YouTube so it helps their reading. They learn to read faster. That's a good tip. Yeah, I like that one. I will say that one of my kids is really into YouTube, the other two are not.
Starting point is 00:23:12 Like they just got over it. And to the extent that they use YouTube, it's mostly because they're looking up videos on their favorite games. They want to know how to be better at a game. All right, let's keep moving through this docket. We have David Sachs with us here. So David, give us your philosophy of parenting. Okay, next moving through this docket. We have David sacks with us here. So David, give us your philosophy of parenting. Okay, next item on the docket. Let's go.
Starting point is 00:23:30 So that's the real issues. parenting show, the parenting show. I asked David, what's your parenting philosophy? He said, Oh, well, I set up their trust four years ago. So he's good. Trust to set up everything's good. I set up their trust four years ago. So he's good. Trust is set up. Everything's good. Check. Grat. You're all set, guys. Let me know how it works out. All right.
Starting point is 00:23:54 Speaking of working out, we've got a vice president who isn't cuckoo for Cocoa Puffs, and who actually understands what AI is. JD Vance gave a great speech. I watched it myself. He talked about AI in Paris. This was on Tuesday at the AI Action Summit, whatever that is. And he gave a 15-minute banger of a speech. He talked about overregulating AI and America's intention to dominate this. And we happen to have with us, Naval, the czar, the czar of AI. So before I go into all the details about the speech, I don't want to steal your thunder. Sachs, this, this speech had a lot of verbiage, a lot of ideas that I've heard before that
Starting point is 00:24:36 maybe we've all talked about, maybe tell us a little bit about how this all came together and how proud you are. I mean, gosh, having a vice president who understands AI is just, it's mind blowing. He could speak on a topic that's topical credibly. This was an awesome moment for America, I think. What are you implying there, J. Cal? I'm implying you might've workshopped it with him. No.
Starting point is 00:25:00 Or that he's smart, both of those things. The vice president wrote the speech or at least directed all of it So the ideas came from him. I'm not gonna take any credit whatsoever for this. Okay. Well, it was on point Maybe you could talk about it was on point. I think it was a very well crafted and well delivered speech He made four main points About the Trump administration's approach to AI. He's going to ensure this is point one that American AI approach to AI, he's going to ensure this is point one that American AI continues to be the gold standard. Fantastic check
Starting point is 00:25:26 to he says that the administration understands that excessive regulation could kill AI just as it's taking off. And he did this in front of all the EU elites who love regulation, did it on their home court. And then he said, number three, AI must remain free from ideological bias, as we've talked about here on this program. And then number four, AI must remain free from ideological bias, as we've talked about here on this program. And then number four, the White House, he said, will, quote, maintain a pro worker growth
Starting point is 00:25:52 path for AI so that it can be a potent tool for job creation in the US. So what are your thoughts on the four major bullet points and his speech here in Paris? Well, I think that the vice president, you knew he was going to deliver an important speech as soon as he got up there and said that I'm here to talk about not AI safety, but AI opportunity. And to understand what a bracing statement that was, and really almost like a shot across the bow,
Starting point is 00:26:22 you have to understand the history and context of these events. For the last couple of years, the last couple of these events have been exclusively focused on AI safety. The last in-person event was in the UK at Bletchley Park, and the whole conference was devoted to AI safety. Similarly, the European AI regulation obviously is completely preoccupied with safety and trying to regulate away safety risks before they happen. Similarly, you had the Biden EO, which was based around safety. And then you have just the whole media coverage around AI, which is preoccupied with all the risks from AI. So to have the vice president get up there and say right off the bat that
Starting point is 00:27:01 there are other things to talk about in respect to AI besides safety risks that actually there are huge opportunities there was a breath of fresh air and like I said kind of a shot across the bow and yeah you could almost see some of the Eurocrats they needed their fainting couches after that. Eurocrats. Trudeau looks like his dog just died. So I think that was just a really important statement right off the bat to set the context for the speech, which
Starting point is 00:27:30 is AI is a huge opportunity for all of us. Because really, that point just has not been made enough. And it's true, there are risks. But when you look at the media coverage and when you look at the dialogue that the regulators have had around this, they never talk about the opportunities. It's always just around the wrist. So I think that was a very important corrective. And then like you said, he went on to say that the United States has to win
Starting point is 00:27:53 this AI race, we want to be the gold standard, we want to dominate. That was my favorite part. Yeah. And by the way, that language about dominating AI and winning the global race, that is in President Trump's executive order from week one. So this is very much elaborating on the official policy of this administration. And the vice president then went on to say that he specified how we would do that, right? We have to win some of these key building block technologies. We want to win in chips, we want to win in AI models, we want to win in applications.
Starting point is 00:28:22 He said we need to build, we need to unlock energy for these companies. And then most of all, we just need to be supportive towards them as opposed to regulating them to death. And he had a lot to say about the risk of overregulation, how often it's big companies that want regulation. He warned about regulatory capture, which our friend Bill Gurley would like. And he said that so basically having less regulation can actually be more fair, can create a more level playing field for small companies as well as big companies.
Starting point is 00:28:52 And then he said to the Europeans that we want you to be partners with us. We want to lead the world, but we want you to be our partners and benefit from this technology that we're going to take the lead in creating, but you also have to be a good partner to us. And then he specifically called out the overregulation that Europeans have been engaged in. He mentioned the Digital Services Act, which has acted as like a speed trap for American companies. It's American companies who've been overregulated and fined by these European regulations because the truth of the matter is that it's American technology companies that are winning the race.
Starting point is 00:29:30 When Europe passes these owners' regulations, they fall most of all on American companies. He's basically saying, we need you to rebalance and correct this because it's not fair and it's not smart policy, and it's not not gonna help us collectively win this AI race. And that kind of brings me just to the last point is I don't think he mentioned China by name, but clearly he talked about adversarial countries who are using AI to control their populations, to engage in censorship and thought control.
Starting point is 00:29:59 And he basically painted a picture where it's like, yeah, you could go work with them or you could work with us. And we have hundreds of years of shared history together, we believe in things like free speech, hopefully, and we want you to work with us. But if you are going to work with us, then you have to cooperate and we have to create a reasonable regulatory regime. And of all, did you see the speech and your thoughts just
Starting point is 00:30:22 generally on JD Vance and having somebody like this, you know, representing us and wanting to win American exceptionalism? Very surprising, very impressive. I thought he was polite, optimistic, and just very forward looking. Just it's what you would expect an entrepreneur or a smart investor to say. So I was very impressed. I think the idea that America should win, great. I think that we should not regulate.
Starting point is 00:30:47 I also agree with, I'm not an AI doomer. I don't think AI is gonna end the world. That's a separate conversation, but there's just a religion that comes along in many faces, which is that, oh, climate change is gonna end the world. AI is gonna end the world. Asteroid is gonna end the world. COVID-19 is gonna end the world.
Starting point is 00:31:02 And it just has a way of fixating your attention, right? It captures everybody's attention at once. It's a very seductive thing. And I think in the case of AI, it's really been overplayed by incentive bias, motivated reasoning by the companies who are ahead and they want to pull up the ladder behind them. I think they genuinely believe it. I think they genuinely believe that there's safety risks, but I think they're motivated to believe in those safety risks and then they pass that along.
Starting point is 00:31:24 But it's kind of a weird position because they have to say, Oh, it's so dangerous that you shouldn't just let open source go at it and you should let just a few of us work with you on it. But it's not so dangerous that a private company can't own the whole thing. Right. Cause it was truly the Manhattan project. If they were building nuclear weapons, you wouldn't want one company to own that. Sam Altman's famously said that AI will capture the light cone of all future value.
Starting point is 00:31:48 In other words, like all value ever created at the speed of light from here will be captured by AI. So if that's true, then I think open source AI really matters and little tech AI really matters. The problem is that the nature of training these models is highly centralized. They benefit from supercomputer clustered compute. So it's not clear how any decentralized model can compete. So to me, the real issue boils down to is how do you push AI forward while not having just a very small number of players control the entire thing? And we thought we had that solution with the original OpenAI, which was a nonprofit and
Starting point is 00:32:23 was supposed to do it for humanity. But now because of they want to incentivize the team and they want to raise money, they have to privatize at least a part of it. Well, that's not clear to me why they need to privatize the whole thing. Like why do you need to buy out the nonprofit portion? You could leave a nonprofit portion and you could have the private portion for the incentives. But I think that the real challenge is how do you keep AI from naturally centralizing because all the economics and technology underneath are centralizing in nature.
Starting point is 00:32:50 If you really think you're going to create God, do you want to put God on a leash with one entity controlling God? That to me is the real fear. I'm not scared of AI. I'm scared of what a very small number of people who control AI do to the rest of us for our own good Because that's how it always works. So well said probably should go with the Greek model having many gods and heroes as well free Berg You heard the JD Vance speech. I assume. What are your thoughts on over regulation and
Starting point is 00:33:18 Maybe to Neval's point one person owning this versus open source. I think that there's this kind of big definition of social balance right now on what I would call techno-optimism and techno-pessimism. Generally, people sort of fall into one of those two camps. Generally speaking, techno-optimists, I would say, are folks that believe that accelerating outcomes with AI, with automation, with bioengineering, manufacturing, semiconductors, quantum computing, nuclear energy, et cetera, will usher in this era of abundance by creating leverage, which is what technology gives us. Technology will make things cheaper, and it will be deflationary, and it will give everyone more, so it creates abundance.
Starting point is 00:34:05 The challenge is that people who already have a lot worry more about the exposure to the downside than they desire the upside. And so, you know, the techno pessimists are generally like the EU and large parts, frankly, of the United States are worried about the loss of X, the loss of jobs, the loss of this, the loss of that, whereas countries like China and India are more excited about the opportunity to create wealth, the opportunity to create leverage, the opportunity to create abundance for their
Starting point is 00:34:36 people. You know, GDP per capita in the EU, $60,000 a year, GDP per capita in the United States, like $82,000, but GDP per capita in India is $2,500 and China is $12,000 a year, GDP per capita in the United States, like $82,000. But GDP per capita in India is $2,500 and China is $12,600. There's a greater incentive in those countries to manifest upside than there is for the United States and the EU who are more worried about manifesting downside. And so it is a very difficult kind of social battle that's underway. I do think like over time, those governments and those countries and those social systems
Starting point is 00:35:09 that embrace these technologies are going to become more capitalist. And they're going to require less government control and intervention in job creation, the economy, payments to people, and so on. And the countries that are more techno pessimistic are unfortunately going to find themselves asking for greater government control, government intervention in markets, governments creating jobs, government making payments to people, governments effectively running the economy. My personal view, obviously, is that I'm a very strong advocate for technology acceleration, because I think in nearly every case in human history, when a new technology has emerged, we've largely found ourselves assuming that the technology works in the framework of today or of yesteryear.
Starting point is 00:35:52 The automobile came along and no one envisioned that everyone in the United States would own an automobile and therefore you would need to create all of these new industries like mechanics and car dealerships, roads, all the people servicing building roads and all the people servicing building roads and all the other industry that emerged. Motels. And it's very hard for us to sit here today and say, okay, AI is going to destroy jobs. What's it going to create and be right?
Starting point is 00:36:15 I think we're very likely going to be wrong whatever estimations we give. The area that I think is most underestimated is the large technical projects that seem technically infeasible today that AI can unlock. For example, habitation in the oceans. It's very difficult for us to envision creating cities underwater and creating cities in the oceans or creating cities on the moon or creating cities on Mars or finding new places to live.
Starting point is 00:36:39 Those are like technically, people might argue, oh, that sounds stupid, I don't wanna go do that. But at the end of the day, human civilization will drive us to wanna do that. But at the end of the day, like human civilization will drive us to want to do that. But those technically are very hard to pull off today. But AI can unlock a new set of industries to enable those transitions.
Starting point is 00:36:53 So I think we really get it wrong when we try and assume the technology as a transplant for last year or last century. And then we kind of become techno pessimists because we're worried about losing what we have. Are you a techno pessimist? Are you optimist? Because you bring up the downside
Starting point is 00:37:08 of an awful lot here on the program, but you are working every day in a very optimistic way to breed, you know, better strawberries and potatoes for folks. So you're a little bit of a... No, I have no techno pessimism whatsoever. I try and point out why the other side is acting the way they are.
Starting point is 00:37:22 Got it, okay. Putting it in full context. And what I'm trying to highlight is I think that that framework is wrong. I think that that framework of trying to transplant new technology to the old way of things operating is the wrong way to think about it. And it creates this, you know, because of this manifestation about worrying about downside, it creates this fear that creates regulation like we see in the EU. And as a result, China's GDP will scale while the EUs will stagnate if that's
Starting point is 00:37:46 where they go. That's my assessment, or my opinion of what will happen. And Chamath, you want to wrap this up for us? What are your thoughts on JD? I'll give you two. Okay, the first is I would say, this is a really interesting moment where I would call this the tale of two vice presidents. Very early in the Biden administration, Kamala was dispatched on
Starting point is 00:38:06 an equally important topic at that time, which was illegal immigration, and she went to Mexico and Guatemala. And so you actually have a really interesting A-B test here. You have both vice presidents dealing with what were in that moment incredibly important issues. And I think that JD was focused, he was precise, he was ambitious. And even the part of the press that was very supportive of Kamala couldn't find a lot of very positive things to say about her. And the feedback was, it was meandering, she was ducking questions, she didn't answer the questions that she was asked very well. And it's so interesting because it's a bit of a microcosm then to what happened over these next four years and her campaign,
Starting point is 00:38:54 quite honestly, which you could have taken that window of that feedback and unfortunately for her, it just continued to be very consistent. So that was one observation I had because I heard him give the speech, I heard her, and I had this kind of moment where I was like, wow, two totally different people. The second is on the substance of what JD said. I said this on Tucker and I'll just simplify all of this into a very basic framework, which is, if you want a country to thrive,
Starting point is 00:39:24 it needs to have economic supremacy. And it needs to have military supremacy. In the absence of those two things, societies crumble. And the only thing that underpins those two things is technological supremacy. And we see this today. So on Thursday, what happened with Microsoft, they had a $24 billion contract with the United States Army to deliver some whiz bang thing. And they realized that they couldn't deliver it. And so what did they do? They went to
Starting point is 00:39:56 Anderil. Now, why did they go to Anderil? Because Anderil has the technological supremacy to actually execute. A few weeks ago, we saw some attempts at technological supremacy to actually execute. A few weeks ago, we saw some attempts at technological supremacy from the Chinese with respect to DeepSeek. So I think that this is a very simple existential battle. Those who can harness and govern the things that are technologically superior will win
Starting point is 00:40:21 and it will drive economic vibrancy and military supremacy, which then creates safe, strong societies. That's it. So from that perspective, JD nailed it. He saw the forest from the trees. He said exactly what I think needed to be said and put folks on notice that you're either on the ship or you're off the ship. And I think that that was really good. Yeah. And there was like a little secondary conversation that emerged sacks that I would love to engage you with if you're willing, which is this Civil War quote unquote between maybe Maga 1.0 Maga 2.0 techies were in the Maga party like
Starting point is 00:41:02 ourselves. And maybe the core MAGA folks, we can pull up the tweet here in JD's own word, and he's been engaging people in his own words, it's very clear that he's writing these tweets and distinct difference between other politicians in this administration, and they just tell you what they think. Here it is, I'll try and write something to address this in detail says JD Vance's tweet, but I think this civil war is overstated. Though yes, there are some real divergences between the populace, I would describe that as MAGA and the techies, but briefly in general, I dislike substituting American labor for cheap labor, my views on
Starting point is 00:41:40 immigration and offshoring flow from this. I like growth and productivity gains. And this informs my view on tech and regulation when it comes to AI. Specifically, the risks are number one overstated to your point, Neval, or too difficult to avoid. One of my many real concerns, for instance, is about consumer fraud. That's a valid reason to worry about safety. But the other problem is much worse. If a pure nation is six months ahead of the US on AI, again, I'll try and say more. And this is JD going right at, I think one of the more controversial topics, Sachs, that the administration is dealing with and has dealt with when it comes to immigration and tech, because these two things
Starting point is 00:42:22 are dovetailing each other. If we lose millions of driver jobs, which we will in the next 10 years, just like we lost millions of cashier jobs, well, that's going to impact how our nation and many of the voters look at the border and immigration, we might not be able to let as many people immigrate here. If we're losing millions of jobs to AI and self-driving cars. What are your thoughts on him engaging this directly, Sachs? Well, the first point he's making there is about wage pressure, right? Which is when you throw
Starting point is 00:42:54 open our borders or you throw open American markets to products that can be made in foreign countries by much cheaper labor that's not held to the same standards, the same minimum wage or the same union rules or the same safety standards that American labor is, and has a huge cost advantage, then you're creating wage pressure for American workers. And he's opposed to that. And I think that is an important point because I think the way that the media or neoliberals like to portray this argument is that somehow MAGA's resistance to unlimited immigration is somehow based on xenophobia or something like that. No, it's based on bread and butter, kitchen
Starting point is 00:43:30 table issues, which is if you have this ridiculous open border policy, it's inevitably going to create a lot of wage pressure for people at the bottom of the pyramid. So I think JD is making that argument. But, and this is point two, he's saying I'm not against productivity growth. So technology is good because it enables all of our workers to improve their productivity. And that should result in better wages because workers can produce more. The value of their labor goes up if they have more tools to be productive. So there's no contradiction there. And I think he's explaining why there isn't a contradiction. A point I would add, he doesn't make this point in that tweet, but I would add is that one of the problems that we've had over the last 30 years is that we have had tremendous
Starting point is 00:44:15 productivity growth in the US, but labor has not been able to capture it. All that benefit has basically gone to capital or to companies. And I think a big part of the reason why is because we've had this largely unrestricted immigration policy. So I think if you were to tamp down on immigration, if you were to stop the illegal immigration, then labor might be able to capture more of the benefits of productivity growth. And that would be a good thing.
Starting point is 00:44:39 It'd be a more equitable distribution of the gains from productivity and from technology. And that I think would help tamp down this growing conflict that you see between technologists and the rest of the country, or certainly the heartland of the country. Neville, this is a... Oh, you want to add anything else, David? Sorry. Well, I think just the final point he makes in that tweet is that he talks about how we
Starting point is 00:45:05 live in a world in which there are other countries that are competitive. And specifically, he doesn't mention China, but he says, we have a peer competitor. And it's going to be a much worse world if they end up being six months ahead of us on AI rather than six months behind. That is a really important point to keep in mind. I think that the whole Paris AI Summit took place against the backdrop of this recognition because just a few weeks ago we had deep sea and it's really clear that China is not a year behind us. They're hot on our heels or only maybe months behind us. And so if we hobble ourselves with unnecessary regulations,
Starting point is 00:45:39 if we make it more difficult for our AI companies to compete, that doesn't mean that China's going to follow suit and copy us. They're going to take advantage of that fact and our AI companies to compete. That doesn't mean that China's gonna follow suit and copy us. They're gonna take advantage of that fact and they're gonna win. All right, Naval, this seems to be one of the main issues of our time. Four of the five people on this podcast right now
Starting point is 00:45:54 are immigrants. So we have this amazing tradition in America. This is a country built by immigrants for immigrants. Do you think that should change now in the face of job destruction, which I know you've been tracking, self-driving pretty acutely, we both have an interest there, I think over the years,
Starting point is 00:46:13 you know, what's the solution here if we're gonna see a bunch of job displacement, which will happen for certain jobs, we all kind of know that, should we shut the border and not let the next Naval, Chamath, Sachs, and Friedberg into the country? Well, let me declare my biases upfront. I'm a first-generation immigrant.
Starting point is 00:46:31 I moved here when I was nine years old, or rather, my parents did, and then I'm a naturalized citizen. So obviously, I'm in favor of some level of immigration. That said, I'm assimilated. I consider myself an American first and foremost. I bleed red, white and blue. I believe in the Bill of Rights and the Constitution, first and second and fourth and all the proper amendments.
Starting point is 00:46:54 I get up there every July 4th and I deliberately defend the Second Amendment on Twitter, at which point half my followers go bananas. You know, because you're not supposed to. I'm supposed to be a good immigrant and carry the usual set of coherent leftist policies, globalist policies. So I think that legal high-skill immigration with room and time for assimilation makes sense. You want to have a brain drain on the best and brightest coming to the freest country
Starting point is 00:47:23 in the world to build technology and to help civilization move forward. And, you know, as Chamath was saying, economic power and military power is downstream of technology. In fact, even culture is downstream of technology. Look at what the birth control pill did, for example, to culture, or what the automobile did to culture, or what radio and television did to culture and then the internet. So technology drives everything. And if you look at wealth, wealth is a set of physical transformations that you can effect. And that's a combination of capital and knowledge. And the bigger input to that is knowledge. And so the US has become the home of knowledge creation, thanks to bringing in the best and brightest. You could even argue DeepSeek. Part of the reason why we lost that is because a bunch of those kids, they studied in the US, but then we sent them back
Starting point is 00:48:08 home. So I think you absolutely- Is that actually accurate? They were- Yeah, yeah, yeah. Some, a few of them. Really? Oh my God, that's like exhibit A. Wow. I didn't know that. So I think you absolutely have to split skilled, assimilated immigration, which is a small set. And it has to be both. They have to both be skilled and they have to become Americans. That oath is not meaningless, right? It has to mean something. So, skilled, assimilated immigration, you have to separate that from just open borders, whoever can wander in, just come on in. That latter part makes no sense. If the Biden administration had only been letting in people with 150 IQs,
Starting point is 00:48:42 we wouldn't have this debate right now. Yeah, absolutely. The reason why we're having this debate is because they just opened the border and let millions and millions of people in. It was to their advantage to conflate legal and illegal immigration. So every time you would be like, well, we can't just open the border. You just say, well, what about Elon? What about this? And they would just parade.
Starting point is 00:49:00 If they were just letting in the Elons and the Jensens and... Friedbergs. We wouldn't be having the same conversation today. The correlation between open borders and wage suppression is irrefutable. We know that data. And I think that the Democrats, for whatever logic, committed an incredible error in basically undermining their core cohort. I want to go back to what you said because I think it's super important. There is a new political calculus on the field and I agree with you. I think that the three cohorts of the future are the asset-light working and middle class,
Starting point is 00:49:41 that's cohort number one. There are probably 100 to 150 million of those folks. Then there are patriotic business owners, and then there's leaders in innovation. Those are the three. And I think that what MAGA gets right is they found the middle ground that intersects those three cohorts of people. And so every time you see
Starting point is 00:50:05 this sort of left versus right dichotomy, it's totally miscast. And it sounds discordant to so many of us because that's not how any of us identify. Right? And I think that that's a very important observation because the policies that we adopt will need to reflect those three cohorts. What is the common ground amongst those three? And on that point, Naval is right. There's not a lot that those three would say is wrong with a very targeted form of extremely useful legal immigration of very, very, very smart people who agree to assimilate and be a part of America.
Starting point is 00:50:39 I mean, I'm so glad you said it the way you said it. Like I remember growing up where my parents would try to pretend that they were in Sri Lanka. And sometimes I would get so frustrated. I'm like, if you want to be in Sri Lanka, go back to Sri Lanka. I want to be Canadian because it was easier for me to make friends.
Starting point is 00:50:58 It was easier for me to have a life. I was trying my best. I wanted to be Canadian. And then when I moved to the United States 25 years ago, I wanted to be American. And I feel that I'm American now, and I'm proud to be an American. And I think that's what you want.
Starting point is 00:51:12 You want people that embrace it. Doesn't mean that we can't dress up in a shalwar kameez every now and then. But the point is, what do you believe? And where is your loyalty? Friedberg, we used to have this concept of a melting pot, of assimilation, and that was a good thing. Then it became cultural appropriation.
Starting point is 00:51:31 We kind of made a right turn here. Where do you stand on this, recruiting the best and brightest and forcing them to assimilate, making sure that they're down with... Jason. Like, find the people that care to be here. Yeah, let me re-say that. I reject the premise of this whole conversation. Wait, hold on. Look, I'm a first-generation American who moved here when I was five and became a citizen when I was 10. And yes,
Starting point is 00:51:56 I'm fully American. And that's the only country I have any loyalty to. But the premise that I reject here is that somehow an AI conversation leads to an immigration conversation because millions of jobs are going to be lost. We don't know that. That's also true. I agree. You're making a huge assumption. I completely agree. That's buying into the doomerism that AI is going to wipe out millions of jobs. That is not evidence. I think it's going to create more jobs than any of us can imagine. Have any jobs been lost by AI? Let's be real. We've had AI for two and a half years
Starting point is 00:52:26 and I think it's great, but so far it's a better search engine and it helps high school kids cheat on their essays. I mean, come on. You don't believe that self-driving is coming? Hold on a second, Sachs, you don't believe that millions- But hold on, those driver jobs weren't even there 10 years ago.
Starting point is 00:52:42 Uber came along and created all these driver jobs. DoorDash created all these driver jobs. So what technology does, yes, technology destroys jobs, but it replaces them with opportunities that are even better. And then either you can go capture that opportunity yourself or an entrepreneur will come along and create something that allows you
Starting point is 00:52:58 to capture those opportunities. AI is a productivity tool. It increases the productivity of a worker. It allows them to do more creative work and less repetitive work. As such, it makes them more valuable. Yes, there is some retraining involved, but not a lot. These are natural language computers. You can talk to them in plain English.
Starting point is 00:53:13 They talk back to you in plain English. But I think David is absolutely right. I think we will see job creation by AI that will be as fast or faster than job destruction. You saw this even with the internet. Like YouTube came along. Look at all these YouTube streamers and influencers. That didn't used to be a job. New jobs, really opportunities, because 'job' is the wrong word. A job implies someone else has to give it to me and sort of like they're handed out in a zero-sum game. Forget all that. It's opportunities. After COVID, look at how many people are making money by working from home in mysterious little ways on the internet that you can't even quite grasp. Here's the way I categorize it, okay? It's that whenever you have a new technology,
Starting point is 00:53:32 Job implies someone else has to give it to me and sort of like they're handed out to zero-sum game. Forget all that. It's opportunities. After COVID, look at how many people are making money by working from home in mysterious little ways on the internet that you can't even quite grasp. Here's the way I categorize it, okay? Is that whenever you have a new technology,
Starting point is 00:53:53 you get productivity gains, you get some job disruption, meaning that part of your job may go away, but then you get other parts that are new and hopefully more elevated, more interesting. And then there is some job loss. I just think that the third category will follow the historical trend, which is that the first two categories are always bigger, and you end up with more net productivity and more net wealth creation.
Starting point is 00:54:17 And we've seen no evidence to date that that's not going to be the case. Now it's true that AI is about to get more powerful. You're going to see a whole new wave of what are called agents this year, agentic products that are able to do more for you. But there's no evidence yet that those things are going to be completely unsupervised and replace people's jobs. I think that we have to see how this technology evolves. I think one of the mistakes of, let's call it the European approach, is assuming that you can predict the future with perfect accuracy, or such good accuracy that you can create regulations today that are gonna avoid all these risks in the future.
Starting point is 00:54:52 And we just don't know enough yet to be able to do that. That's a false level of certainty. I agree with you. And the companies that are promulgating that view are, as Naval said, those that have an economic vested interest in at least convincing the next incremental investor that this could be true, because they want to make the claim
Starting point is 00:55:11 that all the money should go to them so they could hoover up all the economic gains. And that is the part of the cycle we're in. So if you actually stratify these reactions, there's the small startup companies in AI that believe there's a productivity leap to be had and that there's going to be prosperity. Everybody on the sidelines watching and then a few companies that have an
Starting point is 00:55:32 extremely vested interest in them being a gatekeeper because they need to raise the next 30 or 40 billion dollars trying to convince people that that's true. And if you view it through that lens, you're right, Sax. We have not accomplished anything yet that proves that this is going to be cataclysmically bad. And if anything right now, history would tell you it's probably gonna be like the past,
Starting point is 00:55:51 which is generally productive and a creative society. Yeah, and just to bring it back to JD's speech, which is where we started, I think it was a quintessentially American speech in the sense that he said we should be optimistic about the opportunities here, which I think is basically right. And we want to lead, we want to take advantage of this, we don't want to hobble it, we don't even fully know what it's going to be yet.
Starting point is 00:56:18 We are going to center workers, we want to be pro-worker. And I think that if there are downsides for workers, then we can mitigate those things in the future. But it's too early to say that we know what the program should be. It's more about a statement of values at this point. Do you think it's too early, Freiburg, given optimists and all these robots being created, what we're seeing in self driving, you've talked about the ramp up with Waymo to actually say we will not see millions of jobs. And millions of people get displaced from those jobs. What do you think Freiburg? I'm curious your thoughts because that is the
Starting point is 00:56:54 counter argument. My experience in the workplace is that AI tools that are doing things that an analyst or knowledge worker was doing with many hours in the past are allowing them to do it in minutes. That doesn't mean that they spend the rest of the day doing nothing. What's great for our business and for other businesses like ours that can leverage AI tools is that those individuals can now do more. And so our throughput, our productivity as an organization has gone up and we can now create more things faster.
Starting point is 00:57:28 So whatever the product is that my company makes, we can now make more things more quickly. We can do more development. You're seeing it on the ground, correct? And I'm seeing it on the ground. And I don't think that this, like, extrapolation of how bad AI will be for jobs is the right framing as much as it is about an acceleration of productivity.
Starting point is 00:57:47 And this is why I go back to the point about GDP per capita and GDP growth. Countries, societies, areas that are interested, or industries that are interested in accelerating output, in accelerating productivity, the ability to make stuff and sell stuff, are going to rapidly embrace these tools because it allows them to do more with less. And I think that's what I really see on the ground. And then the second point I'll make is the one that I mentioned earlier, and I'll wrap up with a third point,
Starting point is 00:58:13 which is I think we're underestimating the new industries that will emerge drastically, dramatically. There is going to be so much new shit that we are not really thinking deeply about right now that we could do a whole other two-hour brainstorming session on what AI unlocks in terms of large-scale projects that are traditionally, or typically today, held back because of the constraints on the technical feasibility of these projects.
Starting point is 00:58:39 And that ranges from accelerating new semiconductor technology to quantum computing to energy systems to transportation to habitation, et cetera, et cetera. There's all sorts of transformations in every industry that are possible as these tools come online. And that will spawn insane new industries. The most important point is the third one, which is we don't know the overlap of job loss
Starting point is 00:59:01 and job creation, if there is one, or the rate at which these new technologies impact and create new markets. But I think Naval is right. I think that what happens in capitalism and in free societies is that capital and people rush to fill the hole of new opportunities that emerge because of AI, and that those grow more quickly than the old bubbles deflate. So if there's a deflationary effect in terms of job need in other industries, I think that the loss will happen slower
Starting point is 00:59:27 than the rush to take advantage of creating new things will happen on the other side. So my bet is probably on the order of, I think new things will be created faster than old things will be lost. I think- And actually, as a quick side note to that, the fastest way to help somebody get a job right now,
Starting point is 00:59:43 if you know somebody in the market who's looking for a job, the best thing you can do is say, hey, go download the AI tools and start talking to them. Just start using them in any way. And then you can walk into any employer in almost any field and say, hey, I understand AI, and they'll hire you on the spot. Exactly.
Starting point is 00:59:58 Naval, you and I watched this happen. We had a front row seat to it. Back in the day, when you were doing Venture Hacks and I was doing Open Angel Forum, we had to, like, fight to find five or 10 companies a month. Then the cost of starting these companies went down. They went down massively, from $5 million to start a company, to two, then to $250K, then to $100K. I think what we're seeing is, like, three things concurrently: you're
Starting point is 01:00:23 going to see all these jobs go away to automation, self-driving cars, cashiers, etc. But we're going to also see static team size at places like Google; they're just not hiring because they're just having the existing bloated employee base learn the tools. But I don't know if you're seeing this, the number of startups able to get a product to market with two or three people and get to a million in revenue is booming. What are you seeing in the startup landscape? Definitely what you're saying in that there's leverage, but at the same time, I think the more interesting part is that new startups are enabled that could not exist otherwise. My last startup, AirChat, could not have existed without AI because we needed the transcription and translation.
Starting point is 01:01:02 Even the current thing I'm working on is not an AI company, but it cannot exist without AI. It is relying on AI. Even at AngelList, we're significantly adopting AI. Everywhere you turn, it's more opportunity, more opportunity, more opportunity. People like to go on Twitter, or the artist formerly known as Twitter, and basically they like to exaggerate. Like, oh my God, we've hit AGI. Oh, my God, I just replaced all
Starting point is 01:01:27 my mid-level engineers. Oh, my God, I've stopped hiring. To me, that's, like, moronic. The two valid ones are the one-man entrepreneur shows, where there's like one guy or one gal and they're scaling up like crazy, or there are people who are embracing AI and are like, I need to hire, and I need to hire anyone who can even spell AI, like anyone who's even used AI. Just come on in, come on in.
Starting point is 01:01:49 Again, I would say the easiest way to see that AI is not taking jobs but creating opportunities is: go brush up on your AI, learn a little bit, watch a few videos, use the AI, tinker with it, and then go reapply for that job that rejected you and watch how they pull you in. In 2023, an economist named Richard Baldwin said, AI won't take your job.
Starting point is 01:02:08 It's someone using AI that will take your job because they know how to use it better than you. And that's kind of become a meme and you see it floating around X, but I think there's a lot of truth in that. As long as you remain adaptive and you keep learning and you learn how to take advantage of these tools, you should do better.
Starting point is 01:02:24 And if you wall yourself off from the technology and don't take advantage of it, that's when you put yourself at risk. Another way to think about it is these are natural language computers. So everyone who's intimidated by computers before should no longer be intimidated. You don't need to program anymore in some esoteric language
Starting point is 01:02:40 or learn some obscure mathematics to be able to use these. You can just talk to them and they talk back to you. That's magic. The new programming language is English. Chamath, you want to wrap us up here on this opportunity, slash displacement, slash chaos? I was going to say this before, but I'm pretty unconvinced anymore that you should bother even learning many of the hard sciences and maths that we used to learn as underpinnings.
Starting point is 01:03:08 Like I used to believe that the right thing to do was for everybody to go into engineering. I'm not necessarily as convinced as I used to be because I used to say, well, that's great first principles thinking, et cetera, et cetera. And you're going to get trained in a toolkit that will scale.
Starting point is 01:03:23 And I'm not sure that that's true. I think, like, you can use these agents and you can use deep research, and all of a sudden they replace a lot of that skill. So what's left over? It's creativity, it's judgment, it's history, it's psychology. It's all of these other sorts of soft skills, leadership, communication, that allow you to manipulate these models in constructive ways. Because when you think of, like, the prompt engineering that gets you to great answers, it's actually just thinking in totally different
Starting point is 01:03:48 orthogonal ways and non-linearly. So that's my last thought, which is it does open up the aperture, meaning for every smart mathematical genius, there's many, many, many other people who have high EQ. And all of a sudden this tool actually takes the skill away from the person with just a high IQ and says, if you have these other skills, now you can compete with them equally. And I think that that's liberating for a lot of people. I'm in the camp of more opportunity. You know, I got to watch the movie industry a whole bunch
Starting point is 01:04:20 when the digital cameras came out and more people started making documentaries, more people started making independent film shorts, and then of course the YouTube revolution, people started making videos on YouTube or podcasts like this. And if you look at what happened with, like, the special effects industry as well, we need far fewer people to make a Star Wars movie, to make a Star Wars series, to make a Marvel series. As we've seen now, we can get The Mandalorian and Ahsoka and all these other series with smaller numbers of people, and they look better than, obviously, the original Star Wars series or even the prequels. So there's gonna be so many more opportunities.
Starting point is 01:04:57 We're now making more TV shows, more series, everything we wanted to see of every little character. That's the same thing that's happening with startups. I can't believe that there is an app now, Naval, called Slopes just for skiing. And there are 20 really good apps for just meditation. And there are 10 really good ones just for fasting. Like, we're going down this long tail of opportunity, and there'll be plenty of million to $10 million businesses for us if people learn to use these tools. I love how that's the thing that tips you over. Which one? You get an extra Marvel movie or an extra Star Wars show, so that tips you over. I think for a lot of people it feels great: yeah, AI may take over the world, but I'm gonna get an extra Star Wars movie. I'll be entertained.
Starting point is 01:05:45 So I'm cool with it. Yeah, I mean, are you not entertained? One final point on this is, look, I mean, given the choice between the two categories of techno-optimists and techno-pessimists, I'm definitely in the optimist camp, and I think we should be. But I think there's actually a third category
Starting point is 01:05:59 that I would submit, which is techno-realist, which is technology is going to happen. Trying to stop it is like ordering the tides to stop. If we don't do it, somebody else will. China's going to do it or somebody else will do it. It's better for us to be in control of the technology to be the leader rather than passively waiting for it to happen to us. I just think that's always true.
Starting point is 01:06:25 It's better for businesses to be proactive and take the lead, disrupt themselves instead of waiting for someone else to do it. I think it's better for countries. I think you did see this theme a little bit. These are my own views. I don't want to ascribe them to the vice president, but you did see, I think, a hint of the techno-realism idea in his speech and in his tweet, which is look, AI is going to happen.
Starting point is 01:06:48 We might as well be the leader. If we don't, we could lose in a key category that has implications for national security, for our economy, for many things. So that's just not a world we want to live in. So I think a lot of this debate is sort of academic because whether you're an optimist or pessimist is sort of glass half empty half full. The question is just is it going to happen or not? And I think the answer is yes. So then we want to control it. It's just, you know, let's just boil it down. There's not a tremendous amount of choice
Starting point is 01:07:18 in this, I think. I would agree heavily with one point and I would just tweak another. The point I would agree with is that it's going to happen anyway, and that's what DeepSeek proved. You can turn off the flow of chips to them and you can turn off the flow of talent. What do they do? They just get more efficient and they exported it back to us. They sent us back the best open source model when our guys were staying closed source for
Starting point is 01:07:38 safety reasons. Yeah, exactly. And I think DeepSeek... It's going to come right back to us. Safety of their equity. And the part where I'd tweak a little bit is the idea that we are going to win. By we, when you say America, the problem is that the best way to win is to be as open, as distributed, as innovative as possible. If this all ends up in the control of one company, they're actually going to be slower to innovate than if there's a dynamic system.
Starting point is 01:08:18 And that dynamic system, by its nature, will be open. It will leak to China. It will leak to India. But these things have powerful network effects. We know this about technology. Almost all technologies have network effects underneath. And so even if you are open, you're still going to win and you're still going to control most of it. But Naval, you look at the internet. That was all true for the internet, right? The
Starting point is 01:08:36 internet's an open technology. It's built on open source. But who are the most important internet companies? Who are the dominant companies? All the dominant companies are US companies because they were in the lead. Exactly. Exactly right, exactly right. Because we embrace the open internet. We embrace the open internet. That was different.
Starting point is 01:08:49 Yeah, so there will be benefits for all of humanity and I think the vice president's speech was really clear that, look, we want you guys to be on board, we wanna be good partners. However, there are definitely gonna be winners economically, militarily, and in order to be one of those winners,
Starting point is 01:09:02 you have to be a leader. Who's gonna get to AGI first? First of all, is it going to be open source? Who's going to win? Is it going to be open source or closed source? Who's going to win the day? We're sitting here five, 10 years from now, and we're looking at the top three language models, which is going to be trouble for this. But I don't think we know how to build AGI, but that's a much longer conversation. Okay, put AGI aside. Who's going to have the best model five years from now? I 100% agree with you. I just think it's a different thing. But what we're building are these incredible natural language computers.
Starting point is 01:09:29 And actually, David, in a very pithy way, summarized the two big use cases. It's search and it's homework. It's paperwork. It's really paperwork. And a lot of these jobs that we're talking about disappearing are actually paperwork jobs. They're paperwork shuffling.
Starting point is 01:09:43 These are made-up jobs. Like the federal government, as they're finding out through DOGE, you know, a third of it is like people digging holes with spoons and another third are filling them back up. They're filling out paperwork and then burying it in a mine shaft. They're burying it in a mine shaft, Iron Mountain.
Starting point is 01:09:56 Yeah, so I think a lot of these made up jobs are gonna stick around. And then they're gonna go down the mine shaft to get the paperwork when someone retires and bring it up. You know what, I'm gonna get them some thumb drives. We can increase the throughput of the elevator with some thumb drives. It would be incredible.
Starting point is 01:10:07 What we found out is that the DMV has been running the government for the last 70 years. It's been a compounding. That's really what's going on. The slots. The DMV is in charge. I mean, if the world ends in nuclear war, God forbid, the only thing that's left will be the cockroaches and then a bunch of like government documents. TPS reports.
Starting point is 01:10:25 TPS reports down in a mineshaft. Basically, yeah. Let's take a moment everybody to thank our czar. We miss him. We wish he could be here for the whole show. Thank you czar. Thank you to the czar. Good to see you guys.
Starting point is 01:10:42 We miss you. We miss you little buddy. I wish we could talk about Ukraine, but we're not allowed. Get back to work. We'll talk about it another time. I'll see you in the commissary. Thanks for the invite. Bye.
Starting point is 01:10:54 Oh man, I'm so excited, Naval. Sax invited me to go to the military mess. I'm going to be in the commissary with Sax. No, we didn't, J-Cal. You invited yourself. Be honest. I did. Yes, I did. I put it on his calendar. To keep the conversation moving, let me segue a point that came up that was really important
Starting point is 01:11:05 into tariffs. And the point is, even though the internet was open, the US won a lot of the internet. A lot of US companies won the internet. And they won that because we got there the firstest with the mostest, as they say in the military. And that matters because a lot of technology businesses have scale economies and network effects underneath. Even basic brand-based network effects, if you go back to the late 90s, early 2000s,
Starting point is 01:11:36 very few people would have predicted that we would have ended up with Amazon basically owning all of e-commerce. You would have thought it would have been a perfect competition and very spread out. And that applies to how we ended with Uber as basically one taxi service, or we end up with- Airbnb. Meta, Airbnb. It's just network effects, network effects, network effects rule the world around me. But when it comes to tariffs and when it comes to trade, we act like network effects don't exist. The classic Ricardian comparative advantage dogma says that you should produce what you're best at, I produce what I'm best at and we trade. And then even if you want to charge me more for it, if you want to impose tariffs for
Starting point is 01:12:11 me to ship to you, I should still keep tariffs down because I'm better off. You're just selling me stuff cheaply, great. Or if you want to subsidize your guys, great, you're selling me stuff cheaply. The problem is that is not how most modern businesses work. Most modern businesses have network effects. As a simple thought experiment, suppose that we have two countries, right? I'm China, you're the US. I start out by subsidizing all of my companies and industries that have network effects. So I'll subsidize TikTok, I'll ban your social media, but I'll push mine. I will subsidize my semiconductors,
Starting point is 01:12:44 which tend to have winner-take-all dynamics in certain categories, or I'll subsidize my drones, and then you... BYD. Exactly, BYD, self-driving, whatever. And then when I win, I own the whole market and I can raise prices. And if you try to start up a competitor, then it's too late. I've got network effects. Or if I've got scale economies, I can lower my prices to zero, crash you out of business. No one in their right mind will invest, and I'll raise prices right back up. So you have to understand that certain industries have hysteresis, or they have network effects, or they have economies of
Starting point is 01:13:13 scale. And these are all the interesting ones. These are all the high-margin businesses. So in those, if somebody is subsidizing, or they're raising tariffs against you to protect their industries and let them develop, you do have to do something. You can't just completely back down. What are your thoughts, Chamath, about tariffs and network effects? It does seem like we do want to have redundancy in supply chains. So there are some exceptions here. Any thoughts on how this might play out?
Starting point is 01:13:41 Because, yeah, Trump brings up tariffs every 48 hours, and then it doesn't seem like any of them land. So I don't know, I'm, I'm still on my 72 hour Trump rule, which is whatever he says, wait 72 hours, and then maybe see if it actually comes to pass. Where do you stand on all these tariffs? And tariff talk? Well, I think the tariffs will be a plug. Are they coming? Absolutely. The quantum of them? I don't know. And I think that the way
Starting point is 01:14:05 that you can figure out how extreme it will be, it'll be based on what the legislative plan is for the budget. So there's two paths right now. Path one, which I think is a little bit more likely, is that they're going to pass a slimmed-down plan in the Senate, just on border security and military spending. And then they'll kick the can down the road for probably another three or four months on the budget. Plan two is this one big, beautiful bill that's working its way through the House. And there, they're proposing trillions of dollars of cuts.
Starting point is 01:14:41 In that mode, you're going to need to raise revenue somehow. And especially if you're giving away tax breaks. And the only way to do that is probably through tariffs or one way to do it is through tariffs. My honest opinion, Jason, is that I think we're in a very complicated moment. I think the Senate plan is actually on the margins more likely and better. And the reason is because I think that Trump is better off getting the next 60 to 90 days of data. I mean, we're in a real pickle here. We have persistent inflation. We have a broken Fed. They're totally asleep at the switch.
Starting point is 01:15:18 And the thing that Yellen and Biden did, which in hindsight now was extremely dangerous, is they issued so much short-term paper that in totality, we have $10 trillion we need to finance in the next six to nine months. So it could be the case that we have rates that are like five, five and a quarter, five and a half percent. I mean, that's extremely bad
Starting point is 01:15:46 at the same time as inflation, at the same time as delinquencies are ticking up. So I think tariffs are probably going to happen. But I think that Trump will have the most flexibility if he has time to see what the actual economic conditions will be, which will be more clear in three, four, five months. And so I almost think this big, beautiful bill is actually counterproductive because I'm not sure we're going to have all the data we need to get it right. Friedberg, any thoughts on these tariffs? You've been involved in the global
Starting point is 01:16:23 marketplace, especially when it comes to produce and wheat and all this corn and everything. What do you think the dynamic here is going to be? Or is it saber rattling and a tool for Trump? The biggest buyer of US ag exports is China. Ag exports are a major revenue source, major income source, and a major part of the economy for a large number of states. And so there will be, as there was in the first Trump presidency, very likely very large
Starting point is 01:16:54 transfer payments made to farmers, because China is very likely going to tariff imports or stop making import purchases altogether, which is what happened during the first presidency. When they did that, the federal government, I believe, had transfer payments of north of $20 billion to farmers. This is a not negligible sum, and it's a not negligible economic effect, because there's then a rippling effect throughout the ag economy. So I think that's one key thing that I've heard folks talk about: the activity that's going to be needed to support the farm economy as the US's biggest ag customer disappears.
Starting point is 01:17:31 In the early 20th century, we didn't have an income tax and the federal revenue was almost entirely dependent on tariffs. When tariffs were cut, there was an expectation that there would be a decline in federal government revenue, but what actually happened is volume went up. So lower tariffs actually increased trade, increased the size of the economy. So this is where a lot of economists base their argument: hey, guys, if we do these tariffs, it's actually going to shrink the economy, it's going to cause a reduction in trade. The counterbalancing effect is one that has not been tested in economics, right, which is what's going to happen if simultaneously we reduce the income tax and reduce the corporate income tax and basically increase capital flows through reduced taxation while doing the tariff implementation at the same time. So it's a grand economic experiment. And I think we'll learn a lot about what's going to happen here as this all moves forward. I do think ultimately many of these countries are going to capitulate to some degree and we're going
Starting point is 01:18:27 to end up with some negotiated settlement that's going to hopefully not be too short-term impactful on the economies and the people and the jobs that are dependent on trade. Economy feels like it's in a very precarious place. It does to asset holders. And obviously they've left it in a bad place in the last administration and we shut down the entire country for a year over COVID and the bill for that has come due and that's reflected in inflation. I think there are a couple other points in tariffs. First is it's not just about money.
Starting point is 01:18:56 It's also about making sure we have a functional middle class with good jobs, because if you have a non-tariff world, maybe all the gains go to the upper class and you get an underclass, and then you can't have a functioning democracy when the average person is on one of those two extremes. So I think that's one issue. Another is strategic industries. If you look at it today, probably the largest defense contractor in the world is DJI. They got all the drones.
Starting point is 01:19:19 Even in Ukraine, both sides are getting all their drone parts from DJI. Now, they're getting it through different supply chains and so on, but Ukrainian drones and Russian drones, the vast majority of them are coming through China through DJI. And we don't have that industry. If we have a kinetic conflict right now and we don't have a good drone supply chain internally in the US, we're probably going to lose, because those things are autonomous bullets. That's the future of all warfare. We're buying F-35s and the Chinese are building swarms of nano drones.
Starting point is 01:19:47 At scale. At scale. So we do have to re-onshore those critical supply chains. And what is a drone supply chain? There's not a thing called a drone. It's like motors and semiconductors and optics and lasers and just everything across the board. So I think there are other good arguments for at least reshoring some of these industries. We need them. And the United States is very lucky in that it is very autarkic. We have all the resources, we have all the supplies. We can be upstream of everybody with all the energy. To the extent we're importing any energy, that is a choice we made. That is not because fundamentally we lack the energy.
Starting point is 01:20:23 We had to, right. Yeah, because between all the oil resources and the natural gas and fracking, combined with all the work we've done in nuclear fission and small reactors, we should absolutely be energy independent. We should be running the table on it. We should have a massive surplus. And hey, if you're worried about a couple of million DoorDash and Uber drivers losing their jobs to automation.
Starting point is 01:20:45 Like, hey, there's going to be factories to build these parts for these drones that we're going to need. So there's a lot of opportunity, I guess, for people to. And there is a difference between different kinds of jobs. Those kinds of jobs are better jobs, building difficult things at scale physically that we need for both national security and for innovation. Those are better jobs than paperwork, writing essays
Starting point is 01:21:10 for other people to read. Yeah. Or even driving cars. All right, listen, I want to get to two more stories here. We have a really interesting copyright story that I wanted to touch on. Thomson Reuters just won the first major US AI copyright case.
Starting point is 01:21:23 And fair use played a major role in this decision. This has huge implications for AI companies here in the United States, obviously: OpenAI and the New York Times, Getty Images versus Stability, we've talked about these, but it's been a little while because the legal system takes a little bit of time, and these are very complicated cases, as we've talked about. Thomson Reuters owns Westlaw. If you don't know it, it's kind of like Lexis
Starting point is 01:21:49 Nexis, it's one of the legal databases out there that lawyers use to find cases, etc. And they have a paid product with summaries and analysis of legal decisions. Back in 2020, this is two years before ChatGPT, Thomson Reuters sued a legal research competitor called Ross for copyright infringement. Ross had created an AI-powered legal search engine. Sounds great. But Ross had asked Westlaw if they would license their content, Westlaw said no, and Ross got the material from a third party instead. Reuters, Westlaw, sued Ross in 2020, accusing the company of being vicariously liable for LegalEase's direct infringement. Super important point. Anyway, the judge originally favored Ross on fair use. This week, the judge reversed this ruling and found Ross liable, noting that after further review, fair use does not apply in this case.
Starting point is 01:22:48 This is the first major win. And we debated this. So here's a clip. You know, you heard it here first on the All In Pod. What I would say is, you know, when you look at that fair use doctrine, I've got a lot of experience with it, you know, the fourth factor test, I'm sure you're well aware of this, is the effect of the use on the potential market and the value of the work.
Starting point is 01:23:07 If you look at the lawsuits that are starting to emerge, it is Getty's right to then make derivative products based on their images, I think we would all agree. Stable Diffusion, when they use these open web crawls, that is no excuse to use an open web crawler to avoid getting a license from the original owner of that content. Just because you can technically do it doesn't mean you're allowed to do it. In fact, the open web projects that provide these say explicitly, we do not give you the right to use this, you have to
Starting point is 01:23:34 then go read the copyright laws on each of those websites. And on top of that, if somebody were to steal the copyrights of other people and put it on the open web, which is happening all day long, if you're building a derivative work like this, you still need to go get a license. So it's no excuse that I took some site in Russia that did a bunch of copyright violations and then I indexed them for my training model. So I think this is going to result in a... Can you shoot me in the face and let me know when this happens? Okay.
Starting point is 01:24:01 Oh, great. So the same way, it's a great now. Exactly. No, me too. Yeah, good. Do the same way today. Same way now, exactly. I know, me too. Yeah. Okay, good segment. Let's move on. Well, since these guys don't give a shit about copyright holders. No, I care about it.
Starting point is 01:24:13 Naval, what do you think about, uh, you know, I'm so glad you're here, Naval, to actually talk about the topics these two other guys won't engage with. I'm going to go out on an even thinner limb and say I largely agree with you. I think it's a bit rich to crawl the open web, hoover up all the data, offer direct substitution for a lot of use cases, because, you know, now you start and end with the AI model, it's not even like you link out like Google did. And then you just close off the models for safety reasons. I think if you trained on the open web, your model should be open source. Yeah, absolutely. That would be a fine thing. I have a prediction here. I think this is all going to wind up like the Napster-Spotify case. For people who don't know, Spotify pays I think 65 cents on the dollar to the original underwriters of that content, the music industry, and they figured out a way to make a business where Napster was roadkill.
Starting point is 01:25:02 I think that there is a non-zero chance, like it might be five or 10%, that OpenAI is going to lose the New York Times lawsuit, and they're going to lose it hard, and there are going to be injunctions. And I think the settlement might be that these language models, especially the closed ones, are going to have to pay some percentage of their revenue in a negotiated settlement, half, two-thirds, to the content holders. And this could make the content industry have a massive, massive uplift and a massive resurgence. I think that the problem, there's an example on the other side of this, which is that there's a company that provides technical support for Oracle, a third-party company. And Oracle has tried umpteen times to sue them into oblivion using
Starting point is 01:25:49 copyright infringement as part of the justification. And it's been hanging over the stock for a long time. The company's name is Rimini Street. Don't ask me why it's on my radar, but I just, I've been looking at it. And they lost this huge lawsuit, Oracle won, and then it went to appellate court and then it was all vacated. Why am I bringing this up? I think that the legal community has absolutely no idea
Starting point is 01:26:13 how these models work. Because you can find one case that goes one way and one case that goes the other. And here's what I would say should become standard viewing for anybody bringing any of these lawsuits. There's an incredible video that Andrej Karpathy just dropped where he does, like, this deep dive into LLMs and he explains ChatGPT from the ground up. It's on YouTube. It's three hours. It's excellent. And it's very difficult to watch that and not get to the same conclusion that you guys did.
Starting point is 01:26:46 I'll just leave it at that. I tend to agree with this. There's also a good old video by Ilya Sutskever, who was, I believe, the founding chief scientist or CTO of OpenAI. And he talks about how these large language models are basically extreme compressors. And he models them entirely by their ability to compress.
Starting point is 01:27:05 It's a lossy compression. Exactly. Lossy, lossy compression. Exactly. And Google got sued over fair use back in the day, but the way they managed to get past the argument was they were always linking back to you. They showed you a tiny bit and they sent you the traffic. This is lossy compression. It absolutely is. I'm now on your page. I hate to say this, Jason, I agree with you. You were right. That's all I wanted to hear all
Starting point is 01:27:36 these years. When I saw those videos, I was like, oh, man, Jason was right. Jason was right. Jason was right. Oh my god. No, I've just been through this so many times. I think this is, you know, Rupert Murdoch said we should hold the line with Google and not allow them to index our content without a license. And Google navigated it successfully, and he was not able to get them to stop. I think what's happened now is that the New York Times remembers that.
Starting point is 01:28:09 They all remember losing their content and these snippets and the OneBox to Google, and they couldn't get that genie back in the bottle. I think the New York Times realizes this is their payday. I think the New York Times will make more money from licenses from LLMs than they will make from advertising or subscriptions eventually; this will renew the model. Almost. I think New York Times content is worthless to an LLM. But that's a different story. I think the actual value of content is different.
Starting point is 01:28:37 Well, okay, sure, if you don't, political reasons, whatever. But I can tell you as a user, I loved the Wirecutter. I think you knew Brian and everybody over at the Wirecutter. That was like, yeah, Wirecutter, what a great product. I used to pay for the New York Times. I no longer pay for the New York Times. My main reason was I would go to the Wirecutter. Yeah. And I would just buy whatever they told me to buy. Now I go to ChatGPT, which I pay for. And ChatGPT tells me what to buy based on the Wirecutter. So I'm already paying for it. So I stopped paying for it.
Starting point is 01:29:08 I philosophically disagree with all of your nonsense on this topic. All three of you are wrong. And I'll tell you why. No, number one, if information is out in the open internet, I believe it's accessible and it's viewable. And I view an LLM or a web crawler as basically being a human that's reading and can store information in its brain. If it's out there in the open... If it's behind a paywall, 100 percent. If it's behind some protected password... Wait, wait, wait, wait, David, David. In that case, can a Google crawler just crawl an entire site and serve it on Google? Why can't they do that?
Starting point is 01:29:43 So here's the fair use. The fair use is you cannot copy, you cannot repeat the content. You cannot take the content and repeat it. That is how the law is currently written. But now what I have is I have a tool that can remix it with 50 other pieces of similar content. And I can change the words slightly and maybe even translate it into a different language.
Starting point is 01:30:00 So where does it stop? Do you know the musical artist Girl Talk? We should have done a Girl Talk track here today. He's got weird musical tastes. This guy, okay, here we go. He basically takes small samples of popular tracks and he got sued for the same problem. There was another guy named White Panda, I believe,
Starting point is 01:30:18 had the same problem. Ed Sheeran got sued for this. Yeah, but there are entire sites like Stack Overflow and WikiHow that have basically disappeared now, because you can just swallow them all up and you can just spit it all back out in ChatGPT with slight changes. So I think that the first and fourth test.
Starting point is 01:30:32 But I think the fair use is how much of a slight change is exactly the right question, which is how much are you changing? Yeah, so that's the question. And it actually boils down to the AGI question. Are these things actually intelligent, and are they learning, or are they compressing and regurgitating?
Starting point is 01:30:44 That's the question. I wonder this about humans, and that's why I bring up the White Panda, the Girl Talk in audio, but also visual art. There were always artists, and even in classical music, I don't know if you guys are classical music people, but there's a demonstration of how one composer learned from the next, and that you can actually track the music
Starting point is 01:31:03 as kind of standing on the shoulders of the prior. And the same is true in almost all art forms, in almost all human knowledge, and media, and communication. I think that's right. It's very hard to figure that out. Well, that's exactly right. That's the hard part.
Starting point is 01:31:15 It's very hard to figure that out, which is why I come back to, there's only one of two stable solutions to this. And it's gonna happen anyway. If we don't crawl it, the Chinese will crawl it, right? DeepSeek proved that. So there's only one of two stable solutions. Either you pay the copyright holders, which I actually
Starting point is 01:31:29 think doesn't work, and the reason is because someone in China will crawl it and then just dump the weights. So they can just crawl and dump the compressed weights. Or if you crawl, make it open. At least contribute something back to open source. You crawled open data, contribute it back to open source. And the people who don't want to be crawled,
Starting point is 01:31:47 they're going to have to go to huge lengths to protect their data. Now everybody knows to protect the data. There's not a clean solution. The licensing thing is happening here. I have a book out from Harper Business on the shelf behind me. And I'm getting 2,500 smackaroos for the next three years for Microsoft indexing it. So they're going out and
Starting point is 01:32:08 they're licensing this stuff. And they're going, $2,500. So you're literally... I'm getting $2,500 for three years for a bunch of Harper books to go into an LLM, to go into Microsoft specifically. And you know what, I'm going to sign it, I decided, because I just want to set the precedent. Maybe next time it's 10,000. Maybe next time it's 250. I don't care. I just want to see people have their content respected.
Starting point is 01:32:31 And I'm just hoping that Sam Altman loses this lawsuit and they get an injunction against it. Hey, well, just because he's just such a weasel in terms of, like, making OpenAI into a closed thing. I mean, I like Sam personally, but I think what he did was like the super weasel move of all time for his own personal benefit. If he... and this whole lying, like, oh, I have no equity, I get health care. He does it for the love. No, bro, he doesn't. But he does
Starting point is 01:32:56 it for the love. What was the statement? He does it for... I do it for the joy, the benefit. The benefits? I think he got health care. I think, in OpenAI's defense, they do need to raise a lot of money and they've got to incent their employees. But that doesn't mean they need to take over the whole thing. The nonprofit portion can still stay the nonprofit portion and get the lion's share of the benefits and be the board. And then he can have an incentive package and employees can have an incentive package.
Starting point is 01:33:17 Yeah, why don't they get a percentage of the revenue? Just give them like 10% of the revenue goes to the team. I understand that it has to be bought out right now for 40 billion and then the whole thing disappears into a closed system. That part makes no sense to me. That's called a shell game and a scam. Yeah, I think Sam and his team would do better to leave the nonprofit part alone, leave an actual independent nonprofit board in charge and then have a strong incentive plan and
Starting point is 01:33:44 a strong fundraising plan for the investors and the employees. So I think this is workable. It's just trying to grab it all. It just seems way off, especially when it was built on open algorithms from Google, open data, and on nonprofit funding from Elon and others. I mean, what a great proposal we just workshopped here. What if they just... what do they make, six billion a year?
Starting point is 01:34:09 losing money, Jason. So they have to, okay, eventually, but even equity, they could they could give equity to shadow building, but they could still leave it in the control of the nonprofit. I just don't understand this conversion. I mean, there was a there was a board cool, right? The board tried to fire Sam and Sam took over the board. Now it's his hand picked board. So it also looks like self-dealing, right? And yeah, they'll get an independent valuation, but we all know that game. You hire a valuation expert who's going to say what you're going to say and they'll check the box. But if you're going to capture the light cone
Starting point is 01:34:36 of all future value or build superintelligence, we know it's worth a lot more. That's why Elon just bid a hundred billion. Exactly. You're saying the things that actually the regulators and the legal community have no insight into, because they'll see a fairness opinion and they think, oh, it says fairness and opinion, two words side by side, it must be fair. And they don't know how all of this stuff is gamed. So yeah. Yeah. Man, I've got stories about 409As that would- Exactly. Oh yeah. Everything is gamed. 409As are gamed.
Starting point is 01:35:06 These fairness opinions are game. But the reality is I don't think the legal and the judicial community has any idea. I mean imagine if a founder you invested in, just as just a total imaginary situation, Naval, had like a great term sheet at some incredible dollar amount, didn't take it, ran the valuation down to like under a million, gave themselves a bunch of shares, and then took it three months later. Well, I don't know what would that be called? Securities fraud? Let's wrap on your story. I had an interesting Nick will show you the photo. I had an interesting dinner on Monday
Starting point is 01:35:40 with Bryan Johnson, the don't-die guy. He came over to my house. How's his erection doing overnight? What we talked about is he's got three hours a night of nighttime erections. Wow, look at this. By the way, first of all, I'll tell you. I think that he's —
Starting point is 01:35:57 Coon. Wait, which one of those is giving him the erection? No, no, no, he measures his nighttime erections. I think Coon is giving him the erection. Oh, he's not going. But he said that when he started, so by the way, he said he was 43 when he started this thing. He was basically clinically obese.
Starting point is 01:36:12 Yeah. And in the next four years, he has become a specimen. He now has three hours a night of nighttime erections, but that's not the interesting thing. At the end of this dinner — by the way, his skin is incredible. I wasn't sure, because when you see the pictures online... but his skin in real life is like a porcelain doll's.
Starting point is 01:36:30 Both my wife and I were like, we've never seen skin like this. And it's incredibly soft. Wait, wait, wait, wait, whoa, whoa, whoa. How do you know his skin is soft? You know, you brush your hand against his forearm or whatever, you know, he gives a hug at the end of the night. I'm telling you, the guy — He has supple skin? Bro, it's the softest skin I've ever touched in my life. Anyways, that's not the point. It was a really fascinating dinner. He walked through his whole protocol, but at the end of it, I think it was Nikesh, the CEO of Palo Alto Networks, who was just like,
Starting point is 01:37:00 give me the top three things. Top three. And of the top three things, what I'll boil it down to is the top one thing, which is, like, 80% of the 80%. It's all about sleep. I was about to guess sleep. And he walked through his nighttime routine
Starting point is 01:37:17 and it's incredible and it's straightforward. It's really simple. It's like how you do a wind-down. Anyways, I have tried to — Explain the wind-down, briefly. Let's just say that, because Bryan goes to bed much earlier. So our normal time, let's just say, you know, 10, 10:30.
Starting point is 01:37:31 So my time, I try to go to bed by 10:30. He's like, you need to be in bed. You need to, first of all, stop eating three or four hours before, right? And I do that. I eat at 6:30, so I have about three hours. You're in bed by 9:30 or 10. You deal with the self-talk,
Starting point is 01:37:47 right? Like, okay, here's the active mind telling you all the things you have to fix in the morning. Talk it out, put it in its place, say, I'm going to deal with this in the morning. Write it down in a journal, you're saying? Whatever you do so that you put it away. You cannot be on your phone. That's got to be in a different room. Or you just got to be able to shut it down and then read a book so that you're actually just engaged in something. And, and, and he said that he typically falls asleep within three to four minutes
Starting point is 01:38:15 of getting into bed and starting. I tried it. So I've been doing it since I had dinner with him on Monday. Last night I fell asleep within 15 minutes. The hardest part for me is to put the phone away. I can't do it. Of course. Of course. What about you, Naval? Tell us your wind-down. Oh, yeah. So I know Bryan pretty well, actually. And I joke that I'm married to the female Bryan Johnson, because my wife has some of his routines, but she's the natural version, no supplements, and she's intense. And I think when Bryan saw my sleep score from my Eight Sleep, he was shocked. Oh, Shannard is asleep.
Starting point is 01:38:52 He was just like, you're going to die. He's like, you're literally going to die. What are you going to do, 70, 80? No, it's terrible. It's awful. Tell me the truth. What's your number? What's your number?
Starting point is 01:39:00 Be honest. It was like 30s, 40s. What? Yeah, but it's also because I don't sleep much. I only sleep a few hours a night and I also move around a lot in the bed and so on. But it's fine. I never have trouble falling asleep. But I would say that Brian's, yes, skincare routine is amazing.
Starting point is 01:39:14 His diet is incredible. He is a genuine character. I do think a lot of what he's saying, minus the supplements, I'm not a big believer in supplements, does work. I don't know if it's necessarily going to slow down your aging, but you'll look good and you'll feel good. Yeah. Sleep is the number one thing. In terms of falling asleep, I don't think it's really about whether you look at your phone or not,
Starting point is 01:39:34 believe it or not. I think it's about what you're doing on your phone. If you're doing anything that is cognitively stressful or getting your mind to spin, then yes. You think like you can scroll TikTok and fall asleep is fine? Anything that's entertaining or that is like you can read a book, right, on your Kindle or on your iPad, and I think it'd be fine falling asleep. Or you can listen to some meditation video or some spiritual teacher or something, and that'll actually help you fall asleep. But if you're on X, or if you're checking your email, then heck yeah, that's going to keep you up. So my hack for sleep is a little different. I normally fall asleep within minutes. And the way
Starting point is 01:40:11 I do it is you all have a meditation routine. You have a set time. You have a set time. No, I sleep whenever I feel like. Usually around one in the morning, two in the morning. God damn, I'm in bed by 10. Yeah, I need to sleep. I'm an owl. But if you want to fall asleep, the hack I've found is everybody has tried some kind of a meditation routine. Just sit in bed and meditate. And your mind will hate meditation so much that if you force it to choose between the fork of meditation and sleeping, you will fall asleep. Works every time. And if you don't fall asleep, you'll end up meditating, which is great too. So just the- I like the meditation, I do the body scan. Dakota to this story was a friend of mine
Starting point is 01:40:49 came to see me from the UAE, and he was here on Tuesday and I was telling him about the dinner with Brian. And he told me this story, cause he's friends with Khabib, the UFC fighter. And he says, you know, when Khabib goes to his house, he eats anything and everything, fried food, pizzas, whatever, but he trains consistently. And my friend Adala says, how are you able to do that?
Starting point is 01:41:11 And how does it not affect your physiology? He goes, I've learned since I was a kid, I sleep three hours after I train in the morning and I sleep 10 hours at night. And I've done it since I was like 12 or 13 years old. That's a lot of sleep. It's a lot of sleep. It's a lot of sleep. You know, the direct correlation for me is if I do something cognitively like, you know,
Starting point is 01:41:33 big heavy duty conversations or whatever, so no heavy conversations at the end of the night, no existential conversations in the night. And then if I go rucking, I have the, you know, on the ranch, I put on a 35 pound weight vest. I want that at night before you go to bed. No, no, no. If I do it anytime during the day, typically do it in the morning or the afternoon. But the one to two mile rock with the 35 pounds, whatever it is, it just tires my whole body out. So that when I do lay down, is that why you don't prepare for the
Starting point is 01:42:00 pun? I mean, this pod is the top 10 f***ing pod in the world, Chamath. Do you think it's an accident? Freeberg, what's your sleep routine? Can you just go to bed? You just like close your eyes? Well, I take a warm bath and I send Jaygali a picture
Starting point is 01:42:16 of my feet. Oh, wait till Jaygali's done. I do take a nice warm bath. I nailed it. But you do it every night, a warm bath? I do, yeah, I do a warm nice warm bath. I nailed it. But you do it every night, a warm bath? Yeah, I do a warm bath every night. With candles too. And do you do it right before you go to bed?
Starting point is 01:42:32 Yeah, I usually do it after I put the kids down. Then I'll basically start to wind down for bed. I do watch TV sometimes, but I do have the problem and the mistake of looking at my phone probably for too long before I turn the lights off. So do you have a consistent time where you go to bed or no? Usually 11 to midnight and then up at 6 30.
Starting point is 01:42:54 Man, I need, I need eight hours. Otherwise I'm a mess. I'm trying to get eight. I hit between six and seven consistently. I try to go to bed that 11 to 1 AM window and get up the seven to eight window. My problem is if I have work to do, window and get up the 7 to 8 window. My problem is if I have work to do, I'll get on the computer or my laptop. And then when I start that after in my evening routine, I can't stop. And then all of a sudden it's like three in the morning and I'm like, oh no, what did I just do?
Starting point is 01:43:16 And then I still have to get up at 6:30. So that does happen to me. So last night was unusual for me, but it was kind of funny anyway. I thought, oh, I should go to bed early because I'm on All-In. But I ended up eating ice cream with the kids late. Wait, what was the brand? You said you went for another brand. I want to know the brand.
Starting point is 01:43:34 I think it's Van Leeuwen or something like that. Oh, Van Leeuwen, yeah. Of course, of course. New York — Brooklyn's — good. The holiday cookies and cream, oh my God, so good. Yeah, it's so good. Van Leeuwen, good call. So after I polished that off, then I was like,
Starting point is 01:43:46 oh, I probably ate too much to go to bed, so I better work out, so I did a kettlebell workout. You sound like Jamal. What did you say? I have eight kettlebells right here, right next to me. Oh yeah, of course. Freeberg, this is called working out, Freeberg. What you're sitting here.
Starting point is 01:44:02 And then while I'm doing my kettlebell suitcase carry, I was texting with an entrepreneur friend. So you can tell how intense my workout was. And he's in Singapore, so it was in the middle of the night for me and early for him. And then when it was time to go to bed, I was like, OK, now I've got to get to bed. How do I get to bed?
Starting point is 01:44:20 My body's all amped up. I've got food in my stomach. I've got some kettlebells. Ice cream, kettlebells. My brain is all amped up and all in podcasts is tomorrow. And what time is it? It's 1.30 in the morning. I better get to bed. So I put on like a little, one of those spiritual videos
Starting point is 01:44:36 to calm me down. And then I got in bed and I was like, there's no way I'm falling asleep. And I started meditating and five minutes later I was asleep. You know, actually I'm falling asleep. And I started meditating, and five minutes later, I was asleep. You know, actually, the Dalai Lama has these great, on his YouTube channel, he's got these great, like, two hour discussions. You get about 20, 30 minutes into that,
Starting point is 01:44:53 you will fall asleep. Well, yeah, but my learning is... Yeah, watch any Dharma lecture from the SSM Center. Exactly, exactly. And my lesson is, my learning is, that the mind will do anything to avoid meditation. Yes. By the way, did you guys see, just my lesson is, my learning is, that the mind will do anything to avoid meditation. Yes. By the way, did you guys see, just before we wrap,
Starting point is 01:45:09 did you see all the confirmations? RFK Jr. confirmed, Brooke Rowlands confirmed. By the way, if you look at Polymarket, Polymarket had it all right a couple of weeks ago. I was trying to Polymarket, there was a moment where Tusi fell to like 56%, there was a moment when RFK fell to 75%, but then they bounced back and it was done.
Starting point is 01:45:27 You could have bought it. You could have snipped that man. You could have made money. Yeah, Polymarket had it. And the media was like, no way he's getting confirmed. This is not gonna happen, but Polymarket knows. It's so interesting, huh? Well, I saw a very insightful tweet
Starting point is 01:45:39 and I forget who wrote it, so I'm sorry, I can't give credit. But the guy basically said, look, Trump has a narrow majority in the House and the Senate. And he can get everything he wants as long as the Republicans stay in line. So all the pressure and all the anger that all the mega movement is doing against the left is pointless. It's all about keeping the right wing in line. So it's all the people saying to the senators,
Starting point is 01:46:06 hey, I'm going to primary use Nicole Shanahan saying I'm going to primary you. It's Scott Pressler saying I'm moving to your district. That's the stuff that's moving the needle and causing the confirmations to go through. That's how you get cash. Patel. That's how you get Tulsi Gabbard, the DNI. That's how you get RFQ. You worry about any of these. Do you think any of them are too spicy for your taste or you just like the whole burn it down, put in the crazy like outsiders and let them... Jason, that's such a bad characterization. That's not a fair characterization. I mean, whatever.
Starting point is 01:46:34 I mean, the outsiders... Honestly, it's like I never thought I'd see it, but I think between Elon and Sachs and people like that, we actually have builders and doers and financially intelligent people and economically intelligent people in charge. And you know, despite all the craziness, Elon's not doing this for the money. He's doing it because he thinks it's the right thing to do. Of course. He moved into the Roosevelt building for the next four months.
Starting point is 01:46:55 I had bought into the great forces of history mindset where it's just like, okay, it's inevitable. This is what's happening. Government always gets bigger, always gets slower. And we just have to try and get stuff built before they just shut everything down and we turn into Europe. But the thing that happened then was, you know, Caesar crossed the Rubicon. The great man theory of history played out and we're living in that time. And it's an inspiration to all of us despite Sam Altman and Elon's current fighting. I know Sam was inspired by Elon at one point and I think all of us are inspired by Elon. I mean, the guy can be the Diablo player and do Doge and run SpaceX
Starting point is 01:47:30 and Tesla and Boring and Neuralink. I mean, it's incredibly impressive. It makes us, that's why I'm doing a hardware company now. It makes me want to do something useful with my life, you know? Elon always makes me question, am I doing something useful enough with my life? It's why I don't want to be an investor. Peter Thiel, ironically, he's an investor, but he's inspirational in that way too, because he's like, yeah, the future doesn't just happen. You have to go make it. So we get to go make the future,
Starting point is 01:47:54 and I'm just glad that Elon and Doge and others are making the future that I'm living. Is this a consumer hardware? What do we got going on here? Maybe I'll reveal it on the All In podcast in a couple of months. But it's really hard. It's really difficult.
Starting point is 01:48:04 I'm not sure I can pull it off. So let me try. Let me just make sure it's viable. Is it drone related? Is it self-driving related? Drones are cool, but no, it's not. Maybe all the podcasts should be an angel investor. Oh yeah. So let's do a syndicate.
Starting point is 01:48:15 Let's do a little syndicate. No syndicate, Jason. Just our money. What are you talking about? You know how I learned about syndicates with Naval? The first syndicate I ever did on AngelList, I think is still the biggest, I don't know, 5%. And Naval's my partner on this, forcom.com.
Starting point is 01:48:31 I think you'll love what I'm working on if I pull it off. I think you guys will love it. I'd love to show you a demo. Let us know where to send the check. Get that black cherry chip Van Lewin. I love you guys. What have we learned? I gotta go.
Starting point is 01:48:42 Big shout out to Bobby and to Tulsi. That's a huge, huge room for America. I'm just stoked about both of them. Congratulations. I love me. Thanks for coming. Bobby Kennedy back on the pod. Bobby, Bobby, come back on the pod for the czar David Sachs. You're Sultan of science, David Freyberg, the chairman dictator, Chamath Palihapitiya, and namaste Navar. I am the world's greatest moderator. I'll see you next
Starting point is 01:49:15 time on the all in fine namaste bitches. We'll let your winners ride Rain Man, David Sack I'm going all in And instead we open source it to the fans and they've just gone crazy with it Love U, S.K. The Queen of Kin Wives I'm going all in Let your winners ride Let your winners ride Let your winners ride
Starting point is 01:49:41 Besties are back That's my dog taking an illness in your driveway Sex Oh man Oh man My avid Azure will meet me at once We should all just get a room and just have one big huge orgy cause they're all just useless It's like this sexual tension that they just need to release somehow
Starting point is 01:49:59 What? You're the bee What? You're the bee What? You're the bee We need to get merch. I'm doing all in I'm doing all in
