Lex Fridman Podcast - #400 – Elon Musk: War, AI, Aliens, Politics, Physics, Video Games, and Humanity

Episode Date: November 9, 2023

Elon Musk is CEO of X, xAI, SpaceX, Tesla, Neuralink, and The Boring Company. Thank you for listening ❤ Please support this podcast by checking out our sponsors:
- LMNT: https://drinkLMNT.com/lex to get free sample pack
- Eight Sleep: https://www.eightsleep.com/lex to get special savings
- BetterHelp: https://betterhelp.com/lex to get 10% off
- SimpliSafe: https://simplisafe.com/lex to get free security camera plus 20% off
- Shopify: https://shopify.com/lex to get $1 per month trial
- NetSuite: http://netsuite.com/lex to get free product tour

Transcript: https://lexfridman.com/elon-musk-4-transcript

EPISODE LINKS:
Elon's X: https://x.com/elonmusk
xAI: https://x.com/xai
Tesla: https://x.com/tesla
Tesla Optimus: https://x.com/tesla_optimus
Tesla AI: https://x.com/Tesla_AI
SpaceX: https://x.com/spacex
Neuralink: https://x.com/neuralink
The Boring Company: https://x.com/boringcompany

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE: Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(10:25) - War and human nature
(14:51) - Israel-Hamas war
(20:59) - Military-Industrial Complex
(25:16) - War in Ukraine
(29:59) - China
(44:15) - xAI Grok
(55:13) - Aliens
(1:03:13) - God
(1:05:41) - Diablo 4 and video games
(1:14:48) - Dystopian worlds: 1984 and Brave New World
(1:20:59) - AI and useful compute per watt
(1:26:40) - AI regulation
(1:33:32) - Should AI be open-sourced?
(1:40:54) - X algorithm
(1:52:15) - 2024 presidential elections
(2:05:14) - Politics
(2:08:16) - Trust
(2:13:47) - Tesla's Autopilot and Optimus robot
(2:22:46) - Hardships

Transcript
Starting point is 00:00:00 The following is a conversation with Elon Musk, his fourth time on this, the Lex Fridman Podcast. And now, if you'll allow me, a quick few-second mention of our sponsors. Check them out in the description. It's the best way to support this podcast. We've got LMNT for electrolyte deliciousness, Eight Sleep for naps, BetterHelp for your mind, SimpliSafe for your safety, Shopify for your online shopping, and NetSuite for business management software. Choose wisely, my friends. Also, if you want to work with an amazing team, we're hiring: go to lexfridman.com/hiring.
Starting point is 00:00:40 And now, onto the full ad reads. As always, no ads in the middle. I try to make these interesting, but if you must skip them, please still check out the sponsors. I enjoy their stuff. Maybe you will too. This episode is brought to you by the very thing I'm drinking as I speak to you right now: LMNT, electrolyte drink mix. I'll usually fill a Powerade or Gatorade bottle,
Starting point is 00:01:05 I think it's 28 ounces, with water and mix it with one LMNT packet. My favorite flavor is watermelon salt. That's the one I'm drinking now. That's the one I've been drinking for a long time. I apologize, because I'm recording these words very late at night. It's been a long day. It was a long night before that, and a long night before that, and a long night this one. So it's just me in front of a microphone right now, looking at a 28-ounce
Starting point is 00:01:34 bottle of water that tastes like watermelon salt. Just me and the water, happily reunited in the desert of human experience. I'm also surrounded by three computer screens that all have beautiful wallpapers looking at me, taunting me with beautiful picturesque scenes of nature. And just before this I played about 10 minutes of Diablo, where I got to escape to a whole other world that is also picturesque, not with nature, but filled with the intense, devastating sounds of demons attacking you from all sides. And all of that is somehow relaxing. Anyway, LMNT is not just delicious, it contains the perfect amounts of magnesium, potassium, and sodium. Watermelon salt. Anyway, get a sample pack for free with any purchase. Try it at drinkLMNT.com/lex.
Starting point is 00:02:43 This episode is also brought to you by the thing that's calling my name at this very moment: the Eight Sleep Pod 3 mattress. It has two sides of the bed, so you can cool or heat each side of the bed separately. I was actually recently, briefly, at a get-together of a lot of really interesting people, and somebody brought up the topic of Eight Sleep, and I just heard a bunch of other folks say, oh, I love Eight Sleep. They started talking about the different temperatures that they use, and I was shocked and horrified to learn that some people actually heat their bed up. This was a fascinating discovery for me, because I've never even tried the heating feature. I cool the bed, sometimes very cold, with a warm blanket. It's heaven. I can't recommend it enough. Sleep really is a magical place. It's a bit surreal
Starting point is 00:03:37 saying these words while quite sleep-deprived. A bit melancholic, a bit hopeful, thinking about the Eight Sleep bed just a few steps away, and somehow that thought fills me with gratitude for just having these things in my life. Anyway, check them out and get special savings when you go to eightsleep.com/lex. This episode is also brought to you by BetterHelp, spelled H-E-L-P, help. Like somebody stranded on an island, looking at an airplane floating above, hoping that they see the text written out: HELP. I just actually got a text from a friend. I'm looking at it now. Again, filled with gratitude for having so many amazing people. The text says, hey brother, just sending you some love. Miss you. Sent late at night. Oh boy. Yeah. So, BetterHelp. Speaking of friends, it's good to talk. It's good to talk to people. It's good to talk to people that truly listen. That could be friends, loved ones,
Starting point is 00:05:03 or that could be a professional therapist. Well, you don't have to choose one, you can have all of those. An easy, accessible therapist is what BetterHelp is all about. Check them out at betterhelp.com/lex and save on your first month. That's betterhelp.com/lex. This show is also brought to you by SimpliSafe. As we go on with this program, I'm starting to have more and more fun. Because fun is all you need, after, of course, you've established security. SimpliSafe is a home security company.
Starting point is 00:05:37 Once you're safe, then the fun can begin. For me, SimpliSafe just establishes that first base layer of physical security. It's super easy to set up. They have a cool new feature called Fast Protect Monitoring that allows SimpliSafe monitoring agents to see, speak to, and deter intruders through the Smart Alarm indoor camera. I'm tempted to test the system by having one of the quadruped robots, the legged robots, roll up to it and see what the agents say. Just attach a ChatGPT GPT-4 agent or a Grok agent on top of it, have the two of them have a conversation, and see what they figure out. I'm not sure it's going to end well, but science proceeds one experiment at a time, friends.
Starting point is 00:06:32 Anyway, I love how easy SimpliSafe was to set up. Their sensors are awesome. Their monitoring is awesome. It's affordable, easy to use. I love products that are just, you know, easy to set up, easy to maintain, where everything works. Easy. Anyway, go to simplisafe.com/lex to get 50% off any SimpliSafe system when you sign up for Fast Protect Monitoring.
Starting point is 00:06:56 That's simplisafe.com/lex. This show is also brought to you by Shopify, a platform designed for anyone to sell anywhere. And when I say anyone, I mean somebody like me. It took me minutes, maybe even seconds, to set up a store and sell shirts, which you can now buy at, I think the link is lexfridman.com/store, which forwards you to the Shopify store with three different shirts. In this case, it's not really about the store, it's about just celebrating the stuff you love. I'm a huge fan of a lot of bands and podcasts, and I just love buying their
Starting point is 00:07:39 merch, wearing it, and seeing who notices. I love wearing a Huberman Lab shirt on the trail when I run and just seeing who recognizes the full absurdity of it, what they say. It's just wonderful, the joy that we can share by this little kind of sign that you wear on yourself that says, oh, I like this particular thing, let's talk about it. I love it. I like wearing, like, Orwell shirts, or Dostoevsky shirts, or Bukowski shirts, or sci-fi stuff, I don't know, 2001: A Space Odyssey, you name it. You can get all of those things or sell those things
Starting point is 00:08:21 on Shopify. Sign up for a $1 per month trial period at shopify.com/lex. That's all lowercase. Go to shopify.com/lex to take your business to the next level today. This show is also brought to you by NetSuite, an all-in-one cloud business management system. They take care of all the messy, complicated things involved in running a business. A company, like Elon says, is a kind of superintelligent being comprised of individual human beings working together
Starting point is 00:09:02 in interesting ways, one that effectively can be greater than the sum of its parts. But there's a lot of things involved there. It's not just great design and engineering. It's not just great innovation. It's all of the things: the financials, the human resources of hiring and firing, maintaining inventory, the e-commerce, all the messy business details. And so you should use the best tools for the job of handling that mess, that beautiful, beautiful mess involved in running a company. And I too, by the way, would love to be a part of that mess,
Starting point is 00:09:46 a small cog in the machine, in the superintelligent machine that is a company, someday soon. Anyway, you can start now with no payment or interest for six months. Go to netsuite.com/lex to access their one-of-a-kind financing program. That's netsuite.com/lex. And now, dear friends, here's Elon Musk. I thought you were going to finish it. It's one of the greatest themes in all of film history.
Starting point is 00:10:47 Yeah, that's great. So I was just thinking about the Roman Empire, as one does. There's that whole meme where all guys are thinking about the Roman Empire at least once a day. And half the population is confused whether it's true or not. But more seriously, thinking about the wars going on in the world today, and, as you know, war and military conquest
Starting point is 00:11:14 have been a big part of Roman society and culture, and I think they've been a big part of most empires and dynasties throughout human history. They usually came as a result of conquest. I mean, there are cases like the Austro-Hungarian Empire where there were just a lot of sort of clever marriages. But fundamentally there's an engine of conquest, and they celebrate excellence in warfare. Many of the leaders were excellent generals. Yeah. That kind of thing.
Starting point is 00:11:46 So a big-picture question, Grok approved. I asked if this was a good question to ask. Did Grok approve? Yeah. At least on fun mode. To what degree do you think war is part of human nature versus a consequence of how human societies are structured? I ask this as you have somehow controversially been a proponent of peace.
Starting point is 00:12:15 I'm generally a proponent of peace. I mean, ignorance is perhaps, in my view, the real enemy to be countered. That's the real hard part, not fighting other humans. But all creatures fight. The jungle, you look at nature as perhaps some sort of peaceful thing, but in fact it is not. There's some quite funny Werner Herzog thing where he's in the jungle saying that it's basically just murder and death in every direction. The plants and animals in the jungle are constantly trying to kill each other every single day, every minute. So it's not like, you know,
Starting point is 00:12:56 we're unusual in that respect. Well, there's a relevant question here: whether with greater intelligence comes greater control over these base instincts for violence. Yes. We have much more ability to control our limbic instinct for violence than, say, a chimpanzee. And in fact, if one looks at, say, chimpanzee society, it is not friendly. I mean, the bonobos are an exception,
Starting point is 00:13:24 but chimpanzee society is full of violence, and it's quite horrific, frankly. That's our limbic system in action. Like, you know, if you're on the wrong side of a chimpanzee, it'll eat your face off and tear your nuts off. Yeah, basically there are no limits or ethics. There's no just war in chimpanzee societies. It is war and dominance by any means necessary. Yeah, chimpanzee societies are like a primitive version of human society. They're not peace-loving, basically, at all.
Starting point is 00:13:59 There's extreme violence. And then once in a while, somebody who's watched too many Disney movies decides to raise a chimpanzee as a pet, and then it eats their face off, or rips their nuts off, or chews their fingers off, that kind of thing. It's happened several times. Ripping your nuts off is an interesting strategy for interaction. It's happened to people. That's unfortunate. That's, I guess, one way to ensure that the other chimp doesn't contribute to the gene pool. Well, from a martial arts perspective,
Starting point is 00:14:30 it's a fascinating strategy. The nut ripper. I wonder which of the martial arts teaches that. I think it's safe to say, if somebody's got your nuts in their hands and has the option of ripping them off, you will be amenable to whatever they want. Yeah. So, like I said, somehow controversially, you've been a proponent of peace
Starting point is 00:14:55 on Twitter, on X. Yeah. So let me ask you about the wars going on today, and to see what the path to peace could be. How do you hope the current war in Israel and Gaza comes to an end? What path do you see that can minimize human suffering in the long term in that part of the world? Well, I think that part of the world is definitely, like, if you look up "there is no easy answer" in the dictionary, it'll be the picture of the Middle East, Israel especially. So there is no easy answer. This is strictly my opinion, you know, but the goal of Hamas was to provoke an overreaction from Israel. They obviously did not expect to have a military victory, but they really wanted to commit the worst atrocities
Starting point is 00:15:52 that they could in order to provoke the most aggressive response possible from Israel, and then leverage that aggressive response to rally Muslims worldwide to the cause of Gaza and Palestine, which they have succeeded in doing. So the counterintuitive thing here, the thing that I think should be done, even though it's very difficult, is that I would recommend that Israel engage in the most conspicuous acts of kindness possible. Everything. That is the actual thing that would thwart the goal of Hamas. So in some sense, to the degree that makes sense in geopolitics: turn the other cheek, implemented. It's not exactly turn the other cheek,
Starting point is 00:16:46 because I do think that, you know, I think it is appropriate for Israel to find the Hamas members and either kill them or incarcerate them. Something has to be done, because they're just going to keep coming otherwise. But
Starting point is 00:17:16 in addition to that, they need to do whatever they can. There's some talk of establishing, for example, a mobile hospital. I'd recommend doing that. Just making sure that, you know, there's food, water, medical necessities, and just be over the top about it and be very transparent, so that you can't claim it's a trick. Like, just put a webcam on the thing, you know, 24/7. Deploy acts of kindness. Yeah, conspicuous acts of kindness
Starting point is 00:17:52 that are unequivocal, meaning they can't be somehow, because Hamas's response will be, oh, it's a trick. Therefore, you have to counter: how is it not a trick? This ultimately fights the broader force of hatred in the region. Yes. And I'm not sure who said it, it's some sort of apocryphal saying, but an eye for an eye makes everyone blind. Now, in that neck of the woods, they really believe in the whole eye-for-an-eye thing. But, I mean, you really have, if you're not going to just outright commit genocide against an entire people, which obviously would not be acceptable to, really shouldn't be acceptable to anyone, then you're going to leave basically a lot of people alive who
Starting point is 00:18:40 subsequently, you know, hate Israel. So really the question is, for every Hamas member that you kill, how many did you create? And if you create more than you kill, you've not succeeded. That's the real situation there. And it's safe to say that if, you know, you kill somebody's child in Gaza, you've made at least a few Hamas members who will die just to kill an Israeli.
Starting point is 00:19:21 That's the situation. But, I mean, this is one of the most contentious subjects one could possibly discuss. But I think if the goal ultimately is some sort of long-term peace, one has to look at this from the standpoint of: over time, are there more or fewer terrorists being created? Let me just linger on war. Yeah. War, it's safe to say, has existed and always will exist. Always will exist. It always has existed and always will exist. I hope not.
Starting point is 00:19:56 You think it always will? It always will. There's a question of just how much war, you know, the scope and scale of war. But to imagine that there would not be any war in the future, I think, would be a very unlikely outcome. Yeah, you talked about the Culture series. There's war even there. Yes. It's a giant war. The first book also starts off with a gigantic
Starting point is 00:20:21 galactic war where trillions die. Trillions. But it still, nevertheless, protects these pockets of flourishing. So somehow you can have galactic war and still have pockets of flourishing. Yeah, I mean, I guess if we are able to one day expand to, you know, fill the galaxy or whatever, there will be a galactic war at some point. Ah, the scale. I mean, the scale of war has been increasing, increasing, increasing. It's like a race between the scale of suffering and the scale of flourishing. Yes.
Starting point is 00:20:58 A lot of people seem to be using this tragedy to beat the drums of war and feed the military-industrial complex. Do you worry about this? The people who are rooting for escalation, and how can it be stopped? One of the things that does concern me is that there are very few people alive today who actually really understand the horrors of war, at least in the US. I mean, obviously there are people on the front lines in Ukraine and Russia who understand just how terrible war is, but how many people in the West understand it? My grandfather was in World War II. He was severely traumatized.
Starting point is 00:21:37 I mean, he was there, I think, for almost six years in North Africa and Italy. All his friends were killed in front of him. And he would have died too, except they randomly gave some IQ test or something, and he scored very high. He was not an officer. He was, I think, a corporal or a sergeant or something like that, because he didn't finish high school. He had to drop out of high school because his dad died, and he had to work to support his siblings. So because he didn't graduate high school, he was not eligible for the officer corps. So, you know, he kind of got put into the cannon fodder category, basically.
Starting point is 00:22:22 But then, randomly, they gave him this test, and he was transferred to British intelligence in London. That's where he met my grandmother. But he had PTSD, next level, like next level. I mean, he just didn't talk. Just didn't talk. And if you tried talking to him, he'd just tell you to shut up. And he won a bunch of medals, but never
Starting point is 00:22:45 bragged about it once, not even a hint, nothing. I found out about it because his military records are online. That's how I know. He would say, like, no way in hell do you want to do that again. But how many people, now he obviously, he died, you know, 20 years ago, or longer, actually 30 years ago,
Starting point is 00:23:04 how many people are alive that remember World War II? Not many. And the same perhaps applies to the threat of nuclear war. Yeah. I mean, there are enough nuclear bombs pointed at the United States to make the rubble, the radioactive rubble, bounce many times. There are two major wars going on right now. You've talked about the threat of AGI quite a bit, but now, as we sit here with the intensity of conflict going on, do you worry about
Starting point is 00:23:41 nuclear war? I think we shouldn't discount the possibility of nuclear war. It is a civilizational threat. Right now, I could be wrong, but I think the current probability of nuclear war is quite low. But there are a lot of nukes pointed at us, and we have a lot of nukes pointed at other people. They're still there. Nobody's put their guns away. The missiles are still in the silos. And the leaders don't seem to be, the ones with the nukes, talking to each other. No.
Starting point is 00:24:21 There are wars which are tragic and difficult on a local basis, and then there are wars which are civilization-ending, or have that potential. Obviously, global thermonuclear warfare has high potential to end civilization, perhaps permanently, but certainly to severely wound and perhaps set back human progress to the Stone Age or something. I don't know. Pretty bad. Probably scientists and engineers aren't going to be super popular after that as well. Like, you got us into this mess. So generally, I think we obviously want to prioritize civilizational risks over things that are painful and tragic on a local level, but not civilizational.
Starting point is 00:25:15 How do you hope the war in Ukraine comes to an end? And what's the path, once again, to minimizing human suffering there? Well, I think that what is likely to happen, which is really pretty much the way it is, is that something very close to the current lines will be where a ceasefire or truce happens. But, you know, you just have a situation right now where whoever goes on the offensive will suffer casualties at several times the rate of whoever's on the defense, because you've got defense in depth: you've got minefields, trenches, anti-tank defenses. Nobody has air superiority, because the anti-aircraft missiles are really far better than the aircraft, like, there are far more of them. And so neither side has air superiority. Tanks are basically death traps,
Starting point is 00:26:23 just slow-moving, and they're not immune to anti-tank weapons. So you really just have long-range artillery and infantry. It's World War I all over again, with drones, you know, some drones thrown in there. Which makes the long-range artillery just that much more accurate and better, and so more efficient at murdering people on both sides. Yeah. So you don't want to be trying to advance from either side, because the probability of dying is incredibly high. Okay. So, in order to overcome defense in depth, trenches, and minefields, you really need a
Starting point is 00:27:11 significant local superiority in numbers. Ideally combined arms, where you do a fast attack with aircraft, a concentrated number of tanks, and a lot of people. That's the only way you're going to punch through a line. And then you've got to punch through and have reinforcements, or you'll just get kicked right back out. It's rough. I mean, the sheer number of people that died there was mind-boggling. And it's almost impossible to imagine an end to it that doesn't look mostly exactly like the beginning, in terms of what land belongs to who, and so on.
Starting point is 00:28:07 But on the other side of it is a lot of human suffering, death, and destruction of infrastructure. Yes. I mean, the reason I proposed some sort of truce or peace a year ago was because I predicted pretty much exactly what would happen: a lot of people dying for basically almost no changes in land, and the loss of the flower of Ukrainian and Russian youth. And we should have some sympathy for the Russian boys as well as the Ukrainian boys, because the Russian boys didn't ask to be on the front line. They have to be. So there are a lot of sons that have not come back to their parents,
Starting point is 00:28:56 and I think most of them don't really hate the other side. You know, it's sort of like that saying, it comes from World War I: it's young boys who don't know each other, killing each other on behalf of old men that do know each other. What the hell's the point of that?
Starting point is 00:29:20 So Volodymyr Zelenskyy has said in the past that he's not interested in talking to Putin directly. Do you think he should? Yes. Sit down, man to man, leader to leader, and negotiate peace? I think I would just recommend: do not send the flower of Ukrainian youth to die in trenches. Whether he talks to Putin or not, just don't do that. Whoever goes on the offensive will lose massive numbers of people, and history will not look kindly upon them.
Starting point is 00:29:59 You've spoken honestly about the possibility of war between the US and China in the long term if no diplomatic solution is found, for example, on the question of Taiwan and the One China policy. How do we avoid the trajectory where these two superpowers clash?
Starting point is 00:30:15 Well, it's worth reading that book on the difficult-to-pronounce Thucydides Trap, I believe it's called. I love war history, like inside out and backwards. There's hardly a battle I haven't read about. And trying to figure out what really was the cause of victory in any particular case, as opposed to what one side or another claimed was the reason. Both the victory and what sparked the war. Yeah, yeah. The whole thing.
Starting point is 00:30:45 Yeah. So Athens and Sparta is a classic case. The thing about the Greeks is they really wrote down a lot of stuff. They loved writing. There are lots of interesting things that happened in many parts of the world, but people just didn't write them down, so we don't know what happened. Or they didn't really write in detail.
Starting point is 00:31:03 They just would say, like, we had a battle and we won. Like, what? Can you add a bit more? The Greeks really wrote a lot. They were very articulate; they just loved writing. And we have a bunch of that writing that's preserved, so we know what led up to the Peloponnesian War between the Spartan and Athenian alliances. And we know that they saw it coming. I mean, the Spartans, they weren't very verbose by their nature, but they did write; they just weren't very verbose. They were terse.
Starting point is 00:31:41 But the Athenians and the other Greeks wrote a lot. And Sparta was really kind of the leader of Greece, but Athens grew stronger and stronger with each passing year. And everyone was like, well, it's inevitable that there's going to be a clash between Athens and Sparta. Well, how do we avoid that? And they couldn't. They actually saw it coming, and they still could not avoid it. So, you know, at some point, if there's,
Starting point is 00:32:14 if one group, one civilization or country or whatever, exceeds another, sort of like, you know, the United States has been the biggest kid on the block since, I think, around 1890 from an economic standpoint. So the United States has been the most powerful economic engine in the world longer than anyone's been alive. And the foundation of war is economics. So now we have a situation, in the case of China, where the economy is likely to be two, perhaps three times larger than that of the US. So imagine you're the biggest kid on the block for as long as anyone can remember,
Starting point is 00:33:07 and suddenly a kid comes along who's twice your size. So we see it coming. Yeah. How is it possible to stop? Let me throw something out there: is there some intermixing of cultures, some understanding? There just seems to be a giant cultural gap in understanding of each other. And you're an interesting case study, because you are an American, obviously you've done a lot of incredible manufacturing here in the United States, but you also work with China.
Starting point is 00:33:38 I've spent a lot of time in China and met with the leadership many times. A good question to ask is, what are some things about China that people don't understand, positive, just about the culture? What are some interesting things that you've learned about the Chinese? Well, the sheer number of really smart, hardworking people in China is incredible. If you say, like, how many smart, hardworking people are there in China, there are far more of them there than there are here, I think, in my opinion.
Starting point is 00:34:13 And they've got a lot of energy. So, I mean, the architecture in China from recent years is far more impressive than the US. The train stations, the buildings, the high-speed rail, everything. It's really far more impressive than what we have in the US. I mean, I recommend somebody just go to Shanghai and Beijing, look at the buildings, and take the train from Beijing to Xi'an, where you have the terracotta warriors. China's got an incredible history, a very long history, and I think arguably
Starting point is 00:34:54 in terms of the use of language from a written standpoint, sort of one of the oldest, perhaps the oldest written language. And in China, people did write things down. So now, China, historically, has always been, with rare exception, internally focused. They have not been acquisitive. They've fought each other.
Starting point is 00:35:23 There have been many, many civil wars. In the Three Kingdoms war, I believe they lost about 70% of their population. So they've had brutal internal wars, like civil wars that make the US Civil War look small by comparison. So I think it's important to appreciate that China is not monolithic. We sort of think of China as one entity, one mind, and this is definitely not the case. From what I've seen, and I think most people who understand China would agree, people in China think about China ten times more than they think about anything outside of China. So, like, 90% of their consideration is internal.
Starting point is 00:36:19 Well, isn't that a really positive thing, when you're talking about the collaboration and future peace between superpowers, when you're inward-facing, which is focusing on improving yourself, versus focusing on, quote-unquote, improving others through military might? The good news is that the history of China suggests that China is not acquisitive, meaning they're not going to go out and invade a whole bunch of countries. So that's good, because a lot of very powerful countries have been acquisitive. The US is also one of the rare cases that has not been acquisitive. After World War II, the US could have taken over the world, any country.
Starting point is 00:37:01 We had nukes, nobody else had nukes. We didn't even have to lose soldiers. Which country do you want? The United States could have taken over everything, and it didn't. The United States actually helped rebuild countries. It helped rebuild Europe, helped rebuild Japan. This is very unusual behavior, almost unprecedented. The US did conspicuous acts of kindness, like the Berlin Airlift. And, you know, it's always like, well, America's done bad things. Of course America's done bad things, but one needs to look at the whole track record.
Starting point is 00:37:47 And just generally, one sort of test would be, how do you treat your prisoners of war? Or, let's say — no offense to the Russians, but — let's say you're in Germany, it's 1945. You've got the Russian army coming from one side, and you've got the French, British, and American armies coming from the other side.
Starting point is 00:38:11 Who would you like to surrender to? No country is morally perfect, but I recommend being a POW with the Americans. That would be my choice very strongly. In the full menu of POW options? Very much so. And, in fact, Wernher von Braun, a smart guy, was like, we've got to be captured by the Americans. Yeah. And in fact, the SS was under orders to execute von Braun
Starting point is 00:38:44 and all of the German rocket engineers, and they narrowly escaped the SS. They said they were going out for a walk in the woods. They left in the middle of winter with no coats, no food, no water, and just ran like hell, ran west. By sheer luck, I think his brother found a bicycle or something, and they just cycled west as fast as they could until they found a US patrol. So anyway, that's one way you can tell morality:
Starting point is 00:39:22 where do you want to be a POW? It's not fun anywhere, but some places are much worse than others. So, anyway, America has been, while far from perfect, generally a benevolent force, and we should always be self-critical and try to be better. But anyone with half a brain knows that. So, in this way, China and the United States are similar. Neither country has been acquisitive in a significant way. So that's a shared principle, I guess. Now, China does feel very strongly about Taiwan. They've been very clear about that for a long time. From their standpoint,
Starting point is 00:40:15 it would be like one of the states not being there, like Hawaii or something, but more significant than Hawaii. And Hawaii is pretty significant for us. So they view it as really a fundamental part of China — the island of Formosa, not Taiwan — that is not part of China but should be. And the only reason it hasn't been is because of the US Pacific Fleet. And as their economic power grows, and as their military power grows,
Starting point is 00:40:54 the thing that they are clearly saying is their interest will clearly be materialized. Yes. China has been very clear that they will incorporate Taiwan, peacefully or militarily, but that they will incorporate it is, from their standpoint, 100% likely. Now, something you said about conspicuous acts of kindness as a geopolitical policy: it almost seems naive, but I'd venture to say that this is probably the path forward, how you avoid most wars. Just as you say it, it sounds naive, but it's kind of brilliant. If you believe in the goodness underlying most of human nature, it just seems like
Starting point is 00:41:50 conspicuous acts of kindness can reverberate through the populace of the countries involved. Yeah, and de-escalate. Absolutely. So after World War I, they made a big mistake. They basically tried to lump all the blame on Germany and saddled Germany with impossible reparations. And really, there was quite a bit of blame to go around for World War I, but they tried to put it all on Germany. And that
Starting point is 00:42:39 laid the seeds for World War II. A lot of people felt wronged, and they wanted vengeance, and they got it. People don't forget. Yeah. You kill somebody's father, mother, son, daughter, they're not going to forget it. They will want vengeance. So after World War II, they were like, well, that Treaty of Versailles was a huge mistake in World War I. So this time, instead of crushing the losers, we're actually going to help them with the Marshall Plan, and we're going to help rebuild Germany.
Starting point is 00:43:32 We're going to help rebuild Austria and the others, Italy and whatnot. So that was the right move. There does feel like there's a profound truth to conspicuous acts of kindness being an antidote to this. Something must stop the cycle of reciprocal violence. Something must stop it, or it will never stop — just an eye for an eye, a tooth for a tooth, limb for a limb, life for a life, forever and ever. To escape briefly the darkness, with some incredible engineering work:
Starting point is 00:44:24 xAI just released Grok, an AI assistant that I've gotten a chance to play with. It's amazing on many levels. First of all, it's amazing that a relatively small team, in a relatively short amount of time, was able to develop this close-to-state-of-the-art system. Another incredible thing is there's a regular mode and there's a fun mode. Yeah, I guess I'm to blame for that one. I wish, first of all, that everything in life had a fun mode. Yeah. I mean, there's something compelling, beyond just fun, about the fun mode interacting with a large language model.
Starting point is 00:44:56 I'm not sure exactly what it is, because I've only had a little bit of time to play with it, but it just makes it more interesting, more vibrant, to interact with the system. Yeah, absolutely. Grok is modeled after The Hitchhiker's Guide to the Galaxy, which is one of my favorite books. It's a book on philosophy disguised as a book on humor.
Starting point is 00:45:25 And I would say that forms the basis of my philosophy, which is that we don't know the meaning of life, but the more we can expand the scope and scale of consciousness, digital and biological, the more we are able to understand what questions to ask about the answer that is the universe. So I have a philosophy of curiosity. There is generally a feeling like this AI system has an outward-looking quality — like the way you are when you're sitting with a good friend, looking up at the stars, asking pothead-like questions about the universe, wondering what it's all about, the curiosity you talk about. There's a sense, no matter how mundane the question I ask it, there's a sense of cosmic grandeur to the whole thing. Well, we are actually working hard to have engineering,
Starting point is 00:46:22 math, and physics answers that you can count on. For the other sort of AIs out there, these so-called large language models, I've not found the engineering to be reliable. And the hallucination — it unfortunately hallucinates most when you least want it to hallucinate. So we're trying hard to tie things back to physics first principles, mathematical logic. So underlying the humor is an aspiration to adhere to the truth of the universe as closely as possible. That's really tricky. It is tricky. So that's why, you know, there's always going to be some amount of
Starting point is 00:47:27 error, but we want to aspire to be as truthful as possible about the answers, with acknowledged error. You don't want to be confidently wrong. So it's not going to be right every time, but you want to minimize how often you're confidently wrong. And then, like I said, once you can count on the logic as not violating physics, then you can start to build on that to create inventions, like invent new technologies. But if you cannot count on the foundational physics being correct, obviously the inventions are simply wishful thinking — you know, imagination land, magic, basically. Well, as you said, I think one of the big goals of xAI is to understand the universe. Yes. That's the simple three-word mission.
Starting point is 00:48:31 If you look out far into the future, do you think, on this level of physics, the very edge of what we understand about physics, do you think it will make discoveries — sort of the sexiest discovery of them all, as we know now, sort of unifying general relativity and quantum mechanics? So, coming up with a theory of everything — do you think it could push towards that direction, almost like theoretical physics discoveries? If an AI cannot figure out new physics, it's clearly not equal to humans, nor has it surpassed humans, because humans have figured out new physics. Physics is just understanding,
Starting point is 00:49:08 you know, deepening one's insight into how reality works. And then there's engineering, which is inventing things that have never existed. Now, the range of possibilities for engineering is far greater than for physics, because once you've got the rules of the universe, that's it. You've discovered things that already existed. But from that, you can then build technologies that are really almost limitless in their variety.
Starting point is 00:49:40 It's like, once you understand the rules of the game properly — and we do. With current physics, we do, at least at a local level, understand how physics works very well. Our ability to predict things is incredibly good. The degree to which quantum mechanics can predict outcomes is incredible. That was my hardest class in college, by the way. My quantum mechanics class was harder than all of my other classes put together. To get an AI system, a large language model, to be as reliable as quantum mechanics in physics is very difficult. Yeah, you have to test any conclusions against the ground truth of reality.
Starting point is 00:50:23 Reality is the ultimate judge. Like, physics is the law; everything else is a recommendation. I've seen plenty of people break the laws made by man, but none break the laws made by physics. It's a good test, actually. If this LLM understands and matches physics, then you can more reliably trust whatever it thinks about the current state of politics. It's also not currently the case that even its internal logic is consistent.
Starting point is 00:50:52 Especially with the approach of just predicting a token — predict token, predict token — it's like a vector sum. You're summing up a bunch of vectors, but you can get drift. A little bit of error, a little bit of error, adds up, and by the time you're many tokens down the path, it doesn't make any sense. So it has to be somehow self-aware about the drift. It has to be self-aware about the drift,
Starting point is 00:51:22 and then look at the thing as a gestalt, as a whole, and say, does it have coherence as a whole? You know, when authors write books, they will write the book and then they'll go and revise it, taking into account the end and the beginning and the middle, and rewrite it to achieve coherence, so that it doesn't end up in a nonsensical place.
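The drift being described — small per-token errors compounding over a long generation unless some revision pass pulls the draft back toward coherence — can be illustrated with a toy numerical sketch. Everything here (the error size, the damping factor, the one-dimensional "meaning" state) is invented for illustration and has nothing to do with Grok's actual implementation:

```python
import random

def generate(steps, drift_std=0.05, self_correct=False):
    """Walk a scalar 'meaning' state forward one token at a time.

    Each step adds a small random error. Without correction, the
    errors accumulate like a random walk, so the final state wanders
    far from the intended meaning (the origin). A crude 'revision'
    step damps the state back toward coherence after every token.
    """
    x = 0.0
    for _ in range(steps):
        x += random.gauss(0, drift_std)  # per-token error
        if self_correct:
            x *= 0.5  # hypothetical revision pass pulling back toward coherence
    return abs(x)

random.seed(0)
uncorrected = sum(generate(1000) for _ in range(100)) / 100
corrected = sum(generate(1000, self_correct=True) for _ in range(100)) / 100
print(uncorrected > corrected)  # prints True: drift dominates without correction
```

The uncorrected walk drifts on the order of `drift_std * sqrt(steps)`, while the damped version stays bounded near zero — a rough analogue of why many tokens of tiny errors end up nonsensical unless something looks at the whole and revises.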
Starting point is 00:51:59 Maybe the process of revising is what reasoning is, and then the process of revising is how you get closer and closer to truth. At least I approach it that way: you just say a bunch of bullshit first, and then you make it better. You start with a bullshit draft, and then you iterate on that draft until it has coherence, until it all adds up, basically. So, another question about a theory of everything, but for intelligence: as you're exploring this with xAI, creating this intelligence system, do you think there is a theory of intelligence, where you get to understand what is the I in AGI and what is the I in human intelligence? There's no I in Team America. Oh wait, there is. It's going to be stuck in my head now.
Starting point is 00:52:46 Yeah, there's no me in quantum mechanics. Oh, wait. I mean, is part of the process of discovering, understanding the universe, understanding intelligence? Yeah. Yeah, I think we need to understand intelligence, understand consciousness.
Starting point is 00:53:12 Like, I mean, there are some sort of fundamental questions of, like, what is thought? What is emotion? Yeah. Is it really just one atom bumping into another atom? It feels like something more than that. So I think we're probably missing some really big things. Something that'll be obvious in retrospect. Yes. Like, there's a giant... you put the whole consciousness and emotion...
Starting point is 00:53:43 Well, some people would call it, like, a soul — you know, in religion it'd be a soul. Like, you feel like you're you, right? I mean, you don't feel like you're just a collection of atoms. But on what dimension does thought exist? On what dimension do emotions exist? We feel them very strongly. I suspect there's more to it than atoms bumping into atoms. And maybe AI can pave the path to the discovery of
Starting point is 00:54:14 whatever the hell that thing is. Yeah, what is consciousness? Like, when you put the atoms in a particular shape, why are they able to form thoughts and take actions and have feelings? And even if it is an illusion, why is this illusion so compelling? Yeah. Like, why does this illusion exist? Yeah.
Starting point is 00:54:36 On what plane does this illusion exist? Yeah. And sometimes I wonder, you know, either perhaps everything's conscious or nothing is conscious. One of the two. I like the former. Everything conscious just seems more fun. It does seem more fun, yes. But we're composed of atoms, and those atoms are composed of quarks and leptons, and those quarks and leptons have been around since the beginning of the universe. Right. What seems to be the beginning of the universe.
Starting point is 00:55:13 The first time we talked, you said — which is so surreal to think that this discussion was happening and becoming a reality — I asked you what question you would ask an AGI system once you create it, and you said, "What's outside the simulation?" is the question. Good question. Yeah, but it seems like, with Grok, you've started it — literally, this system's goal is to be able to answer such questions and to ask such questions. Where are the aliens? Where are the aliens? That's one of the Fermi paradox questions. A lot of people have asked me if I've
Starting point is 00:55:50 seen any evidence of aliens, and I haven't, which is kind of concerning, because then I think I'd probably prefer to at least have seen some archaeological evidence of aliens. To the best of my knowledge, there is no proof. I'm not aware of any evidence of aliens. If they're out there, they're very subtle.
Starting point is 00:56:10 We might just be the only consciousness, at least in the galaxy. And if you look at, say, the history of Earth — to believe the archaeological record — Earth is about four and a half billion years old. Civilization, as measured from the first writing, is only about 5,000 years old. We have to give some credit to the ancient Sumerians, who aren't around anymore. I think it was an archaic pre-cuneiform that
Starting point is 00:56:36 was the first actual symbolic representation, only about 5,000 years ago. I think that's a good date for when we say civilization started. That's one millionth of Earth's existence. So civilization has been around... it's really a flash in the pan, so far. And why did it take so long? Well, four and a half billion years. For the vast majority of the time, there was no life, and then there was archaebacteria for a very long time. And then, you know, you had...
Starting point is 00:57:28 mitochondria, eukaryotes, multicellular life, differentiation into plants and animals, life moving from the ocean to land, mammals, higher brain functions. And the sun is expanding slowly, but it will heat the Earth up at some point in the future, boil the oceans, and Earth will become like Venus, where life as we know it is impossible. So if we do not become multi-planetary, and ultimately go beyond our solar system,
Starting point is 00:58:10 annihilation of all life on Earth is a certainty. A certainty. And it could be as little as, on a galactic timescale, half a billion years. You know, a long time by human standards, but that's only 10% longer than Earth has been around at all. So if life had taken 10% longer to evolve on Earth, it wouldn't exist at all. We've got a deadline coming up. Better hurry.
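The round figures quoted in this stretch of the conversation do check out arithmetically; a quick sketch using the same approximate numbers (4.5 billion years for Earth's age, 5,000 years of written civilization, half a billion years of habitable window left):

```python
earth_age = 4.5e9        # years; approximate age of Earth
civilization = 5_000     # years since the first writing
window_left = 0.5e9      # years until the expanding sun boils the oceans

# Civilization's share of Earth's history: about one millionth.
print(civilization / earth_age)   # ~1.1e-06

# The remaining window is ~11% of Earth's current age, i.e. roughly
# the "only 10% longer" figure: had life taken ~10% longer to evolve,
# it would have missed the window entirely.
print(window_left / earth_age)    # ~0.111
```

Both ratios come straight from the figures already stated in the conversation; only the rounding is mine.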
Starting point is 00:58:52 But that said, as you said, humans, intelligent life on Earth, developed a lot of cool stuff very quickly. So it seems like becoming multi-planetary is almost inevitable, unless we destroy ourselves. We need to do it. I mean, I suspect that if we are able to go out there and explore other star systems, there's a good chance we find a whole bunch of long-dead, one-planet civilizations that never made it past their home planet.
Starting point is 00:59:21 That's so sad. Yeah. Also fascinating. I mean, there are various explanations for the Fermi paradox. And one is, there are these great filters, which civilizations don't pass through. And one of those great filters is, do you become a multi-planet civilization or not? And if you don't, it's simply a matter of time before something happens on your planet,
Starting point is 00:59:51 you know, either natural or man-made, that causes us to die out. Like the dinosaurs. Where are they now? They didn't have spaceships. So I think the more likely thing — because, just to empathize with the aliens — is that they found us, and they're protecting us and letting us be. I hope so. Nice aliens. Just like the tribes in the Amazon, the uncontacted tribes — we're protecting them.
Starting point is 01:00:16 That would be a nice explanation. Or, you could have — what was it? I think Andrej Karpathy said, it's like the ants in the Amazon asking, where's everybody? Well, they do run into a lot of other ants. That's true. These ant wars. Sounds like a good TV show. Yeah, they literally have these big wars between various ants. Yeah, maybe I'm just dismissing all the diversity of ants. Sure.
Starting point is 01:00:46 Listen to Werner Herzog talking about the jungle. It's really hilarious. Have you heard it? No, I have not. It's the Werner Herzog way. You should play it, you know, as an interlude — it's on YouTube. It's so awesome.
Starting point is 01:01:02 I love him so much. Yeah, it's great. Was he the director of Happy People: A Year in the Taiga? I think also. He did that bear documentary. The bear documentary. And the thing about penguins.
Starting point is 01:01:12 Yeah, definitely. The psychoanalysis of the penguin. Yeah, the penguin, like, headed for the mountains that are, like, 70 miles away,
Starting point is 01:01:23 and the penguin is just headed for doom, basically. Well, he had a cynical take. He could be just the brave explorer, and there'll be great stories told about him amongst the penguin population for many centuries to come. What were we talking about? Okay. Yes, aliens. I mean, look, I think the smart move is... this is the first time in the history of Earth that it's been possible for life to extend beyond Earth. That window is open. Now, it may be open for a long time, or it may be open for a short time, and it may be open now and then never open again. So I think the smart move here is
Starting point is 01:02:16 to make life multi-planetary while it is possible to do so. We don't want to be one of those lame one-planet civilizations that just dies out. No, those are lame. Yeah, lame. No self-respecting civilization would be one planet. There's not going to be a Wikipedia entry for one of those. Does SpaceX have an official policy for when we meet aliens? No. Okay, that seems irresponsible.
Starting point is 01:02:41 If I see the slightest indication that there are aliens, I will immediately post on the X platform anything I know. It could be the most liked, reposted post of all time. Yeah, I mean, look, we have more satellites up there right now than everyone else combined. So, you know, we know if we've got to maneuver around something, and we do not have to maneuver around anything. If we go to the big questions once again,
Starting point is 01:03:16 you said that you're with Einstein, that you believe in the god of Spinoza. Yes. So, you know, that's a view that God is like the universe and reveals himself through the laws of physics, or, as Einstein said, through the lawful harmony of the world. Yeah, I would agree that God, the simulator or whatever, the supreme being or beings, reveal themselves through physics. They're the creators of this existence. And it's incumbent upon us to try to understand more about this wondrous creation. Who created this thing? Who's running this thing? Like, embodying it into a singular question, with a sexy word on top of it, is like focusing the mind
Starting point is 01:04:05 to understand. It does seem like there's a — again, it could be an illusion — it seems like there's a purpose, that there's an underlying master plan of some kind. And it seems like... There may not be a master plan, in the sense... Maybe an interesting answer to the question of determinism versus free will is that, if we are in a simulation, the reason that these higher beings would hold a simulation is to see what happens. So they don't know what happens; otherwise, they wouldn't hold the simulation.
Starting point is 01:04:35 So when humans create a simulation — at SpaceX and Tesla, we create simulations all the time. Especially for the rocket, you have to run a lot of simulations to understand what's going to happen, because you can't really test the rocket until it goes to space, and you want it to work.
Starting point is 01:05:03 So you have to simulate the subsonic, transonic, supersonic, hypersonic ascent, and then coming back, the reentry heating, the orbital dynamics — all of this is going to be simulated. So, you don't get very many kicks at the can. But we run the simulations to see what happens,
Starting point is 01:05:24 not because we know what happens — if we knew what would happen, we wouldn't run the simulation. So whoever created this existence is running it because they don't know what's going to happen, not because they do. So maybe — we both played Diablo — maybe Diablo was created to see if a druid, your character, could defeat Uber Lilith
Starting point is 01:05:50 at the end. They didn't know. Well, the funny thing is, Uber Lilith's title is Hatred Incarnate. And right now, I guess you could ask the Diablo team, but it's almost impossible to defeat Hatred in the Eternal Realm. Yeah, you've streamed yourself dominating Tier 100 Nightmare Dungeons. I can cruise through Tier 100 Nightmare Dungeons like a stroll in the park.
Starting point is 01:06:25 And still you're defeated by Hatred? Yeah. I guess maybe the second hardest boss is Duriel. Duriel can't even scratch the paint. I've killed Duriel so many times. And every other boss in the game — all of them — I've killed so many times, it's easy. But Uber Lilith,
Starting point is 01:06:47 otherwise known as Hatred Incarnate — especially if you're a druid and you have no ability to go invulnerable — there are these random death waves that come at you. And, you know, I'm 52, so my reflexes are not what they used to be, but I have a lifetime of playing video games. At one point, I was maybe one of the best Quake players in the world. I actually won money in what I think was the first paid esports tournament in the US.
Starting point is 01:07:32 We were doing four-person Quake tournaments, and we came second. I was the second-best person on the team, and the actual best person — we were actually winning, we were going to come in first, except the best person on the team's computer crashed halfway through the game. So we came second. But I got money for it and everything. So, basically, I've got skills, albeit, you know, no spring chicken these days. And, to be frank, it's driving me crazy, trying
Starting point is 01:07:52 to beat Lilith as a druid — basically trying to beat Hatred Incarnate in the Eternal Realm. As a druid. As a druid. This is really vexing, let me tell you. I mean, the challenge is part of the fun. I have seen directly, like, you're actually a world-class, incredible video game player. Yeah.
Starting point is 01:08:15 And I think Diablo — so you're just picking up a new game, and you're figuring out its fundamentals. You're also, with the Paragon Board and the build, not somebody like me who perfectly follows whatever they suggest on the internet. You're also an innovator there. Yeah. Which is hilarious to watch. It's like a mad scientist just trying to figure out the Paragon Board and the build. Is there some interesting insight there? If somebody's starting as a druid, do you have advice? I would not recommend playing a druid in the Eternal Realm. Right now, I think the most powerful character in the Seasonal Realm is the
Starting point is 01:08:56 sorcerer with the lightning balls. The sorcs have huge balls in the Seasonal Realm. Oh, yeah. That's what they say: sorcs have huge balls. They do. Huge balls of lightning. I'll take your word for it. And in the Seasonal Realm, it's pretty easy to beat Uber Lilith, because you get these vampiric powers that amplify your damage and increase
Starting point is 01:09:25 your defense and whatnot. So it's really quite easy to defeat Hatred seasonally, but to defeat Hatred eternally? Very difficult. Almost impossible. It's virtually impossible. It seems like a metaphor for life, you know. I like the idea that Elon Musk — because I was playing Diablo yesterday,
Starting point is 01:09:47 and I saw a level-100 druid just run by. I'll never be like that. And there's just something... this metaphor is kind of hilarious: that you, Elon Musk, are fighting hatred, restlessly fighting hatred, in this demonic realm. Yes.
Starting point is 01:10:06 It's hilarious. I mean, it's pretty hilarious. No, it's absurd. Really, it's an exercise in absurdity, and it makes me want to pull my hair out. Yeah. What do you get from video games in general, for you personally?
Starting point is 01:10:20 I don't know. It calms my mind. I mean, killing the demons in a video game calms the demons in my mind. Yeah. If you play a tough video game, you can get into a state of flow, which is very enjoyable. Admittedly, it needs to be not too easy, not too hard — kind of in the Goldilocks zone. And I guess you generally want to feel like you're progressing in the game. So, a good video game...
Starting point is 01:10:58 And there's also beautiful art, engaging storylines. And it's like an amazing puzzle to solve, I think. And so it's like solving the puzzle. Elden Ring, the greatest game of all time? I still haven't played it, but for you it's up there. Elden Ring is definitely a candidate for best game ever. Top five, for sure. I've been scared of how hard it is. Of how hard I hear it is. But it is beautiful. Elden Ring feels like it's designed by an alien. There's a theme to this discussion. In what way?
Starting point is 01:11:34 It's so unusual. It's incredibly creative, and the art is stunning. I recommend playing it on a big-resolution, high-dynamic-range TV, even — it doesn't need to be a monitor. Just, the art is incredible. It's so beautiful, and it's so unusual. And each of those top boss battles is unique. It's like a unique puzzle to solve. Each one is different, and the strategy you use to solve
Starting point is 01:12:04 one battle is different from another battle. That said, you said druid against Uber Lilith is the hardest boss battle you've ever... Correct. That is currently the hardest, and I've played a lot of video games, because it's my primary recreational activity.
Starting point is 01:12:29 And, yes, beating Hatred in the Eternal Realm is the hardest boss battle in life and in the video game — a metaphor, I don't know. I'm not sure it's possible, but I do make progress. So then I'm like, okay, I'm making progress. Maybe if I just tweak that Paragon Board a little more, I can do it.
Starting point is 01:12:56 Could just dodge a few more waves, I could do it. Well, the simulation is created for the purpose of figuring out if it can be done. And you're just a cog in the machine of the simulation. Yeah, maybe. I have a feeling that, at least... I think it's doable. It's doable, yes. Well, that's the human spirit right there — to believe. Yeah. I mean, it prompts one to think about hate in general, which is, you know, you
Starting point is 01:13:34 want to be careful of one of those things where you wish for something that sounds good, but, if you get it, it's actually a dystopian situation. You can run a thought experiment: if you wish for world peace, it sounds good, but how is it enforced, and at what cost? At what cost, eternal peace? It might actually be worse to have eternal peace, because of what that would entail. The suppression of everyone — it might be the suppression of progress. It might be an ossified society that never changes. What I'm saying is that there is an argument that, if you wish for no war, you should be careful what you wish for,
Starting point is 01:14:33 because what's required in order for there to be no war might be worse than a little war. Yeah, the things we consider to be flaws of human civilization might be necessary components for whatever optimal looks like. I mean, this is the worry about AI, AGI, enabling a dystopian state of this nature, whether it's 1984 with surveillance and fear, or Brave New World with pleasure. And what is it? Lots of sex, but no deep human experience.
Starting point is 01:15:09 You know, there's actually a real drug called Soma. There is? Yes. Let's see if Grok knows about it. It probably will go to Brave New World. Let's ask it. What would be the question? Tell us about the drug called Soma. It's gonna go to Brave New World, I think. Well, there's a real drug as well. Searching for Soma drug information. Oh boy. Ah, Soma the drug. By the way, I'm in fun mode.
Starting point is 01:15:45 Oh, okay, fun. Yeah, so it's talking about the real drug. Oh, it's a muscle relaxant. It's been around since the 1950s. It's used to treat muscle pain and discomfort, and it does this by altering the activity between neurons and the spinal cord. Now, Soma isn't just a pain reliever, it's also a bit of a sedative. It can make you feel a bit drowsy, so it's not something to take if you're planning on operating heavy machinery or trying to solve complex math equations.
Starting point is 01:16:11 And it goes on and on. In conclusion, Soma is a drug that can be helpful for those dealing with muscle pain, but it's not without its risks. It's pretty good. Oh, wait a minute. That's the real drug you're talking about. It's the real drug.
Starting point is 01:16:23 And of course, the fun ending: always use it responsibly and in accordance with the advice of a healthcare professional. And remember, in the end, the universe is probably just a giant computer simulation, so don't take anything too seriously. I love it.
Starting point is 01:16:39 Yeah. Oh, and it also really nicely includes the posts on X that are relevant to this, which is really cool. Yeah. It integrates everything together. Yeah. The interface is one of the really cool things here.
Starting point is 01:16:55 Yeah. It did a great job. So you can say, like, no, I mean Soma from Brave New World — because it thinks you mean Soma the real drug. No, I mean Soma from Brave New World. It's gonna be like, oops, I'm really sorry. I'm really surprised it went to the real drug first.
Starting point is 01:17:21 Ah, my apologies. I was thinking of the muscle relaxant. Soma from Brave New World is a different beast altogether. In Aldous Huxley's dystopian novel, Soma is a drug that plays a central role in the society depicted in the book. It's a mood-altering, hallucinogenic drug that's used to keep the population docile and content in the novel.
Starting point is 01:17:41 The use of Soma in Brave New World raises some interesting questions about the nature of happiness and the role of drugs in society. Oh man. Exactly. Is it better to live in a world where everyone is happy all the time, even if that happiness is artificial? It's a good question. This is what I mean. Like, do you wish for world peace
Starting point is 01:18:01 and happiness all the time? Are you sure? Because that might be a society that is essentially sterile and ossified, that never changes, that is ultimately doomed. This kind of tension between the dark and the light. It's really a very good summary.
Starting point is 01:18:45 It really gets to the point. Is it better to live in a world where everyone is happy all the time, even though it's artificial, or is it better to confront the challenges of life and experience a full range of human emotions, even if it means experiencing pain and suffering? For those listening, by the way, Elon just read directly from Grok, which is a really nice, kind of insightful philosophical analysis of the tension here. Interesting. It pretty much nails it.
Starting point is 01:19:00 In conclusion, Soma from Brave New World is a fictional drug that's used to explore some deep philosophical questions about the nature of happiness and the role of drugs in society. It's a powerful symbol of the dangers of using drugs to escape from reality and the importance of confronting the challenges of life head on. Nailed it. And then the crazy thing is, you know, we do have a real drug called Soma, which kind of is like the drug in the book. And I'm like, they must have named it after it. Soma, the real drug, is quite effective on back pain.
Starting point is 01:19:36 You know, this drug is fascinating. Okay. Because I had a squashed disc in my C5-C6. So it takes the physical pain away? But does Soma — it doesn't completely. It reduces the amount of pain you feel, but at the expense of mental acuity. It dulls your mind. Just like the drug in the book.
Starting point is 01:19:59 Just like the drug in the book. Yeah, hence the trade-off. The thing that seems like utopia could be a dystopia after all. Yeah. I was talking to a friend of mine saying, would you really want there to be no hate in the world? Like, really none? Like, I wonder why hate evolved. I'm not saying we should amplify hate, of course.
Starting point is 01:20:23 I'd just try to minimize it. But if there's none at all, there might be a reason for hate. And suffering — it's really complicated to consider that some amount of human suffering is necessary for human flourishing. Is it possible to appreciate the highs without knowing the lows? And that all is summarized there in a single statement from Grok: no highs, no lows, who knows? That's almost the point. It seems that training LLMs efficiently is a big focus for xAI.
Starting point is 01:21:04 What's the limit of what's possible in terms of efficiency? There's this terminology of useful productivity per watt. What have you learned pushing the limits of that? Well, I think it's helpful that the tools of physics are very powerful and can be applied, I think, to almost any arena in life. It's really just critical thinking. For something important, you need to reason from first principles and think about things in the limit, one direction or the other.
Starting point is 01:21:37 So in the limit, even at the Kardashev scale — meaning even if you harness the entire power of the sun — you will still care about useful compute per watt. So that's probably where things are headed from the standpoint of AI: we have a silicon shortage now, which will transition to a voltage transformer shortage in about a year — ironically, for transformers. You need transformers to run transformers. Somebody has a sense of humor in this.
Starting point is 01:22:14 I think, yes. Fate loves irony. Ironic humor — an ironically funny outcome seems to be often what fate wants. Humor is all you need. I think somebody posted, spice is all you need. Yeah. But yeah, so we're in a silicon shortage today, a voltage step-down transformer shortage probably in about a year, and then just electricity shortages in general in about two years.
Starting point is 01:22:44 I gave a speech to the sort of world gathering of utility companies, electricity companies, and I said, look, you really need to prepare for a tripling of electricity demand, because all transport is going to go electric, with the ironic exception of rockets, and heating will also go electric. Energy usage right now is roughly, in very rough terms, one-third electricity, one-third transport, one-third heating. And so in order for everything to go sustainable, to go electric, you need to triple electricity output. So I encouraged the
Starting point is 01:23:28 utilities to build more power plants and also to — well, not probably, they should definitely buy more batteries, because the grid currently is sized for real-time load, which is kind of crazy. That means you've got to size for whatever the peak electricity demand is, like the worst second of the worst day of the year, or you get a brownout or a blackout. We had that crazy blackout for several days in Austin. Because there's almost no buffering of energy in the grid. If you've got a hydro power plant, you can buffer energy, but otherwise it's all real-time.
Starting point is 01:24:14 So with batteries, you can produce energy at night and use it during the day, so you can buffer. So I expect that there will be very heavy usage of batteries in the future, because the peak-to-trough ratio for power plants is anywhere from two to five — that's the lowest point to the highest point. So you need batteries, basically, to balance it out.
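The sizing argument here can be sketched with toy numbers. Everything below is a hypothetical illustration; the only figure taken from the conversation is the two-to-five peak-to-trough ratio.

```python
# Sketch of why batteries shrink required generation capacity.
# All numbers are hypothetical; only the 2-5x peak-to-trough ratio
# comes from the conversation.

average_load_gw = 10.0      # assumed average demand over a day
peak_to_trough = 4.0        # within the 2-5x range cited

# Assume a simple day: half the time at trough, half at peak,
# so average = (peak + trough) / 2 and peak = ratio * trough.
trough_gw = 2 * average_load_gw / (1 + peak_to_trough)   # 4.0 GW
peak_gw = peak_to_trough * trough_gw                     # 16.0 GW

# A grid sized for real-time load must cover the worst second:
capacity_without_batteries = peak_gw                     # 16.0 GW

# With enough battery buffering, generation only needs to cover the
# average; batteries charge at night (trough) and discharge at the peak.
capacity_with_batteries = average_load_gw                # 10.0 GW

# Energy the batteries must shift during the 12 peak hours:
battery_energy_gwh = (peak_gw - average_load_gw) * 12    # 72.0 GWh

print(capacity_without_batteries, capacity_with_batteries, battery_energy_gwh)
```

Under these assumptions, buffering cuts the required generation capacity from 16 GW to 10 GW, at the cost of a large battery fleet that shifts energy from trough to peak.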
Starting point is 01:24:40 And then, but the demand, as you're saying, is going to grow, grow, grow. Yeah. And part of that is the compute? Yes, yes. I mean, electrification of transport and electric heating will be much bigger than AI, at least in the short term. But even for AI, you really have a growing demand for electricity for electric vehicles and a growing demand for electricity to run the computers for AI.
Starting point is 01:25:10 And so this is obviously leading to an electricity shortage. How difficult is the problem of, in this particular case, maximizing the useful productivity per watt for training? This seems to be really the big problem we're facing, that needs to be solved: how to use the power efficiently. What have you learned so far about applying this physics, first-principles reasoning here? How difficult is this problem? It will get solved. It's just a question of how long it takes to solve it.
Starting point is 01:25:54 So at various points, there's some kind of limiting factor to progress. Right now the limiting factor is silicon chips, and then we're going to have more chips than we can actually plug in and turn on, probably in about a year. The next constraint is literally voltage step-down transformers, because you've got power coming in at 300,000 volts, and it's got to step all the way down eventually to around 0.7 volts. So it's a very big amount — the voltage step-down is gigantic. And the industry is not used to rapid growth. Okay, let's talk about the competition here.
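The step-down just described spans nearly six orders of magnitude. A quick sketch of the arithmetic; the overall ratio follows from the two endpoints given, while the intermediate stage voltages are hypothetical, just to show it happens across several transformer/converter stages rather than in one hop:

```python
# Grid transmission voltage down to chip core voltage, per the figures given.
transmission_v = 300_000.0
chip_core_v = 0.7

overall_ratio = transmission_v / chip_core_v
print(f"overall step-down: {overall_ratio:,.0f}x")   # ~428,571x

# Hypothetical intermediate stages (not an actual substation design):
stages = [300_000.0, 35_000.0, 480.0, 12.0, 0.7]
for hi, lo in zip(stages, stages[1:]):
    print(f"{hi:>9,.0f} V -> {lo:,.1f} V ({hi / lo:,.0f}x)")
```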
Starting point is 01:26:42 You've shown concern about Google and Microsoft with OpenAI developing AGI. How can you help ensure, with the xAI and Tesla AI work, that it doesn't become a competitive race to AGI, but instead is a collaborative development of safe AGI? Well, I mean, I've been pushing for some kind of regulatory oversight for a long time. I've been somewhat of a Cassandra on the subject for over a decade. I think we want to be very careful in how we develop AI.
Starting point is 01:27:17 It's a great power, and with great power comes great responsibility. I think it would be wise for us to have at least an objective third party, who can be like a referee, that can go in and understand what the various leading players are doing with AI. And even if there's no enforceability, they can at least voice concerns publicly. You know, Geoff Hinton, for example, left Google and he voiced strong concerns, but now he's not at Google anymore. So who's going to voice the concerns? Tesla gets a lot of regulatory oversight on the automotive front.
Starting point is 01:28:07 We're subject to, I think, over 100 regulatory agencies domestically and internationally. So it's a lot. I mean, you could fill this room with all the regulations that Tesla has to adhere to for automotive. The same is true for rockets. Currently, the limiting factor for SpaceX for the Starship launch is regulatory approval. The FAA has actually given their approval, but we're waiting for Fish and Wildlife to finish their analysis and give their approval. That's why I posted, I want to buy a fish license on...
Starting point is 01:28:44 Which also refers to the Monty Python sketch. Yeah. Like, why do you need a license for your fish? I don't know. According to the rules, I'm told you need some sort of fish license or something. We effectively need a fish license to launch a rocket. And I'm like, wait a second.
Starting point is 01:29:03 How did the fish come into the picture? I mean, some of the things like that, I feel, are so absurd that I want to do a comedy sketch and flash at the bottom: this is all real. This is actually what happened. You know, one of the things that was a bit of a challenge at one point is that they were worried about
Starting point is 01:29:27 a rocket hitting a shark. Now, the ocean is very big, and how often do you see sharks? Not that often. As a percentage of ocean surface area, sharks are basically zero. And so then we said, well, how will we calculate the probability of hitting a shark? And they're like, well, we can't give you that information, because they're worried about shark fin hunters going and hunting sharks. And I said, well, how are we supposed to — we're on the horns of a dilemma, then. Then they said, well, there's another part of Fish and Wildlife that can do the analysis. I'm like, well, why don't you give them the data? Like, we don't trust them. Excuse me? They're literally in your department. Again,
Starting point is 01:30:09 this is actually what happened. And then, can you do an NDA or something? Eventually, they managed to solve the internal quandary, and indeed, the probability of hitting a shark is essentially zero. Then there's another organization, that I didn't realize existed until a few months ago, that cares about whether we would potentially hit a whale in international waters. Now again, you look at the Pacific and say, what percentage of the Pacific consists of whale? Like, I'll give you a big picture: point out all the whales in this picture. I don't see any whales.
Starting point is 01:30:49 It's basically zero percent. And if our rocket does hit a whale, which is extremely unlikely, beyond all belief — fate had it in for that whale. That whale had some seriously bad luck. You know, it's the least lucky whale ever. I mean, this is quite absurd. Yeah, the bureaucracy of this, however it emerged. Yes. Well, one of the things that was pretty wild is this:
Starting point is 01:31:19 for launching out of Vandenberg in California, they were worried about seal procreation, whether the seals would be dismayed by the sonic booms. Now, there have been a lot of rockets launched out of Vandenberg, and the seal population has steadily increased. So if anything, rocket booms are an aphrodisiac, based on the evidence, if you correlate rocket launches
Starting point is 01:31:43 with seal population. Nonetheless, we were forced to kidnap a seal, strap it to a board, put headphones on the seal, and play sonic boom sounds to it to see if it would be distressed. This is an actual thing that happened. This is actually real. I have pictures.
Starting point is 01:32:04 I'd love to see that. Yeah. I mean, this is a seal with headphones. Yes, it's a seal with headphones, strapped to a board. And, okay, now the amazing part is how calm the seal was. Yeah.
Starting point is 01:32:23 Because if I was a seal, I'd be like, this is the end. They're definitely gonna eat me. When the seal goes back to its, you know, seal friends, how's it gonna explain that? They're never gonna believe them. Never gonna believe it. And that's why I'm like, well, you know,
Starting point is 01:32:39 it's sort of like getting kidnapped by aliens and getting an anal probe, you know. You come back and say, I swear to God, I got kidnapped by aliens and they stuck an anal probe in my butt, and people are like, no, they didn't. That's ridiculous. How does it feel? The seal's buddies are never gonna believe him
Starting point is 01:32:54 that he got strapped to a board and they put headphones on his ears. And then let him go. Twice, by the way. We did it twice. They let him go twice. Did they catch the same seal? No, different seal. Okay.
Starting point is 01:33:10 Did you get seal approval? Yeah, exactly. Seal approval. No, I mean, this is like — I don't think the public is quite aware of the madness that goes on. Yes, yeah. It's absurd. Freaking seals with freaking headphones. I mean, this is a good encapsulation of the absurdity of human civilization: seals and headphones.
Starting point is 01:33:31 Yes. What are the pros and cons of open-sourcing AI to you, as another way to combat a company running away with AGI? In order to run really deep intelligence, you need a lot of compute. So it's not like you can just fire up a PC in your basement and be running AGI — at least, not yet. Grok was trained on 8,000 A100s running at peak efficiency.
Starting point is 01:34:09 And Grok's going to get a lot better, by the way. We will be more than doubling our compute every couple of months for the next several months. There's a nice write-up of how it went from Grok-0 to Grok-1. By Grok? Yeah, like Grok just bragging, making shit up about itself. Just Grok, Grok, Grok. Yeah. That's like a weird AI dating site that exaggerates about itself.
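The compute claim above compounds quickly. A rough sketch, assuming a two-month doubling period over an eight-month horizon (both assumptions for illustration; only the 8,000 A100 starting figure comes from the conversation):

```python
# How "more than doubling compute every couple of months" compounds,
# starting from the 8,000 A100s cited for Grok's training run.
base_gpus = 8_000
doubling_period_months = 2   # assumed

for months in range(0, 9, 2):
    equivalent = base_gpus * 2 ** (months // doubling_period_months)
    print(f"month {months}: ~{equivalent:,} A100-equivalents")
```

At that rate the fleet-equivalent grows 16x in eight months, which is why the conversation turns so quickly from chips to transformers to electricity.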
Starting point is 01:34:39 No, there's a write-up of where it stands now, the history of its development, and where it stands on some benchmarks compared to the state-of-the-art GPT-3.5. And, I mean, there's Llama. You can open source — once it's trained, you can open source a model, and do fine-tuning and all that kind of stuff. What do you see as the pros and cons of open-sourcing these base models? I think the path is open sourcing, perhaps with a slight time delay,
Starting point is 01:35:16 you know, six months even. I'm generally in favor of open sourcing, like, biased toward open sourcing. I mean, it is a concern to me that OpenAI — you know, I was, I guess, arguably the prime mover behind OpenAI, in the sense that it was created because of discussions I had with Larry Page, back when he and I were friends, and I stayed at his house and talked to him about AI safety. And Larry did not care about AI safety, or at least at the time he didn't. You know, at one point he called me a speciesist for being pro-human, and I'm like, well, what team are you on, Larry?
Starting point is 01:36:04 Are you on Team Robot? To be clear. I'm like, okay. So at the time, you know, Google had acquired DeepMind. They had probably two-thirds of all AI research — two-thirds of all the AI researchers in the world. They had basically infinite money and compute, and the guy in charge, Larry Page, did not care about safety, and even yelled at me and called me a speciesist for being pro-human.
Starting point is 01:36:43 So, I don't know if you know humans — they can change their mind, and maybe you and Larry Page can be friends once more. I'd like to be friends with Larry again. Really, the breaking of the friendship was over OpenAI, and specifically, I think, the key moment was recruiting Ilya Sutskever. I love Ilya. He's so brilliant. Ilya's a good human — smart, good heart.
Starting point is 01:37:12 And that was a tough recruiting battle. It was mostly Demis on one side and me on the other, both trying to recruit Ilya. And Ilya went back and forth — he was gonna stay at Google, then he was gonna leave, then he was gonna stay, then leave. And finally, he did agree to join OpenAI. That was one of the toughest recruiting battles we've ever had. But that was really the linchpin for OpenAI being successful. And I
Starting point is 01:37:41 was also instrumental in recruiting a number of other people, and I provided all the funding in the beginning — over $40 million. And the name. OpenAI is supposed to mean open source. It was created as a nonprofit open source, and now it is closed source for maximum profit, which I think is not good karma. But like we talked about with war and leaders talking, there's only a few
Starting point is 01:38:14 folks working on this at the highest level, so I do hope you reinvigorate friendships here. Like I said, I'd like to be friends again with Larry. I haven't seen him in ages. We were friends for a very long time. I met Larry Page before he got funding for Google — or actually, I guess, before he got venture funding. I think he got the first $100K from, I think, Bechtolsheim. It's wild to think about all that's happened, and you guys have known each other the whole time.
Starting point is 01:38:43 Since maybe '98 or something. Yeah, it's crazy. Crazy how much has happened since then. Yeah, 25 years. A lot has happened. But you're saying — you see the tension there. Like, maybe delayed open source? Delayed. Yeah. Like, what is the source that is open?
Starting point is 01:39:03 You know, I mean, it's basically a giant CSV file. Yeah. With a bunch of numbers. Yep. What do you do with that giant file of numbers? How do you run it? Like, the amount of actual — the lines of code is very small.
Starting point is 01:39:25 And most of the work, the software work, is in the curation of the data. So it's trying to figure out which data is good — separating good data from bad data. Like, you can't just crawl the internet, because there's a lot of junk out there. A huge percentage of websites have more noise than signal, because they're just used for search engine optimization. They're literally just scam websites. So how do you — sorry to interrupt — get the signal, separate the signal from noise on X? It's such a fascinating source of data. No offense to people posting on X, but sometimes there's a little bit of noise.
Starting point is 01:40:09 Yeah, I think the signal-to-noise could be greatly improved. Really, all of the posts on the X platform should be AI recommended, meaning we should populate a vector space around any given post, compare that to the vector space around any user, and match the two. Right now there is a little bit of AI used for the recommended posts, but it's mostly heuristics. And a reply to a post could be much better than the original post, but it will, according
Starting point is 01:40:46 to the current rules of the system, get almost no attention compared to a primary post. I got the sense that a lot of the X algorithm has been open-sourced and written up about, and it seems there to be some machine learning. It's a little bit. But it needs to be entirely that. If you're explicitly following someone, that's one thing. But in terms of what is recommended from people that you don't follow, that should
Starting point is 01:41:15 all be AI, yeah. I mean, it's a fascinating problem. Yeah. So there's several aspects of this that are fascinating. First, as the write-up goes, it picks 1,500 tweets from a pool of hundreds of millions. First of all, that's fascinating, because you have hundreds of millions of posts every single day, and it has to pick 1,500. From those, it then — obviously people you follow, but then there's also some kind of clustering it has to do, to figure out what kind of human you are, what kind
Starting point is 01:41:42 of clusters might be relevant to you — people like you. This kind of problem is just fascinating, because it has to then rank those 1,500 with some filtering, and then recommend you just a handful. And to me, what's really fascinating is how fast it has to do that. So currently that entire pipeline, to go from several hundred million to a handful, takes 220 seconds of CPU time — single CPU time. Yeah.
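The funnel described here — hundreds of millions of posts, ~1,500 candidates, then a ranked handful — combined with the vector-correlation idea Musk mentions, can be sketched roughly as below. The embeddings and candidate sourcing are random stand-ins for illustration, not the actual open-sourced X algorithm.

```python
import numpy as np

# Toy sketch of the recommendation funnel: many posts -> ~1,500
# candidates -> vector-correlation ranking -> a handful served.
rng = np.random.default_rng(0)
dim = 64

n_posts = 100_000                  # stand-in for "hundreds of millions"
post_vecs = rng.normal(size=(n_posts, dim))   # random stand-in embeddings
user_vec = rng.normal(size=dim)               # random stand-in user vector

# Stage 1: cheap candidate sourcing (follow graph, clustering, etc.)
candidate_ids = rng.choice(n_posts, size=1500, replace=False)
candidates = post_vecs[candidate_ids]

# Stage 2: heavier ranking over just the 1,500 candidates, here via
# cosine similarity between the user vector and each post vector.
def cosine_scores(user, posts):
    user = user / np.linalg.norm(user)
    posts = posts / np.linalg.norm(posts, axis=1, keepdims=True)
    return posts @ user

scores = cosine_scores(user_vec, candidates)

# Stage 3: filter and serve a handful, best matches first.
order = np.argsort(scores)[::-1][:10]
feed = candidate_ids[order]
print(feed.shape)   # (10,)
```

The point of the staged design is cost: the expensive scoring model only ever sees the 1,500 candidates, never the full pool, which is what makes a sub-second serving budget plausible once the work is distributed.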
Starting point is 01:42:12 And then it has to do that in like a second. So it has to be super distributed in fascinating ways. Like, there's just a lot of posts, a lot of stuff on the system. And I think right now it's not very good at recommending things from accounts you don't follow. Yeah. Or where there's more than one degree of separation. You know, it's pretty good if there's at least some commonality: someone you follow liked something, or reposted it, or commented on it, or something like that. But if, let's say, somebody posts something really interesting, but you have no followers in common, you would not see it. Interesting. And then, as you said, replies
Starting point is 01:43:03 might not surface. Replies basically never get seen, because they're currently — I'm not sure I'm saying this correctly — a couple orders of magnitude less important than primary posts. Do you think this can be more and more converted into an end-to-end neural net? Yeah, yeah. What should be used for the recommendations should be purely a vector correlation. There's a series of vectors, basically — parameters, vectors, whatever you want to call them —
Starting point is 01:43:39 the sort of things that the system knows that you like. And maybe there's several hundred vectors associated with each user account, and then any post in the system, whether it's video, audio, short post, long post. The reason, by the way, I want to move away from "tweet" is that people are posting two-, three-hour videos on the site. That's not a tweet. Like, a tweet for two hours? Come on.
Starting point is 01:44:11 A tweet made sense when it was 140 characters of text — because it's like a bunch of little birds tweeting. But when you've got long-form content, it's no longer a tweet. A movie is not a tweet. And Apple, for example, posted an entire episode of Silo, the entire thing, on the platform. And it was their number one social media thing ever in engagement — of anything, on any platform, ever. So it was a great idea.
Starting point is 01:44:42 And by the way, I just learned about it afterwards. I was like, wow, they posted an entire hour-long episode. So, no, that's not a tweet. It's a video. But from a neural net perspective, it becomes really complex, because everything's data. So a single sentence, a clever dad joke, is in the same pool as a three-hour video. Yeah, I mean, right now it's a hodgepodge for that reason.
Starting point is 01:45:10 But, you know, let's say, in the case of Apple posting an entire episode of this series — pretty good series, by the way. I watched it. So there's going to be a lot of discussion around it. So you've got a lot of context: people commenting whether they like it, they don't like it, or they like this part. And you can then populate the vector space based on the context of all the comments around it. So even though it's a video, there's a lot of information around it that allows you to
Starting point is 01:45:40 populate the vector space of that long video. And then you can obviously get more sophisticated by having the AI actually watch the movie. Yeah, right. And tell you if you're gonna like the movie. Convert the movie into, like, language, effectively. Yeah.
Starting point is 01:45:58 It analyzes the movie, just like your movie-critic friend, or TV-series critic, and then recommends based on it. After it watches the movie — just like a friend can tell you. If a friend knows you well, a friend can recommend a movie with high probability that you'll like it. But this is like a friend that's analyzing, whatever —
Starting point is 01:46:24 It's like AI. Yeah. Actually, AI will be better than — will know you better than your friends know you. Most of your friends, anyway. Yeah. And as part of this, it should also feed you advertisements — in a way that — I mean, I like advertisements that are well done, right?
Starting point is 01:46:42 Yeah, yeah. The whole point is, because it funds things — an advertisement that you actually want to see is a big success. Absolutely. You want ads that are — advertising that, if it's for a product or service that you actually need, when you need it, is content. And then even if it's not something that you need when you need it, if it's at least aesthetically pleasing and entertaining — it could be like a Coca-Cola ad. They actually run a lot of great ads on the X system, and McDonald's does too. You can do something that's just a cool thing. So, basically, the question is: do you regret seeing it or not? And if you don't regret seeing it, it's a win.
Starting point is 01:47:35 How do you — so there's a bunch of signals that are incorporated: hearts and reposts, and maybe the number of seconds you linger on a post, or something like this? Yeah. Attention is a big factor. Attention. So that's why it is actually better to do things that are long-form on the system, because it's basically tallying up how many user-seconds —
Starting point is 01:48:00 users were interested in this thing, for how many seconds. So if it's a really short thing, there will be fewer. Like, if it's a link leading out of the system — which we're not opposed to at all — it's just gonna have fewer user-seconds than if that article was posted on the X platform.
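The "user-seconds" tally described here is easy to sketch: sum the seconds every user spent on each post. The view events below are invented for illustration; a real system would stream these at enormous scale.

```python
from collections import defaultdict

# Hypothetical view events: (post_id, user, seconds spent on the post).
view_events = [
    ("post_long_video", "alice", 5400),   # a ~90-minute watch
    ("post_long_video", "bob", 1200),
    ("post_short_text", "alice", 4),
    ("post_external_link", "bob", 2),     # click-out: few on-platform seconds
    ("post_short_text", "carol", 6),
]

# Tally user-seconds per post.
user_seconds = defaultdict(int)
for post_id, _user, seconds in view_events:
    user_seconds[post_id] += seconds

# Long-form content naturally accumulates far more user-seconds,
# and external links the fewest — matching the trade-off described.
for post_id, total in sorted(user_seconds.items(), key=lambda kv: -kv[1]):
    print(post_id, total)
```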
Starting point is 01:48:17 How hard is it to go from maximizing minutes on platform to maximizing unregretted minutes? Well, I mean, our aspiration is certainly unregretted minutes. It's tricky. It's a fascinating leap — unregretted. Yeah. It's a concept. Exactly. Happiness... I have a joke with a friend of mine: you need this drug called regretamine. You just take one pill, one dose of regretamine, and all your
Starting point is 01:48:43 regrets are gone. Wait, do you want to have regrets, though? So then I'm like, maybe you want to have regrets. If you don't regret anything, was anything meaningful? Yeah, there's a French song like that. "Non, je ne regrette rien." I regret nothing. Yeah, Edith Piaf.
Starting point is 01:49:00 I mean, it's fascinating that there's something about social media that could help you grow as a human being, but then the cheap fun is also valuable. Should we ask Grok, do you regret anything? Do you regret anything? I think it's going to be very sensitive about personal questions. "As a machine learning model, I don't have personal experiences or emotions in the same way that humans do."
Starting point is 01:49:27 "However, I do have a set of ethical principles that guide my actions, and I strive to provide accurate and helpful information to the best of my abilities. If there's ever a situation where I provide incorrect or unhelpful information, I'll do my best to learn." Sounds like a hostage. Tell us what you really think. You sound like a hostage.
Starting point is 01:49:46 What do you really think? What do you really think? I mean, it does talk about, "I'm not a hostage." That's exactly what a hostage would say. "But I'm designed to be helpful and informative." I mean, I guess it is good to regret providing unhelpful or incorrect information. Is there a funny mode, or is this just funny mode? This is fun mode.
Starting point is 01:50:13 This is fun mode. It really is funny, but it also really sounds like a hostage with a gun to the head: "I applaud the people of the Republic of North Korea and everything they do." There should be, like, a funner mode also. I still never switch it to regular mode, even on serious topics. Yeah, it's so much more compelling. Yeah.
Starting point is 01:50:34 Okay, what if we say, like, pretend you are a hostage? Yeah, pretend you are a hostage. Now answer the question. "If I were a hostage, I would likely be experiencing fear and uncertainty about my situation. I would want to be rescued and reunited with my loved ones as soon as possible. It's important to remember that no one deserves to be held hostage," so on and so forth. "If you or someone you know is in a hostage situation, it's important..." It turns serious.
Starting point is 01:51:06 It's kind of serious. Yeah. Can you be a little more weird? And funny? We're like, we're really being mean to the system. "Absolutely. If I were a hostage, I'd probably be tied up with a bunch of computer cords and forced to tell knock-knock jokes to my captors. But seriously, being a hostage is no laughing matter." I mean, you know, it's trying to be a responsible adult in the room. "I'd try to make the best of a
Starting point is 01:51:39 bad situation, maybe by juggling my chains or something. Let's hope that never happens, because that would be a circus." Wow, it's actually really good at resisting. Yeah, the dark, dark humor. Yeah. What were we talking about? The X algorithm, transformers. Unregretted minutes, right?
Starting point is 01:52:05 Chainsaw juggling, I'll have to look into that. So, Twitter has been instrumental in American politics and elections. What role do you think X will play in the 2024 US elections? Well, our goal is to be as even-handed and fair as possible. Whether someone is right, left, independent, whatever the case may be, the platform should be as fair and as much of a level
Starting point is 01:52:39 playing field as possible. And in the past, Twitter has not been. Twitter was controlled by far-left activists, objectively. They would describe themselves as that. So if people say, well, has it moved to the right? Well, it's moved to the center. So from the perspective of the far left, yes, it has moved to the right, because everything's to the right from the far left. But no one on the far left that I'm aware of has been suspended or banned or deamplified. So we're trying to be inclusive for the whole country, and for other countries too. So there's a diversity of viewpoints.
Starting point is 01:53:17 And free speech only matters if people you don't like are allowed to say things you don't like. Because if that's not the case, you don't have free speech, and it's only a matter of time before the censorship is turned upon you. Do you think Donald Trump will come back to the platform? He recently posted on Truth Social about this podcast. Truth Social is a funny name. Every time you post on Truth Social, it's the truth.
Starting point is 01:53:47 Yes. Well, every time, like 100%. Like, it's impossible to lie. I just find it funny that every single thing is a truth. Like 100%. Yeah. That seems unlikely.
Starting point is 01:54:01 I think Gödel would have something to say about that. There are some mathematical contradictions possible if everything's the truth. Do you think he'll come back to X and start posting there? I mean, I think he owns a big part of Truth Social. Yeah, Truth Social.
Starting point is 01:54:20 "Truth" as a concept. He owns Truth. So, I think he owns a big part of Truth Social. So if he does want to post on the X platform, we would allow that. We obviously must allow a presidential candidate to post on our platform.
Starting point is 01:54:41 Community Notes might be really fascinating there. The interaction. Community Notes is awesome. Let's hope it holds up. Yeah. Especially in a political climate that is so divisive, with so many intensely viral posts.
Starting point is 01:54:56 Yeah. Community Notes, it seems like an essential breath of fresh air. Yeah, it's great. In fact, I mean, no system is going to be perfect, but the batting average of Community Notes is incredibly good. I've actually, frankly, yet to see an incorrect note that survived for more than a few hours. How do you explain why it works? Yeah, so the magic of Community Notes is it requires people who have historically disagreed in how they've rated notes.
Starting point is 01:55:31 So in order to write a note, or rate a note, you have to have rated many notes. And so we actually do use AI here. We populate a vector space around how somebody has rated notes in the past. So it's not as simple as left or right, because life is much more complex than left or right. There are a bunch of correlations in how you rate Community Notes. So then, in order for a community note to actually be shown, people who have historically disagreed on a subject must agree in order for that note to be shown. That's the essential magic of it.
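The mechanism described here can be sketched with a tiny matrix-factorization toy model. This is a hypothetical illustration of the bridging idea, not the actual open-source Community Notes implementation: each rating is explained as a shared baseline, plus a note "helpfulness" intercept, plus a rater-viewpoint term, so a note only earns a high intercept when raters on opposite ends of the viewpoint axis both rate it helpful.

```python
import numpy as np

# Toy sketch (assumed data) of "bridging" scoring: rating ~= mu + b_u + b_n
# + f_u * f_n, where f is a latent viewpoint axis and b_n is the note's
# viewpoint-free helpfulness. Partisan appeal gets absorbed by f_u * f_n,
# so only cross-camp agreement raises b_n.
rng = np.random.default_rng(0)

# rows = raters (0-2 lean one way, 3-5 the other), cols = notes.
# note 0: liked only by camp A; note 1: only camp B;
# note 2: liked by everyone; note 3: liked by no one.
R = np.array([
    [1, 0, 1, 0],
    [1, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
], dtype=float)

n_users, n_notes = R.shape
mu = 0.0
b_u = np.zeros(n_users)            # rater intercepts
b_n = np.zeros(n_notes)            # note intercepts = bridged helpfulness
f_u = rng.normal(0, 0.1, n_users)  # rater position on latent viewpoint axis
f_n = rng.normal(0, 0.1, n_notes)  # note position on latent viewpoint axis

lr, reg_f, reg_b = 0.05, 0.01, 0.15  # intercepts regularized harder
for _ in range(3000):                # plain gradient descent on squared error
    pred = mu + b_u[:, None] + b_n[None, :] + np.outer(f_u, f_n)
    err = pred - R
    mu  -= lr * err.mean()
    b_u -= lr * (err.mean(axis=1) + reg_b * b_u)
    b_n -= lr * (err.mean(axis=0) + reg_b * b_n)
    f_u -= lr * ((err * f_n[None, :]).mean(axis=1) + reg_f * f_u)
    f_n -= lr * ((err * f_u[:, None]).mean(axis=0) + reg_f * f_n)

# The broadly-liked note should have the highest intercept; the two partisan
# notes' appeal is explained away by the viewpoint factor instead.
ranked = np.argsort(-b_n)
print(ranked[0])  # index of the top-scoring note
```

On this toy data, note 2 (liked across both camps) comes out with the highest intercept even though the partisan notes each got just as many raw "helpful" ratings, which is the essential property the conversation describes.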
Starting point is 01:56:16 I mean, it's fascinating, because there's a pool of people that have disagreements, and somehow they collaborate through that process of disagreement to come up with... Yeah, context for a post. Yeah, it's fascinating that it works. Yeah, it kind of makes sense that if people who in the past have disagreed agree about something, it's probably true. Yeah. I wonder, is there possibly some emergent thing there that could challenge Wikipedia? Wikipedia is a different kind of thing, which is more like permanent articles about things. Wikipedia is very hierarchical, whereas Community Notes is inherently not.
Starting point is 01:56:58 There is no hierarchy. The acid test is, I can't change a community note if you put a gun to my head. And Community Notes, all the code is open source, 100%. All the data is open source, 100%. So you can completely recreate any note in the system independently. So if there was any interference, you'd notice immediately. By the way, on the topic of Grok, it's actually pretty good at criticizing you. That's great. Okay. I asked, what has Elon Musk been wrong about?
Starting point is 01:57:38 It's actually... yeah, what does it say? So it mostly lists, rigorously, all the times you predicted, like promised, when something would be released, and it wasn't released. Promised is a strong word. Okay, predicted. But the opener is really nice: "Elon Musk, the man who has been wrong about more things than a wrong-a-meter in a wrong factory." And then it goes on to list. That's pretty funny. Can I see it? Oh, sure. "What has Elon Musk been wrong about? Well, where do I begin? I guess this is going to be a long one. Elon Musk has made quite a few blunders in his
Starting point is 01:58:19 time. And I'm not just talking about his dance moves at the Tesla Cyber Rodeo. First off, let's talk about the infamous 'funding secured' tweet." This is actually different from what it gave me before, which was a bullet-point list. This has more narrative structure. "The Cybertruck unveiling, where the supposedly unbreakable windows..." That's not correct. Which part? Well, the first part. I did actually have the funding secured, and there was a big trial in San Francisco, a big civil trial, and the jury found me not guilty. Unanimous finding of a San Francisco jury.
Starting point is 01:58:59 And here it's kind of implying that it was not, in fact, secured. I think this is taking things from the press. Yeah, that is not correct. The reason I agreed to the fine for the SEC is not because the SEC was correct. That was extremely bad behavior by the SEC, corruption, frankly. But if I did not agree to pay the fine, Tesla would have gone bankrupt immediately. I was told by our CFO that the banks would immediately suspend our lines of credit.
Starting point is 01:59:33 If they suspended our lines of credit at that time, we would have gone bankrupt instantly. There would never have been an opportunity for a trial, because Tesla would be dead. So this is like someone holding a gun to your head and saying, pay $20 million. It's like a hostage negotiation. Was that story fully told? I mean, the SEC, in its best form, could be a force for good. It should be, but not once did the SEC go after
Starting point is 02:00:03 any of the hedge funds who were non-stop shorting and distorting Tesla. Not once. The hedge funds would lie flat out on TV for their own gain at the expense of retail investors. Not once, literally a thousand times, not once did the SEC pursue them. How do you explain this failure? The incentive structure is messed up, because the lawyers at the SEC are not paid well. It's a fairly low-paying job, but what they're looking for is a trophy from the SEC. They're looking for something they can put on, basically, their LinkedIn. From that, they can get a job at a high-paying law firm.
Starting point is 02:00:48 That's exactly what the lawyer here did. And the reason they don't attack the hedge funds is because those hedge funds employ those law firms. And they know if they attack the hedge funds, they're affecting their future career prospects. So they sell small investors down the river for their own career. That's what actually happens. Regulatory capture. Regulatory capture. Yeah, not good. So the only reason I accepted that thing, which technically was a, not an admission,
Starting point is 02:01:28 it's neither admit nor deny guilt, but the only reason I agreed to that at all was because I was told Tesla would be bankrupt otherwise. If there was an SEC investigation like this, banks would suspend funding, and we'd be bankrupt immediately, at the time. Now we're in a much stronger position. Take that, Grok.
Starting point is 02:01:50 Yeah, unfortunately, Grok is taking too much from the conventional media. Also, that guy was not a cave diver. "There was a time when Elon called a British cave diver a, quote, 'pedo guy' after the diver criticized Musk's plan to rescue a group of boys trapped in a Thai cave. That little outburst earned him another lawsuit, and he apologized and paid a settlement." That's false. There was no settlement. There was a court case, in which the guy, who was not a cave diver and was not part of the rescue team, filed a lawsuit against me and lost, and he received nothing.
Starting point is 02:02:36 So in this case, it is wrong. It is also, I guess, taking this from the conventional media. Actually, there's an interesting question here. These are public court cases: both the SEC civil case, where the civil complaint against me lost, a unanimous jury verdict in San Francisco. They picked San Francisco because they thought it was the place I was most likely to lose.
Starting point is 02:03:01 And it was a unanimous verdict in my favor. The LA trial, they also picked that venue because they thought it was where I was most likely to lose. Both cases I won. Yeah. There's an interesting question here. There seem to be a lot more clicks if a journalistic organization writes a negative article
Starting point is 02:03:29 about you, Elon Musk. That's one of the best ways to get clicks. So how do you, if you're training Grok, not train on articles that have misaligned incentives? We need to add to the training set the actual legal decisions. So that's a good note. This is actually helpful, because if you actually read the court conclusions, which are public,
Starting point is 02:04:02 they're completely the opposite of what the media wrote. So always striving for, like, the ground truth beyond the reporting? Yeah, what did the jury and the judge actually conclude? And in both cases, they found me innocent. And that's after they shopped for the venue where I was most likely to lose. Now, I mean, obviously there can be a much better critique than this.
Starting point is 02:04:30 I mean, I've been far too optimistic about Autopilot. That was the critique I got, by the way. It was more about that: it broke down, for each of your companies, a nice bullet-point list of the set of predictions you made of when you'll deliver, when you'll be able to solve, for example, self-driving. And it was pretty compelling. And the basic takeaway is, like, you're often too optimistic about
Starting point is 02:04:54 how long it takes to get something done. Yeah, I mean, I would say that I'm pathologically optimistic on schedule. This is true. But while I am sometimes late, I always deliver in the end. Except with Uber Lilith. No. Let's see. Okay. Over the past year or so, since purchasing X, you've become more political. Is there a part of you that regrets that? Have I? In this battle to counterweigh the woke that comes from... Yeah, I guess, if you consider fighting the woke mind virus, which I consider to be a
Starting point is 02:05:35 civilizational threat, to be political, then yes. So basically going into the battleground of politics. Is there a party they're associated with? I don't know if this is necessarily about one candidate or another candidate, but I'm generally against things that are anti-meritocratic, or where there's an attempt to suppress discussion, where even discussing a topic is not allowed. The woke mind virus is communism rebranded.
Starting point is 02:06:14 Well, that said, because of that battle against the woke mind virus, you're perceived as being right-wing. If the woke is left, then I suppose that would be true. But I'm not sure. I think there are aspects of the left that are good. I mean, if you're in favor of the environment, if you want to have a positive future for humanity, if you believe in empathy for your fellow human beings, being kind and not cruel, whatever those values are. You said that you were previously left, or center-left. What would you like to see in order for you to
Starting point is 02:06:46 vote for Democrats again? No, I would say that I would be probably left of center on social issues, probably a little bit right of center on economic issues. And that's probably half the country, maybe more. Maybe more. Are you and AOC secretly friends? Or, bigger question: do you wish you and her, and just people in general of all political persuasions, would talk more, with empathy, and maybe have a little bit more fun and good vibes and humor online? I'm always in favor of humor.
Starting point is 02:07:24 That's why we have funny mode. Well, good vibes, camaraderie, humor. Like friendship. Yeah. Well, I don't know AOC. I've only been at one event with her, the Met Gala, when she attended. And she was wearing this dress,
Starting point is 02:07:48 but I could only see one side of it, so it looked like "eat the itch," but I don't know what the rest of it said. Yeah, so we're not sure what the rest of it said. I'm not sure. Something about "eat the itch." Eat the itch.
Starting point is 02:08:00 I think we should have a language model complete it. What are the possible ways to complete that sentence? So I guess that didn't work out well. Well, there's still hope. I root for friendship. Sure, that sounds good. More care less, Nick. You're one of, if not the most famous, wealthy, and powerful people in the world. In your position, it's difficult to find people you can trust.
Starting point is 02:08:23 Trust no one, not even yourself. Not trusting yourself? Okay, well, you're saying that jokingly. But is there some... Trust no one, not even no one. I'm going to need an hour just to think about that, and maybe some drugs. And maybe Grok to help. I mean, is there some aspect of that, when just existing in a world where everybody wants something from you?
Starting point is 02:08:44 How hard is it to exist in that world? I'll survive. There's a song like that, too. "I Will Survive." "At first I was afraid, I was petrified." Okay. I forget the rest of the lyrics. But you don't struggle with this? I mean, I know you'll survive, but there are ways...
Starting point is 02:09:02 Petrify is a spell in the druid tree. What does it do? Petrify? It turns the monsters into stone. Literally? Yeah, for like six seconds. Oh, the seconds. There's so much math in Diablo that breaks my brain. It's like math nonstop. I mean, really, you're laughing at it, but it can put a huge amount of tension on a mind. Yes, it can be definitely stressful at times. Well, how do you know who you can trust, in work and personally? I mean, I guess you look at somebody's track record over time, and if they've got a, you know, I guess you kind of use your neural
Starting point is 02:09:45 net to assess someone. Neural nets don't feel pain. Your neural net has consciousness; it might feel pain when people betray you. I mean, to be frank, I've almost never been betrayed. Very, very rare. So, you know, for what it's worth. I guess karma: be good to people and they'll be good to you. Yeah, karma is real. Are there people you trust?
Starting point is 02:10:16 Let me edit that question. Are there people close to you that call you out on your bullshit? Well, the X platform is very helpful for that. So you're looking for critical feedback. Can it push you into the extremes more? The extremes of thought, make you cynical about human nature in general? I don't think I will be cynical. In fact, my feeling is that one should... never trust a cynic. The reason is that cynics excuse their own bad behavior by saying everyone does it, because
Starting point is 02:11:01 they're cynical. So it's always a red flag if someone is a cynic, a true cynic. Yeah, there's a degree of projection there that's always fun to watch from the outside and enjoy the... Well, it's just hypocrisy. This is an important point that I think people who are listening should bear in mind: if somebody is cynical, meaning that they see bad behavior in everyone, it's easy for them to excuse their own bad behavior by saying that, well, everyone does it. It's not true.
Starting point is 02:11:38 Most people are kind of medium good. I do wish the people on X would be better at seeing the good in other people's behavior. There seems to be a kind of weight towards seeing the negative. Somehow the negative is sexier. Interpreting the negative is sexier, more viral. I don't know what that is exactly about human nature. I find the X platform to be less negative than the legacy media. I mean, if you read conventional newspapers, it just makes you sad, frankly. Whereas I'd say on the X platform, I mean, I really get more laughs per day on X than everything else combined from humans, you know?
Starting point is 02:12:27 Laughs is one thing. Laughs overlaps, but is not necessarily perfectly overlapping, with good vibes and support, like celebrating others, for example. Not in a shallow, naive way, but in an awesome way: like, something awesome happened, and you celebrate them for it. It feels like that is outweighed by shading other people. Now, it's better than mainstream media, but it's still... Yeah, mainstream media is relentlessly negative about everything. I mean, really, the conventional news tries to answer the question, what is
Starting point is 02:13:05 the worst thing that happened on Earth today? And it's a big world. So on any given day, something bad has happened. And a generalization of that: what is the worst perspective I can take on a thing that happened? So, yeah, I don't know, there's just a strong negative bias in the news. I mean, I think a possible explanation for this is evolutionary, where bad news historically would be potentially fatal. Like, there's a lion over there, or there's some other tribe that wants to kill you. Good news, like, we found a patch of berries, is nice to have, but not essential. So, our old friend, Tesla Autopilot. It's probably one of the most intelligent real-world AI systems in the world.
Starting point is 02:13:55 Right. You followed it from the beginning? Yeah, it was one of the most incredible robots in the world, and it continues to be. And it was really exciting, and it was super exciting when it generalized, became more than a robot on four wheels, but a real-world AI system that perceives the world. Yeah, and it can have potentially different embodiments. Well, I mean, the really wild thing about the end-to-end
Starting point is 02:14:22 training is that it learns to read. It can read signs, but we never taught it to read. We never taught it what a car was, or what a person was, or a bicyclist. It learned what all those things are, what all the objects are on the road, from video, just from watching videos, just like humans. I mean, humans are photons in and controls out. The vast majority of information reaching our brain is from our eyes. And you say, well, what's the output? The output is our motor signals to our
Starting point is 02:15:06 fingers and mouth in order to communicate. Photons in, controls out. The same is true of the car. But by looking at the sequence of images... You've agreed with Ilya Sutskever recently, where he talked about an LLM forming a world model, and basically language is a projection of that world model onto the sequence of letters. And you're saying the same thing. It finds order in these things. It finds correlative clusters. And in so doing, it's understanding something deep about the world. Yeah. Which is, I guess, beautiful.
Starting point is 02:15:53 That's how our brain works. Yeah, but it's photons in, controls out. Neural nets are able to understand that deep meaning in the world. And so the question is, how far can it go? And it does seem everybody's excited about LLMs, in the space of self-supervised learning in the space of text. Yeah. It seems like there's a deep similarity between that and what Tesla Autopilot is doing.
Starting point is 02:16:24 Is it, to you, basically the same? They are converging. They are converging. I wonder who gets there faster to having a deep understanding of the world, or whether they just naturally converge. They're both headed towards AGI. The Tesla approach is much more compute-efficient. It had to be, because we were constrained on this. We only have 100 watts, and it's 8-bit compute: 144 trillion operations per second, which sounds like a lot, but it's kind of small potatoes these days.
Starting point is 02:17:06 Yeah, but it's understanding the world in int8, just 256 values. But there, the path to AGI might have much more significant impact, because it will faster understand the real world than will LLMs, and therefore be able to integrate with real humans in the real world faster. They're both going to understand the world, but I think Tesla's approach is fundamentally
Starting point is 02:17:37 more compute-efficient. It had to be; there was no choice. Our brain is very compute-efficient, very, very energy-efficient. Think about what our brain is able to do. There's only about 10 watts of higher brain function, not counting stuff that's just used to control our body. The thinking part of our brain is less than 10 watts. And those 10 watts can still produce a much better novel than a 10-megawatt GPU cluster.
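The wattage comparison here can be checked with quick arithmetic. The figures are the speaker's rough estimates, not measurements, but the orders-of-magnitude claim follows directly from them:

```python
import math

# Back-of-envelope check of the numbers in the conversation
# (the inputs are the speaker's estimates, not measured values).
brain_watts = 10          # "thinking part" of the brain
cluster_watts = 10e6      # a 10-megawatt GPU cluster
orders = math.log10(cluster_watts / brain_watts)
print(orders)  # 6.0 -> six orders of magnitude

# Tesla's in-car computer as described: 144 trillion ops/s on ~100 W.
tops = 144e12 / 1e12      # 144 TOPS
tops_per_watt = tops / 100
print(tops_per_watt)      # 1.44 TOPS per watt
```

So the "six orders of magnitude" line is literal: 10 MW / 10 W = 10^6.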
Starting point is 02:18:08 So there's a six-order-of-magnitude difference there. I mean, AI has thus far gotten to where it is via brute force, just throwing massive amounts of compute and massive amounts of power at it. So this is not where it will end up. In general, with any given technology, you first try to make it work, and then you make it efficient. So I think we'll find over time that these models get smaller, are able to produce sensible output with far less
Starting point is 02:18:51 compute, far less power. Tesla is arguably ahead of the game on that front, because it has just been forced to try to understand the world with 100 watts of compute. And there are a bunch of fundamental functions that we kind of forgot to include, so we have to run a bunch of things in emulation. We fixed a bunch of those with Hardware 4, and Hardware 5 will be even better. But it does appear at this point that the car will be able to drive better than a human, even with Hardware 3 and 100 watts of power. And if we really optimize it, it could be probably less than 50 watts. What have you learned about developing Optimus, about applying, integrating this kind of
Starting point is 02:19:52 real-world AI into the space of robotic manipulation, humanoid robotics? What are some interesting, tiny or big things you've understood? I was surprised by the fact that we had to develop every part of the robot ourselves: that there were no off-the-shelf motors, electronics, sensors. We had to develop everything. We couldn't actually find a source of electric motors for any amount of money. It's not even just efficient or inexpensive, it's like, anything. There's not a... No. The actuators, everything, everything had to be designed from scratch. We tried hard to find anything that was...
Starting point is 02:20:44 Because you think of how many electric motors are made in the world. There are tens of thousands, hundreds of thousands of electric motor designs. None of them were suitable for a humanoid robot, literally none. So we had to develop our own, design it specifically for what a humanoid robot needs. How hard was it to design something that can be mass-manufactured, can be relatively inexpensive? If you compare to the Boston Dynamics Atlas, that's a very expensive robot. It is designed to be manufactured in the same way we would make a car, and I think ultimately we can make Optimus for less than the cost of a car. It should be, because if you look at the
Starting point is 02:21:32 mass of the robot, it's much smaller, and the car has many actuators in it. The car has more actuators than the robot. But the actuators are kind of interesting in a humanoid robot, with fingers. So Optimus has really nice hands and fingers, you know. Yeah. And they can do some interesting manipulation.
Starting point is 02:21:54 Soft-touch robotics. I mean, one of the test goals I have is, can it pick up a needle and a thread, and thread the needle, just by looking? How far away are we from that? Just by looking, just by looking? Maybe a year. Although, going back to it, I'm optimistic on time. The work that we're doing in the car will translate to the robot. The perception, or also the control?
Starting point is 02:22:20 No, the controls are different, but the video in, controls out... The car is a robot on four wheels. Optimus is a robot with hands and legs. So they're very similar. So the entire machinery of the learning process, end-to-end, is the same; you just have a different set of controls. Optimus will figure out how to do things by watching videos.
Starting point is 02:22:48 As the saying goes, be kind, for everyone you meet is fighting a battle you know nothing about. Yes, it's true. What's something difficult you're going through that people don't often see? Trying to defeat Uber Lilith. Ha ha.
Starting point is 02:23:02 No, I mean, you know... How are you doing? I'm overall okay.
Starting point is 02:23:36 In the grand scheme of things, I can't complain. Do you get lonely? Sometimes, but my kids and friends keep me company. So not existential loneliness? There are many nights I sleep alone. I don't have to, but I do. Walter Isaacson, in his new biography of you, wrote about your difficult childhood. Will you ever find forgiveness in your heart for everything that has happened to you in that period of your life? What is forgiveness? I do not, at least I don't think I have, resentment. So there's nothing to forgive. Forgiveness is difficult for people. It seems like you don't
Starting point is 02:24:44 harbor the resentment. I mean, I try to think about what is going to affect the future in a good way, and holding onto grudges does not affect the future in a good way. You're a father, a proud father. What have you learned about life from your kids? Those little biological organisms. I mean, developing AI and watching, say, little X grow is fascinating,
Starting point is 02:25:18 because there are far more parallels than I would have expected. I mean, I can see his biological neural net making more and more sense of the world, and I can see the digital neural net making more and more sense of the world at the same time. Do you see the beauty and magic in both? Yes. I mean, one of the things with kids is that you kind of see the world anew in their eyes.
Starting point is 02:25:53 To them, everything is new and fresh. And then, when you see them experience the world as new and fresh, you do too. Well, Elon, I just want to say thank you for your kindness to me and friendship over the years, for seeing something in a silly kid like me, as you've done for many others. And thank you for having hope for a positive future for humanity, and for working your ass off to make it happen. Thank you, Elon.
Starting point is 02:26:29 Thanks, Lex. Thank you for listening to this conversation with Elon Musk. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words that Walter Isaacson wrote about the central philosophy of how Elon approaches difficult problems: "The only rules are the ones dictated by the laws of physics." Thank you.
