Lex Fridman Podcast - #194 – Bret Weinstein: Truth, Science, and Censorship in the Time of a Pandemic

Episode Date: June 26, 2021

Bret Weinstein is an evolutionary biologist, author, and co-host of the DarkHorse Podcast.

Please support this podcast by checking out our sponsors:
- The Jordan Harbinger Show: https://www.youtube.com/thejordanharbingershow
- ExpressVPN: https://expressvpn.com/lexpod and use code LexPod to get 3 months free
- Magic Spoon: https://magicspoon.com/lex and use code LEX to get $5 off
- Four Sigmatic: https://foursigmatic.com/lex and use code LexPod to get up to 60% off

EPISODE LINKS:
Bret's Twitter: https://twitter.com/BretWeinstein
Bret's YouTube: https://www.youtube.com/BretWeinsteinDarkHorse
Bret's Website: https://bretweinstein.net/
Bret's Book: https://amzn.to/3dhVWrv

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(09:04) - Why biology is beautiful
(16:38) - Boston Dynamics
(20:02) - Being self-critical
(30:09) - Theory of close calls
(38:46) - Lab leak hypothesis
(1:12:41) - Joe Rogan
(1:21:50) - Censorship
(1:58:40) - Vaccines
(2:12:26) - The paper on mice with long telomeres
(2:40:09) - Martyrdom
(2:48:54) - Eric Weinstein
(2:58:13) - Monogamy
(3:10:31) - Advice for young people
(3:17:16) - Meaning of life

Transcript
Starting point is 00:00:00 The following is a conversation with Bret Weinstein, evolutionary biologist, author, co-host of the DarkHorse Podcast, and, as he says, reluctant radical. Even though we've never met or spoken before this, we both felt like we've been friends for a long time. I don't agree on everything with Bret, but I'm sure as hell happy he exists in this weird and wonderful world of ours. Quick mention of our sponsors: the Jordan Harbinger Show, ExpressVPN, Magic Spoon, and Four Sigmatic. Check them out in the description to support this podcast.
Starting point is 00:00:35 As a side note, let me say a few words about COVID-19 and about science broadly. I think science is beautiful and powerful. It is the striving of the human mind to understand and to solve the problems of the world. But as an institution, it is susceptible to the flaws of human nature, to fear, to greed, power, and ego. 2020 is a story of all of these, one that has both scientific triumph and tragedy. We needed great leaders, and we didn't get them. What we needed were leaders who communicate in an honest, transparent, and authentic way about the uncertainty of what we know, and about the large-scale scientific efforts to reduce
Starting point is 00:01:16 that uncertainty and to develop solutions. I believe there are several candidates for solutions that could have all saved hundreds of billions of dollars and lessened or eliminated the suffering of millions of people. Let me mention five of the categories of solutions. Masks at home testing anonymized contact tracing, antiviral drugs and vaccines. Within each of these categories institutional leaders should have constantly asked and answered publicly, honestly, the following three questions. One, what data do we have on the solution and what studies that we're running to get more and better data? Two, given the current data and uncertainty, how effective and how safe is the solution? Three, what is the timeline and cost involved with
Starting point is 00:02:04 mass manufacturing distribution of the solution? In the service is the timeline and cost involved with mass manufacturing distribution of the solution? In the service of these questions, no voices should have been silenced. No ideas left off the table. Open data, open science, open on a scientific communication and debate was the way, not censorship. There are a lot of ideas out there that are bad, wrong, dangerous. But the moment we have the hubris to say we know which ideas those are is the moment we lose our ability to find the truth, to find solutions, the very things that make science beautiful and powerful. In the face of all the dangers that threaten the well-being and the existence of humans
Starting point is 00:02:42 on earth. This conversation with Brett is less about the ideas we talk about. We grant some, disagree on others, it is much more about the very freedom to talk, to think, to share ideas. This freedom is our only hope. Brett should never have been censored. I asked Brett to do this podcast to show solidarity and to show that I have hope for science and for humanity. As usual, I'll do a few minutes of As Now, no ads in the middle. I don't like those, nobody likes those.
Starting point is 00:03:14 I try to make these interesting, but I give you timestamps, so if you skip, please still check out the sponsors by clicking the links in the description. It's the best way to support this podcast. I'm very picky with responses we take on. So if you check them out and buy their stuff, I hope you will find value in it just as I have. This episode is sponsored by the Jordan Harmages show. Search for it on YouTube. Subscribe to it. Listen, you won't regret it. He has a lot of incredible conversations on there. Recently he had Michio Kaku and the other grass Tyson and Cal Newport. Jordan challenges me to think as an
Starting point is 00:03:50 interviewer about what makes for a good question because he's often very good at asking the question without the fluff that gets to the core of an idea. I think I often do a little too much mumbling, a little too much wandering around as I ask the question, like some questions I ask take like a minute or two. I guess the benefit of that is that it's authentically who I am is I'm thinking through the question. It's almost like I start to disagree with myself as I think through the question
Starting point is 00:04:23 and you get to see that thinking process. But the downside is that sometimes just like a half a sentence or one sentence is enough to create that kind of tension that springs the mind of the guest into the space of the idea that they're going to explore. Again, search for Jordan Harbour to show on YouTube, check out the video version of the podcast, but you can also of course listen to everywhere podcasts are available, like Apple, Spotify, all that kind of stuff. This show is also sponsored by ExpressVPN, a protected privacy on the internet. Using the internet without it is like driving without
Starting point is 00:05:01 car insurance. Speaking of which, I'm currently driving a rental car and I had to change your tire. That was a funny experience. They made me realize that it's been a while since I changed your tire and I kind of enjoyed it in the summer, Austin Heat. It kind of reminded me of that movie with Sean Penn and I think it's Jennifer Lopez called U-Turn. I may be imagining it, but I remember it being like a really interesting film. That was about cars and heat and that tension. I think Nick Nalti was in it too. He's an incredible actor.
Starting point is 00:05:36 Anyway, that whole experience is exactly what it's like to not have a VPN. If you want to have a painless experience, you should get ExpressVPN. It protects you from the good guys, the bad guys, all of whom watch your data, the ISPs, the big companies, all that kind of stuff. If privacy is important to you, you should use VPN. Go to expressvpn.com slash likes pod to get an extra three months free. That's ExpressVPN.com slash Lex pod. This episode is also sponsored by Magic Spoon, low carb keto friendly cereal. It has zero
Starting point is 00:06:12 grams of sugar, 13 to 14 grams of protein, only four net grams of carbs, and 140 calories in each serving. They now ship to Canada. I posted on Twitter that I'm going to Toronto Canada to visit a friend and I asked for recommendations for people to talk to on a podcast and I got some incredible recommendations. So once I get a chance to visit which will probably be brief, I'll have a lot of cool conversations to have over there. I love Canada in general. I love Toronto, I love Montreal. I feel like I want to take a road trip throughout Canada. And maybe on the road trip, I'll be eating Magic Spoon cereal. It's honestly my favorite snack that feels like a cheap
Starting point is 00:06:54 meal, but it's not. Low carb, and reminds me of like early high school when I would eat cereal that was terrible for you. But in this case, it's cereal that is really good for you. And so can bring happiness without the pain of malnutrition. Anyway, Magic Spoon has a 100% happiness guarantee, so if you don't like it, they will refund it. Go to Magic Spoon.com slash Lex, and use code Lex at checkout to say five bucks off your order. That's Magic Spoon.com slash Lex and use code Lex. This show is sponsored by a 4-sigmatic, the maker of delicious mushroom coffee and plant-based protein.
Starting point is 00:07:35 Does the coffee taste like mushrooms you ask? No, it does not. It tastes delicious. It's a part of my daily routine. It brings me joy. It brings me energy. It brings me energy. It brings me focus. I drink a cup of coffee as I turn on the brown noise and my mind goes into this deep state of focus in the morning. Those few early morning hours are my favorite hours of the day. I don't know what
Starting point is 00:08:01 it is, but it's like the mind emerges from the dream state and walks calmly into this tunnel of vision and focus. That results in some like incredible levels of productivity. Anyway, four-sigmatized coffee is part of that. How you recommend it, tastes good. I think there's a lot of ways that it's very healthy for you, but I just drink it because it's delicious and because I kind of like the idea of mushroom coffee. Get up to 40% off and free shipping on mushroom coffee bundles if you go to foursigmatic.com
Starting point is 00:08:35 slash luxe. That's foursigmatic.com slash luxe. This is the Luxe Freedom and Podcast and here is my conversation with Brett Weinstein. What to you is beautiful about the study of biology? The science, the engineering, the philosophy of it. It's a very interesting question. I must say at one level, it's not a conscious thing. I can say a lot about why as an adult I find biology compelling, but as a kid, I was completely fascinated with animals.
Starting point is 00:09:27 I loved to watch them and think about why they did what they did, and that developed into a very conscious passion as an adult, but I think in the same way that one is drawn to a person I was drawn to the never ending series of near miracles that exists across biological nature. When you see a living organism, do you see it from an evolutionary biology perspective of like this entire thing that moves around in this world? Or do you see like from an engineering perspective that are like first principles almost down to the physics, like the little components that build up hierarchies that you have cells, the first proteins and cells and organs and all that kind of stuff. So do you see low level, or do you see high level? Well, the human mind is a strange thing, and I think it's probably a bit like a time-sharing
Starting point is 00:10:27 machine in which I have different modules. We don't know enough about biology for them to connect, right? So they exist in isolation, and I'm always aware that they do connect, but I basically have to step into a module in order to see the evolutionary dynamics of the creature and the lineage that it belongs to, I have to step into a different module to think of that lineage over a very long time scale, a different module still to understand what the mechanisms inside would have to look like to account for what we can see from the outside. And I think that probably sounds really complicated, but one of the things about being involved in a topic like biology and doing so for one,
Starting point is 00:11:15 you know, really not even just my adult life or my whole life is that it becomes second nature. And you know, when we see somebody do an amazing parkour routine or something like that, we think about what they must be doing in order to accomplish that. But of course, what they are doing
Starting point is 00:11:33 is tapping into some kind of zone, right? They are in a zone in which they are in such command of their center of gravity, for example, that they know how to hurl it around a landscape so that they always land on their feet. And I would just say, for anyone who hasn't found a topic on which they can develop that kind of facility, it is absolutely worthwhile. It's really something that human beings are capable of doing across a wide range of topics. Many things our ancestors didn't even have access to, and that flexibility
Starting point is 00:12:11 of humans, that ability to repurpose our machinery for topics that are novel means really the world is a roister. You can figure out what your passion is and then figure out all of the angles that one would have to pursue to really deeply understand it. And it is well worth having at least one topic like that. You mean embracing the full adaptability of both the body and the mind? So like, I don't know what to attribute the parkour to, like biomechanics of how our bodies can move, or is it the mind? Like how much percent wise, is it the entirety of the hierarchies of biology
Starting point is 00:12:55 that we've been talking about, or is it just all the mind? The way to think about creatures, is that every creature is two things simultaneously. A creature is a machine of sorts. It's not a machine. I call it an aqueous machine, and it's run by an aqueous computer. It's not identical to our technological machines, but every creature is both a machine that does
Starting point is 00:13:22 things in the world sufficient to accumulate enough resources to continue surviving to reproduce. It is also a potential. So each creature is potentially, for example, the most recent common ancestor of some future clade of creatures that will look very different from it. And if a creature is very, very good at being a creature, but not very good in terms of the potential it has going forward, then that lineage will not last very long into the future, because change will throw at challenges that its descendants will not be able to meet. So the thing about
Starting point is 00:13:59 humans is we are a generalist platform, and we have the ability to swap out our software to exist in many many different niches. And I was once watching an interview with this British group of parkour experts who were being you know just they were discussing what it is they do and how it works. And what they essentially said is, look, you're tapping into deep monkey stuff, right? And I thought, yeah, that's about right. And, you know, anybody who is proficient at something like skiing or skateboarding, you know, has the experience of flying down the hill on skis for example bouncing from the top of one mogul to the next and
Starting point is 00:14:50 If you really pay attention you will discover that your conscious mind is actually a spectator It's there. It's involved in the experience, but it's not driving some part of you knows how to ski and it's not the part of you that knows how to think and I would just say that that what accounts for this flexibility in humans is the ability to bootstrap a new software program and then drive it into the unconscious layer where it can be applied very rapidly. And I will be shocked if the exact thing doesn't exist in robotics. You know, if you programmed a robot to deal with circumstances that were novel to it, how would you do it?
Starting point is 00:15:31 It would have to look something like this. This is a certain kind of magic. You're right. Well, the conscious is being an observer. When you play guitar, for example, or piano for me, music, when you get truly lost in it, I don't know what the heck is responsible for the flow of the music, the kind of the loudness of the music going up and down, the timing, the intricate, like even the mistakes, all those things that
Starting point is 00:15:57 doesn't seem to be the conscious mind. It is just observing. And yet it's somehow intracurricular involved more more like because you mentioned parkour The dance is like that too when you start. I've been tango dancing You if when you truly lose yourself in it then It's just like you're an observer and how the hell is the body able to do that and not only that It's the physical motion is also creating the emotion the Like that damn was good to be alive feeling. So, but then that's also intricately connected to the full biology stack that we're operating
Starting point is 00:16:37 in. I don't know how difficult it is to replicate that. We're talking offline about Boston Dynamics robots. They've recently did both parkour, they did flips, they've also done some dancing. And it's something I think a lot about because most people don't realize because they don't look deep enough as those robots are hard-coded to do those things. The robots didn't figure it out by themselves. And yet, the fundamental aspect of what it means to be human is that process of figuring out of making mistakes. And then there's something about overcoming those challenges and mistakes.
Starting point is 00:17:18 And like figuring out how to lose yourself in the magic of the dancing or just movement is what it means to be human. That learning process, so that's what I want to do with almost as a fun side thing with the Boston Dynamics robots, is to have them learn and see what they figure out, even if they make mistakes. I want to let spot make mistakes. And in so doing discover what it means to be alive, discover beauty.
Starting point is 00:17:52 Because I think that's the essential aspect of mistakes. Boston Dynamics folks want spot to be perfect. Because they don't want spots ever make mistakes, because they want to operate in the factories, it wants to be very safe, and so on. because they want to operate in the factories, they want to be, you know, very safe and so on. For me, if you construct the environment, if you construct a safe space for robots and allow them to make mistakes, something beautiful might be discovered. But that requires a lot of brain power.
Starting point is 00:18:20 So spot is currently very dumb and I'm going to add it, give it a brain. So first make it C, currently currently can't see meaning computer vision It's to understand the environment has to see all the humans But then also has to be able to learn Learn about its movement learn how to Use his body to communicate with others all those kinds of things that dogs know how to do well humans know how to do somewhat well I think that's a beautiful challenge, but first you have to allow the robot to make mistakes. Well, I think your objective is laudable, but you're going to realize that the Boston
Starting point is 00:18:55 Dynamics folks are right. The first time spot poops on your rug. I hear the same thing about kids and so on. Yes. I still want to have kids and so on. Yes. I still want to have kids. No, you should. It's a great experience. So let me step back into what you said in a couple of different places.
Starting point is 00:19:11 One, I have always believed that the missing element in robotics and artificial intelligence is a proper development. It is no accident. It is no mere coincidence that human beings are the most dominant species on planet Earth and that we have the longest childhoods of any creature on Earth by far. The development is the key to. It's the flip side of our helplessness at birth. So, I'll be very interested to see what happens in your robot project. If you do not end up reinventing childhood for robots, which of course is foreshadowed in 2001 quite brilliantly.
Starting point is 00:20:02 But I also want to point out, you can see this issue of your conscious mind becoming a spectator very well if you compare tennis to table tennis Right if you watch a tennis game You could imagine that the players are highly conscious as they play You cannot imagine that if you've ever played ping pong decently. A volley in ping pong is so fast that your conscious mind, if your reactions had to go through your conscious mind, you wouldn't be able to play. So you can detect that your conscious mind while very much present isn't there. And you can also detect where
Starting point is 00:20:41 consciousness does usefully intrude. If you go up against an opponent in table tennis, that knows a trick that you don't know how to respond to, you will suddenly detect that something about your game is not effective. And you will start thinking about what might be, how do you position yourself? So that move that puts the ball just in that corner of the table or something like that doesn't catch you off guard and this I believe is
Starting point is 00:21:10 we highly conscious folks those of us who try to think through things very deliberately and carefully mistake consciousness for like the highest kind of thinking and I really think that this is an error consciousness Consciousness is an intermediate level of thinking. What it does is it allows you, it's basically like uncompiled code. And it doesn't run very fast. It is capable of being adapted to new circumstances.
Starting point is 00:21:35 But once the code is roughed in, right, it gets driven into the unconscious layer and you become highly effective at whatever it is. And from that point, your conscious mind basically remains there to detect things that aren't anticipated by the code you've already written. And so I don't exactly know how one would establish this, how one would demonstrate it. But it must be the case that the human mind contains sandboxes in which things are tested, right? Maybe you can build a piece of code and run it in parallel next to your active code so
Starting point is 00:22:10 you can see how it would have done comparatively. But there's got to be some way of writing new code and then swapping it in. And frankly, I think this has a lot to do with things like sleep cycles. Very often, you know, when I get good at something, I often don't get better at it while I'm doing it. I get better at it when I'm not doing it, especially if there's time to sleep and think on it. So there's some sort of new program swapping in for old program phenomenon,
Starting point is 00:22:37 which will be a lot easier to see in machines. It's gonna be hard with the wet wear. I like, I mean, it is true because somebody that played tennis for many years, I do still think the highest form of excellence in tennis is when the conscious mind is a spectator. So the compiled code is the highest form of being human. And then consciousness is just some like specific compiler. You just have like Borland, C plus plus compile. You could just have different kind of compilers. Ultimately, the thing that by which we measure the power of life, the intelligence of life
Starting point is 00:23:21 is the compiled code. And you can probably do that compilation all kinds of ways. Yeah, I'm not saying that tennis is played consciously and table tennis isn't, I'm saying that because tennis is slowed down by the just the space on the court, you could imagine that it was your conscious mind playing. But when you shrink the court, it becomes obvious. It becomes obvious that your conscious mind is just present rather than knowing where to put the battle. And weirdly for me, I would say this probably isn't true in a podcast situation,
Starting point is 00:23:52 but if I have to give a presentation, especially if I have not overly prepared, I often find the same phenomenon when I am giving the presentation. My conscious mind is there watching some other part of me present, which is a little jarring, I have to say. Well, that means you've, you've gotten good at it, not let the conscious mind get in the way of the flow of words. Yeah, that's, that's the sensation to be sure. And that's the highest form of podcasting to, I mean, that's why I have, that, that's what it looks like when a podcast is really in the pocket, like Joe Rogan just having fun and just losing themselves. And that's something I aspire to as well, just losing yourself in the conversation.
Starting point is 00:24:34 Somebody that has a lot of anxiety with people, like I'm such an introvert, I'm scared, I'm scared before you showed up, I'm scared right now, there's just anxiety, there's just it's a giant mess, it's hard to lose yourself. It's hard to just get out of the way of your own mind. Yeah, actually, trust is a big component of that. Your conscious mind retains control if you are very uncertain. But when you do get into that zone, when you're speaking, I realize it's different for you with English as a second language although maybe you're presenting Russian and, you know, and it happens. But do you ever hear yourself say something and you think,
Starting point is 00:25:13 oh, that's really good, right? Like, like, you didn't come up with it. Some other part of you that you don't exactly know came up with it. I don't think I've ever heard myself in that way because I have a much louder voice that's constantly yelling in my head at why the hell did you say that? There's a very self-critical voice. That's much louder. So I'm very... Maybe I need to deal with that voice, but it's been like with what is it called? Like a megaphone just screaming
Starting point is 00:25:45 So I can't hear it. Oh, no, it says good job. You said that thing really nicely. So I'm kind of focused right now on the megaphone person in the audience versus the the positive but that's definitely something to think about. It's been productive but You know the place where I find gratitude and beauty and appreciation of life is in the quiet moments when I don't talk, when I listen to the world around me, when I listen to others, when I talk I'm extremely self-critical in my mind, when I produce anything out into the world that's that originated with me, like any kind of creation, extremely self-critical. It's good for productivity, for like always striving to improve and so on.
Starting point is 00:26:30 It might be bad for like just appreciating the things you've created. A little bit with Marvin Minsky on this where he says the key to a productive life is to hate everything you've ever done in the past. I didn't know he said that. I must say I resonate with it a bit. And, you know, I, unfortunately, my life currently has me putting a lot of stuff into the world. And I effectively watch almost none of it. I can't stand it. watch almost none of it. I can't stand it. Yeah, what do you make of that? I don't know. I just recently, I just yesterday read Metamorphosis by Kafka, reread Metamorphosis by Kafka, where he turns into a jam bug because of the stress that the world puts on him. His parents put on him to succeed. And, you know,
Starting point is 00:27:23 I think that you have to find the balance. Because if you allow the self-critical voice to become too heavy, the burden of the world, the pressure that the world puts on you to be the best version of yourself and so on to strive, then you become a bug. And that's a big problem. And then the world turns against you because you're a bug. You become some kind of caricature of yourself. I don't know. Become the worst version of yourself and then thereby end up destroying yourself and then the world moves on. That's the story. That's a lovely story. I do think this is one of these places, and frankly, you could map this on to all of modern human experience, but this is one of these places where our
Starting point is 00:28:12 ancestral programming does not serve our modern selves. So I used to talk to students about the question of dwelling on things. You know, dwelling on things is famously understood to be bad, and it can't possibly be bad. It wouldn't exist a tendency toward it, wouldn't exist if it was bad. So what is bad is dwelling on things past the point of utility. And that's obviously easier to say than to operationalize. But if you realize that your dwelling is the key in fact to upgrading your program for future well-being and that there's a point, you know, presumably from diminishing returns, if not counter-productivity, there is a point at which you should stop because that is what is in your best interest, then knowing that you're looking for that point is useful, right?
Starting point is 00:29:02 This is the point at which it is no longer useful for me to dwell on this error I have made. That's what you're looking for. It also gives you license. If some part of you feels like it is punishing you rather than searching, then that also has a point at which it's no longer valuable. There's some liberty in realizing, yep, even the part of me that was punishing me knows it's time to stop. So if we map that on to compiled code discussion, as a computer science person, I find that very compelling. You know, there's a, when you compile code, you get warnings sometimes.
Starting point is 00:29:39 And usually, if you're a good software engineer, you're going to make sure there's no, you treat warnings as errors. So you make sure that the compilation produces no warnings. But at a certain point when you have a large enough system, you just let the warnings go. It's fine. Like I don't know where that warning came from, but you know, it's just ultimately you need to compile the code and run with it. And I hope nothing terrible happens.
Starting point is 00:30:09 Well, I think what you will find, and believe me, I think what you're talking about with respect to robots and learning is going to end up having to go to a deep developmental state and a helplessness that evolves into hyper competence and all of that. I noticed that I live by something that I for lack of a better descriptor call, the theory of close calls. And the theory of close calls says that people typically miscategorize the events in their life where something almost went wrong. And, you know, for example, if you, I have a friend who, I was walking down the street with my college friends and one of my friends stepped into the street thinking it was clear and was nearly hit by a car going 45 miles an hour. Would have been an absolute disaster. Might have killed her.
Starting point is 00:31:05 Certainly would have permanently injured her. But she didn't, you know, card didn't touch her, right? Now you could walk away from that and think nothing of it because, well, what is there to think? Nothing happened. Or you could think, well, what is the difference between what did happen
Starting point is 00:31:22 and my death? The difference is luck. I never want that to be true, right? I never want the difference between what did happen and my death? The difference is luck. I never want that to be true. I never want the difference between what did happen and my death to be luck. Therefore, I should count this as very close to death and I should prioritize coding so it doesn't happen again at a very high level. So anyway, my basic point is the level. So anyway, my basic point is the accidents and disasters and misfortune describe a distribution that tells you what's really likely to get you in the end. And so, personally, you can use them to figure out where the dangers are so that you can afford to take great risks
Starting point is 00:32:03 because you have a really good sense of how they're going to go wrong. But I would also point out that civilization has this problem. Civilization is now producing these events that are major disasters, but they're not at existential scale yet, right? They're very serious errors that we can see. And I would argue that the pattern is: you discover that we are involved in some industrial process at the point it has gone wrong, right? So I'm now always asking the question, okay, in light of the Fukushima triple meltdown,
Starting point is 00:32:35 the financial collapse of 2008, the Deepwater Horizon blowout, COVID-19 and its probable origins in the Wuhan lab — what processes do I not know the name of yet that I will discover at the point that some gigantic accident has happened? And can we talk about the wisdom, or lack thereof, of engaging in that process before the accident, right? That's what a wise civilization would be doing, and yet we don't. I just want to mention something that happened a couple of days ago. I don't know if you know who JB Straubel is — he's the co-founder of Tesla, CTO of Tesla for many, many years. His wife just died.
Starting point is 00:33:16 She was riding a bicycle, in that same thin line between death and life that many of us have been in, where you walk into an intersection and there's this close call. Every once in a while, you get the short straw. I wonder how much of our own individual lives and the entirety of human civilization rests on this little roll of the dice. Well, this is sort of my point about the close calls — there's a level at which we can't control it, right? The gigantic asteroid that comes from deep space that you don't have time to do anything
Starting point is 00:34:03 about. There's not a lot we can do to hedge that out, or at least not short-term. But there are lots of other things, you know, obviously the financial collapse of 2008 didn't break down the entire world economy, it threatened to, but a herculean effort managed to pull us back from the brink. The triple meltdown at Fukushima was awful, but everyone of the seven fuel pools held. There wasn't a major fire that made it impossible to manage the disaster going forward. We got lucky. We could say the same thing about the blowout at the deep water horizon, where a hole in the ocean floor large enough that we couldn't have plugged it could have opened up.
Starting point is 00:34:44 All of these things could have been much, much worse. And I think we can say the same thing about COVID. It's terrible as it is. And we cannot say for sure that it came from the Wuhan lab, but there's a strong likelihood that it did. And it also could be much, much worse. So in each of these cases, something is telling us we have a process that is unfolding that keeps creating risks where it is luck that is the difference between us and some scale of disaster that is unimaginable. And that wisdom, you know, you can be highly intelligent and cause these disasters to be wise is to stop causing. Right? And that would require a process of restraint, a process that I don't see a lot of evidence of yet. So I think we have to generate it. And somehow we, you know,
Starting point is 00:35:35 at the moment we don't have a political structure that it would be capable of taking a protective algorithm and actually deploying, right? Because it would have important economic consequences and so it would almost certainly be shot down. But we can obviously also say, you know, we pay a huge price for all of the disasters that I've mentioned. And we have to factor that into the equation. Something can be very productive short- term, and very destructive long term.
Starting point is 00:36:07 Also, the question is how many disasters we avoided because of the ingenuity of humans or just the integrity and character of humans? That's sort of an open question. We may be more intelligent than lucky. That's the hope. Because the optimistic message here that you're getting at is maybe the process that we should be, that maybe we can overcome luck with ingenuity.
Starting point is 00:36:39 Meaning, I guess you're suggesting the process is we should be listing all the ways that human civilization can destroy itself, assigning likelihood to it, and thinking through how can we avoid that? And being very honest with the data out there about the close calls and using those close calls to then create sort of mechanism by which we minimize the probability of those close calls. And just being honest and transparent with the data that's out there. Well I think we need to do a couple things for it to work.
Starting point is 00:37:18 So I've been an advocate for the idea that sustainability is actually, it's difficult to operationalize, but it is an objective that we have to meet if we're to be around long-term. And I realized that we also need to have reversibility of all of our processes, because processes very frequently when they start to not appear dangerous. And then when they scale, they become very dangerous. So for example, if you imagine when they scale, they become very dangerous. So for example, if you imagine the first internal combustion engine, you know, vehicle driving down the street
Starting point is 00:37:50 and you imagine somebody running after them, saying, hey, if you do enough of that, you're gonna alter the atmosphere and it's gonna change the temperature of the planet is preposterous, right? Why would you stop the person who's invented this marvelous new contraption? But of course, eventually you do get to the place
Starting point is 00:38:03 where you're doing enough of this that you do start changing the temperature of the planet. So if we built the capacity, if we basically said, look, you can't involve yourself in any process that you couldn't reverse if you had to, then progress would be slowed, but our safety would go up dramatically. And I think, I think in some sense, if we are to be around long term, we have to begin thinking that way. We're just involved in too many very dangerous processes. So let's talk about one of the things that, if not threatened human civilization, certainly heard it at a deep level, which is COVID-19. What percent probability would you currently place on the hypothesis that COVID-19 leak
Starting point is 00:38:53 from the Wuhan Institute of Virology? So I maintain a flow chart of all the possible explanations, and it doesn't break down exactly that way. The likelihood that it emerged from a lab is very, very high. If it emerged from a lab, the likelihood that the lab was the Wuhan Institute is very, very high. There are multiple different kinds of evidence that point to the lab, and there is literally no evidence that points to nature. Either the evidence points nowhere or at points to the lab. And the lab could mean any lab. But geographically, obviously, the labs in Wuhan are the most likely. And the lab that was most
Starting point is 00:39:36 directly involved with research on viruses that look like COVID, that look like SARS-CoV-2, is obviously the place that one would start. But I would say the likelihood that this virus came from a lab is well above 95%. We can talk about the question of could a virus have been brought into the lab and escaped from there without being modified? That's also possible, but it doesn't explain any of the anomalies in the genome of SARS-CoV-2. Could it have been delivered from another lab? Could Wuhan be a distraction in order that we would connect the dots in the wrong way? That's conceivable. I currently have
Starting point is 00:40:18 that below 1% on my flow chart, but I think very dark thought that somebody would do that almost as a political attack on China. Well, it depends. I don't even think that's one possibility. Sometimes when Eric and I talk about these issues, we will generate a scenario just to prove that something could live in that space. It's a placeholder for whatever may actually have happened. And so it doesn't have to have been an attack on China. That's certainly one possibility. But I would point out if you can predict the future in some unusual way better than others, you can print money, right?
Starting point is 00:41:00 That's what markets that allow you to bet for or against virtually any sector allow you to do. So you can imagine a simply a moral person or entity generating a pandemic attempting to cover their tracks because it would allow them to bet against things like cruise ships, air travel, whatever it is, and bet in favor of, I don't know, sanitizing gel, and whatever else you would do. So am I saying that I think somebody did that? No, I really don't think it happened. We've seen zero evidence that this was intentionally released. However, were it to have been intentionally released by somebody who did not know, did not want it known where it had come from, releasing it in Wuhan would
Starting point is 00:41:51 be one way to cover their tracks. So we have to leave the possibility formally open, but acknowledge there's no evidence. So in the probability, therefore, as well, I tend to believe, maybe this is the optimistic nature that I have that people who are competent enough to do the kind of thing we just described are not going to do that because it requires a certain kind of, I don't want to use the word evil, but whatever word you want to use to describe the kind of, this regard for human life required to do that, you're just, that's just not going to be coupled with competence.
Starting point is 00:42:30 I feel like there's a trade-off chart where competence on one axis and evil is on the other. And the more evil you become, the crappier you are at doing great engineering, scientific work required to deliver weapons of different kinds where there's bio weapons or nuclear weapons and all those kinds of things. That seems to be the lessons I take from history, but that doesn't necessarily mean that's what's going to be happening in the future. But to stick on the lab leak idea, because the flow chart is probably huge here,
Starting point is 00:43:03 because there's a lot of fascinating possibilities. One question I want to ask is what would evidence for natural origins look like? So one piece of evidence for natural origins is that it's happening the past, that viruses have jumped. Oh, they do jump. So that's possible to have happened. So that's a sort of like a historical evidence. Like, okay, well, it's possible that it's not evidence of the kind you think it is.
Starting point is 00:43:38 It's a justification for a presumption. Right. So the presumption upon discovering a new virus circulating is certainly that it came from nature. Right. The problem is the presumption evaporates in the face of evidence or at least it logically should. And it didn't in this case. It was maintained by people who privately and their emails acknowledged that they had grave doubts about the natural origin of this virus. Is there some other piece of evidence that we could look for and see that would say, this increases the probability of its natural origins?
Starting point is 00:44:15 Yeah. In fact, there is evidence. I always worry that somebody is going to make up some evidence in order to reverse the flow. And boy, well, let's say I am a lot of incentive for that. Actually, there's a huge amount of incentive. On the other hand, why didn't the powers that be the powers that lied to us about weapons of mass destruction in Iraq?
Starting point is 00:44:36 Why didn't they ever fake weapons of mass destruction in Iraq? Whatever force it is, I hope that force is here too. And so whatever evidence we find is real. It's the competence thing I'm talking about, but okay, go ahead. Well, we can get back to that, but I would say, yeah, the giant piece of evidence that will shift the probabilities in the other direction is the discovery of either a human population in which the virus circulated prior to showing up in Wuhan that would explain where the virus learned all of the tricks that it knew instantly upon spreading from Wuhan.
Starting point is 00:45:11 So that would do it, or an animal population in which an ancestor epidemic can be found in which the virus learned this before jumping to humans. But I'd point out in that second case, you would certainly expect to see a great deal of evolution in the early epidemic, which we don't see. So there almost has to be a human population somewhere else that had the virus circulating or an ancestor of the virus that we first saw in Wuhan circulating. And it has to have gotten very sophisticated
Starting point is 00:45:41 in that prior epidemic before hitting Wuhan in order to explain the total lack of evolution and extremely effective virus that emerged at the end of 2019. So you don't believe in the magic of evolution to spring up with all the tricks already there. Like everybody who doesn't have the tricks, they die quickly and then you just have this beautiful virus that comes in with a spike protein and the through mutation and selection, just like the ones that succeed and succeed big are the ones that are going to just spring into life with the tricks.
Starting point is 00:46:17 Well, no. That's called a hopeful monster. And hopeful monsters don't work. It's the job of becoming a new pandemic virus is too difficult. It involves two very difficult steps and they both have to work. One is the ability to infect a person and spread in their tissues sufficient to make an infection. And the other is to jump between individuals at a sufficient rate that it doesn't go extinct for one reason or another. Those are both very difficult jobs.
Starting point is 00:46:45 They require, as you describe, selection. And the point is selection would leave a mark. We would see evidence that it would be in animals or humans who would see both. Right. And you see this evolution in chairs of the virus gathering the tricks out. Yeah. You would see the virus, you would see the clumsy virus get better and better. And yes, I am a full believer in the power of that process.
Starting point is 00:47:07 And in fact, I believe it, what I know from studying the process is that it is much more powerful than most people imagine, that what we teach in the Evolution 101 textbook is to clumsy a process to do what we see it doing. And that actually people should increase their expectation of the rapidity with which that process can produce just jaw-dropping adaptations. That said, we just don't see evidence that it happened here, which doesn't mean it doesn't
Starting point is 00:47:36 exist, but it means in spite of immense pressure to find it somewhere, there's been no hint, which probably means it took place inside of a laboratory. So inside the laboratory, gain a function research on viruses, and I believe most of that kind of research is doing this exact thing that you're referring to, which is accelerated evolution. And just watching evolution do its thing on a bunch of viruses and seeing what kind of tricks get developed. The other method is engineering viruses. So manually adding on the tricks.
Starting point is 00:48:16 What do you think we should be thinking about here? So mind you, I learned what I know in the aftermath of this pandemic emerging. I started studying the question. And I would say based on the content of the genome and other evidence in publications from the various labs that were involved in generating this technology, a couple of things seem likely. This SARS-CoV-2 does not appear to be entirely the result of either a splicing process or serial passaging. It appears to have both things in its past, or it's at least highly likely that it does.
Starting point is 00:48:59 So for example, the foreign cleavage site looks very much like it was added in to the virus, and it was known that that would increase its infectivity in humans and increase its tropism. The virus appears to be excellent at spreading in humans and minks and ferrets. Now minks and ferrets are very closely related to each other, and ferrets are very likely to have been used in a serial passage experiment, the reason being that they have an ACE2 receptor that looks very much like the human ACE2 receptor. And so, were you going to passage the virus or its ancestor through an animal in order to
Starting point is 00:49:40 increase its infectivity in humans, which would have been necessary? Ferrets would have been necessary. Ferrets would have been very likely. It is also quite likely that humanized mice were utilized, and it is possible that human airway tissue was utilized. I think it is vital that we find out what the protocols were. If this came from the Wuhan Institute, we need to know it, and we need to know what the protocols were exactly
Starting point is 00:50:05 because they will actually give us some tools that would be useful in fighting SARS-CoV-2 and hopefully driving into extinction, which ought to be our priority. It's a priority that does not, it is not apparent from our behavior, but it really is, it should be our objective. If we understood where our interests lie, we would be much more focused on it. But those protocols would tell us a great deal. If it wasn't the Wuhan Institute, we need to know that. If it wasn't nature, we need to know that. And if it was some other laboratory,
Starting point is 00:50:36 we need to figure out what and where so that we can determine what we can determine about, about what was done. You're opening up my mind about why we should investigate, why we should know the truth of the origins of this virus. So for me personally, let me just tell the story of my own kind of journey. When I first started looking into the lably hypothesis, what became terrifying to me and important to understand and obvious is the sort of like Sam Harris way of thinking, which is it's obvious that a lab leak of a deadly
Starting point is 00:51:17 virus will eventually happen. My mind was it doesn't even matter if it happened in this case. It's obvious there's going to be happen in the future. So why the hell are we not freaking out about this? And COVID-19 is not even that deadly relative to the possible future viruses. It's the way I disagree with someone this, but he thinks about this way, about AGI as well,
Starting point is 00:51:41 about artificial intelligence. It's a different discussion, I think, but with viruses, it seems like something that could happen on the scale of years, maybe a few decades. AGI is a little bit farther out for me, but the terrifying thing seemed obvious that this will happen very soon for a much deadlier virus
Starting point is 00:52:00 as we get better and better at both engineering viruses and doing this kind of evolutionary-driven research, gain-of-function research. Okay, but then you started speaking out about this as well, but also started to say, no no, we should hurry up and figure out the origins now because it will help us figure out how to actually respond to this particular virus, how to treat this particular virus, what is in terms of vaccines, in terms of antiviral drugs, in terms of just hand, although the number of responses we should have, okay, I still am much more freaking out about the future. Maybe you can break that apart a little bit, which are you most focused on now?
Starting point is 00:52:54 Which are you most freaking out about now in terms of the importance of figuring out the origins of this virus? I am most freaking out about both of them because they're both really important. And, you know, we can put bounds on this. Let me say first that this is a perfect test case for the theory of close calls because as much as COVID as a disaster, it is also a close call from which we can learn much. You are absolutely right. If we keep playing this game in the lab, if we are not, if we are, especially if we do it under pressure, and when we are told that a virus is going to leap from nature any day, and that the more we know, the better we'll be able to fight it. We're going to create
Starting point is 00:53:33 the disaster all the sooner. So yes, that should be an absolute focus. The fact that there were people saying that this was dangerous back in 2015, ought to tell us something. The fact that there were people saying that this was dangerous back in 2015, Autotelus something, the fact that the system bypassed a ban and offshored the work to China, Autotelus, this is not a Chinese failure. This is a failure of something larger and harder to see. But I also think that there's a, there's a clock ticking with respect to SARS-CoV-2 and COVID, the disease that it creates. And that has to do with whether or not we are stuck with it permanently. So if you think about the cost to humanity of being stuck with influenza, it's an immense
Starting point is 00:54:16 cost year after year. And we just stop thinking about it because it's there some years you get to flu, most years you don't. Maybe you get the vaccine to prevent it. Maybe the vaccine isn't particularly well targeted. But imagine just simply doubling that cost. Imagine we get stuck with SARS-CoV-2 and its descendants going forward and that it just settles in and becomes a fact of modern human life.
Starting point is 00:54:42 That would be a disaster, right? The number of people we will ultimately lose is incalculable. The amount of suffering will be caused is incalculable. The loss of well-being and wealth incalculable. So that ought to be a very high priority driving this extinct before it becomes permanent. And the ability to drive extinct goes down the longer we delay effective responses. To the extent that we let it have this very large canvas, large numbers of people who have the disease in which mutation and selection can result in adaptation that we will not be able to counter the greater its ability to figure out features of our immune system and use them to its advantage. So I'm feeling the pressure of driving an extinct.
Starting point is 00:55:27 I believe we could have driven it extinct six months ago, and we didn't do it because of very mundane concerns among a small number of people. And I'm not alleging that they were brazen about, or that they were callous about deaths that would be caused. I have the sense that they were working from a kind of autopilot in which you know let's say you're in some kind of a corporation, a pharmaceutical corporation, you have a portfolio of therapies that in the context of a pandemic might be very lucrative. Those therapies have competitors. You of course want to position your product so that it succeeds and the competitors don't. And lo and behold, at some point, through
Starting point is 00:56:12 means that I think those of us on the outside can't really into it, you end up saying things about competing therapies that work better and much more safely than the ones you're selling that aren't true and do cause people to die in large numbers. But it's some kind of autopilot, at least part of it is. So there's a complicated coupling of the autopilot of institutions, companies, governments. And then there's also the geopolitical game theory thing going on, where you want to keep secrets. It's a Chernobyl thing where if you messed up, there is a big incentive, I think,
Starting point is 00:56:58 to hide the fact that you messed up. So how do we fix this? And what's more important to fix? The autopilot, which is the response that we often criticize about our institutions, especially the leaders in those institutions, Anthony Fauci and so on, some of the members of the scientific community. And the second part is the game with China of hiding the information in terms of on the fight between nations. Well, in our live streams on Dark Horse, Heather and I have been talking from the beginning
Starting point is 00:57:35 about the fact that although yes, what happens began in China, it very much looks like a failure of the international scientific community. That's frightening, but it's also hopeful in the sense that actually if we did the right thing now, we're not navigating a puzzle about Chinese responsibility. We're navigating a question of collective responsibility for something that has been terribly costly to all of us. So that's not a very happy process. But as you point out, what's at stake is in large measure at the very least the strong possibility this will happen again and that at some point it will be far worse.
Starting point is 00:58:17 So just as a person that does not learn the lessons of their own errors doesn't get smarter and they remain in danger. We collectively humanity has to say, well, there sure is a lot of evidence that suggests that this is a self-inflicted wound. When you have done something that has caused a massive self-inflicted wound, self-inflicted wound, it makes sense to dwell on it exactly to the point that you have learned the lesson that makes it very, very unlikely that something similar will happen again. I think this is a good place to kind of ask you to do almost like a thought experiment
Starting point is 00:58:57 or to steal man the argument against the lably hypothesis. So if you were to argue, you said 95% chance that it the virus leave from a lab, there's a bunch of ways I think you can argue that even talking about it is bad for the world. So, if I just put something on the table, to say that, the one who would be racism versus Chinese people, that talking about that it leaked from a lab,
Starting point is 00:59:41 there's a kind of immediate kind of blame, and it can spiral down into this idea that's somehow the people are responsible for the virus and this kind of thing. Is it possible for you to come up with other steel-man arguments against talking or against the possibility of the lab leak hypothesis? Well, so I think steel manning is a tool that is extremely valuable, but it's also possible to abuse it.
Starting point is 01:00:13 I think that you can only steal man a good faith argument. And the problem is we now know that we've not been engaged in opponents who are wielding good faith arguments because privately their emails reflect their own doubts. And what they were doing publicly was actually a punishment, a public punishment for those of us who spoke up with, I think, the purpose of either backing us down or more likely warning others not to engage in the same kind of behavior. And obviously for people like you and me who regard science as our likely best hope for navigating difficult waters, shutting down people who are using those
Starting point is 01:00:54 tools honorably is itself dishonorable. So I don't really, I don't feel that it is, I don't feel that there's anything to steal man. And I also think that immediately at the point that the world suddenly with no new evidence on the table switched years with respect to the lab leak. At the point of Nicholas Wade had published his article and suddenly the world was going to admit that this was at least a possibility. If not a likelihood, we got to see something of the rationalization process
Starting point is 01:01:27 that had taken place inside the institutional world, and it very definitely involved the claim that what was being avoided was the targeting of Chinese scientists. And my point would be, I don't want to see the targeting of anyone. I don't want to see the targeting of anyone. I don't want to see racism of any kind. On the other hand, once you create license to lie in order to protect individuals when the world has a stake in knowing what happened, then it is inevitable that that process, that license to lie will be used by the thing that captures institutions for its own purposes. So my sense is it may be very unfortunate if the story of what happened here can be used against
Starting point is 01:02:16 Chinese people. That would be very unfortunate. And as I think I mentioned, Heather and I have taken great pains to point out that this doesn't look like a Chinese failure. It looks like a failure of the international scientific community. So I think it is important to broadcast that message along with the analysis of the evidence. But no matter what happened, we have a right to know. And I frankly do not take the institutional layer at its word, that its motivations are honorable, and that it was protecting good-hearted scientists at the expense of the world. That explanation does not add up.
Starting point is 01:02:51 Well, this is a very interesting question about whether it's ever OK to lie at the institutional layer to protect the populace. I think both you and I are probably on the same have the same sense that it's a slippery slope even if it's an effective mechanism in the short term in the long term is going to be destructive. This happened with masks. This happened with other things. If you look at just history pandemics, there's an idea that panic is destructive amongst the populace. So you want to construct a narrative, whether it's a lie or not, to minimize panic. But you're suggesting that almost in all cases, and I think that was the lesson from
Starting point is 01:03:46 the pandemic in the early 20th century, that lying creates distrust and distrust in the institutions is ultimately destructive. That's your sense that lying is not okay. Well, okay. You know, there are obviously places where complete transparency is not a good idea, right? To the extent that you broadcast a technology that allows one individual to hold the world hostage, right? Obviously, you've got something to be navigated, but in general, I don't believe that the scientific system should be lying to us in the case of this particular lie,
Starting point is 01:04:30 the idea that the well-being of Chinese scientists outweighs the well-being of the world is preposterous. As you point out, one thing that rests on this question is whether we continue to do this kind of research going forward. And the scientists in question all of them, American, Chinese, all of them, were pushing the idea that the risk of a zonotic spillover event causing a major and highly destructive pandemic was so great that we had to risk this. Now if they themselves have caused it, and if they are wrong, as I believe they are, about the likelihood of a major world pandemic spilling out of nature in the way that they
Starting point is 01:05:11 wrote into their grant applications, then the danger is, you know, the call is coming from inside the house, and we have to look at that. And yes, whatever we have to do to protect scientists from retribution, we should do. But we cannot protecting them by lying to the world. And even worse, by demonizing people like me, like Josh Rogan, like Yuri Dagan, the entire drastic group on Twitter by demonizing us for simply following the evidence is to set a terrible precedent. You're demonizing people for using the scientific method to evaluate evidence that is available to us in the world. What a terrible crime it is to teach that lesson, right?
Starting point is 01:06:06 Thou shalt not use scientific tools? No, I'm sorry. Whatever your license to lie is, it doesn't extend to that. Yeah, I've seen the attacks on you, the pressure on you has a very important effect on thousands of world-class biologists actually. So at MIT, colleagues, mine, people I know, there's a slight pressure to not be allowed to
Starting point is 01:06:36 one speak publicly and to actually think. Like, do you even think about these ideas? It sounds kind of ridiculous, but just in the privacy of your own home to read things, to think, it's many people, many world-class biologists that I know will just avoid looking at the data. There's not even that many people that are publicly opposed in gain-of a function research. They're also like, it's not worth it. It's not worth the battle. And there's many people that kind of argue that those battles should be fought in private. In, you know, with colleagues in the, in the privacy of the scientific community, that the public is somehow not maybe intelligent enough to be able to deal with the complexity of the scientific community that the public is somehow not maybe intelligent enough to be able to deal with the complexities of this kind of discussion. I don't know, but the final
Starting point is 01:07:31 result, combined with the bullying of you and all the different pressures in the academic institutions, is that people are self-censoring and silencing themselves, silencing the most important thing, which is the power of their brains. Like, these people are brilliant, and the fact that they're not utilizing their brains to come up with solutions outside of the conformist line of thinking is tragic. It's tragic. Well, it is. I also think that we have to look at it and understand it for what it is. For one thing, it's kind of a cryptic totalitarianism. Somehow, people's sense of what they're allowed to think about, talk about, discuss is causing them to self-censor.
Starting point is 01:08:18 And I can tell you it's causing many of them to rationalize, which is even worse. They're blinding themselves to what they can see. But it is also the case, I believe, with what you're describing: a great many people understood that the lab leak hypothesis could not be taken off the table, but they didn't say so publicly. And I think that their discussions with each other about why they did not say what they understood, that's what capture sounds like on the inside. I don't know exactly what force captured the institutions. I don't think anybody knows for sure out here in public.
Starting point is 01:08:58 I don't even know that it wasn't just simply a process, but you have these institutions. They are behaving towards a kind of somatic obligation. They have lost sight of what they were built to accomplish. And on the inside, the way they avoid going back to their original mission is to say things to themselves like the public can't have this discussion. It can't be trusted with it. Yes, we need to be able to talk about this, but it has to be private. Whatever it is they say to themselves, that is what capture sounds like on the inside.
Starting point is 01:09:30 It's an institutional rationalization mechanism, and it's very, very deadly. And at the point you go from lab leak to repurposed drugs, you can see that it's very deadly in a very direct way. Yeah, I see this in my field with things like autonomous weapon systems. People in AI do not talk about the use of AI in weapon systems. They kind of avoid the idea that AI is used in the military. It's kind of funny. There's this kind of discomfort, and they all hurry,
Starting point is 01:10:05 like when something scary happens and a bunch of sheep kind of run away. That's what it looks like. And I don't even know what to do about it. Because I feel this natural pull every time I bring up autonomous weapon systems to go along with the sheep. There's a natural kind of pull towards that direction.
Starting point is 01:10:24 Because it's like, what can I do as one person? Now, there's currently nothing destructive happening with autonomous weapon systems. We're in the early days of a race that in 10 or 20 years might become a real problem. In the discussion we're having now, we're not yet facing the result of that, unlike in the space of viruses, where people spent many years avoiding the conversations. I don't know what to do in the early days, but I think we have to, I guess, create institutions where people can stand out, basically be individual thinkers and break out into all
Starting point is 01:11:04 kinds of spaces of ideas that allow us to think freely, freedom of thought, and maybe that requires a decentralization of institutions. Well, years ago, I came up with a concept called cultivated insecurity, and the idea is, let's just take the example of the average Joe. The average Joe has a job somewhere, and their mortgage, their medical insurance, their retirement, their connection with the economy is to one degree or another dependent on their relationship with the employer. That means that there is a strong incentive, especially in any industry where it's not easy to move from one employer to the next, to stay in your employer's good graces, right? So it creates a very top-down
Starting point is 01:11:59 dynamic, not only in terms of who gets to tell other people what to do, but it really comes down to who gets to tell other people how to think. So that's extremely dangerous. The way out of it is to cultivate security. To the extent that somebody is in a position to go against the grain and have it not be a catastrophe for their family and their ability to earn, you will see that behavior a lot more. So I would argue that some of what you're talking about is just a simple predictable consequence of the concentration of the sources of well-being
Starting point is 01:12:38 and that this is a solvable problem. You got a chance to talk with Joe Rogan. Yes, I did. And I just saw the episode was released. And Ivermectin is trending on Twitter. Joe told me it was an incredible conversation. I look forward to listening to it today. By the time this is released, many people will have probably listened to it, and I will have too. I think it would be interesting to discuss a post-mortem. How do you feel the conversation went? And maybe broadly, how do you see the story of Ivermectin as it's unfolding, from its origins before COVID-19, through 2020, to today?
Starting point is 01:13:25 I very much enjoyed talking to Joe, and I'm indescribably grateful that he would take the risk of such a discussion, that he would, as he described it, do an emergency podcast on the subject, which I think was not an exaggeration. This needed to happen. For various reasons, he took us down the road of talking about the censorship campaign against Ivermectin,
Starting point is 01:13:52 which I find utterly shocking, and talking about the drug itself. And I should say we had Pierre Kory available. He came on the podcast as well. He is, of course, the face of the FLCCC, the Front Line COVID-19 Critical Care Alliance. These are doctors who have innovated ways of treating COVID patients, and they happened on Ivermectin and have been using it. And I hesitate to use the word advocating
Starting point is 01:14:23 for it, because that's not really the role of doctors or scientists, but they are advocating for it in the sense that there is this pressure not to talk about its effectiveness, for reasons that we can go into. So maybe step back and say, what is Ivermectin, and how many studies have been done to show its effectiveness? So, Ivermectin is an interesting drug. It was discovered in the 70s by a Japanese scientist named Satoshi Omura,
Starting point is 01:14:53 and he found it in soil near a Japanese golf course. So, I would just point out in passing that if we were to stop self-silencing over the possibility that Asians will be demonized over the possible lab leak in Wuhan, and to recognize that actually the natural course of the story has a likely lab leak in China and an unlikely hero in Japan, the story is naturally not a simple one. But in any case, Omura discovered this molecule. He sent it to a friend who was at Merck, a scientist named Campbell. They won a Nobel Prize for the discovery of the Ivermectin molecule in 2015.
Starting point is 01:15:45 Its initial use was in treating parasitic infections. It's very effective in treating the worm that causes river blindness, the pathogen that causes elephantiasis, scabies. It's a very effective anti-parasite drug. It's extremely safe. It's on the WHO's list of essential medications. It's safe for children. It has been administered something like four billion times in the last four decades.
Starting point is 01:16:12 It has been given away in the millions of doses by Merck in Africa. People have been on it for long periods of time. And in fact, one of the reasons that Africa may have had less severe impacts from COVID-19 is that Ivermectin is widely used there to prevent parasites, and the drug appears to have a long-lasting impact. So it's an interesting molecule. It was discovered some time ago, apparently, that it has antiviral properties. And so it was tested early in the COVID-19 pandemic to see if it might work to treat humans with COVID. It turned out there was very promising evidence that it did. It was tested in tissues at a very high dosage, which confuses people. They
Starting point is 01:16:59 think that those of us who believe that Ivermectin might be useful in confronting this disease are advocating those high doses, which is not the case. But in any case, there have been quite a number of studies. A wonderful meta-analysis was finally released. We had seen it in pre-print version, but it was finally peer-reviewed and published this last week. It reveals that the drug, as clinicians who have been using it have been telling us, is highly effective at treating people with the disease, especially if you get to them early.
Starting point is 01:17:31 And it showed an 86% effectiveness as a prophylactic to prevent people from contracting COVID. And that number, 86%, is high enough to drive SARS-CoV-2 to extinction if we wished to deploy it. First of all, is the meta-analysis the Ivermectin for COVID-19 real-time meta-analysis of 60 studies? Or there's a bunch of meta-analyses there. Because I was really impressed by the real-time meta-analysis that keeps getting updated. I don't know, is it the same kind of... The one at ivmmeta.com.
Starting point is 01:18:08 Well, I saw it. It's the C19 Ivermectin meta-analysis. Yeah, exactly. No, this is not that meta-analysis, though. So that is, as you say, a living meta-analysis where you can watch the evidence as it comes in. Which is super cool, by the way. It's really cool.
Starting point is 01:18:20 And they've got some really nice graphics that allow you to understand, well, what is the evidence? You know, it's concentrated around this level of effectiveness, et cetera. So anyway, it's a great site, well worth paying attention to. No, this was a meta-analysis. I don't know any of the authors, but the second author is Tess Lawrie of the BIRD group, BIRD being a group of analysts and doctors in Britain that is playing a role similar to the FLCCC here in the US. So, anyway, this is a meta-analysis that Tess Lawrie and others did of all of the available evidence, and it's quite compelling.
Starting point is 01:19:01 People can look for it on my Twitter. I will put it up, and people can find it there. So what about dose here? In terms of safety, what do we understand about the kind of dose required to have that level of effectiveness, and what do we understand about the safety of that kind of dose? So let me just say I'm not a medical doctor, I'm a biologist. I'm on Ivermectin in lieu of vaccination.
Starting point is 01:19:29 In terms of dosage, there is one reason for concern, which is that the most effective dose for prophylaxis involves something like weekly administration. And that is a concern because that is not a historical pattern of use for the drug. It is possible that there is some long-term implication of being on it weekly for a long period of time. There's not a strong indication of that in the safety signal that we have from people using the drug over many years and using it in high doses. In fact, Dr. Kory told me yesterday that there are cases in which people have made
Starting point is 01:20:06 calculation errors and taken a massive overdose of the drug and had no ill effect. So, anyway, there's lots of reasons to think the drug is comparatively safe, but no drug is perfectly safe, and I do worry about the long-term implications of taking it. I also think it's very likely, because the drug is administered in a dose of something like, let's say, 15 milligrams for somebody my size, once a week, after you've gone through the initial double dose that you take 48 hours apart, it is apparent that if the amount of drug in your system is sufficient to be protective at the end of the week, then it was probably far too high at the beginning of the week.
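The reasoning here is a standard pharmacokinetic one: any drug eliminated exponentially is at a far higher level just after a weekly dose than just before the next one. A minimal sketch of that arithmetic follows; the half-life used is an illustrative assumption for this sketch, not a clinical value from the conversation.

```python
# Sketch of the weekly-dosing argument: if a drug decays exponentially,
# a dose that is still protective on day 7 implies a much higher level
# on day 1. The half-life below is an illustrative assumption.

def concentration(dose_mg: float, hours: float, half_life_h: float = 18.0) -> float:
    """Remaining amount of a single dose after `hours`, assuming
    first-order (exponential) elimination."""
    return dose_mg * 0.5 ** (hours / half_life_h)

weekly_dose = 15.0  # mg, the figure mentioned in the conversation
start_of_week = concentration(weekly_dose, 0)
end_of_week = concentration(weekly_dose, 7 * 24)

# The ratio shows how much the level falls across one week for any
# exponentially eliminated drug with this half-life.
ratio = start_of_week / end_of_week
print(f"start/end ratio over one week: {ratio:.0f}x")
```

This is only a toy model (real pharmacokinetics involve absorption, tissue distribution, and accumulation across doses), but it captures why a protective trough implies a much higher peak.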
Starting point is 01:20:52 So there's a question about whether or not you could flatten out the intake so that the amount of Ivermectin goes down, but the protection remains. I have little doubt that that would be discovered if we looked for it. But that said, it does seem to be quite safe, highly effective at preventing COVID. The 86% number is plenty high enough for us to drive SARS-CoV-2 to extinction in light of its R0 of slightly more than two. And so why we are not using it is a bit of a mystery.
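The extinction claim rests on standard epidemic arithmetic: transmission declines once the effectively protected fraction of the population exceeds the herd-immunity threshold 1 - 1/R0. The sketch below checks the numbers as stated in the conversation (86% effectiveness, R0 slightly above 2); both figures are the speaker's claims, not independently verified here.

```python
# Herd-immunity arithmetic behind the claim. The inputs (86%
# effectiveness, R0 ~ 2.1) are taken from the conversation as
# stated, not independently verified.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be protected so that
    each case infects fewer than one other person on average."""
    return 1.0 - 1.0 / r0

r0 = 2.1
threshold = herd_immunity_threshold(r0)
effectiveness = 0.86

# If everyone used a prophylactic of this effectiveness, the protected
# fraction would exceed the threshold -- the sense in which the number
# is "high enough" in the argument above.
print(f"threshold: {threshold:.1%}, protection: {effectiveness:.1%}")
```

This ignores imperfect uptake, waning protection, and variants with higher R0, any of which would raise the bar in practice.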
Starting point is 01:21:26 So even if everything you said now turns out to be not correct, it is nevertheless obvious that it's sufficiently promising, and it always has been, to merit rigorous scientific investigation, doing a lot of studies, and certainly not censoring the science or the discussion of it. So before we talk about the various vaccines for COVID-19, I'd like to talk to you about censorship. Given everything you're saying, why did YouTube and other places censor discussion of Ivermectin?
Starting point is 01:22:15 Well, there's a question about why they say they did it, and there's a question about why they actually did it. Now, it is worth mentioning that YouTube is part of a consortium. It is partnered with Twitter, Facebook, Reuters, AP, Financial Times, Washington Post, some other notable organizations, and this group has appointed itself the arbiter of truth. In effect, they have decided to control discussion, ostensibly to prevent the distribution of misinformation.
Starting point is 01:22:50 Now, how have they chosen to do that? In this case, they have chosen to simply utilize the recommendations of the WHO and the CDC and apply them as if they are synonymous with scientific truth. Problem is, even at their best, the WHO and CDC are not scientific entities. They are entities that are about public health. And public health has this, whether it's right or not, and I believe I disagree with it, but it has this self-assigned right to lie that comes from the fact that there is game theory that works against, for example, a successful vaccination campaign: that if everybody else takes a vaccine and therefore the herd becomes immune through
Starting point is 01:23:40 vaccination, and you decide not to take a vaccine, then you benefit from the immunity of the herd without having taken the risk. So the people who do best are the people who opt out. That's a hazard. And the WHO and CDC, as public health entities, effectively oversimplify stories in order that that game theory does not cause a predictable tragedy of the commons. With that said, once that right to lie exists, it turns out to serve the interests of, for example, pharmaceutical companies, which have emergency use authorizations that require that there not be a safe and effective treatment, and have immunity from liability for harms caused by their product. So that's a recipe for disaster.
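The free-rider dynamic described above can be sketched as a toy expected-cost computation: as coverage rises, opting out becomes individually cheaper even though universal opting out would be collectively worse. All payoff numbers here are illustrative assumptions, not data from the conversation.

```python
# Toy model of the vaccination free-rider problem described above.
# All risk numbers are illustrative assumptions, not real data.

def expected_cost(vaccinate: bool, coverage: float,
                  vaccine_risk: float = 0.001,
                  disease_risk_unprotected: float = 0.05) -> float:
    """Expected personal cost of one individual's choice, given the
    fraction of everyone else who vaccinates (`coverage`)."""
    if vaccinate:
        return vaccine_risk
    # An unvaccinated person's infection risk falls as coverage rises,
    # because the herd shields them (simplified here as linear).
    return disease_risk_unprotected * (1.0 - coverage)

# At high coverage, opting out is individually "cheaper" -- the
# free-rider benefit that public-health messaging works against.
for coverage in (0.0, 0.5, 0.99):
    v = expected_cost(True, coverage)
    n = expected_cost(False, coverage)
    print(f"coverage {coverage:.0%}: vaccinate={v:.4f}, opt out={n:.4f}")
```

With these numbers, vaccinating dominates at low coverage, but at 99% coverage the opt-out cost drops below the vaccination cost, which is the incentive structure the argument says public-health messaging tries to paper over.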
Starting point is 01:24:27 You don't need to be a sophisticated thinker about complex systems to see the hazard of immunizing a company from the harm of its own product at the same time that that product can only exist in the market if some other product that works better somehow fails to be noticed. So somehow YouTube is doing the bidding of Merck and others. Whether it knows that that's what it's doing, I have no idea.
Starting point is 01:24:55 I think this may be another case of an autopilot that thinks it's doing the right thing because it's parroting the corrupt wisdom of the WHO and the CDC. But the WHO and the CDC have been wrong again and again in this pandemic. And the irony here is that with YouTube coming after me, well, my channel has been right where the WHO and CDC have been wrong, consistently, over the whole pandemic. So how is it that YouTube is censoring us because the WHO and CDC disagree with us, when in fact, in past disagreements, we've been right and they've been wrong?
Starting point is 01:25:29 There's so much to talk about here. So I've heard this many times, actually, on the inside of YouTube and with colleagues that I've talked with: they kind of, in a very casual way, say their job is simply to slow or prevent the spread of misinformation. And they say, like, that's an easy thing to do. Like, to know what is true or not is an easy thing to do. And so, from the YouTube perspective, I think they basically outsource
Starting point is 01:26:08 the task of knowing what is true or not to public institutions that, on a basic Google search, claim to be the arbiters of truth. So, if you're YouTube, exceptionally profitable and exceptionally powerful in terms of controlling what people get to see or not, what would you do? Would you take a public stand against the WHO and CDC? Or would you instead say, you know what, let's open the dam and let any video on anything fly?
Starting point is 01:26:53 What do you do here? If Bret Weinstein were put in charge of YouTube for a month, in this most critical of times? YouTube actually has incredible amounts of power to educate the populace, to give the power of knowledge to the populace, such that they can reform institutions. What would you do? How would you run YouTube? Well, unfortunately, or fortunately, this is actually quite simple. The founders, the American founders, settled on a
Starting point is 01:27:26 counterintuitive formulation: that people should be free to say anything. They should be free from the government blocking them from doing so. They did not imagine, in formulating that right, that most of what was said would be of high quality. Nor did they imagine it would be free of harmful things. What they correctly reasoned was that the benefit of leaving everything able to be said exceeds the cost, which everyone understands to be substantial. What I would say is they could not have anticipated the impact, the centrality, of platforms like YouTube, Facebook, Twitter, etc. If they had, they would not have limited the First Amendment as they did. They clearly understood that the power of the federal government was so great
Starting point is 01:28:18 that it needed to be limited by granting explicitly the right of citizens to say anything. In fact, YouTube, Twitter, Facebook may be more powerful in this moment than the federal government of their worst nightmares could have been. The power that these entities have to control thought and to shift civilization is so great that we need to have those same protections. It doesn't mean that harmful things won't be said, but it means that nothing has changed about the
Starting point is 01:28:48 cost-benefit analysis of building in the right to censor. So if I were running YouTube, the limit of what should be allowed is the limit of the law, all right? If what you are doing is legal, then it should not be YouTube's place to limit what gets said or who gets to hear it. That is between speakers and audience. Will harm come from that? Of course it will. But will net harm come from it? No, I don't believe it will. I believe that allowing everything to be said does allow a process in which better ideas do come to the fore and win out. So you believe that in the end, when there's complete freedom to share
Starting point is 01:29:26 ideas, truth will win out. So what I've noticed, just as a brief side comment, is that certain things become viral regardless of their truth. I've noticed that things that are dramatic and/or funny, things that become memes, don't have to be grounded in truth. And so what worries me there is that we basically maximize for drama versus maximizing for truth in a system where everything is free. And that's worrying in a time of emergency. Well, yes, it's all worrying in a time of emergency, to be sure. But I want you to notice that what you've happened on is actually an analog for a much deeper and older problem. Human beings are... we are not a blank slate, but we are the blankest slate that nature has ever devised. And there's a reason for that.
Starting point is 01:30:26 It's where our flexibility comes from. We are effectively robots in which a large fraction of the cognitive capacity, or of the behavioral capacity, has been offloaded to the software layer, which gets written and rewritten over evolutionary time. That means effectively that much of what we are, in fact the important part of what we are, is housed in the cultural layer and the conscious layer, and not in the hardware, hard-coded layer. That layer is prone to make errors, right? And anybody who's watched a child grow up knows that children make absurd errors all the
Starting point is 01:31:10 time, right? That's part of the process, as we were discussing earlier. It is also true that as you look across a field of people discussing things, a lot of what is said is pure nonsense. It's garbage. But the tendency of garbage to emerge and even to spread on the short term does not say that over the long term, what sticks is not the valuable ideas. So there is a high tendency for novelty to be created in the cultural space, but there's also a high tendency for it to go extinct. And you have to keep that in mind. It's not like the genome, right? Everything is happening
Starting point is 01:31:48 at a much higher rate. Things are being created, they're being destroyed. And I can't say that, you know... I mean, obviously we've seen totalitarianism arise many times, and it's very destructive each time it does. So it's not like, hey, freedom to come up with any idea you want hasn't produced a whole lot of carnage. But the question is, over time, does it produce more open, fairer, more decent societies? And I believe that it does. I can't prove it, but that does seem to be the pattern. I believe so as well. The thing is, in the short term, absolute freedom of speech can be quite destructive. But you nevertheless have to hold on to it, because in the long term,
Starting point is 01:32:35 I think you and I, I guess, are optimistic in the sense that good ideas will win out. I don't know how strongly I believe that it will work, but I will say I haven't heard a better idea. Yeah. I would also point out that there's something very significant in this question of the hubris involved in imagining that you're going to improve the discussion by censoring, which is this: the majority of concepts at the fringe are nonsense. That's automatic. But the heterodoxy at the fringe, which is indistinguishable at the beginning
Starting point is 01:33:16 from the nonsense ideas, is the key to progress. So if you decide, hey, the fringe is 99% garbage, let's just get rid of it. Right? Hey, that's a strong win. We're getting rid of 99% garbage for 1% something or other. And the point is, yeah, but that 1% something or other is the key. You're throwing out the key. And so that's what YouTube is doing. Frankly, I think at the point that it started censoring my channel, you know, in the immediate aftermath of this major reversal
Starting point is 01:33:45 of a lab leak, it should have looked at itself and said, well, what the hell are we doing? Who are we censoring? We're censoring somebody who was just right, right? In a conflict with the very same people on whose behalf we are now censoring, right? That should have caused them to wake up. So you said one approach, if you are on YouTube,
Starting point is 01:34:02 was to just basically let all videos go that do not violate the law. Well, I should fix that. Okay. I believe that that is the basic principle. Eric makes an excellent point about the distinction between ideas and personal attacks, doxing, these other things. So I agree there's no value in allowing people to destroy each other's lives, even if there's
Starting point is 01:34:24 a technical legal defense for it. Now, how you draw that line, I don't know. But, you know, what I'm talking about is, yes, people should be free to traffic in bad ideas, and they should be free to expose that the ideas are bad, and hopefully that process results in better ideas winning out. Yeah, there's an interesting line between ideas like the Earth is flat, which I believe you should not censor. And then you start to encroach on personal attacks.
Starting point is 01:34:55 So not doxing, yes, but not even getting to that. There's a certain point where it's like, that's no longer ideas. That's somehow not productive, even if... believing the Earth is flat feels somehow productive, because maybe a tiny percentage of it is. Personal attacks just don't feel that way. Well, you know, I'm torn on this, because there are assholes in this world, there are
Starting point is 01:35:26 fraudulent people in this world. So sometimes personal attacks are useful to reveal that. But there's a line you can cross. Like, there's comedy, where people make fun of others. I think that's amazing. That's very powerful and that's very useful, even if it's painful. But then there's, yeah, there's a certain line, a gray area you cross where it's no longer in any possible world productive.
Starting point is 01:35:55 And that's a really weird gray area for YouTube to operate in. And it feels like it should be a crowdsourced thing where people vote on it. But then again, do you trust the majority to vote on what is crossing the line and what is not? I mean, this is where it's really interesting, on this particular, like, the scientific aspect of this. Do you think YouTube should take more of a stance, not censoring, but to actually have scientists within YouTube having these kinds of discussions, and then be able to almost speak out in a transparent way: we're going to let this video stand, but here's all these other opinions,
Starting point is 01:36:38 almost like take a more active role in its recommendation system in trying to present a full picture to you. Right now they're not. The recommender systems are not human fine-tuned. They're all based on how you click, and there are these clustering algorithms. They're not taking an active role in giving you the full spectrum of ideas in the space of science. They just censor or not. Well, at the moment it's going to be pretty hard to compel me that these people should be trusted with any sort of curation or comment on matters of evidence, because they have demonstrated that they are incapable of doing it well. You could make such an argument, and I guess I'm open to the idea of institutions that would look something like YouTube
Starting point is 01:37:26 that would be capable of offering something valuable. I mean, and even just the fact of them literally curating things and putting some videos next to others implies something. So yeah, there's a question to be answered, but at the moment, no. At the moment, what it is doing is quite literally putting not only
Starting point is 01:37:46 individual humans in tremendous jeopardy by censoring discussion of useful tools and making tools that are more hazardous than has been acknowledged seem safe. But it is also placing humanity in danger of a permanent relationship with this pathogen. I cannot emphasize enough how expensive that is. It's effectively incalculable. If the relationship becomes permanent, the number of people who will ultimately suffer and die from it is indefinitely large. Yeah, currently the algorithm is very rabbit-hole driven,
Starting point is 01:38:20 meaning if you click on the flat earth videos, that's all you're going to be presented with, and you're not going to be nicely presented with arguments against the flat earth. And the flip side of that: if you watch, like, quantum mechanics videos, or no, general relativity videos, it's very rare you're going to get, in a recommendation, have you considered the Earth is flat? And I think you should have both. Same with vaccines.
Starting point is 01:38:50 Videos that present the power and the incredible biology and genetics behind the vaccine, you're rarely going to get videos from well-respected scientific minds presenting possible dangers of the vaccine. And the vice versa is true as well: if you're looking at the dangers of the vaccine on YouTube, you're not going to get the highest quality of videos recommended to you. And I'm not talking about, like, manually inserted CDC videos. Those are like the most untrustworthy things you can possibly watch, about how everybody should take the vaccine, it's the safest thing ever. No, I'm talking about, again, MIT colleagues of mine, incredible biologists, virologists, that will talk about the details of how the mRNA vaccines work and all those kinds of things. I think, maybe this is me with the AI hat on, I think the algorithm
Starting point is 01:39:47 can fix a lot of this. YouTube should build better algorithms and couple that to complete freedom of speech, to expand what people are able to think about. Always present varied views, not balanced in some artificial, hard-coded way, but balanced in a way that's crowdsourced. I think that's an algorithm problem that could be solved, because then you can delegate it to the algorithm, as opposed to this hard-coded censorship of basically creating artificial boundaries on what can and can't be discussed. Instead, create a full spectrum of exploration that can be done, and trust the intelligence of people to do the exploration. Well, there's a lot there. I would say we have to keep in mind that we're talking about a publicly held company with shareholders and obligations to them
Starting point is 01:40:46 and that that may make it impossible. And I remember many years ago, back in the early days of Google, I remember a sense of terror at the loss of general search. It used to be that Google, if you searched, came up with the same thing for everyone. And then it got personalized. And for a while, it was possible to turn off the personalization, which was still not
Starting point is 01:41:12 great because if everybody else is looking at a personalized search, and you can tune in to one that isn't personalized, you know, that doesn't tell you why the world is sounding the way it is. But nonetheless, it was at least an option. And then that vanished. And the problem is, I think this is literally deranging us. In effect, I mean, what you're describing is unthinkable. It is unthinkable that in the face of a campaign to vaccinate people in order to reach herd immunity, YouTube would give you videos on hazards of vaccines, when how hazardous the vaccines are is, you know, an unsettled question.
Starting point is 01:41:54 Why is it unthinkable? That doesn't make any sense from a company perspective. If intelligent people in large numbers are open-minded and are thinking through the hazards and the benefits of a vaccine, a company should find the best videos to present what people are thinking about. Well, let's come up with a hypothetical. Okay, let's come up with a very deadly disease for which there's a vaccine that is very safe, though not perfectly safe. And we are then faced with YouTube trying to figure out what to do for somebody searching on vaccine safety. Suppose it is necessary, in order to drive the pathogen
Starting point is 01:42:41 to extinction, something like smallpox, that people get on board with the vaccine. But there's a tiny fringe of people who thinks that the vaccine is a mind control agent. All right, so should YouTube direct people to the only claims against this vaccine, which is that it's a mind control agent, when in fact, the vaccine is very safe, whatever that means. If that were the actual configuration of the puzzle,
Starting point is 01:43:15 then YouTube would be doing active harm, pointing you to this other video potentially. Now, yes, I would love to live in a world where people are up to the challenge of sorting that out, but my basic point would be if it's an evidentiary question and there is essentially no evidence that the vaccine is a mind control agent and there's plenty of evidence that the vaccine is safe, then well, you look for this video, we're going to give you this one, puts it on a par, right? So for the mind that's tracking how much thought is there behind its safe versus how much thought is there behind its mind control agent will result in artificially elevating this. Now in the current case what we've seen is not this at all.
Starting point is 01:44:01 We have seen evidence obscured in order to create a false story about safety. And we saw the inverse with Ivermectin. We saw a campaign to portray the drug as more dangerous and less effective than the evidence clearly suggested it was. So we're not talking about a comparable thing. But I guess my point is, the algorithmic solution that you point to creates a problem of its own, which is that it means the way to get exposure is to generate something fringy. If you're the only thing on some fringe, then suddenly YouTube would be recommending those things, and that's obviously a gameable system at best.
Starting point is 01:44:43 Yeah, but the solution to that, and I know you're creating a thought experiment, maybe playing a little bit of devil's advocate, I think the solution to that is not to limit the algorithm in the case of a super deadly virus. It's for the scientists to step up and become better communicators, more charismatic, to fight the battle of ideas,
Starting point is 01:45:04 so create better videos. You know, like if the virus is truly deadly, you have a lot more ammunition, a lot more data, a lot more material to work with in terms of communicating with the public. So be better at communicating and stop being, you have to start trusting the intelligence of people and also being transparent and playing the game
Starting point is 01:45:27 of the internet, which is like, what is the internet hungry for, I believe, authenticity? Stop looking like you're full of shit. The scientific community, if there's any flaw that I currently see, especially the people that are in public office that like Anthony Fauci, they look like they're full of shit and I know they're brilliant. Why don't they look more authentic? So they're losing that game and I think a lot of people observing this entire system now, younger scientists are seeing this and saying, okay, if I want to continue being a scientist in
Starting point is 01:46:03 the public eye, and I want to be effective at my job, I'm going to have to be a lot more authentic. So they're learning the lesson; the evolutionary system is working. So there's just a younger generation of minds coming up that I think will do a much better job in this battle of ideas, so that when the much more dangerous virus comes along, they'll be able to be better communicators. At least that's the hope. Using the algorithm to control that
Starting point is 01:46:29 is, I feel, a big problem. So you're going to have the same problem with a deadly virus as with the current virus. If you're at YouTube, do you have hard lines drawn by the PR and the marketing people, or by the broad community of scientists? Well, in some sense, you're suggesting something that's close kin to what I was saying, you know, that freedom of expression ultimately provides an advantage to better ideas.
Starting point is 01:46:58 So I'm in agreement, broadly speaking. But I would also say there's probably some sort of, you know, let's imagine the world that you propose, where YouTube shows you the alternative point of view. That has the problem that I suggest, but one thing you could do is give us the tools to understand what we're looking at, right? So first of all, there's something, I think, myopic, solipsistic, narcissistic, about an algorithm that serves shareholders by showing you what you want to see rather than what you need to know. That's the distinction. It's flattering you; you know, playing to your blind spot is something the algorithm
Starting point is 01:47:39 will figure out, but it's not healthy for us all to have Google playing to our blind spot. It's very, very dangerous. So what I really want is analytics that allow me, or maybe options and analytics. The options should allow me to see what alternative perspectives are being explored. Right? So here's the thing I'm searching and it leads me down this road. Let's say it's Ivermectin.
Starting point is 01:48:04 I find all of this evidence that Ivermectin works. I find all of these discussions, and people talk about various protocols and this and that. I could say, all right, what is the other side? I could see who is searching, not as individuals, but what demographics are searching alternatives. Maybe you could even combine it with something Reddit-like, where effectively, let's say that there was a position that, I don't know, a vaccine is a mind-control device. You could have a steelman-this-argument
Starting point is 01:48:38 competition effectively. And the better answers that steel man in as well as possible would rise to the top. And so you could read the top three or four explanations about why this really credibly is a mine-control product. And you can say, well, that doesn't really add up. I can check these three things myself and they can't possibly be right. And you could dismiss it. And then as an argument that was credible, let's say plate tectonics before that was an accepted concept, you'd say, wait a minute, there is evidence for plate tectonics.
Starting point is 01:49:09 Crazy as it sounds that the continents are floating around on liquid, actually, that's not so implausible. You know, we've got these subduction zones. We've got geology that is compatible. We've got puzzle-piece continents that seem to fit together. Wow, that's a surprising amount of evidence for that position. So I'm going to assign some Bayesian probability to it that's updated for the fact that actually the steelman argument is better than I was expecting. Right. So I could imagine something like that, where, A, I would love the search to be indifferent
Starting point is 01:49:38 to who's searching, right? The solipsistic thing is too dangerous. So the search could be general, so we would all get a sense for what everybody else was seeing too. And then some layer that didn't have anything to do with what YouTube points you to or not, but allowed you to see the general pattern of who is searching for what information, and, again, a layer in which those things could be defended, so you could hear what a good argument sounded like rather than just hear a caricatured argument. Yeah, and also to reward people, creators that have demonstrated a track record of open-mindedness and correctness, correctness as much as it could be measured over the long term. I mean, a lot of this maps to incentivizing good long-term behavior, not immediate dopamine-rush kind of signals.
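The Bayesian updating Bret describes, revising a low prior upward when a steelman argument turns out stronger than expected, can be sketched as a toy calculation. All numbers here are invented for illustration; nothing below comes from the episode.

```python
# Illustrative sketch: Bayesian updating on a fringe claim after
# reading its best steelman. All probabilities are made up.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(claim | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start deeply skeptical of the claim (e.g., plate tectonics, pre-acceptance).
prior = 0.01

# The steelman is stronger than expected: the evidence it presents is
# five times more likely in a world where the claim is true than false.
posterior = bayes_update(prior, p_evidence_if_true=0.5, p_evidence_if_false=0.1)
print(round(posterior, 3))  # 0.048: still unlikely, but meaningfully updated
```

The point of the sketch is only that "surprisingly good steelman" translates into a likelihood ratio, which moves the posterior without flipping it to belief.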
Starting point is 01:50:40 I think ultimately the algorithm, on the individual level, should optimize for personal growth, long-term happiness, just growth intellectually, growth in terms of lifestyle, personally, and so on, as opposed to the immediate. I think that's going to build a better society, not even just truth, because I think truth is a complicated thing. It's more just you growing as a person, exploring the space of ideas, changing your mind often, increasing the level to which you're open-minded, and the knowledge base you're operating from, the willingness to empathize with others, all those kinds of things the algorithm should
Starting point is 01:51:23 optimize for. Like creating a better human at the individual level, that you're all, I think that's a great business model because the person that's using this tool will then be happier with themselves for having used it and will be a lifelong, quote unquote, customer. I think it's a great business model to make a happy, open-minded, knowledgeable, better human being. It's a terrible business model under the current system. What you want is to build the system
Starting point is 01:51:54 in which it is a great business model. Why is a terrible model? Because it will be decimated by those who play to the short term. I don't think so. I mean, I think we're living it. We're living it. Well, no, because if you have the alternative
Starting point is 01:52:09 that presents itself, it points out the emperor as no close. I mean, it points out the YouTube is operating in this way, Twitter is operating in this way, Facebook is operating in this way. How long term would you like the wisdom to prove that? Well, even a week is better once currently happening. Right, but the problem is, you know, if a week loses out to an hour, right?
Starting point is 01:52:36 And I don't think it loses out. It loses out in the short term. That's my point. You see a great communicator, and you basically say, look, here are the metrics. And a lot of it is how people actually feel. This is what people experience with social media: they look back at the previous month and say, I felt shitty on a lot of days because of social media.
Starting point is 01:52:59 Right. If, instead, you look back at the previous few weeks and say, wow, I'm a better person because that month happened, they'll immediately choose the product that's going to lead to that. That's what love for a product looks like. Like, a lot of people love their Tesla car.
Starting point is 01:53:22 That's, or an iPhone, or beautiful design. That's what love looks like. You look back: I'm a better person for having used this thing. Well, you've got to ask yourself the question, though: if this is such a great business model, why isn't it evolving? Why don't we see it? Honestly, it's competence. It's not easy to build products, tools, systems on new ideas. It's kind of a new idea. We've gone through this. Everything we're seeing now comes from the ideas of the initial birth of the internet. There just need to be new sets of tools that incentivize long-term personal growth and happiness. That's it. Right, but what we have is a market that doesn't favor this. Right. I mean, for one thing, we had an alternative to Facebook where you own your own data. It wasn't exploitative. Facebook bought a huge interest in it, and it died. I mean, who do you know who's on Diaspora? The execution there was not good, but it could have gotten better. Right. I don't think that "why hasn't somebody done it?" is a good argument
Starting point is 01:54:35 for it not completely destroying all of Twitter and Facebook when somebody does do it, or Twitter catching up and pivoting to that algorithm. This is not what I'm saying. There are obviously great ideas that remain unexplored because nobody has gotten to the foothill that would allow you to explore them. That's true. But, you know, an internet that was non-predatory is an obvious idea. And many of us know that we want it.
Starting point is 01:55:01 And many of us have seen prototypes of it. And we don't move because there's no audience there. So the network effects cause you to stay with the predatory internet. But let me just, I wasn't kidding about build the system in which your idea is a great business plan. So in our upcoming book, Heather and I in our last chapter, explore something called the fourth frontier. And fourth frontier has to do with sort of a 2.0 version of civilization, which we freely admit, we can't tell you very much about. It's something that would have to be, we would have to prototype
Starting point is 01:55:35 our way to; we would have to effectively navigate our way there. But the result would be very much like what you're describing. It would be something that effectively liberates humans meaningfully. And most importantly, it has to feel like growth without depending on growth. In other words, human beings are creatures that, like every other creature, are effectively looking for growth. We are looking for underexploited or unexploited opportunities. And when we find them, our ancestors, for example, when they happened into a new valley that was unexplored by people, their population would grow until it hit carrying capacity. So there would be this great feeling of
Starting point is 01:56:15 there's abundance until you had carrying capacity, which is inevitable. And then zero sum dynamics would set in. So in order for human beings to flourish long-term, the way to get there is to satisfy the desire for growth without hooking it to actual growth, which only moves in fits and starts. And this is actually, I believe, the key to avoiding these spasms of human tragedy when in the absence of growth, people do something that causes their population to experience growth,
Starting point is 01:56:45 which is they go and make war on, or commit genocide against, some other population, which is something we obviously have to stop. By the way, this is A Hunter-Gatherer's Guide to the 21st Century, co-authored with your wife Heather, being released in September. I believe you said you're going to do a little bit of preview videos on each chapter leading up to the release. So I'm looking forward to the last chapter as well as all the previous ones. I have a few questions on that. So, to clarify: you generally have faith that technology could be the thing that empowers this kind of future.
Starting point is 01:57:26 Well, if you just let technology evolve, it's going to be our undoing. Right. One of the things that I fault my libertarian friends for is this faith that the market is going to find solutions without destroying us. And my sense is I'm a very strong believer in markets. I believe in their power, even above some market fundamentalists. But what I don't believe is that they
Starting point is 01:57:53 should be allowed to plot our course. Markets are very good at figuring out how to do things. They are not good at all at figuring out what we should do, what we should want. We have to tell markets what we want, and then they can tell us how to do it best. And if we adopted that kind of pro-market stance, but in a context where the market is not steering, where human well-being is actually the driver, we can do remarkable things, and the technology that emerges would naturally be enhancing of human well-being. Not perfectly so, but overwhelmingly so. But at the moment,
Starting point is 01:58:30 markets are finding our every defective character and exploiting them and making huge profits and making us worse to each other in the process. Before we leave COVID-19, let me ask you about a very difficult topic, which is the vaccines. So I took the Pfizer vaccine, the two shots. You did not. You have been taking Ivermectin. Yep. So, one of the arguments against the discussion of Ivermectin is that it prevents people from being fully willing to get the vaccine. How would you compare Ivermectin and the vaccine for COVID-19?
Starting point is 01:59:21 All right. That's a good question. I would say, first of all, there are some hazards with the vaccine that people need to be aware of. There are some things that we cannot rule out and for which there is some evidence. The two that I think people should be tracking is the possibility, some would say, a likelihood that a vaccine of this nature say a likelihood that a vaccine of this nature, that is to say very narrowly focused on a single antigen, is an evolutionary pressure that will drive the emergence of variants that will escape the protection that comes from the vaccine.
Starting point is 01:59:59 So this is a hazard. It is a particular hazard in light of the fact that these vaccines have a substantial number of breakthrough cases. So one danger is that a person who has been vaccinated will shed viruses that are specifically less visible or invisible to the immunity created by the vaccines. So we may be creating the next pandemic by applying the pressure of vaccines at a point where it doesn't make sense to. The other danger has to do with something called antibody-dependent enhancement, which is something that we see in certain diseases like dengue fever. You may know that with dengue, one gets a case, and then their second case is much more devastating. So breakbone fever is when you get your second case of dengue, and dengue effectively utilizes the immune response that is produced by prior exposure to attack the body in ways that
Starting point is 02:00:56 it is incapable of doing before exposure. So this pattern has apparently blocked past efforts to make vaccines against coronaviruses. Whether it will happen here or not, it is still too early to say. But before we even get to the question of harm done to individuals by these vaccines, we have to ask what the overall impact is going to be. And it's not clear, in the way people think it is, that if we vaccinate enough people, the pandemic will end. It could be that we vaccinate people and make the pandemic worse.
Starting point is 02:01:29 And while nobody can say for sure that that's where we're headed, it is at least something to be aware of. So, don't vaccines usually create that kind of evolutionary pressure to create a deadlier, different strains of the virus. So, isn't, so, is there something particular with these mRNA vaccines that's uniquely dangerous in this regard? Well, it's not even just the mRNA vaccines. The mRNA vaccines and the adenovector DNA vaccine all share the same vulnerability, which is they are very narrowly focused on one sub-unit of the spike protein. So, that is a very concentrated evolutionary signal.
Starting point is 02:02:08 We are also deploying it mid-pandemic, and it takes time for immunity to develop. So part of the problem here: if you inoculated a population before its encounter with a pathogen, then there might be substantially enough immunity to prevent this phenomenon from happening. But in this case, we are inoculating people as they are encountering those who are sick with the disease. And what that means is that the disease is now faced with a lot of opportunities to effectively, evolutionarily, practice escape strategies. So one thing is the timing; the other thing is the narrow focus. Now, in a traditional vaccine, you would typically not have one antigen, right? You would have basically a virus full of antigens, and the immune system would
Starting point is 02:02:56 therefore produce a broader response. So that is the case for people who have had COVID, right? They have an immunity that is broader, because it wasn't so focused on one part of the spike protein. So anyway, there is something unique here, and these platforms create that special hazard. They also have components that we haven't used before in people. So, for example, the lipid nanoparticles that coat the RNAs are distributing themselves
Starting point is 02:03:23 around the body in ways that will have unknown consequences. So anyway, there's reason for concern. Is it possible for you to steelman the argument that everybody should get vaccinated? Of course. The argument that everybody should get vaccinated is that nothing is perfectly safe. Phase three trials showed good safety for the vaccines. Now, that may or may not be actually true, but what we saw suggested a high degree of efficacy and a high degree of safety for the vaccines; that inoculating people quickly, and therefore dropping the landscape of available victims for the pathogen to a very low number, so that herd immunity drives it to extinction, requires
Starting point is 02:04:13 us all to take our share of the risk. And that because driving it to extinction should be our highest priority that really people shouldn't think too much about the various nuances because overwhelmingly fewer people will die if the population is vaccinated from the vaccine than we'll die from COVID if they're not vaccinated. And with the vaccine, as a current being deployed, that is a quite a likely scenario the most likely scenario that everything, you know, the virus will fade away. In the following sense, the probability that a more dangerous strain will be created is non-zero, but it's not 50 percent.
Starting point is 02:04:59 It's something smaller. And so the most likely, well, I don't know, maybe you disagree with that. But the scenario where most likely it's seen out that the vaccine is here is that the virus, the effects of the virus will fade away. First of all, I don't believe that the probability of creating a worse pandemic is low enough to discount. I think the probability is fairly high. And frankly, we are seeing a wave of variance that we will have to do a careful analysis to figure out what exactly that has to do with campaigns of vaccination, where they have been, where they haven't been, where the variants emerged from.
Starting point is 02:05:34 But I believe that what we are seeing is a disturbing pattern that reflects that those who were advising caution may well have been right. The data here, by the way, and this is a small tangent, is terrible. Terrible. Right, and why it is terrible is another question, right? This is where I start getting angry. Yeah.
Starting point is 02:05:52 It's like, there's an obvious opportunity for exceptionally good data, for exceptionally rigorous, like even the website for self-reporting side effects for not side effects, but negative effects, right? At first events. At first events side effects, but negative effects. First events. First events are for the vaccine.
Starting point is 02:06:09 Like, there's many things I could say from both the study perspective, but mostly, let me just put on my hat of like HTML and like web design. Like, it's like the worst website. It makes it so unpleasant to report. It makes it so unclear what you're reporting If somebody actually has serious effect like if you have very mild effects, what are the incentives for you to even use that crappy website with Many pages and forms that don't make any sense if you have adverse effects What are the incentives for you to use that website? What is the trust that you have that this information will be used?
Starting point is 02:06:47 Well, all those kinds of things. And the data about who's getting vaccinated, anonymized data, about who's getting vaccinated where, when with vaccine coupled with the adverse effects, all of that we should be collecting. Instead, we're completely not. And we're doing it in a crappy way, and using that crappy data to make conclusions that you then twist, you're basically collecting in a way that can arrive at whatever conclusions you want.
Starting point is 02:07:16 And the data is being collected by the institutions, by governments. And so therefore, it's obviously, they're going to try to construct any kind of narratives they want based on this crappy data. It reminds me of much of psychology, the field that I love, but is flawed and many fundamental ways. So rent over, but coupled with the dangers that you're speaking to, we don't have even the data to understand the dangers. Yeah, I'm going to pick up on your rant and say we estimates of the degree of underreporting in VAERS are that it is 10% of the real to 100% and that's the one percent for reporting.
Starting point is 02:07:58 Yeah, the fair system is the system for reporting adverse events. So in the US, we have above 5,000 unexpected deaths that seem in time to be associated with vaccination. That is an undercount almost certainly, and by a large factor. We don't know how large I've seen estimates 25,000 dead in the US alone. estimates 25,000 dead in the US alone. Now, you can make the argument that, okay, that's a large number, but the necessity of immunizing the population to drive SARS-CoV-2 to extinction is such that it's an acceptable number. But I would point out that that actually does not make any sense. And the reason it doesn't make any sense is actually there's several reasons. One, if that was really your point, that yes, many, many people are going to die, but many more
Starting point is 02:08:51 will die if we don't do this. Were that your approach? You would not be inoculating people who had had COVID-19, which is a large population. There's no reason to expose those people to danger. Their risk of adverse events in the case that they have them is greater. So there's no reason that we would be allowing those people to face a risk of death if this was really about an acceptable number of deaths arising out of this set of vaccines. I would also point out there's something incredibly bizarre and I would I struggle to find language that is strong enough for the horror of vaccinating children in this case because children suffer a greater risk of long-term effects because they are going to live longer and because this is
Starting point is 02:09:41 earlier in their development. Therefore, it impacts systems that are still forming, they tolerate COVID well. And so the benefit to them is very small. And so the only argument for doing this is that they may cryptically be carrying more COVID than we think. And therefore, they may be integral to the way the virus spreads to the population. But if that's the reason that we are inoculating children, and there has been some revision in the last day or two about the recommendation on this because of the adverse events that have shown up in children. But to the extent that we were vaccinating children, we were doing it to protect old, infirm people
Starting point is 02:10:18 who are the most likely to succumb to COVID-19. What society puts children in danger, robs children of life, to save old, infirm people? That's upside down. So there's something about the way we are going about vaccinating, who we are vaccinating, and what dangers we are pretending don't exist, that suggests that to some set of people, vaccinating people is a good in and of itself. That is the objective of the exercise, not herd immunity. And the last thing, I'm sorry, I kind of want to prevent you from jumping in here, but the second reason, in addition to the fact that we're exposing people to danger that we should not be exposing them to.
Starting point is 02:11:02 By the way, as a tiny tangent, another huge part of the soup that should have been part of it, and that's an incredible solution, is large-scale testing. Mm-hmm. But that might be another couple of hours of conversation. But there are these solutions; they're obvious; they were available from the very beginning. So you could argue that Ivermectin, it's not that obvious, but maybe
Starting point is 02:11:26 the whole point is you have aggressive, very fast research, at least the meta analysis and then large scale production and deployment. Okay, at least that possibly should be seriously considered coupled with a serious consideration of large scale deployment of testing, at home testing, it could have accelerated the speed at which we reached that herd immunity. But I don't even want to... Well, let me just say, I am also completely shocked that we did not get on high quality testing early and that we are still suffering from this even now because just the simplicity to track where the virus moves between people would tell us a lot about its mode of transmission, which
Starting point is 02:12:16 would allow us to protect ourselves better. Instead, that information was hard one and for no good reason. So I also find this mysterious. You've spoken with Eric Weinstein, your brother, on his podcast, The Portal about the ideas that eventually led to the paper you published titled The Reserved Capacity Hypothesis. I think first can you explain this paper and the ideas that let up to it? Sure, easier to explain the conclusion of the paper. There's a question about why a creature that can replace its cells with new cells grows feeble and inefficient with age.
Starting point is 02:13:05 We call that process, which is otherwise called aging, senescence. And senescence, in this paper, is hypothesized to be the unavoidable downside of a cancer-prevention feature of our bodies: that each cell has a limit on the number of times it can divide. There are a few cells in the body that are exceptional, but most of our cells can only divide a limited number of times. That's called the Hayflick limit.
Starting point is 02:13:38 And the hayflick limit reduces the ability of the organism to replace tissues. It therefore results in a failure over time of maintenance and repair. And that explains why we become decrepit as we grow old. The question was, why would that be, especially in light of the fact that the mechanism that seems to limit the ability of cells to reproduce is something called a telomere. Telomere is a, it's not a gene, but it's a DNA sequence at the ends of our chromosomes that is just simply repetitive.
Starting point is 02:14:16 And the number of repeats functions like a counter. So there's a number of repeats that you have after development is finished. And then each time the cell divides, a little that you have after development is finished. And then each time the cell divides, a little bit of telomere is lost. And at the point that the telomere becomes critically short, the cell stops dividing, even though it still has the capacity to do so. Stops dividing, and it starts transcribing different genes than it did when it had more telomere. So, what my work did was it looked at the fact that the telomeric shortening was being studied by two different groups
Starting point is 02:14:48 It was being studied by people who were interested in counteracting the aging process, and it was being studied in exactly the opposite fashion by people who were interested in tumorigenesis and cancer, the thought being, because it was true, that when one looked into tumors, they always had telomerase active. That's the enzyme that lengthens our telomeres. So those folks were interested in bringing about a halt to the lengthening of telomeres in order to counteract cancer. And the folks who were studying the senescence process were interested in lengthening telomeres in order to generate greater repair capacity. And my point was, evolutionarily speaking, this looks like a pleiotropic effect: that the genes which create the tendency of the cells to be limited in their capacity to replace themselves
Starting point is 02:15:46 cells to be limited in their capacity to replace themselves, are providing a benefit in youth, which is that we are largely free of tumors and cancer at the inevitable late-life cost that we grow feeble and inefficient and eventually die, and that matches a very old hypothesis in evolutionary theory by somebody I was fortunate enough to know, George Williams, one of the great 20th century evolutionists who argued that senescence would have to be caused by plyotropic genes that cause early life benefits at unavoidable late life costs. And although this isn't the exact nature of the system, he predicted it matches what he was expecting in many regards to a shocking degree. That said, the focus of the paper is about the, let me just read the abstract.
Starting point is 02:16:34 That said, the focus of the paper, let me just read from the abstract. This is the end of the abstract: "We observed that captive rodent breeding protocols designed to increase reproductive output simultaneously exert strong selection against reproductive senescence and virtually eliminate selection that would otherwise favor tumor suppression. This appears to have greatly elongated the telomeres of laboratory mice. With their telomeric failsafe effectively disabled, these animals are unreliable models of normal senescence and tumor formation."
Starting point is 02:17:05 So basically, using these mice is not going to lead to the right kinds of conclusions. Safety tests employing these animals likely overestimate cancer risks and underestimate tissue damage and consequent accelerated senescence. So I think, especially with your discussion with Eric, the conclusion of this paper has to do with the fact that we shouldn't be using these mice to test safety or to make conclusions about cancer or senescence. Is that the basic takeaway? Basically saying that the length of these telomeres is an important variable to consider. Well, let's put it this way. I think there was a reason that the world of scientists who were working on telomeres did not spot the pleiotropic relationship that was the key argument in my paper. The reason they didn't
Starting point is 02:18:07 spot it was that there was a result that everybody knew, which seemed inconsistent. The result was that mice have very long telomeres, but they do not have very long lives. Now, we can talk about what the actual meaning of don't have very long lives is, but in the end, I was confronted with a hypothesis that would explain a great many features of the way mammals and indeed vertebrates age, but it was inconsistent with one result. And at first, I thought, maybe there's something wrong with the result. Maybe this is one of these cases where the result was achieved once through some bad protocol, and everybody else was repeating it. It didn't turn out to be the case.
Starting point is 02:18:49 Many laboratories had established that mice had ultra-long telomeres. And so I began to wonder whether or not there was something about the breeding protocols that generated these mice. And what that would predict is that the mice that have long telomeres would be laboratory mice, and that wild mice would not. And Carol Greider, who agreed to collaborate with me, tested that hypothesis and showed that it was indeed true that wild-derived mice, or at least mice that had been in captivity for a much shorter period of time, did not have ultra-long telomeres. Now, what this implied, though, as you read, is that our breeding protocols generate lengthening of telomeres. And the implication of that is that the animals that have these very long telomeres will be hyper-prone to create tumors.
Starting point is 02:19:41 They will be extremely resistant to toxins because they have effectively an infinite capacity to replace any damaged tissue. And so, ironically, if you give one of these ultra-long-telomere lab mice a toxin, if the toxin doesn't outright kill it, it may actually increase its lifespan, because it functions as a kind of chemotherapy. The reason chemotherapy works is that dividing cells are more vulnerable than cells that are not dividing. And so if this mouse has effectively had its cancer protection turned off and it has
Starting point is 02:20:17 cells dividing too rapidly and you give it a toxin, you will slow down its tumors faster than you harm its other tissues. And so you'll get a paradoxical result, that actually some drug that's toxic seems to benefit the mouse. Now, I don't think that that was understood before I published my paper. Now I'm pretty sure it has to be. And the problem is that this actually is a system that serves pharmaceutical companies that have the difficult job of bringing compounds
Starting point is 02:20:46 to market, many of which will be toxic, maybe all of them will be toxic, and these mice predispose our system to declare these toxic compounds safe. And in fact, I believe we've seen the errors that result from using these mice a number of times, most famously with Vioxx, which turned out to do conspicuous heart damage. Why do you think this paper, this idea, has not gotten significant traction? Well, my collaborator, Carol Greider, said something to me that rings in my ears to this day. After she showed that laboratory mice have anomalously long telomeres and that wild mice don't have long telomeres, I asked her
Starting point is 02:21:31 where she was going to publish that result so that I could cite it in my paper. And she said that she was going to keep the result in-house rather than publish it. And at the time, I was a young graduate student. I didn't really understand what she was saying. But in some sense, the knowledge that a model organism is broken in a way that creates the likelihood that certain results will be reliably generated, you can publish a paper and make a big splash with such a thing, or you can exploit the fact that you know
Starting point is 02:22:04 how those models will misbehave and other people don't. So there's a question, if somebody is motivated cynically and what they want to do is appear to have deeper insight into biology because they predict things better than others do, knowing where the flaw is so that your predictions come out true is advantageous. At the same time, I can't help but imagine that the pharmaceutical industry, when it figured out that the mice were predisposed to suggest that drugs were safe, didn't leap to fix the problem, because in some sense, it was the perfect cover for the difficult job of bringing drugs to market and then discovering their actual toxicity profile.
Starting point is 02:22:48 This made things look safer than they were, and I believe a lot of profits have likely been generated downstream. So to kind of play devil's advocate: it's also possible that the length of the telomeres is not a strong variable for the drug development and for the conclusions that Carol and others have been studying. Is it possible for that to be the case? That one reason she and others could be ignoring this is because it's not a strong variable? Well, I don't believe so. And in fact, at the point that I went to publish my paper, Carol published her result. She did so in a way that did not make a huge splash.
Starting point is 02:23:30 Well, I apologize if I don't know, but what was the emphasis of her publication of that paper? Was it purely just kind of showing data? Was there more? Because in your paper, there's kind of more of a philosophical statement as well. Well, my paper was motivated by interest in the evolutionary dynamics around senescence.
Starting point is 02:23:54 I wasn't, you know, pursuing grants or anything like that. I was just working on a puzzle I thought was interesting. Carol has, of course, gone on to win a Nobel Prize for her co-discovery, with Elizabeth Blackburn, of telomerase, the enzyme that lengthens telomeres. But anyway, she's a heavy hitter in the academic world. I don't know exactly what her purpose was. I do know that she told me she wasn't planning to publish, and I do know that I discovered
Starting point is 02:24:22 that she was in the process of publishing very late. And when I asked her to send me the paper, to see whether or not she had put evidence in it that the hypothesis had come from me, she grudgingly sent it to me. And my name was nowhere mentioned. And she broke contact at that point. What it is that motivated her, I don't know, but I don't think it can possibly be that this result is unimportant. The fact is, the reason I called her in the first place, and established the contact that generated our collaboration, was that she was a leading light in the field of telomere studies.
Starting point is 02:24:59 And because of that, this question about whether the model organisms are distorting the understanding of the functioning of telomeres is central. Do you feel, as a young graduate student, that Carol, or the scientific community broadly, screwed you over in some way? I don't think of it in those terms, probably partly because it's not productive, but I have a complex relationship with this story. On the one hand, I'm livid with Carol Greider for what she did. She absolutely pretended that I didn't exist in this story, and I don't think I was a threat to her. My interest was as an evolutionary biologist; I had made an evolutionary contribution.
Starting point is 02:25:47 She had tested a hypothesis, and frankly, I think it would have been better for her if she had acknowledged what I had done. I think it would have enhanced her work. And, you know, I was... Let's put it this way. When I watched her Nobel lecture, and I should say there's been a lot of confusion
Starting point is 02:26:04 about this Nobel stuff, I've never said that I should have gotten a Nobel Prize. People have misportrayed that. In listening to her lecture, I had one of the most bizarre emotional experiences of my life, because she presented the work that resulted from my hypothesis. She presented it as she had in her paper, with no acknowledgment of where it had come from, and she had, in fact, portrayed the distortion of the telomeres as if it were a lucky fact, because it allowed testing hypotheses that would otherwise not be testable. You have to understand, as a young scientist, to watch work that you have done presented in what's surely the most important lecture of her career, right?
Starting point is 02:27:01 It's thrilling. It was thrilling to see her figures projected on the screen there. To have been part of work that was important enough for that felt great. And of course, to be erased from the story felt absolutely terrible. So anyway, that's sort of where I am with it. What I'm really troubled by in the story is the fact that, as far as I know, the flaw with the mice has not been addressed. And actually, Eric did some looking into this. He tried to establish, by calling the Jackson Laboratory and trying to ascertain what had happened with the colonies, whether any change in protocol had occurred,
Starting point is 02:27:47 and he couldn't get anywhere. There was seemingly no awareness that it was even an issue. So I'm very troubled by the fact that, as a father, for example, I'm in no position to protect my family from the hazard that I believe lurks in our medicine cabinets. Even though I'm aware of where the hazard comes from, it doesn't tell me anything useful about which of these drugs will turn out to do damage if
Starting point is 02:28:11 they're ultimately tested, and that's a very frustrating position to be in. On the other hand, there's a part of me that's even still grateful to Carol for taking my call. She didn't have to take my call and talk to some young graduate student who had some evolutionary idea that wasn't in her wheelhouse specifically, and yet she did. And for a while, she was a good collaborator. Well, I have to proceed carefully here, because it's a complicated topic. So she took the call, and you're kind of saying that she basically erased credit, pretended you didn't exist, in a certain sense. Let me phrase it this way. As a research scientist at MIT, especially as part of a large set of collaborations, I've had a lot of students come to me and talk to me about ideas, perhaps less interesting than what we're discussing here, in the space of AI, that I've been thinking about anyway.
Starting point is 02:29:29 In general, with everything I'm doing with robotics, people will have told me a bunch of ideas that I'm already thinking about. The point is, this is different, because the idea has more power in the space that we're talking about here. In robotics, your idea means shit until you build it. So the engineering world is a little different. But there's a kind of sense that I probably forgot a lot of brilliant ideas that have been told to me.
Starting point is 02:30:02 Do you think she pretended you don't exist? Do you think she was so busy that she kind of forgot? You know, she has this stream of brilliant people around her, there's a bunch of ideas swimming in the air, and you just kind of forget people that are a little bit on the periphery of the idea generation. Or is it some mix of both? It's not a mix of both. I know that because we corresponded. She put a graduate student on this work. He emailed me excitedly when the results came in. So there was no ambiguity about what had happened. What's more, when I went to publish my work, I actually sent it to Carol in order to get her feedback, because I wanted to be a good collaborator to her.
Starting point is 02:30:50 And she absolutely panned it, made many critiques that were not valid, but it was clear at that point that she had become an antagonist. And she couldn't possibly have forgotten the conversation. I believe I even sent her tissues at some point, not related to this project, but as a favor; she was doing another project that involved telomeres, and she needed samples that I could get a hold of because of the Museum of Zoology that I was in. So this was not a one-off conversation. I certainly know that those sorts of things
Starting point is 02:31:26 can happen, but that's not what happened here. This was a relationship that existed and then was suddenly cut short at the point that she published her paper by surprise, without saying where the hypothesis had come from, and began to be an opposing force to my work. There's a bunch of trajectories you could have taken through life. Do you think about the trajectory of being a researcher, of then going to war in the space of ideas, of publishing further papers along this line?
Starting point is 02:32:04 I mean, that's often the dynamic of that fascinating space: you have a junior researcher with brilliant ideas and a senior researcher that starts out as a mentor and becomes a competitor. I mean, that happens. But then it's almost an opportunity to shine: to publish a bunch more papers in this space, to tear it apart, to dig in and really
Starting point is 02:32:31 make it a war of ideas. Did you consider that possible trajectory? I did. A couple things to say about it. One, this work was not central for me. I took a year on the telomere project because something fascinating occurred to me. I pursued it, and the more I pursued it, the clearer it was there was something there. But it wasn't the focus of my graduate work, and I didn't want to become a telomere researcher. What I want to do is to be an evolutionary biologist who upgrades the toolkit of evolutionary concepts so that we can see more clearly how organisms function and why. And telomeres was a proof of concept, right?
Starting point is 02:33:15 That paper was a proof of concept that the toolkit in question works. As for the need to pursue it further, I think it's kind of absurd, and you're not the first person to say maybe that was the way to go about it. But the basic point is, look, the work was good. It turned out to be highly predictive. Frankly, the model of senescence that I've presented is now widely accepted. And I don't feel any misgivings at all about having spent a year on it, said my piece, and moved on to other things, which frankly I think are
Starting point is 02:33:51 bigger. I think there's a lot of good to be done, and it would be a waste to get overly narrowly focused. There are so many ways through the space of science, and the most common way is to just publish a lot: publish a lot of papers, do incremental work, explore the space kind of like ants looking for food. You're tossing out a bunch of different ideas. Some of them could be brilliant breakthrough ideas, Nature papers; some of them more conference-style publications, all those kinds of things. Did you consider that kind of path in science? Of course I considered it, but I must say the experience of having my first encounter
Starting point is 02:34:35 with the process of peer review be this story, which was frankly a debacle from one end to the other with respect to the process of publishing. It was not a very good sales pitch for trying to make a difference through publication. And I would point out, part of what I ran into, and I think frankly part of what explains Carol's behavior, is that in some parts of science there is this dynamic where PIs parasitize their underlings. And if you're very, very good, you rise to the level where one day, instead of being parasitized, you get to parasitize others. Now, I find that despicable. And it wasn't the culture of the lab I grew up in
Starting point is 02:35:20 at all. In fact, my lab's PI, Dick Alexander, who's now gone, but who was an incredible mind and a great human being, didn't want his graduate students working on the same topics he was on. Not because it wouldn't have been useful and exciting, but because, in effect, he did not want any confusion about who had done what, because he was a great mentor. And the idea was, actually, a great mentor is not stealing ideas, and you don't want people thinking that they are. So anyway, my point would be, I wasn't up for being parasitized. I don't like the idea that if you are very good, you get parasitized until it's
Starting point is 02:36:06 your turn to parasitize others. That doesn't make sense to me. Crossing over from evolution into cellular biology may have exposed me to that. That may have been par for the course, but it doesn't make it acceptable. And I would also point out that my work falls in the realm of synthesis. My work generally takes evidence accumulated by others and places it together in order to generate hypotheses that explain sets of phenomena that are otherwise intractable. And I am not sure that that is best done with narrow publications that are read by few. In fact, I would point to the very conspicuous example of Richard Dawkins, who, I must say, I've learned a tremendous amount from and greatly admire. Dawkins has almost no publication record in the sense of peer-reviewed papers in journals.
Starting point is 02:37:06 What he's done instead is synthetic work, and he's published it in books, which are not peer-reviewed in the same sense. And frankly, I think there's no doubting his contribution to the field. So my sense is, if Richard Dawkins can illustrate that one can make contributions to the field without using journals as the primary mechanism for distributing what you've come to understand, then it's obviously a valid mechanism, and arguably a far better one. And that kind of work, which requires a lot of both broad and deep thinking, is exceptionally valuable. You could also, I mean, I'm working on something with Andrew Huberman now, you can also publish synthesis.
Starting point is 02:37:52 That's like review papers. They're exceptionally valuable for the community. It brings the community together, tells a history, tells a story of where the community has been. It paints a picture of where the path lies for the future. I think it's really valuable. And Richard Dawkins is a good example of somebody that does that in book form.
Starting point is 02:38:10 He kind of walks the line really interestingly. You have somebody like Neil deGrasse Tyson, who's more of a science communicator. Richard Dawkins is sometimes a science communicator, but he gets close enough to the technical that he's not shying away from making a real contribution to science. He's made real contributions.
Starting point is 02:38:34 In book form. He's fascinating. Roger Penrose also, similar kind of idea. That's interesting: synthesis work, work that synthesizes ideas, does not necessarily need to be peer-reviewed. It's peer-reviewed by peers reading it. Well, and reviewing it. That's it. It is reviewed by peers, which is not synonymous with peer review. And that's the thing: people don't understand that the two things aren't the same, right? Peer review is an anonymous process that happens before publication in a place where there is a power dynamic, right? I mean, the joke, of course, is that peer review is actually peer preview, right? Your biggest competitors get to see your work before it sees the light of day and decide
Starting point is 02:39:25 whether or not it gets published. And again, when your formative experience with the publication apparatus is the one I had with the telomere paper, there's no way that that seems like the right way to advance important ideas. And what's the harm in publishing them so that your peers have to review them in public, where, if they're going to disagree with you, they actually have to take the risk of saying, I don't think this is right, and here's why, with their name on it?
Starting point is 02:39:55 I'd much rather that. It's not that I don't want my work reviewed by peers, but I want it done in the open, you know, for the same reason you don't meet with dangerous people in private. You meet at the cafe. I want the work reviewed out in public. Can I ask you a difficult question? Sure. There is popularity in martyrdom.
Starting point is 02:40:18 There's popularity in pointing out that the emperor has no clothes. That can become a drug in itself. I've confronted this in scientific work I've done at MIT, where there are certain things that are not done well, people are not being the best version of themselves, and particular aspects of a particular field are in need of a revolution. And part of me wanted to point that out, versus doing the hard work of publishing papers and doing the revolution. Basically just pointing out, look,
Starting point is 02:41:06 you guys are doing it wrong, and then just walking away. Are you aware of the drug of martyrdom, of the ego involved in it, that it can cloud your thinking? Probably one of the best questions I've ever been asked. So let me try to sort it out. First of all, we are all mysteries to ourselves at some level, so it's possible there's stuff going on in me that I'm not aware of that's driving me. But in general, I would say one of my better
Starting point is 02:41:41 strengths is that I'm not especially ego-driven. I have an ego, I clearly think highly of myself, but it is not driving me. I do not crave that kind of validation. I do crave certain things. I do love a good eureka moment. There is something great about it, and there's something even better about the phone calls you make next, when you share it. Right? It's pretty fun. I really like it. I also really like my subject. There's something about a walk in the forest when you have a toolkit in which you can actually look at creatures and see something deep. I like it. That drives me. And I could entertain myself for the rest of my life. If I was somehow isolated from the rest of the world, but I was in a place that was biologically
Starting point is 02:42:32 interesting, you know, hopefully I would be with people that I love, and pets that I love, believe it or not. But if I were in that situation and I could just go out every day and look at cool stuff and figure out what it means, I could be all right with that. So I'm not heavily driven by the ego thing, as you put it. I am completely the same, except instead of the forest, I would put robots. So it's the eureka, it's the exploration of the subject that brings you joy and fulfillment. It's not the ego. Well, there's more to say.
Starting point is 02:43:09 No, I really don't think it's the ego thing. I will say I also have kind of a secondary passion for robot stuff. I've never made anything useful, but I do believe I found my calling. And if this wasn't my calling, my calling would have been inventing stuff. I really enjoy that too. So I get what you're saying about the analogy quite well. As far as the martyrdom thing, I understand the drug you're talking about, and I've seen it more than I've felt it.
Starting point is 02:43:41 If I'm to be completely candid, and this question is so good it deserves a candid answer: I do like the fight. I like fighting against people I don't respect, and I like winning. But I have no interest in martyrdom. One of the reasons I have no interest in martyrdom is that I'm having too good a time, right? I very much enjoy my life. Such a good answer. I have a wonderful wife. I have amazing children. I live in a lovely place.
Starting point is 02:44:16 I don't want to exit any quicker than I have to. That said, I also believe in things, and a willingness to exit if that's the only way is not exactly inviting martyrdom, but it is an acceptance that fighting is dangerous, and going up against powerful forces means who knows what will come of it, right? I don't have the sense that the thing is out there that used to kill inconvenient people. I don't think that's how it's done anymore. It's primarily done by destroying them reputationally, which is not something I relish the possibility of. But there's a difference between a willingness to face the hazard and a desire to face it because of the thrill. For me, the thrill is in fighting when I'm in the right.
Starting point is 02:45:09 I feel that that is a worthwhile way to take what I see as the kind of brutality that is built into men and to channel it into something useful. If it is not channeled into something useful, it will be channeled into something else. So it damn well better be channeled into something useful, right? It's not motivated by fame and popularity, those kinds of things.
Starting point is 02:45:32 You know, you're just making me realize that enjoying the fight, fighting the powerful for an idea that you believe is right, is a kind of optimism for the human spirit. It's like, we can win this. It's almost like you're turning into action, into personal action, this hope for humanity, by saying, we can win this. And that makes you feel good about the rest of humanity, that if there are people like me,
Starting point is 02:46:17 then we're going to be okay. Even if your ideas might be wrong, if you believe they're right and you're fighting the powerful against all odds, then we're going to be okay. If I were to project, I mean, because I enjoy the fight as well, I think that's what brings me joy; it's almost like optimism in action. Well, it's a little different for me. And again, I recognize you; your construction is familiar, even if it isn't mine, right? For me, I actually expect us not to be okay, and I'm not okay with that. But what's really important is, as I feel like I've said, I don't know of any reason that it's too late.
Starting point is 02:47:09 As far as I know, we could still save humanity, and we could get to the fourth frontier, or something akin to it. But I expect us to fuck it up, right? I don't like that thought, but I've looked into the abyss and I've done my calculations, and the number of ways we could fail are many, and the number of ways that we could manage to get out of this very dangerous phase of history is small. But the thing I don't have to worry about is that I didn't do enough, right? That I was a coward, that I prioritized other things. At the end of the day, I think I will be able to say to myself, and in fact the thing that allows me to sleep,
Starting point is 02:47:52 is that when I saw clearly what needed to be done, I tried to do it, to the extent that it was in my power. And, you know, if we fail, as I expect us to, I can't say, well, geez, that's on me. And frankly, I regard what I just said to you as something like a personality defect: I'm trying to free myself from the sense that this is my fault. On the other hand, my guess is that that personality defect is probably good for humanity. It's a good one for me to have. The externalities of it are positive. So I don't feel too bad about it.
Starting point is 02:48:29 Yeah, it's funny. Our perspectives on the world are different, but they rhyme, like you said. Because I have also looked into the abyss, and it kind of smiled nervously back. So I have a more optimistic sense that, more than likely, we're going to be okay. Right there with you, brother. I'm hoping you're right. I'm expecting me to be right. But back to Eric; you had a wonderful conversation. In that conversation, he played the big brother role, and he was very happy about it, in a self-congratulatory way. Can you talk about the ways in which Eric made you a better man throughout your life? Yeah, hell yeah. I mean, for one thing, Eric and I are
Starting point is 02:49:19 interestingly similar in some ways and radically different in others, and it's often a matter of fascination to people who know us both, because almost always people meet one of us first, and they sort of get used to that thing, and then they meet the other, and it throws the model into chaos. But I had a great advantage, which is I came second. So although it was kind of a pain in the ass to be born into a world that had Eric in it, because he's a force of nature, right? It was also
Starting point is 02:49:47 terrifically useful. Because, A, he was a very awesome older brother who, you know, made interesting mistakes, learned from them, and conveyed the wisdom of what he had discovered. And, you know, I don't know who else ends up so lucky as to have that kind of person blazing the trail. It also probably helped; my hypothesis for what birth-order effects are is that they're actually adaptive. The reason that a second-born is different than a first-born is that they're not born into a world with the same niches in it.
Starting point is 02:50:26 And so the thing about Eric is he's been completely dominant in the realm of fundamental thinking, like what he's fascinated by is the fundamental of fundamentals, and he's excellent at it, which meant that I was born into a world where somebody was becoming excellent in that. And for me, to be anywhere near the fundamental of fundamentals was going to be pointless. Right? I was going to be playing second fiddle forever. And I think that that actually drove me to the other end of the continuum between fundamental and emergent. And so I became fascinated with biology and have been since I was three years old.
Starting point is 02:51:03 Right? I think Eric drove that, and I have to thank him for it. I never thought of it that way. So Eric drives towards the fundamental, and you drive towards the emergent; the physics and the biology. Right, opposite ends of the continuum. And as Eric would be quick to point out
Starting point is 02:51:23 if he was sitting here, I treat the emergent layer, I seek the fundamentals in it, which is sort of an echo of Eric's style of thinking, but applied to the far end of the complexity continuum. He overpoweringly argues for the importance of physics, the fundamental of the fundamental. He's not here to defend himself. Is there an argument to be made against that? That biology, the emergent, the study of the thing that emerged when the fundamental acts at the universal, at the cosmic scale, and builds the beautiful thing that is us, is much more important? Like psychology, biology, the systems that we're actually interacting with in this human world are much more important to understand than low-level theories of quantum mechanics
Starting point is 02:52:22 and general relativity. Yeah, I can't say that one is more important. I think there's probably a different time scale. I think understanding the emergent layer is more often useful, but the bang for the buck at the far fundamental layer may be much greater. So for example, the fourth frontier, I'm pretty sure it's going to have to be fusion-powered. I don't think anything else will do it, but once you had fusion power, assuming we didn't just dump
Starting point is 02:52:50 fusion power on the market the way we would be likely to if it was invented, say, tomorrow. But if we had fusion power and we had a little bit more wisdom than we have, you could do an awful lot. And that's not going to come from people like me, who, you know, look at dynamics. Can I argue against that, please? I think the way to unlock fusion power is through artificial intelligence. So I think most of the breakthrough ideas
Starting point is 02:53:23 in the future of science will be developed by AI systems. And I think in order to build intelligent AI systems, you have to be a scholar of the fundamental of the emergent: of biology, of neuroscience, of the way the brain works, of intelligence, of consciousness. And those things, at least directly, don't have anything to do with physics. Well, you're making me a little bit sad, because my addiction to the aha-moment thing is incompatible with, you know, outsourcing that job.
Starting point is 02:53:57 Like, outsourcing. I don't want to outsource that thing to a machine. You know, and actually I've seen this happen before, because some of the people who trained Heather and me were phylogenetic systematists, Arnold Kluge in particular. And the problem with systematics is that to do it right when your technology is primitive, you have to be deeply embedded in the philosophical and the logical, right? Your method has to be based in the highest level of rigor. Once you can sequence genes, genes can spit so much data at you that you can overwhelm
Starting point is 02:54:36 high-quality work with just lots and lots and lots of automated work. And so in some sense, there's like a generation of phylogenetic systematists who are the last of the greats, because what's replacing them is sequencers. So anyway, maybe you're right about the AI, and I guess I'm... I'm making you sad. I like figuring stuff out. Is there something that you disagree with Eric on, that you've been trying to convince him of?
Starting point is 02:55:04 You've failed so far, but you will eventually succeed. You know, that is a very long list. Eric and I have tensions over certain things that recur all the time. And I'm trying to think what would be, you know, the ideal. In the space of science, in the space of philosophy, politics, family, love, robots? Well, all right. I'm just going to use your podcast to wage a bit of a cryptic war and just say there are many places in which I believe that I have butted heads with Eric over the course of decades, and I have seen him move in my direction substantially over time.
Starting point is 02:55:47 You've been winning. He might win a battle here or there, but you've been winning the war. I would not say that. It's quite possible he could say the same thing about me. And in fact, I know that it's true. There are places where he's absolutely convinced me. But in any case, I do believe it's at least... you know, it may not be a totally even fight,
Starting point is 02:56:04 but it's more even than some will imagine. But yeah, there are things I say that drive him nuts, right? Like when... you know, like you heard me talk about the, what was it? It was the autopilot that seems to be putting a great many humans in needless medical jeopardy over the COVID-19 pandemic. And my feeling is, we can say this almost for sure: anytime you have the appearance of some captured gigantic entity that is censoring you on YouTube and, you know, handing down dictates from the WHO and all of that, it is sure that there will be a certain amount of collusion, right? There's going to be some embarrassing emails in
Starting point is 02:56:53 some places that are going to reveal some shocking connections, and then there's going to be an awful lot of emergence that didn't involve collusion, right? In which people were doing their little part of a job and something was emerging, and you never know what the admixture is. How much are we looking at actual collusion, and how much are we looking at an emergent process? But you should always walk in with the sense that it's going to be a ratio. And the question is, what is the ratio in this case? I think this drives Eric nuts, because he is very focused on the people.
Starting point is 02:57:23 I think he's focused on the people who have a choice and make the wrong one. And anyway, he views my agnosticism about the ratio as a distraction from that. I think he takes it almost as an offense, because it grants cover to people who are harming others. And I think it offends him morally. And if I had to say, I would say it alters his judgment on the matter. But anyway, it's certainly useful just to leave open
Starting point is 02:57:55 the two possibilities and say it's a ratio, but we don't know which one. Brother to brother, do you love the guy? Hell yeah. Hell yeah. And, you know, I'd love him if he was just my brother, but he's also awesome. So I love him, and I love him for who he is. So let me ask you about, back to your book, A Hunter-Gatherer's Guide to the 21st Century. I can't wait, both for the book and the videos you do on the book. That's really exciting, that there's like a structured, organized way to present this. It's kind of, from an evolutionary biology perspective, a guide for the future. Using our past, and the fundamentals of the emergent, to present a picture of the future.
Starting point is 02:58:45 Let me ask you about something that I think about a little bit in this modern world, which is monogamy. So I personally value monogamy. One girl, ride or die. There you go. Ride or die, that's exactly it. But that said, I don't know what's the right way to approach this, but from an evolutionary biology perspective, or from just looking at modern society, that seems to be an idea that's not, what's the right way to put it, flourishing? It is waning. It's waning. So I suppose, based on your reaction, you're also a supporter of
Starting point is 02:59:35 monogamy, or you value monogamy. Are you and I just delusional? What can you say about monogamy from the context of your book, from the context of evolutionary biology, from the context of being human? Yeah, I can say that I fully believe that we are actually enlightened, and that although monogamy is waning, it is not waning because there is a superior system; it is waning for predictable other reasons. So let us just say there is a lot of pre-trans fallacy here, where people go through a phase where they recognize that actually we know a lot about the evolution of monogamy, and we can tell from the fact that humans are somewhat sexually dimorphic that there has been a lot of polygyny in human history. And in fact,
Starting point is 03:00:31 most of human history was largely polygamous. But it is also the case that most of the people on earth today belong to civilizations that are at least nominally monogamous and have practiced monogamy. And that's not anti-evolutionary. What that is is part of what I mentioned before, where human beings can swap out their software program. And different mating patterns are favored in different periods of history. So I would argue that the benefit of monogamy, the primary one that drives the evolution of monogamous patterns in humans,
Starting point is 03:01:09 is that it brings all adults into child rearing. Now the reason that that matters is because human babies are very labor intensive in order to raise them properly. Having two parents is a huge asset and having more than two parents having an extended family, also very important. But what that means is that for a population that is expanding, a monogamous mating system
Starting point is 03:01:37 makes sense. It makes sense because it means that the number of offspring that can be raised is elevated. It's elevated because all potential parents are involved in parenting. Whereas if you sideline a bunch of males by having a polygynous system, in which one male has many females, which is typically the way that works, what you do is you sideline all those males, which means the total amount of parental effort is lower, and the population can't grow. So what I'm arguing is that you should expect to see populations that face the possibility of expansion
Starting point is 03:02:10 endorse monogamy, and at the point that they have reached carrying capacity, you should expect to see polygyny break back out. And what we are seeing is a kind of false sophistication around polyamory, which will end up breaking down into polygyny, which will not be in the interest of most people. Really, the only people whose interests it could be argued to serve would be the very small number of males at the top who have many partners, and everybody else suffers. Is it possible to make the argument, focusing on those males at the quote-unquote top with the many female partners, is it possible to say that that's a suboptimal life, that a single partner is the optimal life?
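[Editor's aside: Bret's parental-effort argument above can be sketched as a toy calculation. All numbers here are hypothetical illustrations chosen for this sketch, not figures from the conversation: assume each child needs two units of adult care, each parenting adult supplies one unit, and surplus males under polygyny are sidelined and contribute nothing.]

```python
def offspring_supported(males, females, mates_per_male, care_per_child=2.0):
    """Toy model of total parental effort under a mating system where
    each mated male has `mates_per_male` partners. Males without
    partners are sidelined and do no parenting; each parenting adult
    contributes one unit of effort, and each child needs
    `care_per_child` units (all assumed numbers)."""
    mated_males = min(males, females // mates_per_male)
    mated_females = mated_males * mates_per_male
    parenting_adults = mated_males + mated_females
    return parenting_adults / care_per_child

# Monogamy pools all 200 adults into parenting; polygyny with five
# partners per male sidelines 80 of the 100 males.
print(offspring_supported(100, 100, mates_per_male=1))  # 100.0
print(offspring_supported(100, 100, mates_per_male=5))  # 60.0
```

Under these assumptions the monogamous population can raise more offspring per generation, which is the expansion-favors-monogamy point Bret is making; the model says nothing about which life is subjectively better, which is where the conversation turns next.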
Starting point is 03:02:55 Well, it depends what you mean. I have a feeling that you and I wouldn't have to go very far to figure out that what might be evolutionarily optimal doesn't match my values as a person, and I'm sure it doesn't match yours either. You know, can we try to dig into that gap between those two? Sure. I mean, we can do it very simply. Selection might favor your engaging in war against a defenseless enemy, or genocide, right? It's not hard to figure out how that might put your genes at an advantage.
Starting point is 03:03:33 I don't know about you, Lex. I'm not getting involved in no genocide. It's not going to happen. I won't do it. I will do anything to avoid it. So some part of me has decided that my conscious self and the values that I hold trump my evolutionary self. And once you figure out that in some extreme case that's true, then you realize that it must be possible in many other cases. And you start going through all of the things that selection would favor, and you realize that a fair fraction of the time, actually, you're not up for this. You don't want to be some robot on a mission that involves genocide when necessary. You want to be your own person and accomplish things that you think are valuable. And so, among the things I'm not advocating... let's suppose you're in a position to be one of those males at the top of a polygynous system. We both know why that would be rewarding, right? But we also both recognize it.
Starting point is 03:04:30 Do we? Yeah. Sure. Lots of sex. Yeah. Okay. What else? Lots of sex and lots of variety, right? So, look, every red-blooded American-slash-Russian male could understand why that's appealing, right? On the other hand, it is up against an alternative, which is having a partner with whom one is bonded especially closely, right? And so, love, right. Well, you know, I don't want to straw-man the polygamy position. Obviously, polygamy is complex, and there's nothing that stops a man, presumably, from loving multiple partners and from them loving him back. But in terms of, if love is your thing, there's a question
Starting point is 03:05:17 about, okay, what is the quality of love if it is divided over multiple partners, right? And what is the net consequence for love in a society when multiple people will be frozen out for every individual male, in this case, who has it? And what I would argue is, and you know, this is weird to even talk about, but this is partially me just talking from personal experience.
Starting point is 03:05:43 I think there actually is a monogamy program in us, and it's not automatic. But if you take it seriously, you can find it. And frankly, marriage, and it doesn't have to be marriage, but whatever it is that results in the lifelong bond with a partner, has gotten a very bad rap. You know, it's the butt of too many jokes. But the truth is, it's hugely rewarding. It's not easy. But if you know that you're looking for something, right? If you know that the objective actually exists, and it's not some utopian fantasy that can't be found, if you know that there's some real-world, you know, warts-and-all version of it, then you might actually think, hey, that is something I want. And you might pursue it, and my guess is you'd be very happy when you find it.
Starting point is 03:06:28 Yeah. I think, there, getting to the fundamentals of the emergent, I feel like there is some kind of physics of love. So one, there's a conservation thing going on. So if you have, like, many partners, yeah, in theory you should be able to love all of them deeply. But it seems like in reality that love gets split. Yep.
Starting point is 03:06:49 Now, there's another law that's interesting in terms of monogamy. I don't know if it's at the physics level, but if you are in a monogamous relationship by choice, and almost as in slight rebellion to social norms, that's much more powerful. Like, if you choose that one partnership, that's also more powerful. If, like, everybody's in a monogamous relationship because there's this pressure to be married, this pressure of society, that's different, because that's almost like a constraint on your freedom that is enforced by something other than your own ideals. It's by somebody else. When you yourself choose to, I guess, create these constraints, that enriches that love. So there's some kind of love function, like E equals mc squared,
Starting point is 03:07:39 but for love, where I feel like if you have fewer partners and it's done by choice, you can maximize that. And that love can transcend the biology, transcend the evolutionary biology forces that have to do much more with survival and all those kinds of things. It can transcend, to take us to a richer experience, which we have the luxury of having, of exploring: of happiness, of joy, of fulfillment, all those kinds of things. Totally agree with this. And there's no question that choice, when there are other choices, imbues it with meaning that it might not otherwise have. I would also say, you know, I'm really struck by, and I have a hard time not feeling terrible sadness over, what younger people are coming to think about this topic. I think
Starting point is 03:08:37 they're missing something so important, and it's so hard to phrase it, that they don't even know that they're missing it. They might know that they're unhappy, but they don't understand what it is they're even looking for, because nobody's really been honest with them about what their choices are. And I have to say, if I was a young person, or if I was advising a young person, which I used to do, again, a million years ago when I was a college professor, four years ago... but I used to, you know, talk to students. I knew my students really well, and they would ask questions about this, and they were always curious, because Heather and I seem to have a good relationship, and many of them knew both of us. So they
Starting point is 03:09:12 would talk to us about this. If I was advising somebody, I would say, do not bypass the possibility that what you are supposed to do is find somebody worthy. Somebody who can handle it, somebody who you are compatible with. And you don't have to be perfectly compatible. It's not about dating until you find the one. It's about finding somebody whose underlying values and viewpoint are complementary to yours, sufficient that you fall in love. If you find that person, opt out together. Get out of this damn system that's telling you what's sophisticated to think about love and romance and sex. Ignore it together, right? That's the key. And I believe you'll end up laughing in the end if you do it. You'll discover, wow, that's a hellscape that I opted out of.
Starting point is 03:10:07 And this thing I opted into? Complicated, difficult, worth it. Nothing that's worth it is ever not difficult. So we should even just skip the whole statement about difficult. Yeah, all right. I just want to be honest. It's not like, oh, it's nonstop joy. No, it's freaking complex, but worth it. No question in my mind. Is there advice outside of love that you can give to young people? You were, a million years ago, a professor. Is there advice you can give to young people, high schoolers, college students,
Starting point is 03:10:46 about career, about life? Yeah, but they're not going to like it. It's not easy to operationalize. And this was a problem when I was a college professor, too. People would ask me what they should do. Should they go to graduate school? I had almost nothing useful to say, because the job market and the market of pre-job training and all of that, these things are all so distorted and corrupt that I didn't want to point anybody to anything, right? Because it's all broken. And I would tell them that. But I would say that results in a kind of meta-level advice that I do think is useful. You don't know what's coming. You don't know where the opportunities will be. You should invest in tools rather than knowledge, right? To the extent that
Starting point is 03:11:33 you can do things, you can repurpose them no matter what the future brings. To the extent that, you know, you as a robot guy, right, you've got the skills of a robot guy. Now, if civilization failed and the stuff of robot building disappeared with it, you'd still have the mind of a robot guy, and the mind of a robot guy can retool around all kinds of things. Whether you're forced to work with, you know, fibers that are made into ropes, right, your mechanical mind would be useful in all kinds of places. So investing in tools like that, that can be easily repurposed, and investing in combinations of tools, right? If civilization keeps limping along, you're going to be up against all sorts of people who have studied the things that
Starting point is 03:12:22 you study, right? If you think, hey, computer programming is really, really cool, and you pick up computer programming, guess what? You've just entered the large group of people who have that skill, and many of them will be better than you, almost certainly. On the other hand, if you combine that with something else that's very rarely combined with it... if you have, I don't know, let's say it's carpentry and computer programming. If you take combinations of things, even if they're both common, but they're
Starting point is 03:12:51 not commonly found together, then those combinations create a rarefied space where you inhabit it. And even if the things don't ever really touch, they nonetheless create a mind in which the two things are alive, and you can move back and forth between them and step out of your own perspective by moving from one to the other. That will increase what you can see and the quality of your tools. And so anyway, that isn't specific advice. It doesn't tell you whether you should go to graduate school or not, but it does tell you the one thing we can say for certain about the future: that it's uncertain,
Starting point is 03:13:25 and so prepare for it. And like you said, there's cool things to be discovered in the intersection of fields and ideas. And I would look at grad school that way, actually, if you do go. I mean, every course in grad school, in undergrad too, was like this little journey that you're on that explores a particular field. And it's not immediately obvious how useful it is, but it allows you to discover intersections between that thing and some other thing. So you're bringing to the table
Starting point is 03:14:03 these pieces of knowledge, some of which, when intersected, might create a niche that's completely novel, unique, and will bring you joy. I mean, I took a huge number of courses in theoretical computer science. Most of them seemed useless, but they totally changed the way I see the world,
Starting point is 03:14:23 in ways that I'm not prepared to, or it's a little bit difficult to, kind of make explicit. But taken together, they've allowed me to see, for example, the world of robotics totally differently, and differently from many of my colleagues and friends and so on. And I think that's a good way to see grad school, if you go: as an opportunity to explore intersections of fields, even if the individual fields seem useless.
Starting point is 03:14:57 Yeah, and useless doesn't mean useless, right? Useless means not directly applicable. Not directly. A good useless course can be the best one you ever took. Yeah, I took a course on James Joyce, and that was truly useless. Well, I took immunobiology in the medical school when I was at Penn, as, I guess I would have been a freshman or a sophomore. I wasn't supposed to be in this class. It blew my goddamn mind,
Starting point is 03:15:25 and it still does, right? I mean, we had this, I don't even know who it was, but this great professor who was, like, highly placed in the world of immunobiology. You know, the course was called immunobiology, not immunology. It had the right focus. And as I recall it, the professor stood sideways to the chalkboard, staring off into space, literally stroking his beard, with this bemused look on his face, through the entire lecture. And, you know, you had all these medical students who were so furiously writing notes that I don't even think they were noticing the person delivering this thing. But, you know, I got what this guy was smiling about. It was like, what he was
Starting point is 03:16:05 describing, you know, adaptive immunity, is so marvelous, right, that it was almost a privilege to even be saying it to a room full of people who were listening, you know. But anyway, yeah, I took that course, and, you know, lo and behold, COVID. Well, yeah, suddenly it's front and center, and wow, am I glad I took it. But anyway, yeah, useless courses are great. And actually, Eric gave me one of the greater pieces of advice, at least for college, that anyone's ever given, which was: don't worry about the prereqs. Take it anyway, right?
Starting point is 03:16:39 Now, I don't even know if kids can do this now, because the prereqs are now enforced by a computer. But back in the day, if you didn't mention that you didn't have the prereqs, nobody stopped you from taking the course. And what he told me, which I didn't know, was that often the advanced courses are easier in some way. The material is complex, but it's not like intro bio, where you're learning a thousand things at once, right? It's focused on something, so if you dedicate yourself, you can pull it off. Yeah, you stay with an idea for many weeks at a time, and it's ultimately rewarding, and not as difficult as it looks. Can I ask you a ridiculous question?
Starting point is 03:17:17 Please. What do you think is the meaning of life? Well, I feel terrible having to give you the answer. I realize you asked the question, but if I tell you, you're going to, again, feel bad. I don't want to do that. But look... There can be a disappointment? It's not a disappointment, it's going to be a horror, right? Because we actually know the answer to the question.
Starting point is 03:17:43 Oh no. It's completely meaningless. There is nothing that we can do that escapes the heat death of the universe, or whatever it is that happens at the end. And we're not going to make it there anyway. But even if you were optimistic about our ability to escape every existential hazard indefinitely, ultimately it's all for naught, and we know it. Right? That said, once you stare into that abyss, and then it stares back and laughs, or whatever happens,
Starting point is 03:18:14 right, then the question is, okay, given that, can I relax a little bit, right, and figure out, well, what would make sense if that were true? Right? And I think there's something very clear to me. I think if you just take the values that I'm sure we share and extrapolate from them, I think the following thing is actually a moral imperative. Being a human and having opportunity is absolutely fucking awesome, right? A lot of people don't make use of the opportunity, and a lot of people don't have opportunity, right? They get to be human, but they're too constrained by keeping a roof over their heads to really be free. But being a free human is
Starting point is 03:18:57 fantastic. And being a free human on this beautiful planet, crippled as it may be, is unparalleled. I mean, what could be better? How lucky are we that we get that, right? So if that's true, that it is awesome to be human and to be free, then surely it is our obligation to deliver that opportunity to as many people as we can. And how do you do that? Well, I think I know what job one is. Job one is we have to get sustainable. The way to get the maximum number of humans to have that opportunity to be both here and free is to make sure that there isn't a limit
Starting point is 03:19:35 on how long we can keep doing this. That effectively requires us to reach sustainability. And then at sustainability, you could have a horror show of sustainability, right? You could have a totalitarian sustainability. That's not the objective. The objective is to liberate people. And so the question and the whole fourth frontier question, frankly, is how do you get to a
Starting point is 03:19:58 sustainable and indefinitely sustainable state in which people feel liberated, in which they are liberated to pursue the things that actually matter, to pursue beauty, truth, compassion, connection, all of those things that we could list as unalloyed goods. Those are the things that people should be most liberated to do in a system that really functions. And anyway, my point is I don't know how precise that calculation is, but I'm pretty sure it's not wrong. It's accurate enough.
Starting point is 03:20:30 And if it is accurate enough, then the point is, okay, well, there's no ultimate meaning, but the proximate meaning is that one. How many people can we get to have this wonderful experience that we've gotten to have, right? And there's no way that's so wrong that if I invest my life in it, I'm making some big error. Yeah. Life is awesome, and we want to spread the awesome as much as possible. Yeah, you could sum it up that way: spread the awesome.
Starting point is 03:20:56 Spread the awesome. So that's the fourth frontier. And if that fails, if the fourth frontier fails, the fifth frontier will be defined by robots, and hopefully they'll learn the lessons of the mistakes that the humans made and build a better world. I hope they're very happy here and that they do a better job with the place than we did. Bret, I can't believe it took us this long to talk. As I mentioned to you before, we haven't actually spoken, I think, at all, and I've always felt that we're already friends. I don't know how that works, because I've listened to your podcast a lot. I've also sort of loved your brother. And so it was like we've known each other for the longest time.
Starting point is 03:21:40 And I hope we can be friends and talk often again. And I hope that you get a chance to meet some of my robot friends as well, and fall in love. And I'm so glad that you love robots as well, so we get to share in that love. So I can't wait for us to interact together. So we went from talking about some of the worst failures of humanity to some of the most beautiful aspects of humanity. What else can you ask for from a conversation? Thank you so much for talking today. You know, Lex, I feel the same way towards you, and I really appreciate it. This has been
Starting point is 03:22:15 a lot of fun, and I'm looking forward to our next one. Thanks for listening to this conversation with Bret Weinstein, and thank you to The Jordan Harbinger Show, ExpressVPN, Magic Spoon, and Four Sigmatic. Check them out in the description to support this podcast. And now, let me leave you with some words from Charles Darwin: Ignorance more frequently begets confidence than does knowledge. It is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science.
Starting point is 03:22:48 Thank you.
