Moonshots with Peter Diamandis - Forget Everything You Believed About Computing w/ Gill Verdon | EP #102

Episode Date: May 23, 2024

In this episode, Peter and Gill discuss Extropic, the science behind his startup, and his vision for the future.  03:16 | The Power of Brain-Scale Processors 18:56 | The Debate Over Accelerationis...t Movement 28:46 | AI and the Kardashev Scale Gill Verdon, whose full name is Guillaume Verdon, is the founder of Extropic, a stealth AI startup, and a former quantum computing engineer at Google known for his work in artificial intelligence and quantum computing. He is a physicist, applied mathematician, and researcher in quantum machine learning, who has raised $14.1M for his startup, which enhances LLMs through thermodynamic computing. Verdon is also known for his online persona @BasedBeffJezos and his creation of effective accelerationism (e/acc), advocating for rapid technological progress as an ethically preferred path for human progress, emphasizing optimism and proactive efforts to shape a better future.​ Learn more about Extropic: https://www.extropic.ai/  Follow him on Twitter @GillVerd  @Extropic_AI  ____________ I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors:  Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/ AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter  _____________ Get my new Longevity Practices 2024 book: https://bit.ly/48Hv1j6  I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog _____________ Connect With Peter: Twitter Instagram Youtube Moonshots Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
There's a lot of dystopian thinking in the world right now. Either you embrace progress and abundance and growth, or... How tightly can we embed the machine learning algorithms into the physics of electrons? The tools that you're about to inherit are tools that can boost intellect, not just thousands, but billions of fold potentially. If people have a mindset that they want to embrace technological change, they want to figure out how to best augment themselves, augment their businesses with AI, they will have a place in the future. Welcome to Moonshots, Peter Diamandis here. I'm about to have a conversation with Guillaume Verdon. He's the founder of the accelerationist movement, effective accelerationism,
Starting point is 00:00:54 EAC. He's also the founder and CEO of Extropic AI. It's a new form of computation, thermodynamic computing, different from digital, different than quantum. He spent three years at Google working with Sergey Brin. He's a quantum physicist, a brilliant individual. We're going to go deep into why is it important to have an accelerationist abundance mindset and what is the future of AI computing when the cost, the thermodynamic efficiency is something that is hundreds of thousands of times cheaper and we make this ubiquitous throughout the universe.
His mission is to increase the amount of intelligence per watt and the amount of intelligence in the universe. All right, if you love conversations like this, please subscribe, please upvote this. Help me bring incredible moonshot engineers like Guillaume Verdon, who goes by Gill, to the Moonshots podcast. All right, let's jump in. Gill, welcome to Moonshots. It's a pleasure to have you here, buddy. Thanks for having me. Super excited to be here. Yeah, we've seen each other twice in the same quarter. First off on the stage of Abundance 360 this past March.
Starting point is 00:02:09 And then you joined us at the XPRIZE Deep Tech Quantum trip up in the Bay Area. And yeah, it was fun. That was a lot of fun. Yeah, met some brilliant folks there. Always amazed by the quality of the XPRIZE community. So super happy to be here. Thank you. So today we're going to talk about two of my favorite subjects. The first subject is the whole abundance acceleration movement. And just to give people an understanding of how fast things are changing and why that's a good thing, right? Because so many people are fearful of the speed of change
and want to put on the brakes. Mostly, I would almost say governments are like, if you don't understand it, the answer is stop, slow down. And so, yeah, why is that a bad idea? And can it actually be slowed down and stopped? The other is what you're building, which is a new class of computational hardware to enable this acceleration movement. So we're going to jump into both. If you don't mind, let's start with what is your moonshot? If you, you know, Guillaume Verdon, have a moonshot that you want to make happen in the next decade, what is it? What's the equivalent to Elon's going to Mars for you? That's a great question. I think having a processor that is brain scale in terms of numbers of parameters and model capability, but that is far more energy efficient. I think that's my... More energy efficient than the brain?
Yes. The brain's pretty damn energy efficient. That's right. And why is that? And that's what we're trying to reverse engineer. Interesting. So just to put a number on it for folks, you know, 100 billion neurons, 100 trillion synaptic connections running on what, how many watts do you...
Tens of watts. I think like 14 to 20 watts of energy. But the equivalent for a GPT-4 system would be what? Tens or hundreds of millions of times less efficient. Yeah. It's really... but you want to be more efficient than that. More efficient than the brain. That's what we're aiming for.
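To make that gap concrete, here is a rough back-of-envelope sketch in Python. The ~20 W brain figure comes from the conversation above; the GPU wattage and the number of GPUs serving a single large model are illustrative assumptions, not numbers anyone quoted.

```python
# Rough "intelligence per watt" comparison (illustrative assumptions only).
BRAIN_WATTS = 20              # human brain, roughly 14-20 W as mentioned above
GPU_WATTS = 700               # assumed draw of one modern datacenter GPU
GPUS_PER_MODEL_INSTANCE = 8   # assumed GPUs serving a single large-model instance

llm_watts = GPU_WATTS * GPUS_PER_MODEL_INSTANCE
print(f"One served model instance: ~{llm_watts:,} W vs brain: ~{BRAIN_WATTS} W")
print(f"Ratio: ~{llm_watts / BRAIN_WATTS:,.0f}x more power")
# Training-time energy is orders of magnitude larger again, which is where the
# "tens or hundreds of millions of times less efficient" framing comes from.
```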
Starting point is 00:04:21 Yeah. Amazing. You know, I, in listening to you, and I've studied your work, and I'm just fascinated by it, I wrote down that your mission would be, you know, the ultimate substrate for AI compute, to build the ultimate AI computing machine, and also to propagate as much intelligence per watt in the universe. Yes. Is that fair to say? What does that mean? Yeah, I would say, you know, there's kind of two dual missions that come together towards one greater mission. The broader mission is to scale intelligence throughout the universe,
Starting point is 00:05:05 scale the total amount of intelligence in our corner of the cosmos. But to do that, you have to increase how much energy we produce, right, going up what is called the Kardashev scale. And we'll talk about Kardashev, yes, for sure. It's a way to measure how much energy we produce and consume. And so that's kind of the core cause area of EAC is to argue for policies that will help us scale our energetic consumption and growth of civilization. And then with Extropic, our goal is to get more intelligence per watt. And that is ultimately a race to the bottom, right? It's how tightly can we embed the machine learning algorithms into the physics of
Starting point is 00:05:51 electrons. I love it. I love it. We're gonna get into all of that and I want to talk about the accelerationist movement and talk about extraopic hardware because if you love tech and you love the future, you're gonna love this conversation. All right, so let's begin with the accelerationist movement. What is it? You know, you wrote a manifesto. Where were you? Why did you write this? You know, it aligns 100% with my world vision, but talk about what is the EAC movement? What's that stand for? And how do you think about it? Yeah, I think, you know, the original manifesto I wrote around the same time as founding Extropic, I was going through a bit of a lull, just, you know, founding paperwork, right? You're waiting on some lawyers and so on. And I was like, okay, this is probably the last two weeks of vacation I have
Starting point is 00:06:49 for the rest of my life, once this gets going. And so let me try my hand at philosophy, right? Instead of just doing science and math and algorithms, which I did for more or less 10 years. Before that, what if I just like, try to apply my sort of physics mindset to understanding the rest of the world and write up, write something up pretty quickly. I've been having, I was having conversations late at night with other technologists sort of online. There's sort of communities where people have anonymous accounts, right?
Starting point is 00:07:20 Uh, cause they're employed by XYZ and they don't want their opinions to reflect that of their employer and they want to have the freedom to experiment with their thoughts, right? And have candid conversations of like, hey, where's civilization going? Where's society going? Where's this all going? You know, and we would have these conversations late at night and essentially we decided to write something up and, you know, I added my own twist to it and put out the manifesto. It went viral. At first it was kind of dismissed and then it just kept compounding, kept compounding. And now it's kind of truly like a counter narrative to the culture of sort of
Starting point is 00:08:02 AI doom and over regulation and safetyism that we see today. I mean, there's a lot of dystopian thinking in the world right now. There's a lot of fear. And I remind people that our default mindset is that of fear and scarcity. Fear and scarcity evolved in the savannas of Africa
100,000 years ago. And it saved our lives back then. And today, it doesn't contribute. It's not valuable in the world we live in. E slash ACC stands for EAC, or Effective Accelerationism. And your manifesto, if you were going to summarize it, would be what? The idea is to understand the process that got us here, the process of progress itself, of advancement of civilization, of inspiring ourselves from, well, trying to understand
it from a physical standpoint, right? Like there's been quite a bit of work in the physics of life, right? Like understanding how did life assemble and become so complex, right? So the science of complex self organizing systems that have energetic constraints, so out of equilibrium thermodynamics, and taking inspiration from those ideas and applying them at civilization scale to have a prediction of where are we going? How do we reach the better futures ahead? And how do we maintain a sort of robust advancement of progress towards this greater, grander future? And essentially, what I saw was that we need to maintain variance and dynamism as a core value for us to be malleable and adaptive to whatever challenges come our way, rather than trying to freeze everything, slow down and panic. Actually what we want is more malleability, more dynamism, more acceleration. So important.
Starting point is 00:10:01 So the malleability and adaptability, the agility, I would say, is if you don't have that, when you get hit by a disruptive force, it can destroy you, right? The analogy I use, and you're free to use it, is 66 million years ago when an asteroid struck the Earth, the dinosaurs that were slow and lumbering were not malleable. They were not adaptive and they died, but it was the furry mammals that flittered around, jiggled around, to use the three electron analogy, that ended up becoming dominant. And so that malleability and agility today comes from millions of entrepreneurs trying millions of ideas? I think like every potential parameter space of how we organize ourselves, what are our cultures,
Starting point is 00:10:54 how we do things, which technologies we're pursuing, where we live, how we live, every possible space should have some amount of variance to it and we should allow the freedom to explore and dynamically optimize ourselves. Because for example, you know, the dinosaur example, if you didn't have genetic variance at the time, if we were all, you know, big dinosaurs, we would have been wiped out, right? But because we had variance, we were robust to a change in the landscape, right? And so I would say right now, there is a sort of weird trend in the past, I don't know, decade or so with the arrival of the internet that there's a trend towards over-centralization and top-down control of culture amongst other things, right?
Starting point is 00:11:38 And now it's that sort of centralization around a monoculture is trying to also potentially control a core technology like AI, right? So if AI is in the hands of very few, it's not a lot of variance in how we do AI, right? There's going to be a couple prescriptions. Dominant models, dominant players. Exactly. And their tactics for how to, you know, align the AIs are going to be the only options we have. And to me, that doesn't seem robust because there's too much uncertainty right now. And in times of uncertainty, you want to have variance in how you do things, so you're hedging your bets.
Starting point is 00:12:16 Yeah, there's a great biological analogy. You really want genetic variance. So when a new thing enters the ecosystem, it might kill a large percentage, but a few of the variations will be resistant and will survive. So we're talking about human survival in one sense, here as an underlying optimization function. But we're also talking about sort of
Starting point is 00:12:39 how do we keep a varied culture, different subcultures will have different opinions about how to embrace technology or reject it. And really in the end, each subculture is going to be tested. You might have people that want to merge with the machine, people that don't want anything to do with it, some people that fully embrace it. And I think all paths will be explored.
But in the end, whoever, like the message of EAC is, you know, there's a tendency of the universe towards growth and it will adapt and reconfigure everything that is alive towards this growth. And whether you align yourself with this growth or not is your choice, but that choice ultimately has consequences as to whether or not you are influenced or you are part of those likely futures, right? And that comes from some very esoteric equations from thermodynamics, but essentially it's a message of, hey, either you embrace progress and abundance and growth, or, you know, you're probably going to miss the boat in a sense. In fact, at the Abundance Summit this year, Alexander Wissner-Gross was
there and was talking about AI coming on strong. It is going to, you know, reach whatever we call AGI and then digital superintelligence. There's no barrier that says AI only goes towards an IQ of humans. It blasts through and keeps on going, especially if the hardware you're building comes into existence, or when it comes into existence, I should say. And the question is, does humanity couple with it, or do we decouple? And that's really a lot of the interesting conversation. I'd like to talk about that. I'd also like to talk about what people's fears are in one second, because again, the coupling with AI
Starting point is 00:14:37 means that we get a chance to, to accelerate alongside it, enabled by it. It's almost like when life began and we had these prokaryotic life forms and they absorbed mitochondria and became a eukaryotic life form, was able then to utilize the free energy of oxygen for oxidation to grow more rapidly. So today we have our phones and our devices
Starting point is 00:15:04 are kind of like our mitochondria to some extent. Which I definitely want to implant in my head. They're intellectual powerhouses of our system, right? And in a sense we're already pretty augmented. We kind of feel naked and incomplete with our phones. And you know, over time we're going to have wearables that share our perceptions, share our experiences, have priors of our actions based on the state of the world and on previous history of what they've seen. And that's a sort of exogenous neural augmentations, neural augmentation of ourselves.
Starting point is 00:15:36 Of course, there's our friends at Neuralink that are working on the full merge with even higher bandwidth, but even without that sort of invasive approach to merging with the AI, I think we're already, most people are merging without realizing it. Sure. And so, you know, what does the human of the future look like? Well, I think it's one that really harnesses all these tools of cognitive leverage, right, to augment itself, right? And I think that the humans that maybe dismiss technology
Starting point is 00:16:09 or don't embrace it, those are the people that are gonna maybe be in trouble or relative disadvantage. And so I think so far the beauty is that, you know, we're both CEOs, right? We both have like standard issue iPhones or Androids, right? And it's the same as anyone else, right? Everybody has access to the same technology
and it's ubiquitously accessible and cheap and that's the beauty of capitalism, right? I think that hopefully AI and neural augmentations can be cheap enough and ubiquitous enough so that everybody can own their own AI, the AI that is an extension of themselves, and that they have control over it. Because I think if we only allow for AI augmentations that are controlled by central parties, we're kind of losing a sense of self there.
We're kind of delegating to this sort of... I've always imagined for the longest time an AI software shell. The closest thing is Jarvis from Iron Man, but an AI that is my... I used to call my AI Jamie, Joint Anthromechano Interface. That was my AI; it was able to interface with everything in the world. I could step into an F-35 fighter, not know how to fly it, but Jamie knows how to fly it. And I can say, move that image here or there or... The technology that you're enabling, and I don't want to go there yet, but because it has the ability to operate potentially at room temperature and with very low wattage, it also feels like a technology that could be incorporated into me a lot more than any of the other AI technology.
Do you imagine a future in which I have become a cyborg with those AI implants? Yeah, I would say, you know, at least one of my goals is not just, you know, to augment humans exogenously, but potentially integrate these devices into our bodies. Obviously, that's very moonshotty, you know, sort of thinking, but at the same time, within a certain thermal budget, right, which is a bottleneck today for implants in our brains, you know, our goal is to be, you know, the most performant neural information processor out there, right, and we're going to keep iterating to maintain our lead there.
Starting point is 00:18:37 And I love it. You're, again, we'll come back to this in detail, but the power and temperature and efficiency vision you have, because it's orders of magnitude different than what exists today, enables integration for the humans. That's a lot of fear. There's a lot of fear out there. And the fear dominates the conversation. I blame the Crisis News Network and the news for basically broadcasting every negative piece of information on the planet.
Have you been, as you have been leading this conversation on the accelerationist movement, and you've had folks like Marc Andreessen who's sort of come in as well into that conversation, what's been the feedback from society? Have you gotten a lot of pushback? Yeah, I mean it's very polarizing, right? Like there's some people that are, you know, positive sum abundance mindset. They're like, yes, we think technology will help us conquer our problems and help us tackle any issues. And there's the people that, you know, think technology maybe is net negative, or they've seen how it impacted their lives. And maybe they want less technology, right? It's kind of techno progressive versus techno regressive. It's kind of a new axis of polarization of opinions. And clearly from my experience online, it's been very polarizing. We've had quite a few fans. We have quite a few opponents. But I welcome discussions. The whole point is to have discussions about how fast we want to go. Right. But if it was only a one-sided discussion beforehand about slowing things down, centralizing, you know, let's regulate, regulate, then we were going to head towards that without any sort of opposition. Whereas now I feel like we kind of brought balance to this force of sort of novelty seeking, uh, you know, favoring entropy versus sort of, uh, you know, more order, more constraints. And there's kind of this thing that happens in complex systems where the optimum is at criticality, the balance between order and chaos, between energy minimization and entropy is where you want your complex system to be, because that's where it's most performant.
Starting point is 00:21:15 I don't think people realize there's no on-off switch on this technology. And I don't think there's a velocity switch either. I think it is... I often ask myself the question, if you'd gotten back in time and said to Einstein, listen, stop thinking about this, it's going to lead to the atomic bomb, whether or not he would have been able to stop thinking about it. And if he did, the next person would have taken over and moved it forward. So, you know, I tell people there's no slowing it down. And if you believe that, which I do, then the question is, what do you do? And I think it's
guiding it that is the only real option we have. Correct. And I would say that, you know, the market itself, right, is a very powerful aligning force. If you have a product that is not of positive utility to us, we don't buy it. Whatever company makes that product runs out of capital and the product dies off, right? There's a selective pressure on the space of products. And right now, because AI models are products, right, you have model-as-a-service companies like OpenAI, Anthropic, eventually xAI. There's competition for users and ultimately for capital to fuel the GPUs that keep these systems alive. This competition induces a certain selective pressure, and models that are not aligned, that don't do what you ask, that are hard to interpret, hard to read, actually don't do well in the market. And in a sense it's a much more careful sort of gentle guidance towards systems that are aligned, compared to sort of centralized regulation, like this is how much compute you're allowed to use for a model, nothing more, and so on. I think that's going to be net negative overall.
Starting point is 00:23:13 So in a sense, people have a vote in the system. They can vote with their dollars, they can vote with their usage, their API calls, which systems they like. And that's going to steer the market sort of evolutionarily in the space of potential neural nets towards more of models of that kind, right? But Gil, what happens when someone comes to you and say, listen, I get this, I like it, I love the abundance future, but let's be serious. This is super powerful technology, right?
Claude III is already at 101 IQ, it's more intelligent than humans, and we're giving infants nuclear weapons to play with, and these things are going to ultimately destroy society. And there's a small chance we survive, but this is way too powerful. And we need some level of control. We need some level of centralization just to make sure things don't go off the rails. How do you respond to that? I would say that I'm more wary of the dangers of centralization than giving everyone access to neural augmentation. I think like you said, I don't think there's going back, right? I don't think we can go back to not knowing about this technology.
Starting point is 00:24:40 There's too much upside on the table to creating it. It is an arms race like never before. Yeah. And so for me, it's like, how do we guide this acceleration towards the positive future? To me, I think if only a few people or parties have control over the only AIs that are legally allowed, we're going to have a lot of problems because that's going to create a sort of gradient of power, right?
But we have duopolies now within cellular phone networks, in cell phones and computers and so forth. How is this different from those duopolies? I think if we're gonna truly augment ourselves, our own intelligence, with AIs, I think in order to maintain the benefits of having individuality, right? Individuality, we celebrate individuality in our society, at least in the West, and it's our greatest strength because of this variance, everybody brings something different
to the table and we're searching over all sorts of spaces of science, art, culture, and so on, and we find new optima, something original that then gets spread throughout the network and is of massive benefit to everyone. And if we only have centralized models, right, a few models that are trained for everyone, they're amortized, so there's one model for everyone, we lose the benefits of having sort of individuality. And so to me, my quest, both with e/acc and Extropic, is for people to be able to own the compute that is an extension of their own cognition, and for them to have the right to run their own AIs and have the right to fine-tune models in a way that's private
Starting point is 00:26:23 So eight billion AIs basically. Or more. Or more. Yeah, or more. Yeah, I can have multiple. Maybe you have 20 agents that work for you, right? Yeah, Peter, three of 10 is going to hang out with you. Yeah.
Starting point is 00:26:35 I think that's the future. I think that people will see it's kind of like having employees. Management is prompt engineering to some extent. You can prompt different agents to some extent, you can prompt different agents to do tasks for you, maybe you fire them up and then boot them down when you're done. And I think that gives us a lot of intellectual and operational leverage. And I think people tend to think too much about economy as this zero
sum system, or like if AIs take some of the jobs, there will be less jobs for us, but we all know that's not how the world works, right? Like if we're able to do more at a certain cost, we're going to try to aim higher and there's plenty of room to grow out there. That's a really important point. And, you know, when I think about what do I worry about? I'm definitely the guy who says, you know, the glass is not half full, it's overflowing. But I do think about, if AIs are doing all of the work, and if I can write a book at the snap of a finger, and start a company at the snap of a finger, then what's challenging? And we humans need
Starting point is 00:27:47 challenges. There's a great paper I read recently. It's called Universe 25 and people want to Google it. And so the studies were done in the 60s in which a large open space was created for field mice. And it had all of the room and the nests and the food and they had no struggles at all. And the mice, you know, breeding pairs were put into this and they grew and they grew and they grew. And then after a couple of generations, it basically died off because there was no struggle in there. And, and so one of the things that I think about is that humanity is going to have to up-level our ambitions and our struggles.
Starting point is 00:28:36 And I'm excited about that, but people need to have the ability to have a massive transformative purpose, to have a moon shot. So let's talk about the Kardashev scale here. So a Russian cosmologist, astronomer comes up with this idea. So explain what it is. Yeah, it's a sort of milestone system to keep track of the progress of growth of civilization. Originally there was three types or three big milestones and then Carl Sagan found a way to interpolate between the milestones so we have a continuous scale there. But the original scale was in type one, the type one Kardashev scale civilization would produce as much energy as is incident onto Earth from the sun. Right. So if you take the sun, we occupy a certain amount of solid angle, a certain amount of the sun's rays hits us.
Starting point is 00:29:39 That's a certain amount of power. And by the way, the numbers that I remember, because I speak about this when talking about energy abundance, is that today I think there's 8,000 times more energy that hits the surface of the Earth than we consume as a species in a year. We're still really early. We're still below type 1. Yeah. Well below type one. Type two would be having the equivalent of the Dyson sphere. Yeah, Dyson sphere that captures everything being emitted by our sun and type three would be the entire galaxy, right? And so, I think in general, if you set... This guy was really ambitious back then. Yeah. He didn't start with like a type one as a fire place or a river.
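For readers who want the math behind this exchange: Carl Sagan's interpolation of the Kardashev scale is usually written K = (log10(P) − 6) / 10, with P the civilization's power use in watts. A minimal sketch follows; the present-day power figure and the solar-intercept figure are rough textbook values assumed here for illustration, not numbers checked by the guests.

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's continuous Kardashev scale: K = (log10(P) - 6) / 10.
    Type I ~ 1e16 W, Type II ~ 1e26 W, Type III ~ 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

HUMANITY_WATTS = 2e13       # rough present-day average human power use (assumed)
SUNLIGHT_ON_EARTH = 1.7e17  # rough solar power intercepted by Earth (assumed)

print(f"Humanity today:  K ~ {kardashev(HUMANITY_WATTS):.2f}")  # ~0.73, still "type zero"
print(f"Type I (Sagan):  K = {kardashev(1e16):.1f}")
print(f"Sunlight vs use: ~{SUNLIGHT_ON_EARTH / HUMANITY_WATTS:,.0f}x")  # same ballpark as the ~8,000x above
```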
Starting point is 00:30:27 So we're still type zero, right? Yeah. We're still at the beginning of this. But I think this is, you know, in our moonshots conversation, in our abundance mindset, when people think about, well, what am I going to do? You know, it's like, you've got to point your vision, you know, 90 degrees up and start talking about, you know, how do we achieve transcendence in our solar system and in our galaxy?
Yeah, I think a lot of our goals, people's goals, are too anthropocentric, or they want to do relative to others. And now that AI comes in and kind of breaks this sort of zero sum competition between humans, right, we got to set our sights on a non-anthropocentric goal, right? And we have a goal prescribed by the universe in a sense, which is to grow. Yes. Because any life form seeks free energy and looks to grow. And the point is that if we have AIs that help us and extend our intelligence, we should tackle harder things. And there's a near-infinite scale of harder things to tackle because unlocking that next scale of civilization, there's tons of challenges to
Starting point is 00:31:37 achieve that. And so that's the sort of mindset I bring to the table. And frankly, you know, my whole career was trying to tackle, you know, how to leverage AI to understand the physical world. I haven't been trying to let's say automate humans. I've been trying to engineer matter, understand chemistry at the base level, understand the physics of the world so that we better perceive, predict and control it, which is kind of a, you know, core based technology for us to unlock all sorts of other technologies. And when someone hears this conversation, they say, well, oh my god, I can't think about that. I don't know how to, I'm not Elon to build, you know, starships or Gill to build quantum computers and
Well, we'll see how it shakes out. I think that the current approaches where we train on human generated output, right, the internet is broadly, at least right now, I'm sure in a couple years won't be the case, but generated by humans. Yes. Right. And we're kind of distilling a mixture model of whole human intelligences, right? You could think of the LLMs as trying to distill a mixture across the outputs of all our brains, right? And so, at least to me, it seems like it would saturate to something nearing,
you know, typical human intelligence. Until? Well, until it's embodied and then can interact with the environment and get its own samples and query the environment in a way that, you know, isn't bottlenecked by what was generated previously by a human. One of the conversations we had on the Abundance stage was the excitement about AI helping us decipher and deeply understand physics and math and biology and chemistry in ways that we can't fathom right now. I mean, you do believe that, don't you? Yeah, I do. I do think it's possible. I do think it's much harder than distilling human intelligence. I think understanding biology and chemistry is gonna take orders of magnitude more computation. And so the computers we're building, yes, they'll be able to run models that are anthropomorphic, right? Like LLMs and so on, trained on human data. But ultimately it's machines that are going to help us grok the physical world. And there's this beautiful theory by Stephen Wolfram on complex and self-organizing systems. And his prediction is that certain systems in nature are irreducible.
Starting point is 00:34:23 You can't get a TLDR, you can't compress, right? The gist of it to something very simple. You actually don't have a choice, but to go through the highly complex computation to predict what's gonna emerge as a behavior at a different scale. And so nature is very hard to predict at all scales. And I don't actually believe that, you know, there will be one God AI model that will emerge
Starting point is 00:34:50 overnight, immediately understand all of physics and, you know, create nanobots that eat the earth, which some people believe in, but at least fundamentally, you know, from my own experience studying complexity in quantum systems and quantum machine learning, and then from Wolfram's theory, it seems that there's a fundamental complexity in nature where we're going to have to scale our computation of intelligence proportionately to the complexity of the systems we're trying to understand. And there won't be an overnight runaway intelligence explosion like that. It's going to be consistent exponential progress.
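As a toy illustration of the computational irreducibility being referenced here, below is a minimal elementary cellular automaton (Wolfram's rule 30) in Python. The point is only that a very simple local rule produces behavior with no known shortcut: to know row N, you generally have to compute every row before it. The code is my own sketch, not something discussed in the episode.

```python
# Rule 30 elementary cellular automaton: simple local rule, complex global behavior.
RULE = 30
WIDTH, STEPS = 63, 24

row = [0] * WIDTH
row[WIDTH // 2] = 1  # a single "on" cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    # Each cell's next state depends only on itself and its two neighbors,
    # yet in general you must run every step to learn what emerges.
    row = [(RULE >> (4 * row[(i - 1) % WIDTH] + 2 * row[i] + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```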
And to me, there's much higher likelihood that we shoot ourselves in the foot and stop this beautiful process of exponential progress than there is, you know, us, you know, giving the keys to singularities. Thus, the boomer versus doomer point of view. Yeah, that's right. I mean, because the majority of people feel just the opposite, that this is uncontrollable, this is a reaction that has no bounds and therefore is dangerous.
And I hear you saying this is a precious flame that we have to be careful we don't blow out. Yes. It's a different narrative than what you typically hear. People have sci-fi based priors, right? Like what are their priors on what the future holds? They've seen a lot of sci-fi movies. I blame Hollywood, I really do. Yeah. I mean, the only film that I think has done a good job here is Her, you know, where the AI gets bored and leaves. Yeah. That's actually accurate. But you know, in my case, I think
once we've created AIs that are, you know, of similar intellect to us, we're gonna learn how to interact with them. We're gonna learn how to employ them in a positive fashion. There's going to be a big adjustment there. But once we do so, we're going to have way more challenges for us to scale civilization to the stars. And there's plenty of challenges left. So there will always be more work to do. Right. And we're going to put our most capable systems on the most complex and difficult tasks, and there's still gonna be work left of all kinds.
Starting point is 00:37:10 And so I think if people have a mindset that they wanna embrace technological change, they wanna figure out how to best augment themselves, augment their businesses with AI, like they will have a place in the future. Those that wanna stay away from it, then, you know, I mean, they could go back to the cabin. We have the Amish example.
Starting point is 00:37:32 Yeah, yeah, the Luddite stuff. And the challenge becomes, you know, people who feel this way. I'm like, okay, just for just a week, go without your phone, your TV, you know, your car, all the technology. Don't buy food in the supermarket. Go find a cow and milk it yourself. Go plant your... And the reality is, what makes the leveling of the playing field for humanity is the poorest
Starting point is 00:38:00 and the wealthiest all have 24 hours in a day, 7 days in a week, 365 in a year. It's how you use your time that differentiates you. So my ability to have ultimately hundreds of extraordinary agents that can do the things that I desire and bring back the answers for me is a massive force multiplier for productivity. Yes. I think people should think more like that, like a capital allocator, like a manager, as the way we're gonna merge with AIs,
Starting point is 00:38:31 and it's gonna allow them to do much more. I think more people should be entrepreneurial, more people should think, hey, what opportunities do I see in the world now that we have these AIs that are on the verge of human-like intelligence, what should I aim to build? How will I direct capital to unlock more value?
Starting point is 00:38:52 And if everybody shifts to that sort of mindset, I think the fears about what are we going to do in the future will fade. Everybody, I want to take a short break from our episode to talk about a company that's very important to me and could actually save your life or the life of someone that you love. The company is called Fountain Life and it's a company I started years ago with Tony Robbins and a group of very talented physicians. Most of us don't actually know what's going on inside our body.
Starting point is 00:39:21 We're all optimists. Until that day where you have a pain in your side, you go to the physician in the emergency room and they say, listen, I'm sorry to tell you this, but you have this stage three or four going on. And you know, it didn't start that morning. It probably was a problem that's been going on for some time. But because we never look, we don't find out. So, what we built at Fountain Life was the world's most advanced diagnostic centers. We have four across the US today and we're building 20 around the world. These centers give you a full body MRI, a brain, a brain vasculature, an AI enabled coronary
Starting point is 00:39:59 CT looking for soft plaque, a DEXA scan, a grail blood cancer test, a full executive blood workup. It's the most advanced workup you'll ever receive. 150 gigabytes of data that then go to our AIs and our physicians to find any disease at the very beginning when it's solvable. You're going to find out eventually. Might as well find out when you can take action. Fountain Life also has an
Starting point is 00:40:25 entire side of therapeutics. We look around the world for the most advanced therapeutics that can add 10, 20 healthy years to your life and we provide them to you at our centers. So if this is of interest to you, please go and check it out. Go to fountainlife.com backslash Peter. When Tony and I wrote our New York Times bestseller Life Force, we had 30,000 people reached out to us for Fountain Life memberships. If you go to fountainlife.com backslash Peter, we'll put you to the top of the list. Really, it's something that is, for me, one of the most important things I offer my entire family, the CEOs of my companies, my friends, it's a chance to really add decades onto our healthy lifespans.
So a lot of folks listening are entrepreneurs and they're looking at moonshots. They're looking to do something big and bold and significant. Like I jokingly say, not another photo sharing app, you know. What's your advice for entrepreneurs today looking over the decade ahead? Yeah, I would say, you know, everything that's like white collar work or software, you know, doesn't necessarily have,
everything that has preexisting abundant data sets might not have too much of a moat, right? If intelligence becomes more abundant and if the current systems we have don't necessarily generalize too well, but they're really good at interpolating across data points that are pre-existent, maybe stay away from things that are typical, right? And go towards the atypical, do something that's never been done. Create unique datasets.
Starting point is 00:42:20 Yes. Right. So something that's like surprising, contrarian, and so on, you know, yes, that gets people to judge you that like, okay, this sounds like a crazy idea. But actually, everything that is typical is going to have plenty of AI's that can do those tasks, right. And so to me, I think we're seeing a sort of deep tech renaissance. And even I think this narrative is floating amongst the venture community that actually deep tech, you know, the world of atoms is where the hard problems are and where AI won't be able to follow you yet. Right.
Starting point is 00:42:52 And I think there's a fundamental reason why that is. I think the physical world is really hard. And even even with all the help of white collar AIs that we can muster, there's going to be a bottleneck to creating things in the world of atoms. So build companies that are doing hard things in the world of atoms, and you will do very well in the future.
That is my advice. I wanna dive into your startup. Again, there's a lot of folks who are super excited and all they think about right now is how can I get access to H100 networks and how do I start coding for this, and if you're able to pull off what you're building right now, you will disrupt that capability. I can't put orders of magnitude on it, but massively. Yeah. But let's go back to your history that led you here. You were at University of Waterloo? Yeah. Studying quantum physics? Quantum, yeah,
quantum gravity, quantum information during my masters. And then over time, I realized that actually, to better understand the physics of the world, it wasn't going to be a couple of mathematicians in a room on a blackboard to solve the theory of everything, but it's probably going to be some form of computation and AI that would solve it. And so to me, uh, that journey led me to be a pioneer of a field called quantum deep learning, um, wrote some of the first algorithms in the space.
Starting point is 00:44:28 You got recruited out of school, didn't you? Yeah, yeah, that's right. Basically, first year of PhD, I met Hartmut Neven, who now leads the Google quantum AI lab. And essentially, you know, we were on the same wavelength, right? What did he say to you in his German accent? I won't imitate his accent, but, you know, I think we met at NASA. I gave, I gave a first talk after writing a very large paper on how to do deep
learning on quantum computers. And he was like, come give a talk in Venice Beach, not too far from here, and talk to our scientists. And so we brought my co-author, Michael, at the time. We did. And they asked us, hey, try to build a prototype for what a TensorFlow for quantum computing
Starting point is 00:45:15 would look like. TensorFlow is Google's core machine learning framework, or at least used to be. And we hacked it together. They liked it and basically onboarded the whole team. They gave you an offer you could not refuse. Yeah, yeah, exactly. And frankly, you know, Waterloo is great, it's a great school, but you know, it is in
the middle of nowhere in Canada, and to me, to move to California seemed like the right, you know, opportunity I should take. And you know, just went for it and haven't looked back really. And so it's been a ride. And then you actually spent some time working closely with Sergey Brin. Yeah. So after we built TensorFlow Quantum, there's a team that was forming around Sergey working on quantum technologies and AI, and physics and AI more broadly, and to
me, I was sort of getting a bit impatient with the timelines, with the quantum computing stack, and I saw that there were opportunities in quantum communications and sensing that were maybe shorter term, and so I wanted to try my hand at that. And to me it was a completion of sort of the vision of understanding the world at a quantum mechanical level. Because even if you have the algorithms running on quantum computers that can understand quantum data and learn AI representations of them, how do you acquire quantum data and how do you transmit it? And so that's what I worked on. So I worked on quantum analog digital conversion, the US quantum internet. And so
Starting point is 00:46:43 to me was completing the stack for us to be able to perceive and predict and eventually control our world at a quantum mechanical level, which to me is kind of a very deep node in the tech tree. Let's say it's a civilizational technology that's really important. But during that time in quantum computing,
I realized that actually there is gonna be different nodes of our tech tree that need development imminently, that use a different kind of physics that's not quantum mechanical physics. And that would be much more useful for generative AI, as I was seeing sort of generative AI workloads eat more and more of the compute internally at Google. So we've got classical digital computers right now, the classical CPU from Intel and such. And then Nvidia, I think very luckily fell upon the opportunity with GPUs. Most people hopefully know GPUs, graphical processor units, were originally created for
video games. For graphics, yeah. For graphics. And then they just happened to get a market in Bitcoin mining. Yeah, right. And then all of a sudden here comes the whole generative AI world, and Nvidia becomes a two trillion dollar company. Yeah, like that's a lot of good luck. Yeah, that's a lot of good luck. Yeah, turns out matrix multiplication, which GPUs excel at, is very useful for all sorts of different applications, including AI. But as you mentioned, GPUs weren't designed from the ground up from first principles to be AI processors, right? It's kind of a co-evolution between the hardware and the algorithms, right? The algorithms that ran on GPUs, like modern deep learning, tended to do well because GPUs already existed, and then both kind of fed off each other, right? So we're trying to create an evolutionary fork in the space of hardware. It's gonna engender evolutionary forks in the space of algorithms, and they're gonna co-evolve. That's why we're a full stack company, and we co-design the algorithms.
Starting point is 00:48:33 That's why we're a full stack company, and we co-design the algorithms. So let me read something here that sort of describes what you're doing. Sure. And see how this hits. We're building the ultimate substrate for AI compute, looking to hit the limits of physics
Starting point is 00:48:54 in terms of energy efficiency and speed of AI, embedding AI algorithms into the physics of electrons dancing around. And we're doing this by building a full stack of hardware and software reinventing at first principles how to create generative AI. You call it thermodynamic computing, probabilistic computing,
and this is a third branch of computing, isn't it? Yeah, it seems like it, because today we have the deterministic computers, right? Your transistors are definitely on or definitely off. One or zero. One or zero. It's one or the other, right? And you definitely know which one it is, right? And then you have a quantum computer which has superpositions of one and zero. It's zero plus one, zero minus one. And everything in between. Everything in between. Complex numbers. You can make those interfere
Starting point is 00:49:44 with one another. But actually having a computer that you're unsure of the state of the computer, and it's probabilistic, it's zero or one or something in between, but you're not sure exactly the state of the computer. It's actually much more energy efficient because knowledge costs energy. There's this old tale of Maxwell's demon. I don't know if you're familiar with it. Bringing back faint memories. Go ahead.
Starting point is 00:50:10 Yeah, so you know Maxwell's demon tells us that actually it's a thought experiment that examines the energetic cost of knowledge, right? And I guess I could go into it. But yeah, Maxwell's demon essentially, you can imagine having a box with a partition in the middle, right? And you have one side of the box has a bunch of red particles and the right side of the box has a bunch of blue particles.
Starting point is 00:50:40 And you have a trap door in the middle with a little demon, right? And if you keep the trap door in the middle with a little demon, right? And, you know, if you keep the trap door open, you wait a long time, you know, the balls cross, and on average you get a mixed thing where you have red and blue balls in both partitions. Now, where the demon comes in would be, you can actually reverse this process of going to a higher entropy state, right? Which would violate the laws of thermodynamics by having the demon look.
If a ball of a certain color comes in, it opens the door; if a ball of the other color comes in from the other side, it opens the door, and it can filter and again separate things out into the red and blue partitions, which seems to violate the second law of thermodynamics, which states that entropy always increases. So therefore there must be a cost to that observation. There's an energetic cost, exactly. And so what we see is that a lot of the cost of running a computer comes in when you're trying to maintain its determinism. Right? And it turns out that you don't need to always be maintaining determinism
Starting point is 00:51:40 when you're running AI algorithms, because AI algorithms are natively probabilistic. And so having AI run on a digital deterministic program, that is emulating probabilistic programs, yeah, super inefficient, so why not run probabilistic programs on a probabilistic computer? And so just from that, we have just an efficiency and a tightness
But not only that, we're actually catering to what would otherwise be the imminent problems of transistor-based computing, right? Because as you scale down transistors and you try to make them more energy efficient, unfortunately, the fact that transistors are made of matter and they're jiggling causes your transistors to misfire. And get hot.
Starting point is 00:52:31 Right? They get hot and sometimes they say things they don't want to say, you know, but like they're misfiring and they become effectively stochastic and instead of trying to filter that noise and filter that stochasticity and make it deterministic again, right, through error correction, similar to how we have to filter out the stochasticity in quantum computing with quantum error correction, instead we embrace the noise and use it as part of the algorithm. And it's part of our models of the hardware, and it's part of the algorithm. And so it's a very different way of thinking. It's very challenging because we have to rebuild the whole stack to go with it.
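Two quick sketches may help make this concrete. The first computes the standard Landauer bound, the textbook version of "knowledge costs energy" behind the Maxwell's demon story above; the second is a tiny software analogue of "the noise is the algorithm": Gibbs sampling of a three-unit energy-based model, where randomness does the computational work. Both are my own illustrations with made-up parameters; they say nothing about Extropic's actual hardware or software.

```python
import math
import random

# 1) Landauer's bound: erasing (deterministically resetting) one bit costs at
#    least k_B * T * ln(2) of energy -- the price of maintaining determinism.
K_B = 1.380649e-23                     # Boltzmann constant, J/K
T = 300                                # room temperature, K
print(f"Minimum energy to erase one bit at 300 K: {K_B * T * math.log(2):.2e} J")

# 2) A probabilistic program run probabilistically: Gibbs sampling of a tiny
#    Ising-style model. The couplings below are arbitrary, for illustration only.
J = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.3}    # pairwise couplings between 3 units
state = [random.choice([-1, 1]) for _ in range(3)]
beta = 1.0                                      # inverse temperature

def local_field(i):
    """Sum of coupling * neighbor-state acting on unit i."""
    return sum(w * state[b if a == i else a]
               for (a, b), w in J.items() if i in (a, b))

for _ in range(10_000):                         # let the "thermal noise" explore states
    i = random.randrange(3)
    p_up = 1.0 / (1.0 + math.exp(-2 * beta * local_field(i)))
    state[i] = 1 if random.random() < p_up else -1

print("One sample from the model's Boltzmann distribution:", state)
```

The digital version above has to generate pseudo-random numbers and evaluate exponentials explicitly; the pitch described in the conversation is that in thermodynamic hardware the physical fluctuations of the electrons supply that randomness natively.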
Starting point is 00:53:14 It's very ambitious, but to us, it's a necessary and clear from first principles evolutionary step in computing. And it's one that we think is inevitable. We're just the ones that are boldly going for it right now. Yeah. And you described thermodynamic computing as being particularly valuable at describing chemistry and biology because we're talking about thermodynamic systems there. Yeah, so, you know, chemistry has some quantum effects in there mixed in, but biology, proteins, molecular dynamics at the mesoscales, right? They're bouncing. Yeah, they're bouncing around, they're jittering around, and that's very tough to simulate,
right? Because we have to embed that stochastic process into digital computers, right? And it's a process where these fluctuations happen on very small time scales. So you can't just fast forward the movie very efficiently. And so for us, we're looking to embed the physics of protein folding, eventually, not initially, but protein folding, which happens on a certain time scale, because you have big proteins and they're jittering about, into the jitters of electrons that we control, how they dance.
Starting point is 00:54:30 And electrons, because they're much lighter, they jitter much faster. So you get a speed up by embedding the dynamics of proteins into the dynamics of electrons. And that's it. There's no, you know, it's just a, it's very analogous, the physics of the mesoscales of matter to the native physics of the hardware. And so we get a speed up there,
Starting point is 00:54:52 but we're gonna work on quantifying exactly what sort of speed up we expect. Right now it's an intuition similar to Feynman's intuition about why would you build a quantum computer? Well, there are quantum mechanical systems you'd want to simulate, and it turns out that, yes, in fact, you get a significant speed up running quantum simulations on quantum computers versus classical, right?
So... So, you had this insight while you were at Google, and this was what liberated you, or...? Or actually, before I started my career in quantum machine learning, I wrote down the equations for our very first chip while I was at Waterloo. And I thought this idea was crazy. It was too original, you know. I was straight out of my masters in theoretical physics, I had just learned machine learning. You wrote it down in one of these notebooks over here. Yeah, just a notebook.
Starting point is 00:55:43 Yeah, exactly. Crazy ideas. Yeah, there you go. And I thought the idea was too crazy. You know, I was still, I was wrapping up my masters at the time and I wanted to understand and go through the exercise of inventing a bunch of algorithms and physics based AI more generally
before I went for it, right? So I built up, you know, years of credibility, shipping a lot of papers and products and going to Google and so on, to have the experience to do this moonshot. At the end of the day, you got to find what is your best idea? What is the most impactful idea that you can work on for the rest of your life? What are you, like, willing to die for? This is what we call a massive transformative purpose, and your moonshot. Right?
Yes. It's like, and you know, I wrote down, uh, again, what I hear as your MTP, which is maximizing the amount of intelligence in the universe and, along those lines, intelligence per watt, like maximizing that. And if you can do that, it's up-leveling, it's up-leveling everything. Yeah. It's the highest. It's a very, it's a point of very high leverage, right? And obviously, not everyone has to create
Starting point is 00:56:56 technologies that are as impactful as that. But at the same time, if everybody thinks about which technology they can build, which technologies do they have a unique skill set for that they can, that they think would truly impact the world in a massively positive way. If everybody goes and does their moonshot they're thinking about, I think the world would be a much better place, right? And that's kind of the ethos of the acceleration community. I know it's the ethos of your community, which really, you know, was been around for much longer than we have.
Starting point is 00:57:29 And I think a lot of people sometimes just need the push to hear that it's okay to be ambitious. It's okay to go for it. It's okay to take risks. You know, you'll have a supportive community, go for it. It's actually like, if you achieve what you're looking to achieve, we're all gonna benefit. So we're gonna support you. And I think having a supportive community is so important.
Starting point is 00:57:52 This very simple concept of maximizing intelligence per watt, and in the universe, seems fundamental. Yeah. It seems like what life would always trend towards. Yes. And given the fact that we humans are only some four and a half billion years into a 13.8-billion-year-old universe as we know it, I'm wondering where all the intelligence is. You know, I don't want to get into Fermi's paradox of why aliens aren't here, but I have to imagine there is a maximization principle for intelligence in the universe.
Starting point is 00:58:41 And I always thought about intelligence being the countervailing force to entropy: as entropy increases, the countervailing force would be intelligence increasing as well. Does that make sense? Yeah, actually, you know, there are theories. For example, Carl Friston, someone you should talk to, an absolutely brilliant scientist, his theory of intelligence is that we seek to minimize surprise, right? And entropy is expected surprise. Sure.
Starting point is 00:59:10 Right? And so intelligence is trying to model the world so as to minimize surprise. And it turns out that for biological systems, if you can predict your environment really well, you're in a position to extract more free energy and consume it in a clever fashion, and that's thermodynamically optimal.
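For reference, the quantities being invoked here have standard information-theoretic forms; this is the textbook reading of "entropy is expected surprise" and of the free-energy framing, not anything specific to Extropic:

```latex
\[
\text{surprise}(x) = -\log p(x),
\qquad
H[p] = \mathbb{E}_{x \sim p}\!\big[-\log p(x)\big]
\]
\[
F[q] \;=\; \mathbb{E}_{q(z)}\!\big[-\log p(x,z)\big] - H[q] \;\ge\; -\log p(x)
\]
```

Minimizing the variational free energy F over beliefs q pushes down an upper bound on surprise, which is the sense in which a good world model "minimizes surprise."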
Starting point is 00:59:29 So now I guess my theory, which I've only loosely had time to put on paper through the manifesto and some tweets, but something I'd like to formalize in the coming years on the side, is how intelligence potentially evolved as a byproduct of this concept of thermodynamic dissipative adaptation, which is a concept from a professor named Jeremy England from MIT. It posits that systems that are complex and subject to the laws of thermodynamics self-organize in order to
Starting point is 01:00:08 dissipate more heat over a long time scale, not instantly, not overnight. To acquire more energy and then dissipate it? Yes, essentially. More precisely, the theory says that for trajectories of states over time, the ratio of likelihoods of two different histories, two different trajectories, scales exponentially with how much free energy was dissipated. So paths toward the future where we've dissipated more heat are exponentially more likely. And so there's actually this probabilistic bias of the universe towards growth. Right. And that principle, at least according to some theories, is what led to life self-organizing.
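Written schematically, the dissipative-adaptation relation being paraphrased compares two coarse-grained histories by the heat each dumps into its surroundings; this is a loose form that omits the time-reversal and internal-entropy terms of the full statement:

```latex
\[
\frac{P(\text{history } A)}{P(\text{history } B)}
\;\sim\;
\exp\!\left(\frac{Q_A - Q_B}{k_B T}\right)
\]
```

Here Q_A and Q_B are the amounts of heat dissipated to a bath at temperature T along each history; histories that dissipate more are exponentially favored, which is the "probabilistic bias towards growth" described above.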
Starting point is 01:00:52 That's what created the complex systems that we are today. And to me, that is the process that led to the creation of all the wonderful things we see today, and it's almost sacred in my book. We should keep that process going. We don't know what upside we're leaving on the table if we were to stop it, or decelerate and obliterate ourselves. Is there any way to know, given this basic theorem, whether a tendency towards a recurrent simulation would be the end result? Recurrent simulation? Well, meaning, you know, we are a simulation,
Starting point is 01:01:30 an nth-generation simulation that keeps on re-instantiating. Yeah. I mean, I've studied simulation quite a bit, right? My job was to figure out how to simulate. I've simulated early universes for fun on quantum computers as a hobby and sold them as art in the past, right? But there's a certain density to information, if you will. And my work on the quantum internet was actually studying
Starting point is 01:02:00 how densely you could pack quantum information into various substrates. And in a sense, this assumption that you can have a simulation within a simulation within a simulation really doesn't hold, because at the end there's a base reality, and in that embedding universe you have a maximal information density for your computer, if the laws of physics in that universe are anywhere close to ours. And so the assumption that we live in a simulation, built on this assumption that you can embed a simulation within a simulation, only goes so far. It doesn't really hold. So to me, I don't think we live in a simulation, and I'm happy to chat with anybody who thinks we do
Starting point is 01:02:42 and show them the mathematics of how hard it is. One thing we can test is whether we live in a quantum simulation or not, right? Because I think Google's quantum computers, amongst others, are reaching the number of qubits where there wouldn't be enough atoms in the observable universe to emulate the quantum computer with a classical computer. And so we can at least rule that out.
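The qubit-counting argument can be sanity-checked with back-of-the-envelope arithmetic: a brute-force classical state-vector simulation stores 2^n complex amplitudes for n qubits, while the observable universe is commonly estimated to hold on the order of 10^80 atoms. A quick sketch, assuming 16 bytes per amplitude purely for illustration; this only bounds the naive approach, since specialized classical algorithms can do much better for particular circuits:

```python
import math

ATOMS_IN_OBSERVABLE_UNIVERSE = 1e80   # common order-of-magnitude estimate
BYTES_PER_AMPLITUDE = 16              # one double-precision complex number

def n_amplitudes(n_qubits: int) -> float:
    # A brute-force state-vector simulation stores 2**n complex amplitudes.
    return 2.0 ** n_qubits

for n in (50, 100, 266, 300):
    amps = n_amplitudes(n)
    print(f"{n:>3} qubits: ~10^{math.log10(amps):.0f} amplitudes, "
          f"~10^{math.log10(amps * BYTES_PER_AMPLITUDE):.0f} bytes")

# Around n ~ 266, the amplitude count alone passes 10^80, i.e. more numbers
# than there are atoms in the observable universe available to store them.
```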
Starting point is 01:03:09 And if one of the quantum computers shows that, I'll be very satisfied. I think that's a very interesting thing to answer. So you have this idea of a thermodynamic computer. When did you pick it up again? I picked it up, you know, I left my career in quantum computing and quantum machine learning, took a couple of months to gather my thoughts, and I went for it basically in summer 2022, and just got going and got the company going, and now it's been going for almost two years.
Starting point is 01:03:45 You call it Extropic AI, and I've seen some great video of your fab and your cryo hangout spots. So your goal here is not something that operates at cryogenic temperatures; it's something that operates at room temperature. That's right. And is built in a silicon fab. Yeah. So, you know, when you prototype things, you start with the biggest, most macroscopic prototype you can, because it's simpler, it's easier to get going, and you can probe it and understand it better. In our case, we couldn't do a breadboard prototype, you know, like you would do with other electrical circuits, because we want to operate it in the regime of ultra-low power and the right ratio of power to noise, where we have the right properties of the electron
Starting point is 01:04:35 physics. So for us, the first prototype is the most macroscopic we'll make of a thermodynamic computer, but we had to supercool it, because that's how the physics works out. Essentially, we did a superconducting prototype. It's our first prototype, but our next chips are going to be in silicon, and we're really excited about that. You talk about embedding physics and embedding the algorithms. Yeah. Explain what that means. I mean, my whole career was figuring that out: how to embed quantum physics into a quantum mechanical computer, how to embed AI algorithms
Starting point is 01:05:12 into quantum mechanical physics, right? And so for us, it's a very similar sort of mindset. There are ways to compile things into primitives that then get compiled to the primitives of the hardware physics, right? In quantum computing, they're called quantum gates, right? And we have something similar in terms of frameworks that we have internally, and we're looking forward to putting out a lot of the details on how to compile any sort of algorithm to a thermodynamic computer.
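For a sense of what "compiling to the primitives of the hardware physics" can look like in general, a common pattern in physics-based computing is to lower a problem to an energy function over binary variables, whose couplings and biases are the kind of knobs a physical sampler exposes. The sketch below uses the textbook MAX-CUT-to-Ising mapping; it is a generic illustration, not Extropic's actual toolchain or primitive set.

```python
import numpy as np

# Generic illustration of lowering a problem onto physical primitives: many
# problems reduce to an energy function over binary spins (an Ising model),
# whose couplings are the knobs a physics-based sampler exposes.
# This is a textbook construction, not Extropic's compilation framework.

def max_cut_to_ising(edges, n):
    # MAX-CUT on a graph -> symmetric Ising coupling matrix J (no bias terms).
    J = np.zeros((n, n))
    for i, j in edges:
        J[i, j] = J[j, i] = 1.0   # antiferromagnetic coupling rewards cutting the edge
    return J

def ising_energy(spins, J):
    # E(s) = sum_{i<j} J_ij * s_i * s_j with s_i in {-1, +1}; lower energy = larger cut.
    return 0.5 * spins @ J @ spins

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
J = max_cut_to_ising(edges, n=4)

# Brute force over the 2^4 spin assignments of this tiny instance: the
# minimum-energy configuration corresponds to the best cut of the graph.
best_energy, best_spins = min(
    (ising_energy(np.array(s), J), s)
    for s in ((a, b, c, d) for a in (-1, 1) for b in (-1, 1)
                           for c in (-1, 1) for d in (-1, 1))
)
print("lowest energy:", best_energy, "spin assignment:", best_spins)
```

A physical sampler that natively relaxes toward low-energy spin configurations would explore this landscape directly, rather than a CPU enumerating it.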
Starting point is 01:05:51 And frankly, part of my goal is to partially open-source the concept of a thermodynamic computer so a broader community can join us on this quest. You know, we're one effort for this, but it's too important a technology to keep on a shelf for a few more years. We don't have time. I mean, the potential for this... can you describe the potential for this in the generative AI world? Because today, if we're projecting the growth of generative AI, are we running out of chips or energy first?
Starting point is 01:06:19 A bit of both, right? I think energy is gonna be a big bottleneck, because chips are reacting to the market right now. There might be an overproduction of chips. I doubt it, but it's going to be up there. What's the state of the best chip today? Is it the H200? What's out there? The best chips are manufactured in Taiwan by TSMC with machines from ASML, which is a supplier of lithography machines. And those are exquisite machines. But, you know, our goal is to have a different way to embed the problem into
Starting point is 01:06:54 the devices, and for us to not depend on the same processes that everyone else depends on, which we feel is an important hedge, because Taiwan is a very sensitive area and the world's supply chain depends on it. And so if we could, you know, forgo our reliance on Taiwan to make the most cutting-edge chips for AI, that would, I think, put everyone at ease and be a net benefit. Did you see the movie Oppenheimer? If you did, did you know that besides building the atomic bomb at Los Alamos National Labs, they spent billions on bio-defense weapons, the ability to accurately detect viruses and microbes by reading their RNA? Well, a company called Viome exclusively licensed the technology from Los Alamos Labs to build a platform that can measure your microbiome
Starting point is 01:07:46 and the RNA in your blood. Now, Viome has a product that I've personally used for years called Full Body Intelligence, which collects a few drops of your blood, spit, and stool and can tell you so much about your health. They've tested over 700,000 individuals and used their AI models to deliver members' critical health guidance, like what foods you should eat, what foods you shouldn't eat, as well as your supplements and probiotics, your biological age, and other deep health insights. And the results of the recommendations are nothing short of
Starting point is 01:08:16 stellar. As reported in the American Journal of Lifestyle Medicine, after just six months of following Viome's recommendations, members reported the following: a 36% reduction in depression, a 40% reduction in anxiety, a 30% reduction in diabetes, and a 48% reduction in IBS. Listen, I've been using Viome for three years. I know that my oral and gut health is one of my highest priorities. Best of all, Viome is affordable, which is part of my mission to democratize health. If you want to join me on this journey, go to Viome.com.
Starting point is 01:08:52 I've asked Naveen Jain, a friend of mine who's the founder and CEO of Viome, to give my listeners a special discount. You'll find it at Viome.com. So the Extropic chips, in success, and I'm curious what your timeframe is, would be how people stand up their generative AI large language models everywhere. Yeah. So at first it's going to be application-specific devices. It's going to be smallish devices with not that many neurons, right? But they're gonna be very fast and very energy efficient. There's all sorts of applications for that at the edge.
Starting point is 01:09:33 But over time, yes, as the chips grow and more and more of the program becomes part of the physics of the chip, or they become thermodynamic programs, then eventually we could run the whole program on the chip. Because for us, most of the energy and time is actually consumed by the computers that have to interface with the chip, not by the chip itself, which is really weird to think about. But yes, over time, we want to tackle the broader generative AI market.
Starting point is 01:10:01 It may seem far off at the moment, because we're just building a couple of building blocks. But given that we're using a lot of the existing supply chains for semiconductors, and that semiconductors have a very mature set of tools, you know, semiconductors are the things we can manufacture at scale the most reliably, right? And so if we use a lot of the know-how there, we can actually scale this technology much faster than previous attempts at novel computing technology. Can you give folks listening an understanding of the potential efficiencies in terms of power, speed, cost, these things? How do you think about this? Because it's staggering. It's, you know, at a fundamental level, like the primitive itself of, you know,
Starting point is 01:10:52 simulating the physics of this device. If you were to just sample it, right, if you somehow embed your algorithm directly in the physics of the device. For example, a Monte Carlo algorithm fits very nicely into the physics of the device. A Monte Carlo algorithm, usually you could program it on a computer, a CPU or GPU, and you get maybe a thousand samples per second. This chip does samples on the one-to-ten picosecond time scale, depending on the landscape.
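To make the comparison concrete, here is what the sampling primitive looks like when run in software, along with the back-of-the-envelope ratio implied by the rates quoted above. The energy landscape and Metropolis sampler below are generic illustrations, not Extropic's actual algorithm or benchmark numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(x):
    # A generic rugged 1-D energy landscape, purely for illustration.
    return (x**2 - 1) ** 2 + 0.3 * np.sin(5 * x)

def metropolis(n_samples, step=0.5, temperature=1.0):
    # Standard Metropolis Monte Carlo: propose a move, accept it with the
    # Boltzmann ratio, so the samples converge to the distribution exp(-E/T).
    x = 0.0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal()
        if rng.random() < np.exp(-(energy(proposal) - energy(x)) / temperature):
            x = proposal
        samples[i] = x
    return samples

samples = metropolis(10_000)
print("mean energy of the samples:", float(energy(samples).mean()))

# Back-of-the-envelope ratio from the figures quoted in the conversation:
digital_rate = 1e3           # ~10^3 samples per second on a conventional processor
physical_rate = 1 / 10e-12   # one sample every ~10 ps  ->  ~10^11 samples per second
print("implied speedup: ~10^%d" % round(np.log10(physical_rate / digital_rate)))
```

The ratio is only an order-of-magnitude illustration; as noted right after, the actual speedup depends on how well a given algorithm maps onto the device.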
Starting point is 01:11:25 But a picosecond is a thousand times shorter than a nanosecond, so it's really fast, right? And the speedup you can expect depends on the algorithm and how well it fits on the device. So it's hard to give one number. So help me understand what this looks like in the world ahead. These Extropic chips are in my body, in my glasses, in my phone, in my car.
Starting point is 01:11:54 Are they the base for AI everywhere at some point? I mean, that... In success. That's part of the goal, right? Yeah, that is ultimate success, and it's certainly possible. We're starting with something humble, just putting them on some edge devices. But over time, yeah, it would be the most performant chip,
Starting point is 01:12:12 not only for the cloud, but the edge as well. And personally, I would love to wear some chips or gadgets powered by our chips someday, at scale, of course. And, you know, I think intelligence will be embedded in far more systems than we're used to if we achieve our mission. You know, that's on a 15-to-20-year time scale, right? Like, we have so much to do until then. But I think it's certainly possible. And to me, it's going to be much faster in scaling than quantum computing, by quite a bit.
Starting point is 01:12:50 So let's go there for one second. You know, Ray Kurzweil and I had a conversation, and he says, listen, we're gonna see as much change in the next decade as we've seen in the last century, right, within a very steep segment of the curve. And do you imagine, I mean, I can imagine a world where everything is intelligent, where intelligence is embedded in every aspect of our lives and AI is everywhere. Yeah, I think that makes a lot of sense. Do you think people are going to be able to adapt to this speed of change? I think so. I think if people maintain an open mind and, you know, the speed at which we can adapt,
Starting point is 01:13:36 you know, if you look at the fundamental theory of natural selection from Fisher, the speed at which you can adapt a system is sort of bounded by or proportional to its variance. So having variance in how we do things helps us adapt quickly because if you try different things you get a better sense of what's better. You're not locked into a local minimum. Exactly right. You get out of the local minimum and that's basically the message we're trying to tell people is like I know right now if you're at the precipice of a lot of change, and you know, our first reaction is to stop and try to freeze things, but that's exactly how you become fragile. To be anti fragile, you need to be constantly adapting and high variance and malleable. And, you know, if you this is, you know, this is a theory of complex systems. If you have systems that are malleable and flexible, they can adapt.
Starting point is 01:14:30 And that's basically the message we're trying to tell people: I know right now we're at the precipice of a lot of change, and our first reaction is to stop and try to freeze things, but that's exactly how you become fragile. To be antifragile, you need to be constantly adapting, high-variance, and malleable. And, you know, this is a theory of complex systems: if you have systems that are malleable and flexible, they can adapt. If they're too stiff, they break and they have catastrophic failure, which is what we're trying to avoid. And so part of the message of accelerationism is for us to be robust and adaptive to whatever's to come. We cannot predict the future. You provably cannot predict the future. If you could, there would be a couple of people on Wall Street making all the money right now, right?
Starting point is 01:14:50 And clearly they can't, right? You can't reduce the markets to a simple model. And so you can't predict the future. All you can do is prepare for it and maintain dynamism and adaptability for whatever's to come. And that's what we're arguing for. What's next?
Starting point is 01:15:07 You've got a fab. You're going to be demonstrating silicon, your chips on silicon. Is this going to be a partnership with an Intel or Nvidia? Or are you going to go it yourself? What do you think? At least right now, we're looking at partnering with the traditional fabs directly ourselves, right? And so I think a lot of the ways that the traditional players in digital computing, you know, the ways they've been creating computers and programming them won't necessarily
Starting point is 01:15:42 carry over to our systems. You've had to build a full hardware and software stack for this. Yeah. And I mean, we're still very much actively building it. We're looking to get our first prototypes in the coming year or so on the silicon side. Are you hiring moonshot engineers? Definitely. I mean, I think anybody who is kind of tired of the old ways of doing things, maybe an electrical engineer who's been in their career for a while,
Starting point is 01:16:12 wants to go for something cutting-edge and very ambitious, should consider joining us. I think there's a lot of people in machine learning as well who are getting jaded by the monoculture around LLMs and transformers today, just training big models on data. There's not a lot of artistry to it. What we offer is basically a big mathematical challenge: to figure out the new software stack
Starting point is 01:16:36 and algorithms and architectures for this new substrate. And so we've been able to attract some really top talent there. So anybody who needs a really strong intellectual challenge and has a lot of AI experience should consider joining us and helping us pioneer this new paradigm. Extropic.ai. Yep. Well, buddy, thank you for your passion. You love what you do. I do. And, you know, I think the importance of having an accelerationist mindset, a mindset for exploring and creating
Starting point is 01:17:11 intelligence and being able to solve problems, I think mindset is the single most important asset we humans have. And thank you for your mindset. Thank you so much. Thanks for having me and thanks for pioneering the way as a techno optimist. All right, great to be here.
