Moonshots with Peter Diamandis - Why I'm Leaving My Company Immediately (Stability AI) w/ Emad Mostaque | EP #93

Episode Date: March 29, 2024

In this episode, Peter and Emad discuss Emad's stepping down as CEO of Stability AI, his next steps into decentralized AI, and why there is so much urgency to work on decentralization NOW.

Emad Mostaque is the former CEO and Co-Founder of Stability AI, a company that funds the development of open-source music and image-generating systems such as Dance Diffusion, Stable Diffusion, and Stable Video 3D.

Follow Emad's journey on X: https://twitter.com/EMostaque

____________

I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors:

Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/

AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter

_____________

I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog

Learn more about Abundance360: https://www.abundance360.com/summit

Get my new Longevity Practices book: https://www.diamandis.com/longevity

My new book with Salim Ismail, Exponential Organizations 2.0: The New Playbook for 10x Growth and Impact, is now available on Amazon: https://bit.ly/3P3j54J

_____________

Connect With Peter: Twitter Instagram Youtube Moonshots

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 Imagine being the first person to ever send a payment over the internet. New things can be scary, and crypto is no different. It's new, but like the internet, it's also revolutionary. Making your first crypto trade feels easy with 24/7 support when you need it. Go to Kraken.com and see what crypto can be. Not investment advice. Crypto trading involves risk of loss. See kraken.com/legal/ca-pru-disclaimer
Starting point is 00:00:26 for info on Kraken's undertaking to register in Canada. Earn the points. Share the journey. With the TD Aeroplan Visa Infinite Card, earn up to 50,000 Aeroplan points. Conditions apply. Offer ends June 3rd, 2024. Visit tdaeroplan.com for details.
Starting point is 00:00:45 These organizations are telling you that they're building something that could kill you and something that could ruin all our freedom and liberty. And they're saying it's a good thing, you should back them because it's cool. They don't care about the revenue. They have the political power, people are scared of them. Power should not be invested in any one individual. If I can accelerate this over the next period, I don't have to make an impact. I should not have any power.
Starting point is 00:01:09 Whereas again, you see everyone else trying to get more and more power. The only way that you can beat it, to create the standard that represents humanity, is decentralized intelligence. It's collective intelligence. The data sets and norms from that will be ones that help children, that help people suffering, that reflect our moral upstanding and the best of us, and gather the best of us to do it. A week ago, Emad Mostaque was on my stage at Abundance 360 talking about the future of
Starting point is 00:01:46 open source AI, democratized, decentralized AI. The day after A360, he stepped down as CEO of Stability. Now, five days later, I've sat down with Emad to talk about why he's stepping down, what he's doing next, the future of AI. He takes the gloves off. He talks about the dangers of centralized AI and the potential for decentralized, democratized AI to be the only avenue that truly uplifts all of humanity. All right. If you liked this episode, please subscribe. Let's jump in. If you're a moonshot entrepreneur, this is an episode you're not going to want to miss. All right. Now on to Emad. Good morning, Emad. Good to see you, my friend. As always, Peter.
Starting point is 00:02:36 So you and I were on stage literally last week at the 2024 Abundance Summit talking about the whole open source AI movement. You were beginning to talk about decentralized AI. You were talking about where stability was, the speed of the development of the different products. And the day after the Abundance Summit was over, the news hit that you had stepped down as CEO of Stability and stepped off the board. So let's begin with the obvious question, why? What happened? And I have huge respect for you and I know a lot of the issues in the past, but I'd like you to have a chance to share with entrepreneurs out there and folks interested in AI exactly your side of what's happening. Yeah, thanks. I think that Elon Musk once characterized being a CEO as staring into the abyss and chewing glass
Starting point is 00:03:31 because you are looking at a very uncertain future, having to make decisions, and the chewing glass is all the problems that come to you all the time. And it's required to steer the ship when things are incredibly uncertain. And Stability is a pretty unique company at a unique time. We hired our first developer and researcher two years ago. And then in those two years, we built the best models of almost every type except large language: image, audio, 3D, et cetera, and had over 300 million downloads of the various models we created and supported, which was a bit crazy.
Starting point is 00:04:09 And then generative AI is crazy. In terms of, usually in a startup, you don't have to deal with global leaders and policy debates about the future of humanity and AGI and everything else. At the same time as building code. At the same time as building code, yes. And especially building code
Starting point is 00:04:29 at a fraction of the resources of our competitors. Like we had certain teams who were offered triple their entire packages to move to other companies. I was grateful that only a couple of researchers left for other companies, and those were just
Starting point is 00:04:49 startups; no one left for a big company, which I think is a testament to their loyalty to the mission. But you know, what we've seen over the last year, or the last half year in particular, is that the question of governance in AI is something that's incredibly important. And who manages, owns, controls this technology, and how is it distributed? So we saw everything from OpenAI to congressional testimonies to other things. And as you know, one of my things has always been, how do we get this technology in the hands of people all around the world?
Starting point is 00:05:23 And then who governs it? And how can we then take this technology to have an impact, from education to healthcare to others? So Stability is a company that built great base models, and when we got to that point, the revenue was going up, which is always nice, you know, finding the business model. And again, I think it was always curious to see that as a deep tech company two years in, people were asking us, you know, why aren't you profitable? And I was like, it takes a bit of time and investment to get to profitability. I think OpenAI, I mean, they're not there yet, but they've had eight years. I think all the comparisons are perhaps unfair.
Starting point is 00:05:59 So reviewing everything and kind of looking at it, I was like, do I really want to be a CEO? I think the answer is no. I think there is a lot imbued in that in tech, but it's a very interesting position. Just to dive into that a second, because we've had this conversation, because there was a lot of pressure asking for you to step down as CEO. And I think founders typically want to see themselves or feel they need to be the CEO. And I've heard you say recently that you view yourself more as a founder and strategist than a CEO.
Starting point is 00:06:38 Is that a fair assessment? Yeah, I think everyone's got their own skill sets, right? So I'm particularly great at taking creatives, developers, researchers, others, and achieving their full potential in designing systems. But I should not be dealing with, you know, HR and operations and business development and other elements. There are far better people than me to do that. So now, for example, our most popular thing, Stable Diffusion, and ComfyUI, the system around it, are among the most widely used image software models in the world. There are great media CEOs that can take that amplified route to make hundreds of millions
Starting point is 00:07:13 of revenue. So they should come in and lead on that. So why now, pal? Is there anything that specifically tipped for you that has... I mean, because you have done an extraordinary job. This has been your baby. I mean, you have to feel a whole slew of emotional elements. And I've had to step down as CEO on two occasions over the 27 companies. I've had to sell a company for pennies on the dollar. And it takes an emotional toll on you.
Starting point is 00:07:53 I've had calls for me to step down as CEO since the, well, since 2022, you know? But I always thought, you know, what's best for the company and the mission? And when I look at the world right now, there's a few things. A, the company has momentum. It has spread; it's turning into a business. Like last year I said, let's not enter into large revenue contracts, because the technology isn't mature yet and our processes aren't mature yet, and you have to deliver. So we did a lot of experimental things we're setting up, and again, now it's ramping on a business side. On a technology side, the technology is maturing. Diffusion transformers, such as Stable Diffusion 3 and Sora,
Starting point is 00:08:31 are going to be the next big thing. And again, Stability has got a great place there. But I think there's also the macro on this. So if you look at the OpenAI CEO thing, you know, Sam Altman said, the board can fire me any time. This is the governance of OpenAI. And then they fired him.
Starting point is 00:08:48 And then he is back on and he appoints himself back on the board. There's clearly no governance at OpenAI. I mean, I respect the people on the board greatly. I think there's some great individuals, but who should manage the technology that drives humanity and teaches every child and manages our government?
Starting point is 00:09:09 Who's really leading on that, that can build these models and do those things? As you know, I've always wanted to build the science models, and the health team's done that. I want to be doing the education work. And then my concept of a national model for every country, owned by the people of the country. All tied together, I think, it needs to be by a Web3 (not crypto, or necessarily token) framework. That's something that's a brand new kind of challenge, and one that I think there's only a window of a year or two to do. If you have highly capable models, let's put aside AGI for now, which we can discuss later, really accelerating, then no one will be able to keep up with that unless you build in a decentralized and distributed manner for data, talent, distribution, standards, and more.
Starting point is 00:09:52 So there's only a small window of time here to do that. And realistically, yeah, successful companies and these things are all great, but generative AI is a bit bigger than the classical norms, just like the whole life cycle of the company was a lot faster than the classical norms. So that's why I felt now is the right time to make that change and hopefully play my part in making sure this technology is distributed as widely as possible and governed properly. Pretty much, I think I'm the only real independent agent that has built state-of-the-art models in the world right now. Yeah, we've seen a lot of turbulence with OpenAI. We just saw Mustafa from Inflection become part of Microsoft. And I am curious, I mean, you had a now famous conversation with Satya a couple of days after stepping down. Was that investigatory on your part or was that just a touch base with an old friend?
Starting point is 00:10:54 That's just pure trolling, actually, to let off some steam. That picture was, I think, from a year or two, a year or so ago. Okay. But you know, I think Satya is an amazing CEO, and, you know, he responds, again, like the top CEOs, incredibly quickly when you message him. He's got a great vision. But there is, again, this concern about consolidation in tech. We didn't take money from any trillion-dollar companies at Stability. You know, we remained and retained full independence, to the detriment of some of the elements; we could have taken very big checks and other things. And even though you have good intentions, you have
Starting point is 00:11:38 to remember that companies are slightly slow, dumb AIs that over-optimize for various things, and that's not necessarily in the best interest of humanity when you have infrastructure. It's like, this is the airports, the railways, the roads of the future. The AI which I'm talking about isn't infrastructure yet, but it should be. And should it be consolidated under the control of a few private companies with unclear objective functions? Again, the people in the companies may be great, but I don't think so. And this is a key concern, and part of that was that commentary. Again, he's doing an amazing job. He's consolidating a lot of power for the good of the company, and also he has, I think, a genuinely good heart and a mission to bring technology to the world. But it is a bit concerning,
Starting point is 00:12:21 right especially with the new types of structure. You're speaking of Sam in this case? Satya here. Oh, Satya. Again, I think the commentary was always interesting. It's like Satya playing 4D chess, assembling the AI Avengers. I mean, he is. He's building an amazing mass of talent,
Starting point is 00:12:39 covering the bases, and Microsoft is doing incredibly well here, right? If you asked who's doing best in generative AI, people would say Microsoft. But there has to be concern about consolidation of talent and power and reach. Before I get to your vision going forward, because it's so important, and what you're doing next, I just, again, as a founder, as a CEO of a moonshot company, and we have a lot of those listening here, can I ask, how are you feeling right now? Because the decision to step down has to have huge emotional... Are you feeling relief? Are you feeling anxiety? What's the feeling after making that momentous decision?
Starting point is 00:13:30 It was a big feeling of relief, you know, because there's a Japanese concept of ikigai. I know ikigai and I love it, yes. Yeah, do what you like and do what you believe you're adding value in, and other people do too, you know. Like, realistically, again, I think I was an excellent research leader, strategist, other things, but I didn't communicate properly or hire the right other leaders in certain other areas of the company, and there are better people to do that. And so I wasn't doing what I was best at a lot, or where I could have the most measurable value,
Starting point is 00:14:00 and it was tying me down. You know, there's a lot of legacy, you know, technical, organizational, or other debt, especially when you grow so, so, so fast. And, you know, we were lucky that we had high retention in the important areas, and we could execute in spite of all of that, in spite of going the big-company route. I think, you know, moonshot founders, you have to do it because you don't have the resources at the start, and you have to guide the ship, you know, as it goes out from port. But there does come that transition point, and there is a competing thing, where you typically take on VC money, which has its own objective function, versus your overall mission. So again, if you look at the generative AI world right now, how many credible, intelligent, independent voices are there that have had the ability to build models and design things
Starting point is 00:14:47 and make an impact? You know, there's not many. So I was like, that's where I can add my most leverage. And also the design space, again, is unprecedentedly huge, because the entire market has just been created. Like, where does generative AI not fit, and where does it not touch,
Starting point is 00:15:04 and what needs to be built there? We need to actually have the agency to go and build that. And so I felt tired, relieved. I felt that now there's a million options. I want, rather than taking a long break, to get on with things. I've just done the first thing in kind of Web3, and I've got a whole bunch of other things, which we're going to discuss, kind of coming, and catalyze stuff that can make an exponential benefit. Because, you know, the massively transformative purpose here is: I want every kid to achieve their potential and give them the tools to do that. And I love you for that, because you've been true to that vision. And I know on the heels of your announcement,
Starting point is 00:15:53 you've been reached out to by national leaders, by CEOs and major investment groups, and you have a lot of opportunity ahead of you. So let's talk about where you want to go next. You mentioned publicly, and you discussed on our Abundance stage, the idea of democratized and decentralized AI. Let's define that first. What is that? Why is it important? And what do you want to do there? Yeah, I think that when I said I'm going to move to do my part in decentralizing AI, people were like, isn't that just open source? You give the technology, right?
Starting point is 00:16:25 And then anyone can use it. But it isn't. Decentralizing AI has a few important components. One is availability and accessibility. Everyone should be able to access this technology and the fruits of its labor. And there's some very interesting political and other elements around that. Number two is the governance of this technology. You have centralized governance because the models are the data. There's a recent Databricks model where they show that you have massive improvements from data. We all know that. Who governs the data that teaches your child or manages your health or runs your government? That's an important question I think too few are asking,
Starting point is 00:17:01 and we need data transparency and other things like that. So accessibility, you know, you've got the governance aspect of that. And then finally you have, how does it all come together? Is it a single package or is it a modularized infrastructure that people can build on and is available kind of everywhere? You know, does it require monoliths and central servers where if it goes down and you have an outage on GPT-4, you're a bit messed up or someone can attack and co-opt it? I think that those are kind of the key elements that I was looking at when I was talking about decentralizing AI. And I've come up with an infrastructure to do that, I hope,
Starting point is 00:17:41 as well. So if you don't mind, let's double click on it even further. So you mentioned we don't have long to get there, if that's a true statement. Why don't we have long to get there? And then what does getting there look like? If you had all the capital available and if the right national leaders were hearing about this, because a lot of this is supporting the populace of a nation to have AI that serves them versus
Starting point is 00:18:16 top-down. What's it look like two, five, 10 years from now? Yeah, I think you'll have both proprietary and open source AI, and they'll work in combination. The practical example I give is that this AI is like graduates, right? Very talented, slightly overenthusiastic graduates. And you've got those and consultants. But really, I had been on stage with Nat Friedman last week at A360. When we were there, he said,
Starting point is 00:18:43 it's like we've discovered this new concept. What do you call it? AI Atlantis? Atlantis. Yes, yes. With 100 billion graduates that will work for free. Yes. I love that analogy.
Starting point is 00:18:52 It was a brilliant analogy. Yeah, we need to figure out how to sail to Atlantis. But there's a few things here. First of all is the defaults. Once a government embraces centralized technology, it's very difficult to decentralize it. And every country needs an AI strategy. A year ago, one year ago, was GPT-4. Yeah, crazy.
Starting point is 00:19:15 How crazy is that? You know, at the AI Safety Summit in the UK, the King of England came on stage, or came via video call. And he said that this is the biggest thing since fire, you know? And that was, like, what, six, seven months ago? Where are we going to be in a year? Yeah. I think he took that from the CEO of Google: AI is as powerful as fire and electricity. Yeah. I've heard the same from, like, Jeff
Starting point is 00:19:47 Bezos and a bunch of others, you know. And not Kindle Fire, proper fire, you know. But then if you think about it, norms are going to be set in this next period. Like, you know, I'm in California, LA, at the moment. If you don't set norms on rights for actors and the movie industry, then you could have a massive disruption just occurring as fully generated full-length Hollywood features come in a year or two. If you don't have norms around open models and ownership and governance by the people, it'll be top-down governance
Starting point is 00:20:16 because governments can't allow that to be out of control if they don't have a reasonable alternative. And I think the window is only a year or two, because every government must have a strategy by the end of the year. And so I think if you provide them a good solution that has this element of democratic governance and others, that will be immensely beneficial. I think also it's urgent because we have the ability to make a huge difference. You know, as we may probably discuss later, having all the knowledge of cancer, longevity, autism at your fingertips. We have the technology for that right now. We have the
Starting point is 00:20:49 technology so that no one is ever alone again on those things, or to give every child a superior education, literally in a couple of years. Like, there is an urgency, both because there's a small window, but also because we must do this now, because it can scale and make the impact we have dreamed of for so long. The enabling technology is finally, you know, it's finally good enough, fast enough, and cheap enough. Everybody, I want to take a short break from our episode to talk about a company that's very important to me and could actually save your life or the life of someone that you love. The company is called Fountain Life, and it's a company I started years ago with Tony Robbins and a group
Starting point is 00:21:25 of very talented physicians. You know, most of us don't actually know what's going on inside our body. We're all optimists until that day when you have a pain in your side, you go to the physician or the emergency room, and they say, listen, I'm sorry to tell you this, but you have this stage three or four going on. And it didn't start that morning. It probably was a problem that's been going on for some time. But because we never look, we don't find out. So what we built at Fountain Life was the world's most advanced diagnostic centers. We have four across the U.S. today, and we're building 20 around the world.
Starting point is 00:22:03 These centers give you a full-body MRI, a brain and brain vasculature MRI, an AI-enabled coronary CT looking for soft plaque, a DEXA scan, a Grail blood cancer test, a full executive blood workup. It's the most advanced workup you'll ever receive. 150 gigabytes of data that then go to our AIs and our physicians to find any disease at the very beginning, when it's solvable. You're going to find out eventually. Might as well find out when you can take action. Fountain Life also has an entire side of therapeutics. We look around the world for the most advanced therapeutics that can add 10, 20 healthy years to your life, and we provide them to you at our centers. So if this is of interest to you, please go and check it out. Go to fountainlife.com/peter.
Starting point is 00:22:54 When Tony and I wrote our New York Times bestseller Life Force, we had 30,000 people reach out to us for Fountain Life memberships. If you go to fountainlife.com/peter, we'll put you at the top of the list. Really, it's something that is, for me, one of the most important things I offer my entire family, the CEOs of my companies, my friends. It's a chance to really add decades onto our healthy lifespans. Go to fountainlife.com/peter. It's one of the most important things I can offer to you as one of my listeners. All right, let's go back to our episode. So let's talk about the objective function of democratized and decentralized AI. Is it that the compute is resident in countries around the world? Is it that the models are owned by the citizens of the world?
Starting point is 00:23:46 Is it that data is owned? And how do you get there from here? I think that you can think of the supercomputers like universities. You don't need many universities, honestly, if someone's building good-quality models. That's one of the things, like, at Stability, we did the hard task. We could have just stuck with image. We said, no, we're going to have the best 3D, image, audio, biomedical, all these models. And no one else managed that, apart from OpenAI, to a degree.
Starting point is 00:24:14 In fact, I think we have more modalities than OpenAI. Again, it's kind of what I described: accessibility and governance and a few of these other factors. So I think what it means is that this technology is available to everyone, but you see now that you don't necessarily need giant supercomputers to even run it. We showed a language model running on a laptop, Stable LM 2; it will run in a gigabyte on a MacBook Air, faster than you can read, writing poems about various things.
Starting point is 00:24:45 We see Stable Diffusion now at 300 images a second on a consumer graphics card. Our video model was like five gigabytes of VRAM. This really changes the equation, because in Web2, all the intelligence was centralized on these giant servers and big data. Now you have big supercomputers, but I think you'll need less with better data, training these graduates that can go out and be customized to each country. But they must reflect the culture of that country. Like the Japanese Stable Diffusion model we had: if you typed in salaryman, it gave you a very sad person, versus the base model giving you a very happy person, right? So you must have graduates that reflect the local culture and then reflect the local knowledge.
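A back-of-envelope check on the "language model in about a gigabyte" claim above: a model's weight memory is roughly parameter count times bytes per parameter. The sketch below is illustrative arithmetic, not a measurement of any shipped model; the 1.6-billion-parameter size (matching the small Stable LM 2 release) and the quantization levels are assumptions for the example.

```python
def weight_memory_gib(n_params: float, bits_per_param: int) -> float:
    """Approximate memory for model weights alone, in GiB.

    Ignores the KV cache, activations, and runtime overhead, which add
    more on top, so this is a lower bound on real memory use."""
    return n_params * bits_per_param / 8 / 1024**3

# Illustrative: a 1.6B-parameter model at common precisions.
n = 1.6e9
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: {weight_memory_gib(n, bits):.2f} GiB")
```

At 4-bit quantization, the weights of a 1.6B-parameter model come in around three-quarters of a gigabyte, which is roughly consistent with running such a model in about a gigabyte on a laptop.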
Starting point is 00:25:33 And then global models, again, that reflect our global knowledge and can be accessed by anyone. But who decides what goes in that? These are some very important questions. And who vouches for the quality as well? What's your advice to a national leader? Because we're now starting to see ministers of AI in different nation states. And what's your advice to them right now in this area? I think my advice to them would be to start collecting the data sets that they would teach a graduate that was very smart through school and kind of other things.
Starting point is 00:26:10 This is national broadcast data. This is the curriculum. This is their accounting, legal, and others. And note that those data sets are infrastructure. They will enable the local populace and others to create these models, because models are just data wrapped in algorithms with a bit of compute. That's the recipe: compute, algorithms, and data. And it's not going to be as hard as you think to train these models, but you have to build them to set standards. So by the end of next year, probably a year after, I would estimate that a Llama 70B model or a Stable Diffusion model, two leading models in language and image, will cost under $10,000, probably even $1,000, to train. And then it comes all down to the data.
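As a rough illustration of where training-cost estimates like these come from: a common rule of thumb puts training compute at about 6 × parameters × tokens FLOPs. Everything below (throughput, utilization, token count, GPU count, and price) is an assumed placeholder for the sketch, not a figure from the conversation.

```python
def train_cost_usd(n_params: float, n_tokens: float,
                   peak_flops_per_gpu: float, n_gpus: int,
                   usd_per_gpu_hour: float, utilization: float = 0.4) -> float:
    """Back-of-envelope training cost from the ~6*N*D FLOPs rule.

    utilization discounts peak throughput to a realistic sustained rate."""
    total_flops = 6 * n_params * n_tokens
    sustained_flops = peak_flops_per_gpu * utilization * n_gpus
    hours = total_flops / sustained_flops / 3600
    return hours * n_gpus * usd_per_gpu_hour

# Hypothetical 70B-parameter model trained on 2 trillion tokens,
# assuming ~1e15 peak FLOPs/s per accelerator at $2 per GPU-hour.
cost = train_cost_usd(70e9, 2e12, 1e15, 1024, 2.0)
print(f"~${cost:,.0f}")
```

Under these placeholder numbers the estimate lands on the order of a million dollars today; the speaker's point is that better data, better algorithms, and cheaper hardware shrink every factor in that product, which is how a training cost can fall by orders of magnitude within a couple of years.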
Starting point is 00:26:50 And then it becomes about the standards. You know, it's interesting. There is so much knowledge in the world that will vaporize, sublimate, over the decade ahead as people die. Cultural data locked up in people's minds and stories and so forth that's never been recorded. It's an interesting time to actually capture that data and permanently store it in the national models. Yeah. And again, I think people over-focus on the models versus the data sets. I mean, it's the data sets, yeah. Yeah, and with the exponential compute, you can recalibrate and improve the data as well.
Starting point is 00:27:32 So right now, a lot of the improvements in models are actually synthetically improving data and data quality. As you said, there's so much that can be lost, but now we can actually capture this and the concepts and the other guidance and have cross-checks. You can deconstruct laws. You can translate between contexts. You can make expert information available to everyone because, again, you have this new continent of AI Atlantis and all these graduates, soon-to-be specialists, that are on your phone.
Starting point is 00:28:03 And that's incredibly democratizing, you know, because otherwise... Throughout history, knowledge has always been gatekept. Always. I want to get to health and education next, but before we go there, I know you were meeting with a mutual friend, Jules Urbach, the other day. And Stability announced a deal with OTOY, Endeavor, and the Render Network. Are you still an advisor to that venture? Yeah, no.
Starting point is 00:28:35 This is part of the whole thing. It's the first of many Web3 kind of elements there. I think Web3 is 95%, and I'll say 90% to be generous, speculative and rubbish. But there is that 5 to 10% of genuine people that have been thinking about questions of governance, coordination, and others, and have built things that are proper. So OTOY is the bridge to the creative industry. That's why there are people like Ari Emanuel and Eric Schmidt and others on the board. And the Render Network has a million GPUs, largely from creative professionals, that are available.
Starting point is 00:29:12 And so the first thing I announced there, it was the initial 10 million, now it's 250 million, of distributed compute to create the best 3D data sets. Like at Stability, we funded and worked with the Allen Institute and others on Objaverse-XL, which is 10 million high-quality 3D assets. We're going to
Starting point is 00:29:30 a billion, distributed. You don't need giant supercomputers. But then that is a community good that is owned by the people of the network and accessible to non-academics and others as well. Why? Because you need high-quality assets to create better 3D models. We have a new 3D model, TripoSR, that can generate a 3D asset from a 2D image in 0.5 seconds, and that 3D model feeds into better 3D assets. And then what does that mean? It means we're heading towards the holodeck. Without the data, you're not going to get there. And Jules wants the holodeck for sure. Yeah. So, you know, Jules and I are on the same page on that. And you're not going to get there without, again, a commons of data that can
Starting point is 00:30:10 train the graduates that then become specialized with Star Trek or, you know, Star Wars or any of these other IPs, and then also setting standards around monetization, IP rights, all sorts of other things. And so a network like Render is really good for that. But, you know, I've been talking to a lot of people in Web3 about the different elements of the stack. Because what I basically see is that we have the opportunity to build almost a human operating system,
Starting point is 00:30:40 models and data sets for every nation, every sector coordinated through proper Web3 principles. Again, not speculative tokens or anything like that. Making it so that every child in the world or adult can create anything they can imagine. They can be protected against the harms. And they have access to the right information at the right time to thrive. And again, that's infrastructure for everyone. It's a common good.
Starting point is 00:31:07 Access to GPUs has been sort of the limited fuel. Do you think decentralized GPU structures like Render are part of that future, an important part of that future? I think that right now it's far more efficient to train models on these, again, big supercomputers. But the rate of exponential growth, again, is insane. Last year, to train Llama 2 cost $10 million. In a year, it'll cost $10,000.
Starting point is 00:31:36 It's a thousand times improvement from algorithms, data, supercomputing speeds. And that's crazy if you think about it, right? So I don't think this will be the limiting factor. I think the GPU overhang for language models probably lasts until the end of the year, but then there's plentiful supply. Because all you have is, NVIDIA makes amazing GPUs at an 83% or 87% margin, right? But the actual calculations aren't complicated. Like we took Intel GPUs and we ran the Stable Diffusion 3 diffusion transformer training. So this is the same technology that's used in Sora.
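The thousand-fold cost drop he cites is just compounding efficiency gains. A toy sketch of that arithmetic, where the three individual 10× factors are illustrative assumptions chosen only so they multiply out to the stated 1000×, not figures from the episode:

```python
# Hedged sketch: the "$10M -> $10k in a year" claim treated as the product
# of independent efficiency gains. The individual 10x factors are
# illustrative assumptions, not measured numbers.

factors = {"algorithms": 10, "data": 10, "hardware": 10}

speedup = 1
for gain in factors.values():
    speedup *= gain  # independent gains compound multiplicatively

llama2_cost_last_year = 10_000_000  # stated Llama 2 training cost, USD
projected_cost = llama2_cost_last_year / speedup

print(speedup, projected_cost)  # 1000 10000.0
```

The point of the decomposition is that no single breakthrough needs to deliver 1000×; three independent order-of-magnitude improvements stacking within a year gets there.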
Starting point is 00:32:09 And Stable Diffusion 3 is multimodal, so it can train Sora models with enough compute. And I think us and them are the only people kind of doing this. I think maybe PixArt as well. And then it ran faster on the Intel GPUs than the NVIDIA GPUs. But we know that it can run even faster because it's not optimized for either. It's still running fast.
Starting point is 00:32:30 So what you'll see is a commoditization of the hardware once the architectures get stabilized, because GPT-4 is just a research artifact. Stable Diffusion was just a research artifact. We're not in the engineering phase yet. And you've got to the point whereby this runs on MacBooks. It runs on other things. So I think it's a short-term phenomenon of the next year
Starting point is 00:32:52 because people were taking a point in time and extrapolating it without taking into account efficiencies, optimizations, and the fact that models that work on the edge and can go to your private data will be more impactful than generalized intelligence. Everyone's over-indexing on generalized intelligence and building AI God versus amplified human intelligence, shall we say. Before we leave Stability, it's now in
Starting point is 00:33:20 the hands of the chair and your past CTO. What do you imagine the future of Stability is going to be going forward? I know that you're not involved anymore. It's under different leadership. What's your advice to them, or where do you think they're going to go? The very basic advice, I don't want any conflicts or anything, because I'll be setting up lots of new companies and being a founder and a shareholder. And again, Stability, I'm a founder, shareholder. In fact, you're still the majority shareholder, I think, as of right now.
Starting point is 00:33:54 Yeah, just about. Just about, yeah. That will change. I'm sure new money will come in. Like we saw Cohere yesterday on, I think, $20 million of revenue run rate. They're raising at $5 billion. It's incredible, yes. Yeah.
Starting point is 00:34:08 With the right leadership, I think that it can, again, have an amazing part to play in media, and that's what I've suggested to it. And, again, there's a great team that continues to ship great models. So last week there was an amazing code model. Next week, amazing language, audio, and other models are coming out. So, you know, you continue shipping and great products around that too.
Starting point is 00:34:29 So that was kind of my advice to them. Let's focus on media and take that forward. But, you know, I'm not the expert on the business side of things. I did the best I could. My expertise is in setting this up; I'd take it 0 to 1. And, yeah, 0 to 1 is definitely a role
Starting point is 00:34:48 that you've played here, and allow someone else to take it the rest of the way. But the area that I know... Actually, there's something I want to discuss here that I think is quite important. Please. For, again, the founders listening and the moonshot companies, there is an imbalance
Starting point is 00:35:08 of power when you have very visionary, highly competent leaders there. What I found at Stability is that everyone would be waiting for me, no matter how competent, because I was the one that could see around the corners and I was a bit good at
Starting point is 00:35:24 everything, even if I hired people that built billion-dollar startups or were leaders in research at Google or kind of whatever, because you have this outsized influence. So, as Jeff Bezos says, you have to speak last in some cases and in some meetings, because otherwise everyone just does everything you say, and they also wait on you. Now, what I find and what I told the team is that you're flat as a power dynamic. You're all on the same page. You're all kind of relatively equal owners. And it'll be interesting to see how it evolves from that
Starting point is 00:35:52 given that there's actually a business. And again, I think this is something that you probably had a challenge with, in other founders here, whereby they put more on your plate because you are so visionary and because you're like up there in the future and they're always waiting on you. So you're always like, well, my schedule is completely packed.
Starting point is 00:36:09 My schedule now is actually quite free, which is also quite nice. I've had a chance to speak with you every day for the last few days. So that's been a pleasure, to have extra time on your schedule. So we do have a world of visionary founder-led CEO companies, right? So you've got Musk and you've got Bezos historically and you had Steve Jobs, and that's both powerful and dangerous. The power is the ability for that, because we don't ever have a company that is pre-existing, where a new CEO comes in and has the same, both chutzpah and also the power of their vision.
Starting point is 00:36:54 The danger there, you're saying, is not allowing your team to step up with their own vision, or being overly indexed on your vision. Yeah, I think that that can be the issue, and that's why I wanted Stability to, again, reach the point of spread and revenue rate increase and other things before I did anything. I felt, again, this external pressure, and that there's nobody, or there's very few people in the world, actually thinking properly about governance and spread and others, and a very small window, given the pace of this, to make a difference and a dent. I believed I had a reasonable approach to that. But I couldn't while remaining CEO of this company.
Starting point is 00:37:34 And again, it's a pretty unique scenario, because you've never seen a sector move this fast that has such wide-reaching human implications. And regrettably, there's too few people, I think, with the right alignment and approach in this area. I've been very disappointed. Like, usually what happens is you have power maximization equations, and this is what we're seeing from the industry consolidation. How many people want to genuinely bring this technology to kids in Nigeria, or to the Global South, or to help those leaders build their own models, you know, and believe also in a positive-sum game? That was actually my biggest surprise from the discussions in Silicon Valley. Almost entirely, they all believe in a flat or negative-sum, you know, zero-sum or negative-sum thing, where there has to be a winner. Everyone's a winner in this. And again, I was just very disappointed seeing that.
Starting point is 00:38:32 I've been asking you for a while to write and distribute your vision white paper, because I've heard you describe it in detail and it's brilliant. And I still hope that the world will see it soon enough. Everybody, I want to take a break from our episode to tell you about an amazing company on a mission to prevent and reverse chronic disease by decoding your biology. The company is called Viome, and they offer cutting-edge tests and personalized products that help you optimize your gut microbiome, your oral microbiome, and your cellular health. As you probably know, your microbiome is a collection of trillions of microbes that live in your gut and mouth. These microbiomes influence everything: your digestion, immunity, mood,
Starting point is 00:39:14 weight, and many other aspects of your health. But not all microbes are good for you. Some can cause inflammation and toxins and actually lead to chronic diseases like diabetes, heart disease, obesity, and even cancer. Viome uses advanced mRNA technology and AI to analyze your microbes and your cells and give you personalized nutrition recommendations and products designed specifically for your genetics, specifically for your biology. You can choose from different tests depending on your goals and needs, ranging from improving your gut health, your oral health, cellular function, or all of them. I've been using Viome for the past three years. I can tell you that it has made a huge difference in my health. And because of the data they collect and the AI engine they've built, it gets better every single day. I love getting health scores and seeing how my diet and lifestyle affects my microbiome and my
Starting point is 00:40:10 cells. And I love getting precision supplements and probiotics tailored for my specific needs. If you want to join me on this journey of discovery and improve your health from the inside out, Viome has a special offer for you. For a limited time, you can get up to 40% off any Viome test using the code moonshots. Just go to viome.com backslash moonshots and order your test today. Trust me, you won't regret it. All right, let's go back to our episode. Before we jump into health and education, let's talk about governance a second because we've seen governance
Starting point is 00:40:51 complicate this. What is the right governance structure for this super powerful technology? We have representative democracy that I think can be improved by this. Like, I don't think democracy survives this technology in its current form. It will either improve or it'll end. I don't see anything else. Like yesterday though... What does end mean here? A benign dictatorship? Driven by an AI overlord? Yeah, like yesterday there was an announcement of an app called Hume, which had emotionally intelligent speech, and they can understand your emotions and talk with emotion. You and I have to discuss this. Yes. You know where that's going, right? It's very powerful.
Starting point is 00:41:25 Like speech, it's incredibly powerful. And governments have a tendency, I mean, the official government is- But say it here. It's important for you to state what it means because we've discussed it, but help people here be ready for this. Democracy is all about representation
Starting point is 00:41:43 and you see the questions of deepfakes and things. Speech is one of the most impactful elements there, but now you can't believe anything you see, hear, everything. So one path that we have is a 1984 on steroids, a panopticon, you know, where life is gamified and you listen to whatever the government says, and they're incredibly convincing, and you're happy. And you've always been happy, and you've always been at war with Eurasia, you know, propaganda on steroids. The other path that you have is things like citizens' assemblies, consultative democracy,
Starting point is 00:42:14 the ability to take... Right now you can take any of the bills in Congress and completely deconstruct them and find what the motivations are. You can check laws against the Constitution in seconds. This is incredibly powerful, empowering technology from a democratic perspective. So I see two routes, unfortunately, because I think that once the thing goes, it goes really fast. Centralized government control increasing, because the governments want to protect themselves as an organization, and every party says the other party's crap. We've seen the increasing polarization in America already, and, you know, fundamentally, come on, you can do better than those two leaders that are currently competing. I'm saying this as a clear... Our system is sclerotic
Starting point is 00:42:59 across this. Like, democracy is the worst of all systems, except for all the others. We can have a better democracy, where it's actually representative and empowers the people, or we will have the end of democracy, where it is a 1984 panopticon, in my opinion, because the momentum will go. Then, of course, you will start using this technology. You're already seeing it being used, but not at scale and not intelligently yet, which is scary. I think we finally have the technology for a direct democracy versus a representative democracy, where I can have my desires directly represented on any specific law. But I think the point you've made before is that speech, if you look back to everybody from Hitler to some of the most persuasive politicians, is a powerful tool, and AI can
Starting point is 00:43:59 become the most persuasive speaker out there. It can take anyone's speech and make it far more persuasive. Like, I think my voice is a bit whiny. I can remove the whine. You know, I can go into a very posh British accent and other things like that, right? You know, must fight them on the beaches and the hills and whatever.
Starting point is 00:44:14 Public speaking makes a big difference. Someone took Hitler's speeches and put them through an AI and translated them into English. Because when we're not in the German context and we listen to them, it sounds like he's shouting, like, what a crazy thing. You hear him in English, in his own voice, and it is very different.
Starting point is 00:44:32 Just like someone took Javier Milei's speech at the United Nations and put him into English again. He sounds a bit shouty, but then he sounds very reasonable when it's in English. And you can take the phonemes of Obama's best speech and a bit of Trumpianism and a bit of Churchill, and you will have full modulation wave control over all of this. People are already using this technology, such that everyone should have a passcode with their loved ones, because people are getting calls from their mother saying, help, I'm in an emergency, just send money right now, and you cannot tell. It pulls at the emotional strings. And if you look at something like US radio, and, you know, one
Starting point is 00:45:12 side of the political divide is taking over. Imagine if you're hearing optimized speech every single day. That will have a huge impact. And then they control the visuals and they control the other things. We're not set up for defenses. If it's optimized speech for you, specifically for you, right? For the kids you have, the age group they have, where you live, your historical background and so forth, an n-of-1 persuasive speech coming at you, the brain is not set up for defenses.
Starting point is 00:45:43 We're not. And we can take this as an example, the YouTube algorithm. Like, YouTube as an organization is not an evil organization. But it's an organization optimized for engagement, which optimized for more extreme content. So there was a dark phase in YouTube where it optimized for ISIS. The ISIS videos spread virally. I don't know. Sometimes viral is good.
Starting point is 00:46:03 Sometimes viral is bad. That one was bad, because it was extreme and they didn't understand why. And if you look at it, two of our largest generative AI companies today are Google and Meta, and their business is advertising. Their business is manipulation, and they are both amoral companies. Because why would you expect a company to have morality? Our governments are also amoral. And again, you can view these things as slow, dumb AIs, so you can see the way they will optimize unless we do something about it.
Starting point is 00:46:35 And they will have full control, like, again, you put on your Vision Pro headset with your spatial audio. That is full sensory control. Not full, but you know what I mean. A level that we've never seen. Full immersion. Full immersion.
Starting point is 00:46:50 And so we have to be aware of this. And there's obviously other tools that can be used. Like, in the wake of the Arab Spring, you know, governments targeted everyone that was on social media. We can do that on a hyper-personalized basis. Like, we need to set some defaults and standards here to protect democracy. But again, why democracy? We're not really trying to protect democracy, you know; again, people have different definitions there. What we're trying to protect
Starting point is 00:47:18 is individual liberty, freedom, and agency. Education should be about enhancing the education of every child; it's not. You know, healthcare is sick care. Our governments should uplift us, but how many people believe our governments do that rather than put us down? Because they couldn't encapsulate and cater to the brilliance of each individual, because they didn't have the tools until now. So that's why I said, which way, modern man? Yes, which way? Infinite agency or massive control? These are the two ways.
Starting point is 00:47:49 Do we control the technology, or do these organizations control the technology that controls us? You know, when we were on the stage at the Abundance Summit, we talked about a future of digital superintelligence, right? And a future in which we've got AI a billion times more capable than a human, which, looking at it just from a ratio of neurons, is the ratio of a hamster to a human. Do you believe that someday we could have a benign superintelligence that is supporting humanity? Yes, and I believe that it should be a collective intelligence that is made up of amplified human intelligence that is amplifying all of us,
Starting point is 00:48:40 copilots that contain our collective knowledge and culture and the best of us, and data sets that are built from helping and augmenting us, versus a collected intelligence, an AGI, that is top-down and designed to effectively control us. Again, if you look at OpenAI's statements on the road to AGI, they say, this technology will end democracy, end capitalism, and maybe kill us all. I don't like that.
Starting point is 00:49:04 I remember seeing that. You texted me. You said, read this. Does this sound the same to you as it does to me? Yeah. So what I'd prefer instead is for this to be distributed. If you have data transformations that are built on enhancing the capability of the nation, that reflect the local cultures, and you push for data transparency on models,
Starting point is 00:49:22 which I believe we must have, especially language models, then you're more likely to have a positive thing. And again, the human collective can achieve anything, from splitting the atom to going into space, if we put our minds to it. But we have lacked in coordination mechanisms. They've not been good enough. So if you create the human colossus, and every single person has an AI that's just looking out for them to enhance their potential, and coordination AIs, that is a far more positive view of the future. And that is the AGI: a general intelligence that's the hive mind general intelligence, not a Borg-style hive mind, but one that's really thinking, again, every child should achieve their potential, versus this embodied concept of an AGI
Starting point is 00:50:05 that's a very Western concept. And you see that as well, like, you know, we look at the Japanese concept of a robot. The robot is your equal and your helper. You look at the Western concept of the robot, it's Terminator and Skynet and all of that. And again, I think this is, again, where cultural norms become very interesting.
Starting point is 00:50:21 And what do we want to build? Do we want to build AI God? Or do we want to build that AI helper that helps us and we help it? Those listening now, I mean, you can see Emad's brilliance and why I'm so enamored with the way you think about this,
Starting point is 00:50:39 because there are very few individuals who are looking at this from an objective function of what's best for humanity, what's best for every nation state out there. Let's talk about your going-forward future. Are you going to build something in the decentralized side of AI, the democratized side of AI? Is there a company there or a fund in your future for that? Yes. So, you know, doing the white paper, finally getting there with a bit of help from AI. I can't wait to help broadcast that white paper. Yeah. But look, I think
Starting point is 00:51:19 the basic thing is this what i want to do is set up an AI champion in every nation with the brightest people of each nation working with the organization of each nation to help guide them through this next period. Because there will be massive job displacement from the graduates going, massive uplifts in productivity from the technology being implemented. And again, that organization can help govern
Starting point is 00:51:41 and create these data sets and these models that are so important. And I believe every nation should have that. But then I also believe that every sector should have a generative AI-first infrastructure company that builds this and helps the healthcare companies, finance companies, and others through that. And to coordinate all of that, you need to have a Web3-type protocol. What is the protocol for intelligence?
Starting point is 00:52:03 So what is a Web3-type protocol? Define that for folks listening. Again, people talk about Web3-type protocol? What is the protocol for intelligence? So what is a Web3-type protocol? Define that for folks listening. Again, people talk about Web3. It's not about the tokens or the meme coins or anything like that. What a Web3 protocol is, is that everyone should have, like AIs, first of all,
Starting point is 00:52:17 aren't going to have bank accounts. They're going to need some way to pay each other or exchange value. And again, Web3 has done a lot of work in that. There needs to be some sort of identity, attribution, and other format because you'll have this mass influx of information. And so again, Web3 concepts are very useful there. There needs to be an identity concept because you'll have real and digital people.
Starting point is 00:52:41 Web3 concepts are very useful there. So data attestation, all these other things, verifiability. So when I look at it, sectorally, my plan is to launch almost a company for every major sector. And we can talk about health and education, and bring the smartest people in the world to solve that challenge of the infrastructure for the future, for every nation. But you need to have some sort of coordinating protocol for all of that that becomes a standard. And that's the substrate for this amplified human collective intelligence. And is that where you want to play and focus your energy next?
Starting point is 00:53:13 Yeah, it's setting up these organizations and bringing the brightest, smartest people that really want to make a difference there, because there's massive network effects in doing this. But again, I just need to be the founder and architect. I don't want to run the day-to-day of any of these things. And then, because the most scarce talent... There's three types of capital, as I view it: there's financial capital, human capital, and political capital. In order to effect change in the world, you actually need all three. But the financial capital actually comes with the people capital and the political capital.
Starting point is 00:53:59 you, all the smartest people, Peter, what's next, right? And you know many of the smartest people in the world. So I want to create organizations that they can come, the chefs and the cooks, the thinkers and the doers, and think what is the future of finance, what's the future of education? And then the national champions that should be owned by the people of each country become the distribution
Starting point is 00:54:18 for the amazing infrastructure that they built. And there's a nice kind of vice versa, but then again, you need the coordination function. So I'm trying to bring together people in each of these and there'll be public calls and things like that to build that infrastructure in the future because as mentioned, AI isn't infrastructure, but it should be.
Starting point is 00:54:37 Maybe it's the rocket ship of the mind, right? I love that analogy, my friend. It is the most important infrastructure that humanity will have going forward across everything it does. And I look forward to helping you build it. ... have to see it. I was like, what is that? You know, they're like, oh, it's nice. It's nice that people care. Right. But I'm generally excited about what's next. Like, you know, again, it was like staring into the abyss and chewing glass every single day. And that's not what I'm best at or where I could have the most impact. But I want it to be at a point whereby, if I can accelerate this over the next period, I don't have to make an impact. I should not have any power on this. Whereas, again, you see everyone else trying to get more
Starting point is 00:55:30 and more power. I want to make sure it's set up properly, but I want to give it all away, because power is obligation. It's dragging. And again, it should not be invested in any one individual. We should not have to rely on anyone being nice or good for this technology. I was talking to Michael Saylor during the Abundance Summit that evening, talking about the fact that because Satoshi, when he set it up, did not retain any power and did not trade on the founding blocks and so forth, that's the reason it's been able to succeed: because there wasn't that centralized power.
Starting point is 00:56:08 And he said Bitcoin had been tried many times before, but because those attempts didn't have that initial anonymity and the dissolution of founding power, that's the reason they didn't succeed. Yeah. I mean, again, I think you need to have it accelerate. And you see this with movements, right? The movement starts, but then it goes once you've got the DNA and the story there, right? You know, you see the prophets, you see the leaders, you see the others. But then it's about setting the framework correctly and reframing the concept.
Starting point is 00:56:41 This technology is not beyond... Look, Stability is a company that started two years ago above a chicken shop in London, right? My first 20 employees, I went to the job center and I said, bring me people that have overcome adversity and I will train them, young graduates. And six of them are still at Stability, because it was a program.
Starting point is 00:57:01 And they're doing things from cybersecurity to running supercomputers. We only had like 16, 17 PhDs. Yet we built state-of-the-art models in every modality. We built mind-reading models, like MindEye. You know, I remember that. Contributed to all these things. Yet you're told it's impossible to compete. We have shown it's not impossible to compete. That's a reframing. The reframing is data versus models. You don't need giant supercomputers for everyone. You just need to have a trusted entity to build it right. And so I hope to kind of convey this, and then figure out this organizational structure that
Starting point is 00:57:37 can proliferate so I can take the holiday. So before we go further, let's talk about one area of your next chapter in life that we both have as a passion, which is the use of generative AI in health. It's an area that you've given a huge amount of thought to, and I think you're excited about. Can you share what your vision is there? Yeah. So I got into AI 13 years ago, gosh. I was a programmer before for 23 years, building large-scale systems as a hedge fund manager and other things. My son was diagnosed with autism,
Starting point is 00:58:09 and then I built an NLP team to analyze all the clinical literature and then looked at biomolecular pathway analysis of neurotransmitters, GABA and glutamate in the brain, to repurpose drugs for him, and he went to mainstream school, which was great, N equals 1. And then I was lead architect on one of the COVID AI projects for the United Nations, going to Stanford and others. And then, because I didn't get the technology, I was like, oh, we've got to build it ourselves. But what is health? You know, again, I think we have this discussion a lot: healthcare is sick care. We don't have all the information that we should have at our fingertips. Health assumes ergodicity:
Starting point is 00:58:44 a thousand tosses of the coin is the same as a coin tossed a thousand times, but we are all individual. And across the world, there are amazing data sets that could be better, because when you write down a clinical trial or your own kind of experiences, you lose so much information. At the same time, you don't have all the information on cancer, autism, multiple sclerosis at your fingertips in a comprehensive, authoritative, and up-to-date way. So when I look at the health operating system, we're going to build an open GPT-4 for cancer.
Starting point is 00:59:18 And it's going to mean that nobody is alone again on that journey, or loses that agency, because they have comprehensive, authoritative, up-to-date knowledge, all the knowledge. But AI models today already outperform human doctors in empathy. So they're not going to be alone on that anymore. Can I just double-click on what you just said? Because it's really important. I've had so many people, because of my role as chairman of Fountain Life, who reach out and say, I just got diagnosed with this cancer, or my brother, or my sister, or my wife. And they're left with this devastating news, and they're left Googling. But a model that's able to have the most cutting-edge information, and then incorporate all their medical data, and give them advice in empathic fashion, how far is that? A couple of years if we focus? Maybe even, like, next year. And that's amazing,
Starting point is 01:00:12 because for all of these topics, again, we will have diagnosis that is superior. We will have research augmentation, because, again, even researchers don't have all that knowledge at their fingertips. And again, this is public infrastructure and a public good, from primary care all the way through to that. What is the open infrastructure of the future, where this technology can come, again, to your own data as well? You have things like MELLODDY and other things around homomorphic encryption and federated learning
Starting point is 01:00:40 that they're trying to figure out, how to preserve privacy. We can run a language model on a smartphone right now that can analyze all your data and then just feed back stuff to a global collective. But people are people. So when I look at healthcare, I see amazing data sets that we can activate by taking the models to the data,
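The federated-learning idea he's gesturing at, train where the data lives and share only model updates, never the raw records, can be sketched in a few lines. This is a minimal toy: the "model" is just a weight vector, and all names are illustrative, not any real framework's API.

```python
# Hedged sketch of federated averaging (FedAvg): each device trains locally
# on its private data, and the server averages only the resulting weights.
# Raw data never leaves the devices.

def local_update(global_weights, private_data, lr=0.1):
    """One local training step: nudge each weight toward this device's data mean."""
    target = sum(private_data) / len(private_data)
    return [w + lr * (target - w) for w in global_weights]

def federated_average(client_weights):
    """Server step: element-wise average of the clients' weight vectors."""
    n = len(client_weights)
    dim = len(client_weights[0])
    return [sum(cw[i] for cw in client_weights) / n for i in range(dim)]

global_model = [0.0, 0.0]
clients = [[1.0, 3.0], [2.0, 4.0], [3.0, 5.0]]  # private per-device data
for _ in range(20):  # communication rounds
    updates = [local_update(global_model, data) for data in clients]
    global_model = federated_average(updates)
```

After enough rounds the shared model drifts toward the consensus of all devices (here, the mean of the client means) even though the server only ever sees weights, which is the privacy property being described.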
Starting point is 01:00:57 an infrastructure that we can build, like we had CheXagent with Stanford, the top X-ray radiology model, to build good standard things across the entire gamut of healthcare. So we can actually get into healthcare versus sick care. So we can make it so that everyone is empowered to make the best decisions, either as experts or individuals, and make it so nobody is alone again, as well as increasing the data quality that will then feed better models that will then save lives, save suffering, and, again, increase our potential.
Starting point is 01:01:28 Like, you've got a longevity book behind you, right? Why don't you have all the latest knowledge of longevity at your fingertips at a GPT-4 level right now? That will happen over the next year. We will launch Stable Health, or whatever we decide to call it, and there will be the smartest people in each of these areas working on that. So again, you're never alone. Like, it doesn't matter if you're worth a hundred billion dollars and your kid has autism, ASD. There's no cure, there's no treatment, there's nothing. It doesn't matter how rich you are. Yet with just a little bit of effort right now, we can build it as an open infrastructure for the five percent of people in the world that know someone with autism, the fifty percent of people in the world that receive a cancer diagnosis, themselves or someone they love, and they feel that loss of agency. So we're going to
Starting point is 01:02:14 return agency to humanity that way. And again, it needs to be an open infrastructure that can then access private data sets and compensate them appropriately, so everyone is incentivized. We need that fast. Yeah, and that's a beautiful vision. It is, again, infrastructure. And one of the things that's so beautiful about it is, guess what, all eight billion people, we're all human, we're all running the same software, and the breakthroughs and the knowledge accumulated in Kazakhstan are going to be as useful in Kansas. Yeah. But this is the thing, the operating system. This is the biggest upgrade to the human operating system we can imagine, because we're going from analog to digital.
Starting point is 01:02:58 Text is black and white, whereas these models understand context. You know, Daniel Kahneman just passed away, amazing kind of guy, but he did have this concept of System 1 and System 2 thinking. And so we had System 1, which is these big data things that can only extrapolate, but now we have these models that understand context. And so we have the missing parts of the brain, and that will allow us to extrapolate, allow us to have more rainbows, you know, have the context of each individual, push intelligence to the edge. And that's why, again, there is this imperative to do this now, because there's a window on the freedom, agency, democracy side.
Starting point is 01:03:34 But the other imperative is no one should have to suffer as they're suffering now. Amazing. And how much does it actually need? It doesn't need that much, which is the really amazing part. The total amount spent on generative AI, I think I said at the conference, is less than the total amount spent on the Los Angeles-to-San Francisco railway, which hasn't even started yet. And in building Stable Health, again, if that's what it's called, I mean, the amount of capital required to build that is de minimis compared to what's spent on a single human trial of any drug. Yeah.
Starting point is 01:04:09 It is. But then, you know, you build it and you get to that 80-20 incredibly quickly. That will change hundreds of millions of lives, and that will attract the smartest people in each of these areas thinking about what is the open infrastructure of multiple sclerosis, of longevity, of cancer, and more. But then you can amp that up, because the value is so, so huge. And I hope to build a trusted organization as part of this whole human operating system upgrade. You know, that's what I want to build. I want to build Human OS, or at least catalyze it. Again, I don't want to run or control or own anything. I want to figure
Starting point is 01:04:42 out how to give back that control, because who should decide what cancer knowledge goes in there? Who should decide what education, et cetera? Let's talk about the second half of your vision, which is how we originally met, when you were one of the winners of the Global Learning XPRIZE that Elon and Tony Robbins had co-funded. Your vision around education, speak to us about that. Yeah, you know, so we're deploying it, kind of the windows are kind of separate, but every child... My entire operating system is like, if you think about things in terms of the rights of children today, they have no agency. And so we must respect their rights. Climate, everything becomes a lot simpler. Now that we have language models on a laptop,
Starting point is 01:05:30 like I said, you can go to lmstudio.ai, download Stable LM, and it will run on your MacBook faster than you can read. It's crazy. We can have a GPT-4-level AI, from us or someone else, on a smartphone or a tablet by next year. One Laptop per Child was too early, you know. Now we have this transformative technology. You have an AI that teaches the child and learns from the child. Are you visual, auditory, dyslexic? That's the best data in the world for a national model, but also to teach these models how to be optimistic, how to be encouraging. This really is the Young Lady's Illustrated Primer. This really is Neal Stephenson's vision in that regard. Yeah, but Nell shouldn't have had to find the primer.
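For readers who want to try the local-model setup Emad describes: LM Studio serves downloaded models through an OpenAI-compatible HTTP server, by default on localhost port 1234. A hedged sketch, assuming such a server is already running locally; the model name "stable-lm" is a placeholder for whatever model you actually loaded:

```python
# Sketch of querying a locally hosted model via LM Studio's
# OpenAI-compatible endpoint. Nothing here leaves your machine.
import json
import urllib.request

def build_request(prompt, model="stable-lm", temperature=0.7):
    # Payload in the OpenAI chat-completions format that LM Studio accepts
    return {
        "model": model,  # placeholder name; match your loaded model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_local_model(prompt, base_url="http://localhost:1234/v1"):
    # POST the prompt to the local server and return the reply text
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Without a running server we can at least inspect the request we'd send
payload = build_request("Explain federated learning in one sentence.")
```

Because the endpoint mimics OpenAI's API shape, most existing client libraries can be pointed at the local server just by changing the base URL.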
Starting point is 01:06:18 She should have had it from day one as a human right. As a human right, yes. Our schools' education system is childcare mixed with the social status game mixed with a Petri dish. You know? They teach our kids not to have agency. Yes. Follow these rules. Whereas they should be telling the kids.
Starting point is 01:06:36 Yeah. Yeah, they should be teaching. It's a relic of the industrial age, where everyone had to be counted. And you can't manage what you can't measure, so you manage the creativity and belief out of people. Everyone in the world can do anything. Why? Because even if you don't have that talent, you can convince someone else who does have that talent but doesn't believe it, so they can't do it. So what happens if we have an entire nation of children that have this helper that brings the right information at the right time and tells them they can, that always believes in them and supports them? With the entire world at their fingertips, what can't you do? You know, then they
Starting point is 01:07:12 have all of the cancer knowledge at their fingertips and all of the engineering knowledge at their fingertips, and it's a constantly learning, adaptive, and improving system. Again, right now almost the entire AGI and AI debate is about these machine gods trained on giant supercomputers that bestow their beneficence down, or may kill us, or whatever. What about that human operating system upgrade that is a
Starting point is 01:07:35 decentralized intelligence, where that kid in Mongolia or Malawi or wherever can make a real difference to humanity? Some of the contributors to our open code bases for our models are 15 years old. They just taught themselves and just happen to be wizards. You don't know in this new age, right? And again, they should contribute to the whole, because once something goes into this model or this system (and again, it needs the verification and other things, which can be dynamic), it can proliferate to everyone using that system. Do you think that once this capability is built, it will run into blocks in different nations?
Starting point is 01:08:12 Or do you imagine that this will become, again, a human right? Listen, there's no greater gift and no greater asset you can give to a nation's populace than intelligence and education. But I'm not sure every national leader wants to see that. And that's why I think, again, there is a gap here. There is a year, maybe, where you can go to any national leader and say, I will bring this technology to your people, and I will empower the smartest people. I want it to be owned by the people. And what option do they have?
Starting point is 01:08:47 This is positive for them. What happens is that a lot of the corruption in the world is because of local maxima. Actually, it's weird, because unpredictable corruption is the worst. Predictable corruption is a bit like tax. There's a good book by Fusso and John Moore at Harvard about this. And then you have taxation kicking in at 14%.
Starting point is 01:09:04 If you can show them something bigger, and this is clearly big, they will embrace this technology and set new norms. And if you create the same across all these countries, with talented individuals in each of those groups, and talented individuals in each of those sectors, with a shared mission, even though they're separate organizations, that's how you set amazing standards. That's how you build a network effect. And if you tie them all together with an intelligent protocol (and again, I'm not talking about tokens or speculation or ramps or anything like that, but taking the best thinking around coordination), that can work. That can break this open, you know. But it's not going to be everywhere,
Starting point is 01:09:42 and also, when you look at the current debate, the current debate is, for example, we can't let China have this technology. And you're like, what about the kids in China? Well, you know, it's dangerous. They can't have AGI. So under what circumstance would China ever have this technology? Never. You know, Pakistan, when should they have the technology?
Starting point is 01:10:01 Never. That's really what they're kind of saying. It's also self-defeating, because China has 100 million people they can use to create data sets and two exaflop supercomputers. Let's put that to the side. Again, it's a very Western-oriented debate. Whereas actually, if you go to these countries and you talk to the leaders and the family offices that have power, and the people, they will leapfrog in the Global South to intelligence augmentation like they leapfrogged to mobile. They want to embrace this technology.
Starting point is 01:10:29 And again, you can set norms now. Versus what's going to happen otherwise: they will get a centralized solution, and they'll adopt that instead, if you don't act right now, for hundreds of millions, billions of people. That's why I think, again, it's a crossroads. Is there anybody else working towards this vision that you know of? No. In the large AI labs? No.
Starting point is 01:10:52 Certainly no one with credibility. And again, that's why I had to build these models, and I had to kind of do this. Everyone's working on tiny parts of this, but they're expecting emergence: build it and somehow it will spread. And again, this is why I find the Web3 community fascinating. There are good people in there, and I hope to be able to unite them, just like I hope to unite the people in health and others. Again, Peter, you've seen people working on tiny parts of this, but this isn't a Manhattan Project where we're facing an enemy, unless the enemy is ourselves, you know. But this does require
Starting point is 01:11:22 this big, global, coordinated push, and that's why I've tried to design this system that I believe will work, because it's all about the talent, and it is multiplicative. Is the race against really powerful centralized AI systems that achieve some version of AGI? Is that what we're racing against? Yeah, again, we're racing against ourselves. Humans can scale through stories. You have organizations, you know: come and join Abundance, come and go to Oxford, come and do this. But then when we scaled through text, text was a lossy information format, and there's this poem by Ginsberg, Howl, about this Carthaginian demon of disorder, Moloch, that comes in. Moloch comes in through the data loss.
Starting point is 01:12:12 Our organizations are slow, dumb AIs. But now what's happening is they're configuring to achieve their thing of getting more and more power. Again, corporations are technically persons under law, but they're not fully formed people. They eat our hopes and dreams. So I believe the competition here is against those organizations consolidating too much power and creating norms that are almost impossible to break. So we're almost competing against ourselves. And again, the question is this: do you believe in amplified human intelligence, or do you believe in artificial general intelligence?
Starting point is 01:12:43 Do you believe in collective intelligence, or do you believe in collected intelligence? Who decides? Is this infrastructure, or is this a product? Like, so it's not like a Manhattan Project against, you know, the Soviets or anything like that, but this does require us all to come together, or at least the smartest people in each of these areas, from coordination to governance systems, to healthcare, to education, with a blank slate of how do we upgrade the human operating system. The time is now. It's our last chance to do it. And I applaud you for it, because I think you're right. You were there when Elon beamed in on X video over Starlink from his airplane, which was a fun moment. And we were talking about the rate of growth and his statement, because Ray Kurzweil was there talking about his prediction, still, of AGI by 2029, and Elon saying we'll have AGI, whatever that means, by next year, and the intelligence of the entire human race by 2029. So I am curious, just to close out, what you think about those timelines and that potential for a superintelligent AI system that is centralized.
Starting point is 01:14:10 Because the people who are building that level of power are building centralized systems. They're building centralized, single systems that, again, take our collective intelligence, like all of YouTube in the case of OpenAI, clearly, and other things, and they package it up and sell it back to us. But they don't care. You know, these organizations are trying to build a system
Starting point is 01:14:33 that will take away our freedom, liberty, and potentially kill us all. Let's be kind of fair about that. Let's be direct. And sell it to us on an incremental basis. The selling to us is a complete canard. They don't care about the revenue
Starting point is 01:14:45 of this. Again, let's kind of call a spade a spade. They're telling you that they're building something that could kill you, and something that could remove all our freedom and liberty, and they're saying it's a good thing, you should back them because it's cool. It's not. It's actually shameful
Starting point is 01:15:01 if you think about it. And we should not stand for it anymore. And again, this is another reason I want to step aside and say, yeah, because you can't say things like that. I got cancelled in Silicon Valley so many times. But realistically, it's ridiculous and it should not be stood for. But they're going to do it anyway. Because they have the political power. People are scared of them. So there has to be an alternative.
Starting point is 01:15:21 And the alternative has to be distributed intelligence. When I resigned, I said you can't beat centralized intelligence with centralized intelligence. You're not going to beat it with a Stability. Stability is a great organization. It's going to do well. But the only way that you can beat it, to create the standard that represents humanity, is decentralized intelligence. It's collective intelligence, and the data sets and norms from that will be ones
Starting point is 01:15:46 that help children, that help people suffering, that reflect our moral understanding and the best of us, and gather the best of us to do it. Because if you work in healthcare, if you work in education, if you work in finance, if you work in any of these things, there's no organization for you to come and join or partner with on this. There's no kind of centralized mission. I have looked. I've wanted to help other people. I didn't want to do this myself. And I don't want it to be about me, very, very quickly, which is why I'm kind of getting it out there now. I hope that I can catalyze something that then people will take forward. And the time is now for that, because AGI, when it comes, if it comes, again, there's various definitions of this.
Starting point is 01:16:25 Why on earth do you need any knowledge workers? Anything that can be done via a laptop doesn't need humans. And so you have concepts of UBI here. You have concepts like, when AGI comes, you don't need money. Money is a common story, is a common good. We head towards a post-capitalist society. The example I think you said was Star Trek versus Mad Max.
Starting point is 01:16:49 I'm like Star Trek versus Star Wars, I think is a better one. And so you've got the Sith Lords and all of that. But again, if you kind of look at this, I don't think we need money. It's cross-contextual bartering
Starting point is 01:17:04 with our AI systems representing us. Or it's you don't need money because you're told what to do. Again, our governments: the definition of a government is the entity with a monopoly on political violence. And an AGI can overtake any government, and can then control the people. Because again, listen to it whispering, look at the Coney Hume thing. So we have this opportunity to set norms right now. The way that the big labs are going to AGI is likely to kill us all. Elon and I signed that six-month pause letter, because even though people are like, Emad, you're an accelerationist, you put all this open-source AI out, you have to think about the other
Starting point is 01:17:39 side and who's involved in that discussion. And again, if we build an AGI as a centralized thing, is Windows or Linux safer as infrastructure? Our entire internet infrastructure is built on open. Open can be challenged. Open can be augmented. A monolith is likely to be crazy.
Starting point is 01:18:00 And the way that I put this is, you and I both know so many geniuses. You know, a side effect of genius is insanity, honestly. Geniuses are not mentally stable. Why would you expect an AGI to be so? And you're putting all your eggs in one basket, versus creating a complex, hierarchical system that is a hive mind, that's an intelligence that represents us all. We should be working towards building that, because it's safer, it's better, it achieves all the benefits that people are talking about, and it's possible today. Do you think Elon shares in this vision of decentralized AI? Do you think he would play in that area? And do you think any of the national leaders that you've been speaking to would support that kind of a vision as well? I can't speak for Elon.
Starting point is 01:18:47 I'll speak to him and see what he thinks, and then I'll get back to you. He always says what he thinks. But he's immensely concerned. He was one of the leaders in this area saying, originally, why Google? Now why Microsoft, OpenAI? It can't be centralized, but it's difficult.
Starting point is 01:19:01 This is a difficult question. How many people have a feasible solution? Or have you even thought about this properly? You and I both know just not many, and that's very sad. It should be everyone thinking about this. On the leader side, all the leaders I've met are super happy. Because they, again, leaders want power, they want control and all of this, but generally they want to see abundance. they're not happy with where their countries are and embracing this technology they know that they can leap ahead and you know they will still have a say in all of this it's not like it's kicking them out or removing them there are still various kind of mechanisms there and ultimately improving the health education and capability of your people
Starting point is 01:19:42 is not a bad thing. I mean, like, obviously I haven't talked to the completely oppressive leaders, you know. Maybe that'll be an interesting thing. But honestly, I don't want to even be talking to leaders. I want to create, again, a system where the people of the country, coming together with the franchise system, can then build this technology for the good of their people, in the open, and not be reliant on anyone politically or any other type of thing like that. So you don't need giant supercomputers for where we're going. We need coordination, and a few giant supercomputers.
Starting point is 01:20:10 Yeah. What's your timeline for putting out this white paper? I'm working as hard as I can, you know, putting it together. I've held you to this a number of times. I've said, get the vision out there. It's getting there. We're about to go off this call to a four-hour session to dictate all the various bits and pieces. And again, it was impossible when I was CEO of Stability.
Starting point is 01:20:30 There was always another fire. There was always another thing. I didn't have time to think. And I hope people can take that white paper and make it better. I don't have all the answers. I'm just trying to catalyze something, man. I think after I heard you stepped down, I wrote you a text saying congratulations. Yeah, exactly. Not commiserations.
Starting point is 01:20:48 Time to feel unleashed. Emad, thank you, my friend. Thank you for sharing where you are, what led up to this, where you're going next, and really taking the gloves off in discussing the idea of centralized, closed AI systems and their dangers, and the importance of the vision that you portrayed. Because I'm fully supportive, and I fully believe that what you've laid out is probably one of the most sane visions of AI in the future that I've heard. I hope other people agree, you know, and they can take it forward. They're the real heroes.
Starting point is 01:21:32 Thank you. Thank you, pal.
