Lex Fridman Podcast - #380 – Neil Gershenfeld: Self-Replicating Robots and the Future of Fabrication

Episode Date: May 28, 2023

Neil Gershenfeld is the director of the MIT Center for Bits and Atoms. Please support this podcast by checking out our sponsors: - LMNT: https://drinkLMNT.com/lex to get free sample pack - NetSuite: http://netsuite.com/lex to get free product tour - BetterHelp: https://betterhelp.com/lex to get 10% off EPISODE LINKS: Neil's Website: http://ng.cba.mit.edu/ MIT Center for Bits and Atoms: https://cba.mit.edu/ Fab Foundation: https://fabfoundation.org/ Fab Lab community: https://fablabs.io/ Fab Academy: https://fabacademy.org/ Fab City: https://fab.city/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (05:37) - What Turing got wrong (11:02) - MIT Center for Bits and Atoms (24:08) - Digital logic (30:44) - Self-assembling robots (41:12) - Digital fabrication (52:07) - Self-reproducing machine (59:53) - Trash and fabrication (1:04:49) - Lab-made bioweapons (1:09:04) - Genome (1:20:56) - Quantum computing (1:25:28) - Microfluidic bubble computation (1:30:49) - Maxwell's demon (1:39:35) - Consciousness (1:46:35) - Cellular automata (1:51:07) - Universe is a computer (1:55:53) - Advice for young people (2:05:10) - Meaning of life

Transcript
Starting point is 00:00:00 The following is a conversation with Neil Gershenfeld, the director of MIT's Center for Bits and Atoms, an amazing laboratory that is breaking down boundaries between the digital and physical worlds, fabricating objects and machines at all scales of reality, including robots and automata that can build copies of themselves and self-assemble into complex structures. His work inspires millions across the world, as part of the maker movement, to build cool stuff, to create, the very act that makes life so beautiful and fun. And now a quick few second mention of each sponsor. Check them out in the description, it's the best way to support this podcast. We got Element for zero-sugar hydration, NetSuite for business management software, and BetterHelp for mental health. Choose
Starting point is 00:00:55 wise and my friends. Also, if you want to work with our amazing team or always hiring good Alex Friedman.com slash hiring. And And now onto the full ad reads, as always, no ads in the middle. I try to make this interesting, but if you must skip them, please still check out our sponsors I enjoy their stuff, maybe you'll will too. This episode is brought to you by Element, spelled L-M-N-T. It's an electrolyte drink mix that I'm currently drinking, that I drink throughout the day. I drink a huge amount of it. My favorite flavor is the watermelon salt flavor.
Starting point is 00:01:29 It doesn't mean it'll be your favorite flavor, but I'm pretty sure it's gonna be your favorite flavor. For fasting, for low carb diets, for all kinds of diets really, but certainly for low carb keto carnivore, you have to get the electrolytes right they call it the keto flu if you don't get the electrolytes Right, you know for doing all kinds of crazy exercise that I do all of that you have to get the sodium the potassium magnesium And element does a great job of balancing all of that makes it delicious makes it really easy to make sure that you're getting
Starting point is 00:02:03 just the water intake right because it basically makes water taste great and balances out the electrolytes, the hydration, everything for you. I bring it when I travel I bring it anywhere I go I have to have element as part of my life. It makes me happy. It makes me feel like I got myself together. Anyway, get a sample pack for free with any purchase. Try it at drinkelement.com slash Lex. This shows also brought to you by NetSuite. An all-in-one cloud business management system. You can manage financials, HR, inventory, e-commerce, if you do that kind of thing, and many business-related details.
Starting point is 00:02:50 Running a company is really complicated. There's a lot of people involved, a lot of tasks involved, a lot of roles involved. It's not just engineering, it's not just design, idea, strategy, vision, all those kinds of things. It's all of the glue, the thing that makes the thing a cohesive, singular system that works efficiently in flawlessly. And so you have to hide the right people. You have to use the best tools for the job.
Starting point is 00:03:19 And that suite does that in cloud to manage all kinds of messy business details and make it super easy. You can start now with no payment or interest for six months, go to netsuite.com slash Lex to access their one of a kind financing program that's net suite.com slash Lex. This episode is also brought to you by BetterHelp, spelled H-E-L-P-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H- Take those memories, converting them into words and letting those words leave your mouth, sounds, somehow transforms your own understanding of what those memories, those thoughts mean. It allows you to move past them, to integrate them into a healthier, a deeper understanding of the world around you, your own emotions of your own way of being.
Starting point is 00:04:26 It's kind of amazing that talk therapy, that talking. Talking with a person that knows what they're doing is a powerful way to do that kind of growth. It's funny and it's amazing when simple things like therapy can have a big profound impact results on your life. Anyway, one of the big things about better health is just how easy it is to get started. I think that's a barrier for a lot of people. So it's really important to say this, easy to discrete. It is affordable and available everywhere worldwide. Check them out at betterhelp.com-slash-lex
Starting point is 00:05:06 and save. At your first month, let's BetterHelp.com-slash-lex. This is Alex Friedman podcast. Gershonfeld. You have spent your life working at the boundary between bits and atoms, so the digital and the physical, what have you learned about engineering and about nature, reality from working at this divide, trying to bridge this divide? I learned why, when I'm in entering made fundamental mistakes. I learned the secret of life. Yeah. I learned how to solve many of the world's most important problems, which all sound presumptuous, but all of those are things I learned at that boundary. Okay, so touring and vine-nome, and I'll start there.
Starting point is 00:06:16 Some of the most impactful, important humans who have ever lived in computing. Why were they wrong? So, I worked with Andy Gleason, who was Turing's counterpart. So just for background, if anybody doesn't know, Turing is credited with the modern architecture of computing. Among many other things, Andy Gleason was his US counterpart. And you might not have heard of Andy Gleason, but you might have heard of the Hilbert problems. And Andy Gleason solved the fifth one. So he was a really notable mathematician. During the war, he was Torrence counterpart. Then Van Neumann is credited with the modern architecture of computing. And one of his students was Marvin
Starting point is 00:06:55 Minsky. So I could ask Marvin what John was thinking. And I could ask Andy what Alan was thinking. And what came out from that, what I came to appreciate as background, I never understood the difference between computer science and physical science. But Turing's machine that's the foundation of modern computing has a simple physics mistake, which is the head is distinct from the tape. So in the Turing machine, there's a head that programmatically moves and reads and writes a tape. The head is distinct from the tape, which means persistence of information is separate from interaction with information. Then, Van Neumann wrote deeply and beautifully about many things, but not computing. He wrote a horrible memo called the first draft of a report on the EdVAC, which is how you program a very early
Starting point is 00:07:48 computer. In it, he essentially roughly took Turing's architecture and built it into a machine. So the legacy of that is the computer, somebody's using to watch this, is spending much of its effort moving information from storage transistors to processing transistors even though they have the same computational complexity.
Starting point is 00:08:12 So, in computer science, when you learn about computing, there's a ridiculous taxonomy of about a hundred different models of computation. But they're all fictions. In physics, a patch of space, occupy space, it stores state, it takes time to transit, and you can interact. That is the only model of computation that's physical. Everything else is a fiction. So I really came to appreciate that a few years back when I did a keynote for the annual meeting of the supercomputer industry, and then went into the halls and spent time with the supercomputer builders and came to appreciate, oh, see, if you're familiar with the movie, the Metropolis, people would frolic upstairs in the gardens and down in the basement, people would move
Starting point is 00:09:01 levers. And that's how computing exists today, that we pretend software is not physical, it's separate from hardware. And the whole canon of computer science is based on this fiction that bits aren't constrained by atoms. But all sorts of scaling issues and computing come from that boundary,
Starting point is 00:09:21 but all sorts of opportunities come from that boundary. And so you can trace it all the way back to Turing's machine making this mistake between the head and the tape. Vannoyman, he never called it Vannoyman's architecture. He wrote about it in this dreadful memo, and then he wrote beautifully about other things we'll talk about. Now, to end along answer, Turing and Vannoymon both knew this. So all of the canon of computer scientists credits them for what was never meant to be a computer architecture. Both Turing and Vannoymon ended their life studying exactly how software becomes hardware. So Vannoymon studied self-reproducing automata. How a machine communicates its own construction.
Starting point is 00:10:06 A touring studied morphogenesis, how genes give rise to form. They ended their life studying the embodiment of computation, something that's been forgotten by the canon of computing, but developed sort of off to the sides by a really interesting lineage. So, there's no distinction between the head and the tape, between the computer and the tape, between the computer and the computation, it is all computation. Right. So I never understood the difference between computer science and physical science and working at that boundary helped lead to things
Starting point is 00:10:37 like my lab was part of doing with a number of interesting collaborators, the first, faster than classical quantum computations. We were part of a collaboration creating the minimal synthetic organism where you design life in a computer. Those both involve domains where you just can't separate hardware from software. The embodiment of computation is embodied in these really profound ways. So the first quantum computations, synthetic life, so in the space of biology, the space of physics at the lowest level and the space of
Starting point is 00:11:11 biology at the lowest level. So let's talk about CBA center of bits and atoms. What's the origin story of this MIT legendary MIT center that you're a part of creating? In high school, I really wanted to go to vocational school, where you learned to weld and fix cars and build houses. And I was told, no, you're smart. You have to sit in a room. And nobody could explain to me why I couldn't go to vocational school.
Starting point is 00:11:41 Then worked at Bell Labs. It's a wonderful place before deregulation, legendary place. I would get union grievances because I would go into the workshop and try to make something. They would say, no, you're smart, you have to tell somebody what to do. It wasn't until MIT and I'll explain how CBA started, but I could create CBA that I came to understand.
Starting point is 00:12:03 This is a mistake that dates back to the Renaissance. So in the Renaissance, the liberal arts emerged. And liberal doesn't mean politically liberal. This was the path to liberation, birth of humanism. And so the liberal arts were the trivia, quadrival, roughly language, natural science. And at that moment, what emerged was this dreadful concept of the illiberal arts. So anything that wasn't the liberal arts was for commercial gain and was just making stuff and
Starting point is 00:12:33 wasn't valid for serious study. And so that's why we're left with learning to weld wasn't a subject for serious study. But the means of expression have changed since the Renaissance. So micro machining or embedded coding is every bit expressive as painting a painting or writing a sonnet. So never understanding the difference between computer science and physical science. The path that led me to create CBA with colleagues was, I was, what's called the Junior Fellow
Starting point is 00:13:09 at Harvard. I was visiting MIT through Marvin because I was interested in the physics of musical instruments. I, this will be another slight digression. And Cornell, I would study physics. And then I would cross the street and go to the music department where I played the bassoon, and I would trim reads and play the reads. And they'd be beautiful, but then they'd get soggy.
Starting point is 00:13:30 And then I discovered in the basement of the music department at Cornell was David Borden, who you might not have heard of, but is legend during electronic music because he was really the first electronic musician. So Bob Mogue, who invented Mogue synthesizers, was a physics student at Cornell, like me crossing the street. And eventually he was kicked out and invented electronic music.
Starting point is 00:13:51 David Borden was the first musician who created electronic music. So he's legendary for people like Phil Glass and Steve Reich. And so that got me thinking about, I would behave as a scientist in the music department, but not in the music department, but not in the physics department, but not in the music department. Got me thinking about what's the computational capacity of a musical instrument. And through Marvin, he introduced me to Todd Backover at the Media Lab, who is just about to start a project with YoYoMah, that led to a collaboration to instrument a cello, to
Starting point is 00:14:23 extract YoYo's data, and bring it out into computational environments. What is the computational capacity of music instrument? Does we continue on this tangent and will we share return to CBA? Yeah, so one part of that is to understand the computing. And if you look at the finest timescale and length scale you need to model the physics,
Starting point is 00:14:46 it's not heroic. A good GPU can do tariff laps today. That used to be a national class supercomputer now, it's just a GPU. And that's about if you take the time scales and length scales relevant for the physics, that's about the scale of the physics computing. For Yo-Yo was really driving it, was he's completely unsentimental about the strad. It's not that it makes some magical wiggles in the sound wave. It's performance as a controller, how he can manipulate it as an interface device.
Starting point is 00:15:18 Interest between what and what exactly? Him and sound. Okay, so what it led to was, I had started by thinking about Ops for second, but the Yoyo's question was really resolution and bandwidth. It's how fast can you measure what he does and the bandwidth and the resolution of detecting his controls and then mapping them into sounds. And what we found, what he found was if you instrument everything he does and connect it to almost anything, it sounds like yo-yo, that the magic is in the control, not in ineffable details
Starting point is 00:16:01 in how the wood wiggles. And so with YoYo and Todd that led to a piece, and towards the end I asked YoYo, what it would take for him to get rid of his strad and use our stuff. And his answer was just logistics. It was at that time our stuff was like a rack of electronics and lots of cables and some grad students to make it work. Once the technology becomes as invisible as the strad,
Starting point is 00:16:24 then sure, absolutely, he would take it. And by the way, as a footnote on the footnote, an accident in the sensing of YoYo's cell load led to a hundred million dollar a year auto safety business to control airbags and cars. All that work. I had to instrument the bow without interfering with it. So I set up local electromagnetic fields where I would detect how those fields interact with the bow he's playing, but we had a problem
Starting point is 00:16:54 that his hand, whenever his hand got near these sensing fields, I would start sensing his hand rather than the materials on the bow. And I didn't quite understand what was going on with those, that interference. So my very first grad student ever, Josh Smith, did a thesis on tomography with electric fields, how to see in 3D with electric fields. Then through Todd and at that point, research scientists, my lab, Joe Paradiso, led to a collaboration with Penn and Teller, who, where we did a magic trick in Las Vegas to contact Houdini, and sort of these fields are sort of like, you know, contacting spirits.
Starting point is 00:17:36 So we did a magic trick in Las Vegas, and then the crazy thing that happened after that was Phil Ritmula came running into my lab. He worked with, um, this became with Honda NNEC. Airbags were killing infants in rear-facing child seats. Um, cars needed to distinguish a front-facing adult where you'd save the life, versus a bag of groceries where you don't need to fire their airbag, versus the rear-facing infant where you would kill it. And so the seat need to in effect see in 3D to understand the occupants. And so we took the pen and teller magic trick derived from Josh's thesis from Yo-Yo's cello to an auto show.
Starting point is 00:18:18 And all the car companies said, great, when can we buy it. And so that became Ellis's. And it was a hundred million million of your business making sensors. There wasn't a lot of publicity because it was in the car so the car didn't kill you. So they didn't sort of advertise, we have nice sensors so the car doesn't kill you, but it became a leading auto safety sensor.
Starting point is 00:18:37 And that started from the cello and the question of the computational capacity and musical instrument. Right. So now to get back to MIT, I was spending a lot of outside time at IBM research that had gods of the foundations of computing. There's amazing people there. And I'd always expected to go to IBM to take over a lab.
Starting point is 00:19:00 But at the last minute, pivoted and came to MIT to take a position in the media lab and start what became the predecessor to CBA. Media lab is well known for Nicholas Negroponte. What's less well known is the role of Jerry Weasner. So Jerry was MIT's president before that Kennedy science advisor, a grand old man of science. At the end of his life, he was frustrated by how knowledge was segregated. And so he wanted to create a department of none of the above, a department for work that didn't fit in departments. And the media lab, in a sense, was a cover story for him to hide a
Starting point is 00:19:45 department. It as MIT's president towards the end of his tenure, if he said I'm going to make a department for things that don't fit in departments, the departments would have screamed. But everybody was sort of paying attention to Nicholas creating the media lab and Jerry kind of hid in it in a department called media arts and sciences. It's really the department of none of the above. And Jerry explaining that and Nicholas then confirming it is really why I pivoted and went to MIT. Because my students who help create quantum computing
Starting point is 00:20:15 or synthetic life get degrees from media arts and sciences, this department of none of the above. So that led to coming to MIT with Todd and Joe Paradiso and Mike Holley, we started a consortium called Things that Think, and this was around the birth of Internet of Things and RFID. But then we started doing things like work we can discuss that became the beginnings of quantum computing and cryptography and materials and logic and microfluidics.
Starting point is 00:20:48 And those needed a much more significant infrastructure and were much longer research arcs. So, with a bigger team of about 20 people, we wrote a proposal to the NSF to assemble one of every tool to make anything of any size, was roughly the proposal. One of any tool to make anything of any size was roughly the proposal. One of any tools to make anything of any size. Yeah, so they're usually nanometers, micrometers, millimeters, meters are segregated, input and output are segregated. We wanted to look just very literally how digital becomes physical and physical becomes digital. very literally how digital becomes physical and physical becomes digital. And fortunately we got NSF on a good day and they funded this facility of one of
Starting point is 00:21:33 almost every tool to make anything. And so with a group of core colleagues that included Joe Jacobson, Ike Trang, Scott Manales, we launched CBA. And so you're talking about nanoscale, micro scale, nanostructures, microstructures, macro structures, electron microscopes, and focus time being probes for nanostructures, laser, micro machining, and x-ray, micro tomography for microstructures,
Starting point is 00:22:02 multi-axis machining, and 3D printing for microstructures, just some examples. What are we talking about in terms of scale? How can we build tiny things and big things all in one place? That's awesome. A well-equipped research lab has the sort of tools we're talking about, but they're segregated
Starting point is 00:22:19 in different places. They're typically also run by technicians where you then have an account and a project and you charge. All of these tools are essentially when you don't know what you're doing, not when you do know what you're doing. In that, they're when you need to work across-link scales, where we don't, once projects are running in this facility, we don't charge for time, you don't make a formal proposal to schedule, and the users really run the tools, and it's for work that's kind of in-co-8
Starting point is 00:22:51 that needs to span these disciplines and link scales. And so, work in the project today, work in CBA today ranges from developing zeptageual electronics for the lowest power computing to micro machining diamond to take 10 million RPM bearings for molecular spectroscopy studies up to exploring robots to build 100 meter structures in space. Okay, can we, the three things you just mentioned, let's start with the biggest. What are some of the biggest stuff you attempted to explore how to build in a lab?
Starting point is 00:23:30 Sure. So, viewed from one direction, what we're talking about is a crazy random seeming of almost unrelated projects. But if you rotate 90 degrees, it's really just a core thought over and over again, just very literally how bits and atoms relate, how digital, and it's just going from digital to physical in many different domains. But it's really just the same idea over and over again. So to understand the biggest things, let me go back to Bring in now Shannon as well as of Vanoiman Claude Shannon. Yeah, so what is digital?
Starting point is 00:24:12 The casual obvious answer is digital in 1 and 0 but that's wrong. There's a much deeper answer which is Claude Shannon at MIT wrote the best master thesis ever. In his master's thesis, he invented our modern notion of digital logic, where it came from was Vanever Bush, was a grand old man at MIT. He created the post-war research establishment that led to the National Science Foundation,
Starting point is 00:24:43 and he made an important mistake, which we can talk about. But he also made the differential analyzer, which was the last grade analog computer. So it was a room full of gears and pulleys, and the longer it ran, the worse the answer was. And Shannon worked on it as a student, and he got so annoyed in his master's thesis
Starting point is 00:25:03 he invented digital logic. But he then went on to Bell Labs and what he did there was communication was beginning to expand. There was more demand for phone lines. And so there's a question about how many phone lines you could, phone messages you could send down a wire. And you could try to just make it better and better. He asked a question nobody had asked, which is rather than make it better and better,
Starting point is 00:25:30 what's the limit to how good it can be. And he proved a couple things, but one of the main things he proved was a threshold theorem for channel capacity. And so what he showed was, my voice to you right now is coming as a wave through sound. And the further you get the worse it sounds, but people watching this are getting it as in from packets of data in a network. When they get when the computer, they're watching this gets the packet of information. It can detect and correct an error. And what Shannon showed is if the noise in the cable
Starting point is 00:26:09 to the people watching this is above a threshold, they're doomed. But if the noise is below a threshold for a linear increase in the energy representing our conversation, the error rate goes down exponentially. Exponentials are fast. There's very few of them in engineering. And the exponential reduction of error below a threshold
Starting point is 00:26:31 if you restore state is called a threshold theorem. That's what led to digital. That means unreliable things can work reliably. So Shannon did that for communication. Then Van Neumann was inspired by that and applied it to computation, and he showed how an unreliable computer can operate reliably by using the same threshold property of restoring state. It was then forgotten many years.
Starting point is 00:26:57 We had to rediscover it in effect in the quantum computing era when things are very unreliable again. But now to go back to, how does this relate to the biggest things I've made? So in fabrication, MIT invented computer-controlled manufacturing in 1952. Jet aircraft were just emerging. There was a limit to turning cranks on a machine, on a milling machine to make parts for jet aircraft. Now, this is a messy story. MIT actually stole computer controlled machining from an inventor who brought it to MIT,
Starting point is 00:27:34 wanted to do a joint project with the Air Force and MIT effectively stole it from him. So it's kind of a messy history. But that sounds like the birth of computer-controlled machining, 1952. There are a number of inventors of 3D printing. One of the companies spun off from my lab by Max Libovsky's Forum Labs, which is now a billion dollar 3D printing company, that's the modern version. But all of that's analog, meaning the information is in the control computer. There's no information in the materials.
Starting point is 00:28:06 And so it goes back to Vannever Bush's analog computer. If you make a mistake in printing or machining, just the mistake accumulates. The real birth of computerized digital manufacturing is 4 billion years ago. That's the evolutionary age of the ribosome. So the way your manufactured is there's a code that describes you, the genetic code, it goes to a micro machine, the ribosome, which is this molecular factory that builds the molecules that are you. The key thing to know about that is there are about 20 amino acids that get assembled.
Starting point is 00:28:51 And in that machinery, it does everything Shannon and Vanuaymentaadas, you detect and correct errors. So if you mix chemicals, the error rate is about a part and a hundred. When you make elongated protein in the ribosome, it's about a part in 10 to the four. When you replicate DNA, there's an extra level of error correction, it's a part in 10 to the eight. And so, in the molecules that make you, you can detect and correct errors, and you don't need a ruler to make you, the geometry comes from your parts. So now,
Starting point is 00:29:23 to make you, the geometry comes from your parts. So now compare a child playing with Lego and a state-of-the-art 3D printer or computerized milling machine. The tower made by a child is more accurate than their motor control, because the act of snapping the bricks together gives you a constraint on the joints. You can join bricks made out of dissimilar materials. You don't need a ruler for Lego, because the geometry
Starting point is 00:29:49 locally gives you the global parts. And there's no Lego trash. The parts have enough information to disassemble them. Those are exactly the properties of a digital code. Deon reliable is made reliable. Yes, absolutely. So what the ribosome figured out four billion years ago is how to embody these digital properties, but not for communication or computation in effect, but for construction.
Starting point is 00:30:18 So a number of projects in my lab have been studying the idea of digital materials and think of a digital material just as Lego bricks. The precise meaning is a discrete set of parts reversibly joined with global geometry determined from local constraints. And so it's digitizing the materials. And so I'm coming back to what are the biggest things I've made. My lab was working with the aerospace industry. So spirit era was Boeing's factories. They asked us for how to join composites. When you make a composite airplane, you make these
Starting point is 00:30:54 giant wing and fuselage parts, and they asked us for a better way to stick them together, because the joints were a place of failure. And what we discovered was instead of making a few big parts, if you make little loops of carbon fiber, and you reversibly link them in joints, and you do it in a special geometry that balances being under constrained and over constrained with just the right degrees of freedom, we set the world record for the highest modulus,
Starting point is 00:31:26 ultra light material, just by an effect making carbon fiber Lego. So, lightweight materials are crucial for energy efficiency. This let us make the lightest weight, high modulus material. We then showed that with just a few part types, we can tune the material properties. And then you can create really wild robots that instead of having a tool the size of a jumbo jet
Starting point is 00:31:54 to make a jumbo jet, you can make little robots that walk on these cellular structures to build the structures where they error correct their position on the structure and they navigate on the structure. And so using all of that, with NASA, we made morphing airplanes of former student Kenny, Chang and Ben Jeanette made a morphing airplane,
Starting point is 00:32:17 the size of NASA Langley's biggest wind tunnel. With Toyota, we've made super efficiency race cars. We're right now looking at projects with NASA to build these for things like space telescopes and space habitats where the ribosome, I who I mentioned a little while back, can make an elephant one molecule at a time. Ribosomes are slow.
Starting point is 00:32:38 They run at about one molecule a second, but ribosomes make ribosomes. So you have thousands of them, actually hundreds of them, and that makes an elephant. In the same way, these little assembly robots I'm describing can make giant structures at heart because the robot can make the robot. So more recently, to my students, Amira and Miana had a nature communication paper showing how this robot can be made out of the parts it's making.
Starting point is 00:33:06 So the robots can make the robots. So you build up the capacity of robotic assembly. It can self replicate. Can you linger on what that robot looks like? What is a robot that can walk along and do error correction? And what is a robot that can self replicate from the materials that is given? What does that look like? What are we talking about?
Starting point is 00:33:24 So, um, this is fascinating. Yeah. that look like? What are we talking about? So, it's fascinating. Yeah. The answer is different at different length scales. So, to explain that in biology, primary structure is the code in the messenger RNA that says what the ribosome should build. Yeah. Secondary structure are geometrical motifs,
Starting point is 00:33:42 they're things like helices or sheets. Tertiary structures are functional elements like electron donors or acceptors. Quotinary structure is things like molecular motors that are moving my mouth or making the synapses work in my brain. So there's that hierarchy of primary, secondary, tertiary, and quotinary. Now what's interesting is if you want to buy electronics today from a vendor, there are hundreds of thousands of types of resistors or capacitors or transistors, huge inventory. All of biology is just made from this inventory of 20 parts of amino acids,
Starting point is 00:34:19 and by composing them you can create all of life. And so as part of this digitization of materials we're in effect trying to create something like amino acids for engineering, creating all of technology from 20 parts. As another discretion, I hope started an office for science in Hollywood. And there was a fun thing for the movie, The Martian, where I did a program with Bill Nine,
Starting point is 00:34:48 a few others on how to actually build a civilization on Mars that they described in a way that I like as I was talking about how to go to Mars without luggage. And at heart, it's sort of how to create life in non-living materials. So if you think about this primary, secondary, tertiary, quattenary structure, in my lab, we're doing that, but on different link scales
Starting point is 00:35:11 for different purposes. So we're making micro robots out of like nanobricks and to make the robots to build large scale structures in space, the elements of the robots now are centimeters rather than micrometers. And so the assembly robots for the bigger structures are, there are the cells that make up the structure, but then we have functional cells.
Starting point is 00:35:37 And so cells that can process and actuate, each cell can like move one degree of freedom or attach or detach or process. Now, those elements I just described, we can make out of the still smaller parts. So eventually there's a hierarchy of the little parts, make little robots that make bigger parts of bigger robots that up through that hierarchy. But in that way, you can move up the lines again.
Starting point is 00:36:02 Right. Early on, I tried to go in a straight line from the bottom to the top and that ended up being a bad idea. Instead we're kind of doing all of these in parallel and then they're growing together. And so to make the larger scale structures, we, like there's a lot of hype right now about 3D printing houses where you have a printer of the size of the house. We're right now working on using swarms of these, you know, a table scale robots that walk on the structures to place the parts much more efficiently. That's amazing. But you're saying you can't for now go from the very small to the very large. That'll come. That'll come in stages. Can we just link around this idea? Starting from
Starting point is 00:36:41 Vinoimins, self-replicating automata that you mentioned. It's just a beautiful idea. So that's at the heart of all of this. In the stack I described, so one student Will Langford made these micro robots out of little parts that then were using for me on his bigger robots up through this hierarchy. And it's really realizing this idea of the self-reproducing automata. So Van Neumann, when I complained about the Van Neumann architecture, it's not fair to Van Neumann because he never claimed it as his architecture. He really wrote about it in this one fairly dreadful memo that led to all sorts of lawsuits and fights about the early days of computing. He did beautiful work on reliable computation and unreliable devices.
Starting point is 00:37:25 And towards the end of his life, what he studied was how, and I have to say this precisely, how a computation communicates its own construction. So beautiful. So a computation can store a description of how to build itself. But now there's a really hard problem, which is how, if you have that in your mind, how do you transfer it and wake up a thing that then can contain it? So how do you give birth to a thing that knows how to make itself? And so with Stan Ulam, he invented cellular automata as a way to simulate these. But that was theoretical.
Starting point is 00:38:12 Now the work I'm describing in my lab is fundamentally how to realize it, how to realize self-reproducing automata. And so this is something that Neumiman thought very deeply and very beautiful, beautifully about theoretically. And it's right at this intersection. It's not communication or computation or fabrication. It's right at this intersection where communication and computation meets fabrication. Now, the reason self-ruped is an automata in a lecturely is so important, because this is the
Starting point is 00:38:45 foundation of life. This is really just understanding the essence of how to life. And in effect, we're trying to create life and non-living material. The reason it's so important technologically is because that's how you scale capacity. That's how you can make an elephant from a ribosome
Starting point is 00:39:01 because the assemblers make assemblers. So simple building blocks that inside themselves contain the information, how to build more building blocks. And so between each other construct arbitrarily complex objects. Right. Now let me give you the numbers. So let me relate this to right now we're living in AI Mania explosion time. Let me relate that to what we're talking about. A hundred petaflop computer, which is a current generation supercomputer,
Starting point is 00:39:35 not quite the biggest ones, does 10 to the 17 ops per second. Your brain does 10 to the 17 ops per second. It has about 10 to the 15 synapses and they run it about 100 hertz. So as of a year or two ago, the performance of a big computer matched a brain. So you could view AI as a breakthrough,
Starting point is 00:39:58 but the real story is within about a year or two ago, and let's see, the supercomputer has about 10 to the 15 transistors and the processors, 10 to the 15 transistors and the memory, which is synapses in your brain. So the real breakthrough was the computers matched the computational capacity of a brain, and so we'd be sort of derelict if they couldn't do about the same thing.
Starting point is 00:40:23 But now the reason I'm mentioning that is the chip fab making the supercomputer is placing about 10 to the 10 transistors a second. While you're digesting your lunch right now, you're placing about 10 to the 18 parts per second. There's an eight order of magnitude difference. So in computational capacity, it's done. We've caught up. But there's eight orders of magnitude difference
Starting point is 00:40:53 in the rate at which biology can build versus state-of-the-art manufacturing can build. And that distinction is what we're talking about. That distinction is not analog, but this deep sense of digital fabrication of embodying codes in construction. So a description doesn't describe a thing, but the description becomes the thing. So you're saying, I mean, this is one of the cases
Starting point is 00:41:15 you're making, and this is this third revolution. We've seen the Moore's Law and Communication, we've seen the Moore's Law-like type of growth in computation, and you're anticipating we're going to see that in digital fabrication. Can you actually first of all describe what you mean by this term digital fabrication? So the casual meaning is the computer controls the tool to make something. And that was invented when MIT stole it in 1952. Yeah. There's the deep meaning of what the ribosome does, of a computer, of a digital description
Starting point is 00:41:52 doesn't describe a thing, a digital description becomes the thing. That's where the, that's the path to the Star Trek replicator, and that's the thing that doesn't exist yet. Now, I think the best way to understand what this roadmap looks like is to now bring in fab labs and how they relate to all of this. What are fab labs? So here's a sequence.
Starting point is 00:42:16 With colleagues, I accidentally started a network of what's now 2,500 digital fabrication community labs called fab labs right now in 125 countries, and they double every year and a half. That's called Lass's Law after Sherry Lasseter, who will explain. So here's the sequence. We started Center for Bits and Adams to do the kind of research we're talking about.
Starting point is 00:42:41 We had all of these machines, and then had a problem, it would take a lifetime of classes to learn to use all the machines. So with colleagues who helped start CBA, we began a class modestly called How to Make Almost Anything. And there's no big agenda. It was just, it was aimed at a few research students to use the machines. And we're completely unprepared for the first time we taught it. We were swamped by every year since hundreds of students try to take the
Starting point is 00:43:10 class. It's one of the most oversubscribed classes at MIT. Students would say things like, can you teach this at MIT? It seems too useful. It's just how to work these machines. And the students in the class, I would teach them all the skills to use all these tools. And then they would do projects integrating them. And they're amazing. So Kelly was a sculptor, no engineering background. Her project was, she made a device that saves up screams when you're mad and plays them back later. And saves up screams when you're mad and plays them back. You scream into this device and it deadens the sound, records it, and then when it's convenient, releases your scream.
Starting point is 00:43:48 Can we just just like pause on the brilliance of that invention? Creation, the art. I don't know. The brilliance. Who is this? The creative. Kelly Dopson. Kelly Dopson.
Starting point is 00:44:00 Going on to do a number of interesting things. Meijin, who's going on to do a number of interesting things, made a dress instrumented with sensors and spines. And when somebody creepy comes close, it would defend your personal space. It also varies. Another project early on was a web browser for parrots, which have the cognitive ability of a young child
Starting point is 00:44:18 and lets parrots surf the internet. Another was an alarm clock you wrestle with and prove you're awake. And what connects all of these is, so MIT made the first real-time computer, the whirlwind. That was transistorized as the TX. The TX was spun off from MIT as the PDP, PDPs, where the mini computers that created the internet. So outside MIT was Deck Prime, Wang, Data General, the whole mini computer industry. The whole computing industry was there, and it all failed when computing became personal. Ken Olson, the head of digital famously said,
Starting point is 00:44:59 you don't need a computer at home. There's a little background to that, but deck completely missed computing became personal. So I mentioned all of that because I was asking how to do digital fabrication, but not really why. The students in this how to make class were showing me that the killer app of digital fabrication is personal fabrication. Yeah, how do you jump to the personal fabrication? So Kelly didn't make the screen body because it was for a thesis She wasn't writing a research paper. It wasn't a business model
Starting point is 00:45:30 She wanted it was because she wanted one. Yeah, it was personal expression going back to me in vocational school It was personal expression in these new means of expression So that's happened every year since it literally literally called the courses literally called how to make almost anything. Yeah, a legendary course at MIT. Yeah, every year. And it's grown to multiple labs at MIT with as many people involved as teaching as taking it. And there's even a Harvard lab for the MIT class. What have you learned about humans colliding with the fab lab
Starting point is 00:46:07 about what the capacity of humans to be creative and to build? I mentioned Marvin. Another mentor at MIT, sadly no longer living as Seymour Papper. So Papper studied with Piaget. He came to MIT to get access to the early, Piaget was a pioneer in how kids learn. Papper came to MIT to get access to
Starting point is 00:46:28 the early computers with a goal of letting kids play with them. PHA helped show kids are like scientists. They learn as scientists and it gets kind of throttled out of them. Seymour wanted to let kids have a broader landscape to play. Seymour's work led with Mitch Resnick to Lego, logo, minestorms, all of that stuff. As Fab Lab spread, and we started creating educational programs for kids in them, Seymour said something really interesting. He made a gesture.
Starting point is 00:46:55 He said it was a thorn in his side that they invented was called the Turtle. A robot kid's early robot kid's good program to connect it to a mainframe computer. Seymour said, the goal was not for the kids to program the robot. It was for the kids to create the robot. And so in that sense, the fab labs, which for me were just this accident, he described as sort of this fulfillment of the arc of kids learned by experimenting, it was to give
Starting point is 00:47:23 them the tools to create, not just assemble things and program things, but actually create. So come into your question. What I've learned is MIT, a few years back, somebody added up businesses from spun off from MIT, and it's the world's 10th economy.
Starting point is 00:47:42 It falls between India and Russia. And I view that in a way as a bad number, because it's only a few thousand people, and these aren't uniquely the 4,000 brightest people. It's just a productive environment for them. And what we found is in rural Indian villages, in African-Channet towns, in Arctic, Hamlets, I find exactly precisely that profile.
Starting point is 00:48:08 So, Ling sighed at a few hours above, Tramso way above the Arctic circles. It's so far north to satellite dishes, look at the ground, not the sky. Hans Christian in the lab was considered a problem in the local school because they couldn't teach him anything. I showed him a few projects. Next time I came back, he was designing and building little robot vehicles.
Starting point is 00:48:29 And in South Africa, I mentioned Socien Govi, in this apartheid township, the local technical institute taught kids how to make bricks and fold sheets. It was punitive. But Tepiso in the fab lab was actually doing all the work of my MIT classes.
Starting point is 00:48:46 And so over and over, we found precisely the same kind of bright in vene of creativity. And historically, the answer was, your smart go away. It's sort of like me and vocational school. But in this lab network, what we could then do is, in effect, bring the world to them. Now let's look at the scaling of all of this. So there's one earth, a thousand cities, a million towns, a billion people, a trillion things. There was one whirlwind computer, MIT made the first real-time computer. There were thousands of PDPs.
Starting point is 00:49:27 There were millions of hobbyist computers that came from that. Billions of personal computers, trillions of internet of things. So now, if we look at this FabLab story, 1952 was the NCMIL. There are now thousands of FabLabs, and the fab lab costs exactly the same cost and complexity of the mini computer. So on the mini computer, it didn't fit in your pocket. It filled the room. But video games, email, word processing, really anything you do at the internet, anything you do at the computer today happened at that era because it got on the scale of a work group, not a corporation. In the same way, fab labs are like the mini computers, inventing how does the world work if anybody can make anything. Then if you look at that scaling, fab labs today are transitioning from buying a machine to make machines making machines.
Starting point is 00:50:26 So we're transitioning to, you can go to a fab lab, not to make a project, but to make a new machine. So we talked about the deep sense of self-replication. There's a very practical sense of fab lab machines making fab lab machines. And so that's the equivalent of the hobbyist computer era, what are it called the Altaire historically. Then the work we spent a while talking about
Starting point is 00:50:51 about assemblers and self assemblers, that's the equivalent of smartphones and internet of things. That's when, so the assemblers are like the smartphone, where a smartphone today has the capacity of what used to be a supercomputer in your pocket. And then the smart thermostat on your wall has the power of the original PDP computer, not metaphorically, but literally. And now there's trillions of those in the same sense that when we finally merge materials with the machines in the self assembly, that's like the internet of things stage. But here's the important lesson.
Starting point is 00:51:30 If you look at the computing analogy, computing expanded exponentially, but it really didn't fundamentally change. The core things happened in that transition in the mini computer era. So in the same sense, the research now, we spend a while talking about is how we get to the replicator. Today, you can do all of that if you close your eyes and view the whole fab lab as a machine. In that room, you can make almost anything, but you need a lot of inputs. Bit by bit, the inputs will go down and the size of the room will go down as we go through each of these stages. So how difficult is it to create a self-replicating assembler, self-replicating machine that builds copies of itself or builds more complicated version of itself, which is kind of the dream towards which you're pushing in a generic arbitrary sense. kind of the dream towards which you're pushing in a generic arbitrary sense. I had a student, Nadia Peek, with Jonathan Ward, who for me started this idea of how do we
Starting point is 00:52:31 use the tools in my lab to make the tools in the lab. In a very clear sense, they are making self-reproducing machines. So one of the really cool things that's happened is there's a whole network of machine builders around the world. So there's Danielle and now in Germany and Jens in Norway. And each of these people has learned the skills to go into a fab lab and make a machine. And so we've started creating a network of super fab. So the fab lab can make a machine, but it can't make a number of the precision parts of the machine.
Starting point is 00:53:06 So in places like Bhutan or Kerala in the south of India, we've started creating superfab labs that have more advanced tools to make the parts of the machines so that the machines themselves become even cheaper. So that is self-reproducing machines, but you need to feed it things like bearings or microcontrollers. They can't make those parts.
Starting point is 00:53:29 But other than that, they're making their own things. And I should note as a footnote, the stack I described of computers controlling machines to machine-making machines, to assemblers to self-assemblers, view that as Fab1234. So we're transitioning from Fab1 to Fab2, and the research in the lab is 3 and 4. At this Fab2 stage, a big component of this is sustainability in
Starting point is 00:53:54 the material feedstocks. So Alicia, colleague and Chile is leading a great effort looking at how you take forest products and coffee grounds and seashells and a range of locally available materials and produce the high-tech materials that go into the lab. So all of that is machine building today. Then back in the lab, what we can do today is we have robots that can build structures and can assemble more robots that build structures. We have finer resolution robots that can build micro-mechanical systems, so robots that can build robots that can walk in manipulate.
Starting point is 00:54:34 We're just now, we have a project at the layer below that, where there's endless attention today to billion dollar chipfab investments. But a really interesting thing we pass through is today the smallest transistors you can buy as a single transistor, just commercially for electronics, is actually the size of an early transistor in an integrated circuit. So we're using these machines,
Starting point is 00:55:01 making machines, making assemblers to place those parts to not use a billion dollar chip fab to make integrated circuits, but actually assemble little electronic components. So have a fine enough, precise enough actuators and manipulators that allow you to place these transistors? Right, that's a research project, my lab, called DICE, on discrete assembly
Starting point is 00:55:23 of integrated electronics. And we're just at the point to really start to take seriously this notion of not having a chipfab make integrated electronics, but having not a 3D printer, but a thing that's across between a pick and place makes circuit boards and 2D. The 3D printer extrudes and 3D. We're making sort of a micro manipulator that acts like a printer, but it's placing to build electronics and 3D printer extrudes in 3D. We're making sort of a micro-manipulator that acts like a printer, but it's placing to build electronics in 3D. But this micro-manipulator is distributed,
Starting point is 00:55:51 so there's a bunch of them or is this one centralized? So that's why that's a great question. So I have a prize that's almost but not been claimed for the students whose thesis can walk out of the printer. Oh nice. So you have to print the thesis with the means to exit the printer, and it has to contain its description of the thesis that says how to do that.
Starting point is 00:56:16 It's a really good, I mean, it's a fun example of exactly the thing we're talking about. And I've had a few students almost get to that. And so in what I'm describing, there's the stack where we're getting closer, but it's still quite a few years to really go from us. So there's a layer below the transistors where we assemble the base materials that become the transistor.
Starting point is 00:56:40 We're now just at the edge of assembling the transistors to make the circuits. We can assemble the micro parts to make the micro robots. We can assemble the bigger robots. And in the coming years, we'll be patching together all of those scales. So do you see a vision of just endless billions of robots at the different scales, self-assembling, self-replicating and building and more complicated structures. Yes, and the butt to the yes butt is,
Starting point is 00:57:11 let me clarify two things. One is that immediately raises King Charles fear of Gregor of runaway mutant self-reproducing things. The reason why there are many things I can tell you to worry about, but that's not one of them, is if you want things to autonomously self-reproduce and take over the world, that means they need to compete with nature on using the resources of nature, of wood or in sunlight. And in light of everything I'm describing, biology knows everything I told you. Every
Starting point is 00:57:46 single thing I explain biology already knows how to do. What I'm describing isn't new for biology, it's new for non-biological systems. So in the digital era, the economic win ended up being centralized, the big platforms. In this world of machines that can make machines. I'm asked, for example, what's the killer opportunity, who's going to make all the money, who to invest in? But if the machine can make the machine, it's not a great business to invest in the machine. In the same way that if you can think globally but produce locally, then the way the technology goes out into society isn't a function of central control, but is fundamentally distributed. Now that raises an obvious kind of concern, which as well doesn't this mean you could make
Starting point is 00:58:41 bombs and guns and all of that. The reason that's much less of a problem than you would think is that making bombs and guns and all of that is a very well-met market need. Anywhere we go, there's a fine supply chain for weapons. Hobbyists have been making guns for ages, and guns are available just about anywhere. So you could go into the lab and make a gun; today it's not a very good gun, and guns are easily available. And so, generally, we run these labs in war zones.
Starting point is 00:59:11 What we find is people don't go to them to make weapons, which you can already do anyway. It's an alternative to making weapons. And coming back to your question, I'd say the single most important thing I've learned is that the greatest natural resource of the planet is this amazing density of bright, inventive people whose brains are underused. And you could view the social engineering of this lab work as creating
Starting point is 00:59:39 the capacity for them. And so in the end, the way this is going to impact society isn't going to be command and control. It's how the world uses it. And it's been really gratifying for me to see just how it does. Yeah, but what are the different ways the evolution of the exponential scaling of digital fabrication can go? So you said self-replicating nanobots, right? This is the gray goo fear. It's a caricature of a fear. But nevertheless, just like you said, spam and all these kinds of things came with the scaling of communication and computation. What are the different ways that malevolent actors will use this technology? Yeah, well, first, let me start with a benevolent story,
Starting point is 01:00:25 which is: trash is an analog concept. There's no trash in a forest. All the parts get disassembled and reused. Trash means something doesn't have enough information to tell you how to reuse it. It's as simple as: there's no trash in a Lego room. When you assemble Lego, the Lego bricks have enough information to disassemble them. So as you go through this Fab 1, 2, 3, 4 story, one of the implications is this transition from printing to assembling. So the real breakthrough
Starting point is 01:01:03 technologically isn't additive versus subtractive, which is the subject of a lot of attention and hype. 3D printers are useful. We spun off companies like Formlabs, led by Max, for 3D printing. But in a fab lab, it's one of maybe 10 machines. It's used, but it's only part of the machines. The real technological change is when we go from printing and cutting to assembling and disassembling.
Starting point is 01:01:30 That reduces inventories of hundreds of thousands of parts to just having a few parts to make almost anything. It reduces global supply chains to locally sourcing these building blocks. But one of the key implications is it gets rid of technological trash, because you can disassemble and reuse the parts, not throw them away. And so initially that's of interest for things at the end of long supply chains,
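A toy sketch of the "no trash in a Lego room" point: an assembled object keeps enough information (its own part list) to be taken apart back into reusable inventory. The part names and functions here are made up for illustration.

```python
# Toy sketch: assembly that keeps its own description can be disassembled
# losslessly back into inventory, so nothing becomes trash. Illustrative only.
from collections import Counter

inventory = Counter({"brick": 20, "plate": 10, "axle": 4})

def assemble(part_list):
    """Build an object by drawing typed parts from inventory; keep the record."""
    needed = Counter(part_list)
    assert all(inventory[p] >= n for p, n in needed.items()), "not enough parts"
    inventory.subtract(needed)
    return {"parts": needed}   # the object carries its own description

def disassemble(obj):
    """Because the description travels with the object, reuse is lossless."""
    inventory.update(obj["parts"])

car = assemble(["brick"] * 6 + ["plate"] * 2 + ["axle"] * 2)
disassemble(car)
assert inventory == Counter({"brick": 20, "plate": 10, "axle": 4})  # nothing wasted
print("inventory restored:", dict(inventory))
```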
Starting point is 01:01:55 like satellites on orbit, but one of the things coming is eliminating technical trash through reuse of the building blocks. So when you think about 3D printers, you're thinking about addition and subtraction. What are the other options available to you in that parameter space, as you call it? Yeah.
Starting point is 01:02:14 It's going to be assembling and disassembling, you said. So, the 1952 NC mill was subtractive: you removed material. And 3D printing is additive; there are a couple of claims to the invention of 3D printing. That's closer to what's called net shape, which is you don't have to cut away the material you don't need; you just put material where you do need it.
Starting point is 01:02:35 And so that's the 3D printing revolution. But there are all sorts of limitations on 3D printing: the kinds of materials you can print, the kinds of functionality you can print. We're just not going to get to making everything in a cell phone on a single printer, but I do expect to make everything in a cell phone with an assembler. And so instead of printing and cutting, technologically, it's this transition to assembling and disassembling, going back to Shannon and von Neumann, going back to the ribosome four billion years ago. Now, coming to the malevolent side, let me tell you a story. I was doing a briefing for
Starting point is 01:03:27 the National Academy of Sciences group that advises the intelligence communities. And I talked about the kind of research we do. And at the very end, I showed a little video clip of Valentina in Ghana, a local girl, making surface-mount electronics in the fab lab. And I showed that to this room full of people. One of the members of the intelligence community got up, livid, and said, how dare you waste our time showing us a young girl in an African village
Starting point is 01:03:53 making surface-mount electronics; we need to know about disruptive threats to the future of the United States. And somebody else got up in the room and yelled at him, "You idiot, I can't think of anything more important than this," for two reasons. One reason was that if we rely on informational superiority on the battlefield, it means other people could get access to it. But this intelligence person's point, bless him, wasn't that; it was that this is getting at the root causes of conflict.
Starting point is 01:04:28 If this young girl in an African village could actually master surface-mount electronics, it changes some of the most fundamental things about recruitment for terrorism, the impact of economic migration, basic assumptions about an economy. It's just existential for the future of the planet. But we've just lived through a pandemic. I would love to linger on this, because the positive possibilities are endless, but the negative possibilities are nevertheless extremely important. What do you do with a large number of general assemblers?
Starting point is 01:05:09 Yeah. With a fab lab, you could roughly make a biolab, and then learn biotechnology. Now, that's terrifying, not because of making self-reproducing gray goo that out-competes biology, which I consider a non-issue because biology knows everything I'm describing and is really good at what it does, but because in How to Grow Almost Anything, you learn skills in biotechnology that would let you make serious biological threats. And when you combine some of the innovations you see with large language models, some of the innovations you see with AlphaFold, the applications of AI for designing biological
Starting point is 01:05:52 systems, for writing programs, which large language models can increasingly do, there seems to be an interesting dance here of automating the design stage of complex systems using AI. And that's the bits. And now, with the innovations you're talking about, you can leap from complex systems in the digital space to the printing, the creation, the assembly at scale of complex systems in the physical space. Yeah. So something to be scared about is: a fab lab
Starting point is 01:06:30 can make a biolab, a biolab can make biotechnology, and somebody could learn to make a virus. That's scary. Unlike some of the things I said I don't worry about, that's something I really worry about. That is scary. Now, how do you deal with that? Prior threats we dealt with by command and control. Early color copiers had unique codes, and you could tell which copier made them. Eventually, you couldn't keep up with that. There was a famous meeting at Asilomar
Starting point is 01:07:07 in the early days of recombinant DNA, where that community recognized the dangers of what it was doing and put in place a regime to help manage it. And so that led to this kind of research management: MIT has an office that supervises research, and it works with the national offices. That works if you can identify who's doing it and where.
Starting point is 01:07:31 It doesn't work in this world we're describing, where anybody could do this anywhere. And so what we found is you can't contain this; it's already out. You can't forbid it, because there isn't command and control. The most useful thing you can do is provide incentives for transparency. But really, the heart of what we do is: you could do this by yourself in a basement for nefarious reasons, or you could come into a place in the light where you get help and
Starting point is 01:08:05 you get community and you get resources. And there's an incentive to do it in the open, not in the dark. And that might sound naive, but in the sorts of places we're working, again, bad people do bad things in these places already, but providing openness and providing transparency is a key part of managing these risks. So it transitions from managing risks by regulation to soft power to manage them. So there's so much potential for good, so much capacity for good, that fab labs and the ability and the tools of creation really unlock.
Starting point is 01:08:51 Yeah, and I don't say that naively. I say that empirically, from just years of seeing how this plays out in communities. I wonder if it's like the early days of personal computers, though, before we got spam, right? In the end, most fundamentally, literally, the mother of all problems is who designed us. So assume success, and that we're going to transition to machines making machines, and that all of these new social systems we're describing will help manage them and curate them and democratize them. If we close the gap I just led off with, of 10^10 to 10^18, between the chip fab and you, then ultimately, in marrying communication, computation, and fabrication, we're going to be able to create unimaginable complexity. And how do you design that?
Starting point is 01:09:56 And so I'd say the deepest of all the questions I've been working on goes back to the oldest part of our genome. In our genome are what are called Hox genes, and these are morphogenes. And nowhere in your genome is the number five. It doesn't store the fact that you have five fingers. What it stores is what's called a developmental program. It's a series of steps, and the steps have the character of, like, grow up a gradient, or break symmetry. And at the end of that developmental program,
Starting point is 01:10:40 you have five fingers. So you are stored not as a body plan, but as a growth plan. And there are two reasons for that. One reason is just compression: billions of genes can place trillions of cells. But the much deeper one is that evolution doesn't randomly perturb. Almost anything you did randomly in the genome would be fatal or inconsequential, but not interesting. But when you modify things in these developmental programs, you go from, like, webs for swimming to fingers, or you go from walking to wings for flying; it's a space in which search is interesting. So, this is the heart of the success of AI.
Starting point is 01:11:35 In part, it was the scaling we talked about a while ago. And in part, it was the representations for which search is effective. AI has found good representations; it hasn't found new ways to search, but it's found good representations for search. And you're saying that's what biology, that's what evolution, has done:
Starting point is 01:11:58 it has created representations, structures, biological structures, through which search is effective. And so the developmental programs in the genome beautifully encapsulate the lessons of AI. And this is molecular intelligence. It's AI embodied in our genome. It's every bit as profound as the cognition in our brain, but now this is sort of molecular thinking in how you design.
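A toy sketch of a developmental program in this spirit: the "genome" below stores a few threshold rules read against a morphogen gradient, not the final pattern, so mutating a rule reshapes whole regions at once. The cell types, thresholds, and gradient shape are invented for illustration.

```python
# Toy "developmental program": the program stores rules (grow a gradient,
# apply thresholds), not the final body plan. Values are made up.
import math

def morphogen(x, length=1.0, decay=5.0):
    # Concentration decays exponentially from a source at x = 0.
    return math.exp(-decay * x / length)

# The "genome": a short list of threshold rules, not an explicit pattern.
RULES = [(0.6, "bone"), (0.25, "muscle"), (0.0, "skin")]

def develop(n_cells=20):
    tissue = []
    for i in range(n_cells):
        c = morphogen(i / (n_cells - 1))
        # First rule whose threshold the local concentration meets wins.
        tissue.append(next(t for thr, t in RULES if c >= thr))
    return tissue

print(develop())
# Mutating a threshold in RULES (the program) reshapes whole regions at once,
# a far more searchable space than mutating the output pattern cell by cell.
```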
Starting point is 01:12:30 And so, I'd say the most fundamental problem we're working on is this: it's kind of tautological that when you design a phone, you design the phone; you represent the design of the phone. But that actually fails when you get to the sort of complexity that we're talking about. And so there's this profound transition to come. Once I can have self-reproducing assemblers placing 10^18 parts, you need to, not metaphorically but really, create life, in that you need to learn how to evolve. But evolutionary design has a really misleading, trivial meaning. It's not as simple as randomly mutating things. It's this much deeper embodiment of AI in morphogenesis.
Starting point is 01:13:22 Is there a way for us to continue the kind of evolution of design that led us to this place from the early days of bacteria, single-cell organisms, the ribosomes, and the 20 amino acids? You mean for human augmentation? For life augmentation. I mean, what would you call assemblers that are self-replicating and placing parts? What are the dynamic, complex things built with digital fabrication? What is that?
Starting point is 01:13:49 That's life. So, yeah. So, ultimately, absolutely, if you add up everything I'm talking about, it's building up to creating life in non-living materials. And I don't view this as copying life. I view it as deriving life. I didn't start from how does biology work, and then I'm going to copy it.
Starting point is 01:14:10 I start from how to solve problems, and then it leads me to, in a sense, rediscover biology. So if you go back to Valentina in Ghana making her circuit board: she still needs a chip fab very far away to make the processor in her circuit board. For her to make the processor locally, for all the reasons we described,
Starting point is 01:14:32 you actually need the deep things we were just talking about. And so it really does lead you there. So there's a wonderful series of books by Gingery. Book one is how to make a charcoal foundry, and at the end of book seven you have a machine shop. So it's sort of how you do your own personal industrial revolution. ISRU is what NASA calls in-situ resource utilization, and that's how you go to a planet and create a civilization. ISRU has essentially assumed Gingery: you go through the industrial revolution
Starting point is 01:15:13 and you create the inventory of 100,000 resistors. What we're finding is that the minimum set of building blocks for a civilization is roughly 20 parts. So what's interesting about the amino acids is that they're not interesting. They're hydrophobic or hydrophilic, basic or acidic. They have typical but not extremal properties, but they're good enough that you can combine them to make you.
Starting point is 01:15:37 So what this is leading towards is: technology doesn't need enormous global supply chains. It just needs about 20 properties you can compose to create all technology, a minimal set of building blocks for a technological civilization. So there are going to be 20 basic building blocks based on which the self-replicating assemblers can work. Right. And I say that not philosophically, just empirically; that's where it's heading. And I like thinking about that problem of how you bootstrap a civilization on Mars. There's a fun video in the bonus material for the movie where, with a neat group of people, we talk about it, because it has really profound implications back here on Earth about how we live
Starting point is 01:16:22 sustainably. What does that civilization on Mars look like, the one that's using ISRU, that's using these 20 building blocks, and that does self-assembly? Yeah. You go through primary, secondary, tertiary, quaternary. You extract properties like conducting, insulating, semiconducting, magnetic, dielectric, flexural; these are the roughly 20 properties. Those are enough for us to assemble logic, and they're enough for us to assemble actuation.
Starting point is 01:17:02 With logic and actuation, we can make micro robots. The micro robots can build bigger robots. The bigger robots can then take the building-block materials and make the structural elements that you then use for construction. And then you boot up through the stages of a technological civilization. By the way, where in the span of logic and actuation did the sensing come in? Oh, I skipped over that. My favorite sensor is a step response: if you just make a step and measure the response to the electric field, that ranges from user interfaces to positioning to material properties, and if you do it at higher frequencies, you get chemistry. And you can get all of
Starting point is 01:17:52 that just from a step in an electric field. So for example, once you have time resolution in logic, something as simple as two electrodes lets you do amazingly capable sensing.
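A minimal sketch of the step-response idea under simple assumptions: drive a voltage step through a known resistor into two electrodes modeled as an unknown capacitance, sample the response with enough time resolution, and recover the capacitance from the time constant. All component values are illustrative.

```python
# Toy step-response sensor: recover an unknown electrode capacitance from the
# RC time constant of the response to a unit voltage step. Values illustrative.
import math

R = 1.0e5          # known series resistance, ohms
C_true = 2.2e-9    # "unknown" electrode capacitance, farads
tau_true = R * C_true

def sample_step_response(t):
    # Ideal charging curve v(t) = 1 - exp(-t/tau) for a unit step.
    return 1.0 - math.exp(-t / tau_true)

# Sample with enough time resolution, then estimate tau from -ln(1 - v)/t.
dt = 1.0e-5
times = [i * dt for i in range(1, 50)]
slopes = [-math.log(1.0 - sample_step_response(t)) / t for t in times]
tau_est = 1.0 / (sum(slopes) / len(slopes))
print(f"estimated C = {tau_est / R:.2e} F (true {C_true:.2e} F)")
```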
Starting point is 01:18:25 So we've been talking about all the work I do. There's a story about how it happens, you know, where do ideas come from? And that's an interesting story. So I had mentioned Vannevar Bush, and he wrote a really influential report called Science, the Endless Frontier. So, science won World War Two. The better-known story is nuclear bombs; the less well-known story is the Rad Lab: at MIT, an amazing group of people invented radar, which is really credited with winning the war. After the war, Bush, a grand old man of MIT, was charged with: science won the war, how do we
Starting point is 01:18:53 maintain that edge? And the report he wrote led to the National Science Foundation, and to the modern notion, which we take for granted but which didn't really exist before then, of public funding of research, of research agencies. In it he made, again, what I consider an important mistake, which is he described basic research leads to applied research, leads to applications, leads to commercialization, leads to impact, and so we need to invest in that pipeline. The reason I consider it a
Starting point is 01:19:32 mistake is that almost all of the examples we've been talking about in my lab went backwards: the basic research came from the applications. And further, almost all of the examples we've been talking about came fundamentally from mistakes. So yeah, essentially everything I've ever worked on has failed, but in failing, something better happened. So the way I like to describe it is: ready, aim, fire is you do your homework, you aim carefully at something,
Starting point is 01:20:13 a target you want to accomplish. And if everything goes right, you then hit the target and succeed. What I do, you can think of as ready, fire, aim. So you do a lot of work to get ready. Then you close your eyes and you don't really think about where you're aiming, but you look very carefully at where you did aim. You aim after you fire. And the reason that's so important is: if you do ready, aim, fire, the best you can hope for is to hit what you aim at.
Starting point is 01:20:46 So let me give you some examples, because this is a source of great frustration. So I mentioned early quantum computing. Quantum computing is this power of using quantum mechanics to make computers that, for some problems, are dramatically more powerful than classical computers. Before it started, there was a really interesting group of people who knew a lot about physics and computing, who were inventing what became quantum computing before it was clear there was an opportunity there. They were just studying how those relate. Here's how it fits ready, fire, aim. I was doing really short-term work in my lab
Starting point is 01:21:32 on shoplifting tags. This was really before there was modern RFID and so how you put tags in objects to sense them. Something we just take for granted commercially. And there was a problem of how you can sense multiple objects at the same time. And so I was studying how you can remotely sense materials to make low-cost tags that could let you distinguish multiple objects simultaneously. To do that, you need non-linearity so that the signal is modulated.
Starting point is 01:22:07 And so I was looking for material sources of nonlinearity, and that led me to look at how nuclear spins interact: nuclear spin resonance, the sort of thing you use when you go in an MRI machine. And so I was studying how to use that. And it turns out that it was a bad idea: you couldn't remotely use it for shoplifting tags, but I realized you could compute. And so with a group of colleagues thinking about early quantum computing, like David DiVincenzo and Charlie Bennett, who were articulating what are the properties you need to compute, and then looking at how to make the tags,
Starting point is 01:22:53 it turns out the tags were a terrible idea for sensing objects in a supermarket checkout, but I realized they were computing. So with Ike Chuang and a few other people, we realized we could program nuclear spins to compute. And that's what we used to do Grover's search algorithm, and then it was used for Shor's factoring algorithm. And it worked out. The systems we did it in, nuclear magnetic resonance, don't scale beyond a few qubits, but the techniques have lived on. And so, you know, all the current quantum computing techniques grew out of the ways we would
Starting point is 01:23:33 talk to these spins. But I'm telling this whole story because it came from a bad way to make a shoplifting tag.
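As an aside, here is a minimal state-vector sketch of the Grover iteration mentioned here, on two qubits. It is the abstract algorithm, not the NMR implementation being described.

```python
# Minimal state-vector sketch of Grover's search on 2 qubits (4 items).
import numpy as np

n_states = 4
marked = 2                                   # the "needle" we want to find

# Uniform superposition over the four basis states.
state = np.full(n_states, 1 / np.sqrt(n_states))

# Oracle: flip the sign of the marked state's amplitude.
oracle = np.eye(n_states)
oracle[marked, marked] = -1

# Diffusion: reflect about the uniform superposition, 2|s><s| - I.
s = np.full((n_states, 1), 1 / np.sqrt(n_states))
diffusion = 2 * (s @ s.T) - np.eye(n_states)

# For 4 items, a single Grover iteration finds the marked state with certainty.
state = diffusion @ (oracle @ state)
probs = np.abs(state) ** 2
print(np.round(probs, 3))                    # ~[0, 0, 1, 0]
```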
Starting point is 01:24:01 Starting with an application, mistakes led to breaking through. To the fundamental science. The fundamental science, yeah. Can you just linger on that? Just using nuclear spins to do computation: what gave you the guts to try to think through this, from a digital fabrication perspective, how to leap from one to the other? I wouldn't call it guts, I would call it collaboration. So at IBM there was this amazing group; I mentioned Charlie Bennett and David DiVincenzo, and Rolf Landauer and Nabil Amer. And these were all gods of thinking about physics and computing. So I yelled about the whole computer industry being based on a fiction,
Starting point is 01:24:29 Metropolis, programmers frolicking in the garden while somebody moves levers in the basement. There's a complete parallel history: Maxwell, Boltzmann, Szilard, Landauer, Bennett. Most people won't know most of these names, but this whole parallel history was thinking deeply about how computation and physics relate. So I was collaborating with that whole group of people, and then at MIT, I was in this high-traffic environment. I wasn't deeply inspired to think about better ways
Starting point is 01:25:04 to detect shoplifting tags, but stumbled across companies that needed help with that and was thinking about it. And then I realized those two worlds intersected. And we could use the failed approach for the shoplifting tags to make early quantum computing algorithms. And this kind of stumbling is fundamental to the fab lab idea, right? Right. Here's one more example. With a student, Manu, we talked about ribosomes. And I was trying to build a ribosome
Starting point is 01:25:34 that worked with fluids, so that I could place the little parts we're talking about. And we kept failing, because bubbles would come into our system and the bubbles would make the whole thing stop working. And we spent about half a year trying to get rid of the bubbles. Then Manu said, wait a minute, the bubbles are actually better than what we're doing. We should just use the bubbles. And so we invented how to do universal logic with little bubbles in a fluid.
Starting point is 01:26:03 You have to explain this microfluidic bubble logic. Please, how does this work? It's super interesting. Yeah, and I'll come back and explain it. But what it led to was this: it had been known fluids could do logic. Your old automobile transmission does logic, but that's macroscopic. It didn't work at little scales. We showed, with these bubbles, we could do it at little scales. I'm going to come back and explain it.
Starting point is 01:26:32 But what came out of that is: Manu then showed you could make a 50-cent microscope using little bubbles, and then the techniques we developed are what we used to transplant genomes to make synthetic life. All of it came out of the failure of trying to make the ribosome. Now, the way the bubble logic works is: in a little channel, fluid at small scales is fairly viscous; think of it as sort of like pushing Jell-O. If a bubble gets stuck, the fluid has to detour around it. So now imagine a channel that has two wells and one bubble. If the bubble is in one well,
Starting point is 01:27:24 the fluid has to go in the other channel. If the bubble is in the other well, the fluid has to go in the first channel. So the position of the bubble can switch; it's a switch. It can switch the fluid between two channels. So now we have a switching element. And it's also a memory, because you can detect whether or not a bubble is stored there. Then, if you have two channels crossing, a bubble can go through one way or a bubble can go through the other way, but if two bubbles come together, they push on each other and one goes one way and one goes the other way. That's a logic operation. That's a logic gate. So we now have a switch, we have a memory, and we have a logic gate, and that's everything you need to make a universal computer.
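A toy truth-table model of the gate just described, with the geometry and fluid dynamics abstracted away: a bubble's presence is a bit, a lone bubble takes the preferred output, and two colliding bubbles split across both outputs, so one output computes OR and the other AND, with bubbles conserved.

```python
# Toy model of the bubble AND/OR gate: presence of a bubble in a channel is a
# bit. A lone bubble follows the preferred output; two colliding bubbles push
# on each other and take separate outputs. Geometry and timing are abstracted.

def bubble_gate(a: bool, b: bool):
    """Return (preferred_out, other_out) = (A OR B, A AND B); bubbles conserved."""
    if a and b:                # two bubbles collide and take separate channels
        return True, True
    if a or b:                 # a lone bubble follows the preferred channel
        return True, False
    return False, False        # no bubbles, no outputs

for a in (False, True):
    for b in (False, True):
        or_out, and_out = bubble_gate(a, b)
        # Bubble count in equals bubble count out (nothing is created or lost).
        assert int(a) + int(b) == int(or_out) + int(and_out)
        print(f"A={int(a)} B={int(b)} -> OR={int(or_out)} AND={int(and_out)}")
```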
Starting point is 01:28:15 The fact that you did that with bubbles in a microfluid is just kind of brilliant. Well, to stay with that example: what we proposed to do was to make a fluidic ribosome, and the project crashed and burned; it was a disaster. This is what came out of it, and so it was precisely ready, fire, aim, in that we had to do a lot of homework to be able to make these microfluidic systems. The fire part was we didn't think too hard about making the ribosome; we just tried to do it. The aim part was we realized the ribosome failed, but something better had happened. And if you look all across research funding, research management, it doesn't anticipate
Starting point is 01:29:01 this. So fail fast is familiar, but fail fast tends to miss ready and aim. You can't just fail. You have to do your homework before the fail part, and you have to do the aim part after the fail part. And so the whole language of research is about, like, milestones and deliverables. That works when you're going down a straight line, but it doesn't work for this kind of discovery. And to leap to something you said that's really important: I view part of what the fab lab network is doing as giving more people the opportunity to fail. You've said that geometry is really important in biology.
Starting point is 01:29:47 What does fabrication biology look like? Why is geometry important? So molecular biology is dominated by geometry. That's why protein folding is so important: the geometry gives the function. And there's this hierarchical construction: as you go through primary, secondary, tertiary, quaternary structure, the shapes of the molecules make the shapes of the molecular machines. And they really are exquisite machines. If you look at how your muscles move, if you were to see a simulation of it, it would look like an improbable science
Starting point is 01:30:26 fiction cyborg world of these little walking robots that walk on a discrete lattice; they're really exquisite machines. And then from there, there's this whole hierarchical stack: once you get to the top of that, you then start making organelles that make cells that make organs, up through that hierarchy. Just stepping back: does it amaze you that from small building blocks, the amino acids you mentioned, molecules, or let's go to the very beginning, hydrogen and helium at the start of this universe,
Starting point is 01:31:01 they were able to build up such complex and beautiful things, like our human brain? So, studying thermodynamics, which is exactly that question: batteries run out and need recharging, equipment and cars get old and fail, yet life doesn't. And that's why there's a sense in which life seems to violate thermodynamics, although of course it doesn't. It seems to resist the march toward entropy somehow. Right. And so Maxwell, who helped give rise to the science of thermodynamics, posited a problem that was so infuriating it led to a series of suicides. There was a series of
Starting point is 01:31:58 advisors and advisees, three in a row that all ended up committing suicide, who happened to work on this problem. And Maxwell's demon is this simple but infamous problem: right now in this room, we're surrounded by molecules, and they move at different velocities. Imagine a container that has a wall with a little door, and it's got gas on both sides. And at the door is a molecular-sized creature, and it can watch the molecules coming: when a fast molecule is coming, it opens the door; when a slow molecule is coming, it closes the door. After it does that for a while, one side is hot, one is cold. Once something is hot and something is cold, you can make an engine, and so you close the loop: you make an engine and you make energy. So the demon is violating thermodynamics, because it's never touching
Starting point is 01:33:02 the molecule. Yet by just opening and closing the door, it can make arbitrary amounts of energy and power a machine. And in thermodynamics, you can't do that. So that's Maxwell's demon. That problem is connected to everything we just spoke about for the last few hours.
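A toy simulation of the thought experiment, in arbitrary units: molecules with random speeds on two sides of a wall, and a demon that only opens the door for fast molecules going one way and slow molecules going the other. It sorts hot from cold without pushing on any molecule; the catch, as discussed next, is the record the demon has to keep.

```python
# Toy Maxwell's demon: sort fast molecules to the right and slow ones to the
# left by selectively opening a door. Units and distributions are arbitrary.
import random

random.seed(0)
THRESHOLD = 1.0                      # demon's cutoff between "fast" and "slow"
molecules = [{"side": random.choice("LR"), "speed": random.expovariate(1.0)}
             for _ in range(2000)]

def mean_speed(side):
    speeds = [m["speed"] for m in molecules if m["side"] == side]
    return sum(speeds) / len(speeds)

print("before: L=%.2f R=%.2f" % (mean_speed("L"), mean_speed("R")))

for _ in range(20000):
    m = random.choice(molecules)     # a molecule wanders up to the door
    if m["side"] == "L" and m["speed"] > THRESHOLD:
        m["side"] = "R"              # door opens for fast ones heading right
    elif m["side"] == "R" and m["speed"] <= THRESHOLD:
        m["side"] = "L"              # and for slow ones heading left

print("after:  L=%.2f R=%.2f" % (mean_speed("L"), mean_speed("R")))
# The right side ends up "hot" and the left "cold" without work done on any
# molecule; the apparent paradox is what Szilard and Landauer resolve below.
```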
Starting point is 01:33:39 So Leo Szilard, in the early 1900s, was a deep physicist, who also later had a lot to do with post-war anti-nuclear efforts, and he reduced Maxwell's demon to a single molecule. So there's only one molecule, and the question is which side of the partition it's on. That led to the idea of one bit of information. Shannon credited Szilard's analysis of Maxwell's demon for the invention of the bit. For many years, people tried to explain Maxwell's demon by, like, the energy in the demon looking at the molecule, or the energy to open and close the door, and nothing ever made sense. Finally, Rolf Landauer, one of the colleagues
Starting point is 01:34:22 I mentioned at IBM, finally solved the problem. He showed that to explain Maxwell's demon, you need the mind of the demon. When the demon opens and closes the door, as long as it remembers what it did, you can run the whole thing backwards. But when the demon forgets, then you can't run it backwards. And that's where you get dissipation, and that's what resolves the apparent violation of thermodynamics. And so the explanation of Maxwell's demon is that it's in the demon's brain. So then,
Starting point is 01:35:10 Rolf's colleague Charlie at IBM then shocked Rolf by showing you can compute with arbitrarily low energy. So one of the things that's not well covered is that the big computers used for big machine learning, the data centers, use tens of megawatts of power. They use as much power as a city. Charlie showed you can actually compute with arbitrarily low amounts of energy
Starting point is 01:35:39 by making computers that can go backwards as well as forwards. And what limits the speed of the computer is how fast you want an answer and how certain you want the answer to be. But we're orders of magnitude away from that. So I have a student working with Lincoln Labs on making superconducting computers that operate near this Landauer limit, that are orders of magnitude more efficient.
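A back-of-envelope comparison with the Landauer limit being discussed: erasing one bit costs at least kT ln 2 of energy. The data-center numbers below are illustrative round figures, not measurements.

```python
# Back-of-envelope Landauer-limit comparison. Data-center figures are
# illustrative round numbers, not measurements.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K

landauer_per_bit = k_B * T * math.log(2)          # ~2.9e-21 J per erased bit
print(f"Landauer limit at {T:.0f} K: {landauer_per_bit:.2e} J/bit")

# A hypothetical 10 MW data center doing 1e18 bit operations per second:
power = 10e6                                      # W
ops_per_s = 1e18
energy_per_op = power / ops_per_s                 # J per operation
print(f"assumed energy per operation: {energy_per_op:.1e} J")
print(f"orders of magnitude above the limit: "
      f"{math.log10(energy_per_op / landauer_per_bit):.1f}")
```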
Starting point is 01:36:10 So stepping back, that whole tour was driven by your question about life. And right at the heart of it is Maxwell's demon. Life exists because it can locally violate thermodynamics. It can locally violate thermodynamics because of intelligence, and it's molecular intelligence. I would even go out on a limb to say we can already see we're beginning to come to the end of this current AI phase. So depending on how you count, this is, I'd say, the fifth AI boom-bust cycle. And you can
Starting point is 01:36:46 already, you know, it's exploding, but you can already see where it's heading, how it's going to saturate, what happens on the far side. The big thing that's not yet on the horizon is embodied AI, molecular intelligence. So to step back to this AI story: there was automation, and that was going to change everything. Then there were expert systems. Then there was the first phase of the neural network systems. There have been about five of these.
Starting point is 01:37:24 In each case, on the slope up, it's going to change everything. In each case, what happens on the slope down is we sort of move the goalposts, and it becomes sort of irrelevant. A good example: going up, computer chess was going to change everything; once computers could play chess, that fundamentally changes the world. Now, on the downside, computers play chess, winning at chess is no longer seen as a uniquely human thing, but people still play chess.
Starting point is 01:37:55 Yeah. This new phase is going to take a new chunk of things that we thought computers couldn't do, and now computers will be able to do them; they have roughly our brain capacity. But, you know, we'll keep thinking as well as computers do. And as I described, while we've been going through these five boom-busts, if you just look at the numbers, ops per second, bits of storage, bits of I/O, that's the more interesting story. That's been steady, and that's what finally caught up to people. But, you know, as we've talked about a couple of times, there are eight orders of magnitude to go, not in the intelligence in the transistors or in the brain, but in the embodied intelligence, in the intelligence in our body. So the intelligent construction of physical systems that would embody the intelligence,
Starting point is 01:38:40 versus containing it within the computation? Right. There's a brain-centrism that assumes our intelligence is centered in our brain. And in endless ways in this conversation, we've been talking about molecular intelligence. Our molecular systems do a deep kind of artificial intelligence. All the things you think of artificial intelligence as doing, representing knowledge, storing knowledge, searching over knowledge, adapting knowledge, our molecular systems do. But the output isn't just a thought; it's us, it's the evolution of us. And the real horizon to come is embodying AI, not just a processor in a robot, but building systems that really can grow and evolve. So we've been speaking about this boundary between bits and atoms. So let me ask you
Starting point is 01:39:41 about one of the big mysteries: consciousness. Do you think it comes from somewhere in that boundary? I won't name names, but if you know who I'm talking about, it's probably clear. I once did a drive, in fact, up to a Mussolini-era villa outside Torino, in the early days of what became quantum computing, with a famous person who thinks about quantum mechanics and consciousness. And we had the most infuriating conversation, which went roughly along the lines of: consciousness is weird, quantum mechanics is weird, therefore quantum mechanics explains consciousness. That was roughly the logical process.
Starting point is 01:40:37 And you're not satisfied with that process. No, and I say that very precisely, in the following sense. I was a program manager, somewhat by accident, in a DARPA program on quantum biology. And so biology trivially uses quantum mechanics, in that we're made out of atoms, but the distinction is that in quantum computing, in quantum information, you need quantum coherence. And there's a lot of muddled thinking, about things like collapse of the wave function and claims of quantum computing, that garbles what is just quantum coherence: you can think of it as a wave that has very special, wave-like properties. And so
Starting point is 01:41:21 there's a small set of places where biology uses quantum mechanics in that deeper sense. One is how light is converted to energy in photosystems. It looks like one is olfaction, how your nose is able to tell different smells. Probably one has to do with how birds navigate, how they sense magnetic fields. That involves coupling a very weak energy, a magnetic field, into chemical reactions.
Starting point is 01:41:56 And there's a beautiful system. It's not standard in chemistry that magnetic fields this weak can influence chemistry, but there are biological circuits that are carefully balanced, with two pathways that become unbalanced with magnetic fields. Each of these areas is expensive for biology; it has to consume resources to use quantum mechanics in this way. So those are places where we know there's quantum mechanics in biology. In cognition, there's just no evidence. There's no evidence of anything quantum mechanical going on in how cognition works.
Starting point is 01:42:38 Consciousness. Well, I'm saying cognition; I'm not saying consciousness. But to get from cognition to consciousness: so, McCulloch and Pitts made a model of neurons. That led to perceptrons, which then, through a couple of boom-busts, led to deep learning. One of the interesting things about that sequence is that it diverged: the deep neural networks used in machine
Starting point is 01:43:07 learning diverged from trying to understand how the brain works. What makes them work, what's emerged, is a really interesting story. This may be too much of a technical detail, but it has to do with function approximation. We talked about exponentials: to do the same function as a deep network, a shallow network can need to be exponentially larger. And that exponential is what gives the power to deep networks. But what's interesting is that the lessons about building these deep architectures and how to train
Starting point is 01:43:46 them have really interesting echoes to how brains work. And there's an interesting conversation that's sort of coming back, of neuroscientists looking over the shoulder of people training these deep networks, seeing interesting echoes for how the brain works, interesting parallels with it. And so I didn't say consciousness, I just said cognition, but I don't know any experimental evidence that points to anything in neurobiology that says we need quantum mechanics. And I view the question about whether a large language model is conscious as silly, in that biology is full of hacks, and it works.
Starting point is 01:44:39 There's no evidence we have that there's anything deeper going on than just this sort of stacking up of hacks in the brain. And somehow consciousness is one of the hacks, or an emergent property of the hacks. Absolutely. And just numerically, as I said, big computations now have the degrees of freedom of the brain. And they're showing a lot of the phenomenology
Starting point is 01:45:01 of what we think of as properties of what a brain can do. And I don't see any reason to invoke anything else. That makes you wonder what kind of beautiful stuff digital fabrication will create. If biology created a few hacks on top of which consciousness and cognition, some of the things we love about human beings, were created, it makes you wonder what kind of beauty and complexity will be created through digital fabrication. There's an early peek at that, in what is a somewhat misleading term: generative design.
Starting point is 01:45:37 Generative design is where you don't tell a computer how to design something, you tell the computer what you want it to do. That doesn't work, that only works in limited subdomains. You can't do really complex functionality that way. The one place it's matured though is topology optimization for structure. So let's say you wanted to make a bicycle or a table.
Starting point is 01:46:01 You describe the loads on it and it figures out how to design it. And what it makes are beautiful, organic-looking things. These are things that look like they grew in a forest. And they look like they grew in a forest because that's sort of exactly what they are: they're solving the problem of how you handle loads in the same way biology does. And so you get things that look like trees and shells and all of that.
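A toy, load-driven sizing example in that spirit, not real topology optimization: a cantilever with a tip load is sized so the bending stress stays at the allowable everywhere, and the depth profile that falls out is an organic-looking taper. All numbers are illustrative.

```python
# Toy load-driven sizing: "describe the loads and the shape follows."
# A cantilever with a tip load is deepened where the bending moment is larger.
import math

P = 500.0            # tip load, N
L = 1.0              # beam length, m
b = 0.03             # fixed width, m
sigma_allow = 50e6   # allowable bending stress, Pa
h_min = 0.002        # minimum manufacturable depth, m

def depth(x_from_tip):
    # Bending moment grows linearly from the tip: M = P * x.
    M = P * x_from_tip
    # Rectangular section: sigma = 6*M / (b*h^2), so h = sqrt(6*M / (b*sigma)).
    return max(h_min, math.sqrt(6.0 * M / (b * sigma_allow)))

for i in range(11):
    x = L * i / 10.0
    bar = "#" * int(1000 * depth(x))
    print(f"x={x:4.2f} m  depth={depth(x)*1000:5.1f} mm  {bar}")
```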
Starting point is 01:46:33 And so that's a peek at this transition from "we design" to "we teach the machines how to design." What can you say, because you mentioned cellular automata earlier, about the observation, from the example you just gave and from looking at cellular automata, that from simple rules and simple building blocks arbitrary complexity can emerge? Do you understand what that is, and how it can be leveraged? So, understanding what it is is much easier than it sounds. I complained about Turing's machine making a physics mistake, but Turing never intended it to be a computer architecture. He used it just to prove results about uncomputability. What Turing did on computation is exquisite, it's gorgeous. He gave us our notion of computational universality, and something that sounds deep and turns out to be trivial is that it's really easy to show almost everything is computationally universal.
Starting point is 01:47:37 So Norm Margolus wrote a beautiful paper with Tom Toffoli showing, in a cellular automata world, which is like the Game of Life, where you just move tokens around, that modeling billiard balls on a billiard table with cellular automata is a universal computer. To be universal, you need a persistent state, you need a nonlinear operation to interact states, and you need connectivity. That's what you need to show computational universality. So they showed that a CA modeling billiard balls is a universal computer. Chris Moore went on to show, beyond chaos,
Starting point is 01:48:31 that Turing showed there are problems in computation that you can't solve; they're harder than unpredictable, and there's actually a deep reason that they are unsolvable. Chris Moore showed it's very easy to make physical systems that are uncomputable: what the physical system does, just balls bouncing off surfaces, can pose uncomputable problems. So almost any non-trivial physical system is computationally universal.
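A minimal sketch of the Game of Life mentioned above: a simple local rule shuffling tokens around, here just stepping a glider, which reappears shifted one cell diagonally after four steps. The full universality constructions (gliders as signals, collisions as gates, or the Margolus-Toffoli billiard-ball model) are far more elaborate than this.

```python
# Minimal Game of Life step: a simple local rule moving "tokens" around.
from collections import Counter

def step(live_cells):
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live_cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):          # after 4 steps the glider reappears, shifted by (1, 1)
    cells = step(cells)
print(sorted(cells))
print(sorted((x - 1, y - 1) for (x, y) in cells) == sorted(glider))  # True
```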
Starting point is 01:49:02 So the first part of the answer to your question is, and this comes back to my comment about how you bootstrap a civilization, you just don't need much to be computationally universal. Now, there isn't today a notion of fabricational universality or fabricational complexity. The sorts of numbers I've been giving you, about you eating lunch versus the chip fab, are in the same spirit as what Shannon did. But once you connect computational universality to a kind of fabricational universality, you then get the ability to grow and adapt and evolve. Because that evolution happens in the physical space,
Starting point is 01:49:50 and that's ultimately... And so that's why, for me, the heart of this whole conversation is morphogenesis. Just to come back to that: what Turing ended his sadly cut-short life studying was how genes give rise to form, how the relatively small amount of information in the genome can give rise to the complexity of who you are. And that's where this molecular intelligence resides, which is first how to describe you, but then how to describe you such that you can exist and you can reproduce and you
Starting point is 01:50:39 can grow and you can evolve. And so, you know, that's the seat of our molecular intelligence. That would make a revolution in biology. Yeah, it really is. It really is. And that's where you can't separate communication, computation, and fabrication. You can't separate computer science and physical science.
Starting point is 01:51:01 You can't separate hardware and software. They all intersect right at that place. Do you think of our universe as just one giant computation? I would even kind of say quantum computing is overhyped, in that there are a few things quantum computing is going to be good at. One is breaking crypto systems, and then how to make new crypto systems. What it's really good at is modeling other quantum systems, so for studying nanotechnology it's going to be powerful. But quantum computing is not going to disrupt and change everything. The reason I say that is this interesting group of strange people who helped invent
Starting point is 01:51:43 quantum computing before it was clear anything was there. One of the main reasons they did it wasn't to make a computer that can break a crypto system. It was that you could turn this backwards: you could be surprised quantum mechanics can compute, or you can go in the opposite direction and say, if quantum mechanics can compute, that's a description of nature. So physics is
Starting point is 01:52:12 written in terms of partial differential equations. That is an information technology from two centuries ago. This will sound very strange to say, but the equations of physics, Schrödinger's equation and Maxwell's equations
Starting point is 01:52:34 and all of them, are not fundamental. They're a representation of physics that was accessible to us in the era of having a pencil and a piece of paper. They have a fundamental problem, which is: if you make a dot on a piece of paper, in traditional physics theory there's infinite information in that dot. A point has infinite information. That can't be true, because information is a fundamental resource that's connected to energy. And in fact, one of my favorite questions you can ask a cosmologist, to trip them up, is:
Starting point is 01:53:22 is information a conserved quantity in the universe? Was all the information created in the Big Bang, or can the universe create information? And I've yet to meet a cosmologist who doesn't stutter and not quite know how to handle that existential question. But putting that aside: in physics theory, the way it's taught, information comes late. You're taught about x, a variable, which can contain infinite information, but physically that's unrealistic, and so physics theories have to find ways to cut that off.
Starting point is 01:54:02 So instead, there are a number of people who say a theory of the universe should start with information and computation as the fundamental resources that explain nature, and then you build up from that to something that looks like throwing baseballs down a slope. And so in that sense, the work on physics and computation has many applications that we've been talking about, but more deeply, it's really getting at new ways to think about how
Starting point is 01:54:38 the universe works. And there are a number of things that are hard to do in traditional physics that make more sense when you start with information and computation as the root of physical theory. So information and computation being the real fundamental thing in the universe. Right. That information is a resource. You can't have infinite information in finite space.
Starting point is 01:55:02 Information propagates and interacts. And from there, you erect the scaffolding of physics. Now, it happens that the words I just said look a lot like quantum field theories. But there's an interesting way in which, instead of starting with differential equations to get to quantum field theories, and from quantum field theories getting to quantization, if you start from computation and information, you begin sort of quantized and you build up from there. And so that's the sense in which, absolutely, I think about the universe as a computer. The easy way to understand that is just that almost anything is computationally universal, but the deep
Starting point is 01:55:47 way is that it's a real, fundamental way to understand how the universe works. Let me go a little bit to the personal. With the Center for Bits and Atoms, the students you've worked with have gone on to do some incredible things in this world, including building supercomputers that power Facebook and Twitter and so on. What advice would you give to young people, what advice have you given them, on how to have one heck of a great career and one heck of a great life? One important one is: if you look at junior faculty trying to get tenure at a place like MIT, the ones who try to figure out how to get tenure are miserable and don't get tenure,
Starting point is 01:56:39 and the ones who don't try to figure it out are happy and do get it. I mean, you have to love what you're doing, and believe in it, and nothing else could possibly be what you want to be doing with your life, and it gets you out of bed in the morning. And again, it sounds naive, but within the limited domain I'm describing now, getting tenure at MIT, that's the key attribute. And in the same sense, if you take the sort of outlier students we're talking about, 99 out of 100 come to me and say,
Starting point is 01:57:14 your work is very fascinating, I'd be interested to work for you. And one out of 100 comes and says, you're wrong, here's your mistake, here's what you should have been doing, or they just sort of say, I'm here, and get to work. Again, I don't know how far this resource goes. I've said I consider the world's greatest resource this engine of bright, inventive people, of which we only see a tiny tip of the iceberg, and everywhere we open these labs, they come out of the woodwork. And, yeah, we didn't set out to create all these educational programs, all these other things I'm describing. We tried to partner everywhere with local schools and local companies, and kept tripping over
Starting point is 01:58:00 dysfunction, and found we had to create the environment where people like this can flourish. And so I don't know if this is everyone, if it's 1% of society, what the fraction is, but it's so many orders of magnitude bigger than we see today. You know, we've been racing to keep up with it, to take advantage of that resource. Something tells me it's a very large fraction of the population. I mean, the thing that gives me the most hope for the future is that population. Once a year, this whole lab network meets,
Starting point is 01:58:29 and it's my favorite gathering, it's in Bhutan this year, because it's every body shape, it's every language, every geography, but it's the same person in all those packages. It's the same sense of bright, inventive joy and discovery. If there are people listening to this and they're just overcome with how exciting this is, which I think they would be, how can they participate? How can they help?
Starting point is 01:58:53 How can they encourage young people, or themselves, to build stuff, to create stuff? Yeah, that's a great question. So this is part of a much bigger maker movement that has a lot of embodiments. The part I've been involved in, this fab lab network, you can think of as a curated part that works as a network. So you don't benefit in a gym if somebody exercises in another gym, but in the fab network,
Starting point is 01:59:21 you do, in a sense, benefit when somebody works in another lab, in the way it functions as a network. So you can come to cba.mit.edu to see the research we're talking about. There's the Fab Foundation, run by Sherry Lassiter, at fabfoundation.org. Fablabs.io is a portal into this lab network.
Starting point is 01:59:44 Fabacademy.org is this distributed, hands-on educational program. Fab.city is the platform of cities producing what they consume. Those are all nodes in this network. So you can learn with Fab Academy, and you can perhaps launch, or help launch, or participate in launching, a fab lab. Well, in particular: from 1 to 1,000, we carefully counted labs. Now we're going from 1,000 to 1 million, where
Starting point is 02:00:11 it ceases to be interesting to count them. And in going from 1,000 to a million, what's interesting about that stage is, technologically, you go to a lab not to get access to the machine; you go to the lab to make the machine. But the other interesting thing is, we have an interesting collaboration on a fab lab in a box. This came out of a collaboration with SolidWorks on how you can put a fab lab in a box, which is not just the tools, but the knowledge.
Starting point is 02:00:45 So you open the box, and the box contains the knowledge of how to use it as well as the tools within it, so that the knowledge can propagate. And so we have an interesting group of people working on that. You know, for the original fab labs, we had a whole team get involved in the setting up and training, and the Fab Academy is a real in-depth, deep technical program for the training. But in this next phase, it's how the lab itself knows how to do the lab. We've talked deeply about the intelligence in fabrication, but in a much more accessible sense, it's about how the AI in the lab, in effect, becomes a collaborator with you in the near term, to help get started. And for people wanting to connect,
Starting point is 02:01:36 it can seem like a big step, a big threshold, but we've gotten to thousands of these, and they're doubling exactly that way, just from people opting in. And in so doing, driving towards this kind of idea of personal digital fabrication. Yeah, and it's not utopia, it's not free, but come back to today: we separately have education, we have big business, we have startups, we have entertainment; each of these things is segregated. When you have global connection to one of these local facilities, in it you can do play,
Starting point is 02:02:14 and art, and education, and create infrastructure; you can make many of the things you consume. You could make it for yourself. It could be done at a community scale; it could be done at a regional scale. I'd say the research we spent the last three hours talking about, I thought was hard. And in a sense, I mean, it's non-trivial,
Starting point is 02:02:40 but in a sense, it's just sort of playing out, or turning the crank. What I didn't think was hard is: if anybody can make almost anything anywhere, how do you live, how do you learn, how do you work, how do you play, these very basic assumptions about how society functions. There's a way in which it's kind of back to the future, in that this mode where work is money, is consumption, and consumption is shopping by selecting, is only a few-decades-old stretch. In some ways we're getting back to, say, a Sami village in north Norway that is deeply sustainable, but rather than just
Starting point is 02:03:28 reverting to living the way we did a few thousand years ago, being connected globally, having the benefits of modern society, but connecting it back to older notions of sustainability. I hadn't remotely anticipated just how fundamentally that challenges how a society functions, and how interesting and how hard it is to figure out how we can make that work. It is possible that this kind of process will give a deeper sense of meaning to each person. Let me violently agree, in two ways. One way is this community of making crosses many sensitive sectarian boundaries in many parts
Starting point is 02:04:18 of the world where there's just implicit or explicit conflict, but this act of making seems to transcend a lot of historical divisions. I don't say that philosophically; I just say that as an observation. And I think there's something really fundamental in what you said, which is: deep in our brain is shaping our environment. A lot of what's strange about our society is the way that we can't do that. The act of shaping our environment touches something really, really deep that gets to the essence of who we are. That's again why I say that, in a way, the most important thing made in these labs is making itself. So if this shaping of our environment
Starting point is 02:05:14 gets us something deep? What do you think is the meaning of it all? What's the meaning of life now? I can tell you my insights into how life works. I can tell you in my insights in how to make life meaningful and fulfilling and sustainable. I have no idea what the meaning of life is, but maybe that's the meaning of life. The uncertainty, the confusion, because there's a magic to it all. Everything you've talked about from starting from the basic elements with the big bang, the somehow created the sun, the somehow set a few to thermodynamics and created life. And all the ways that you've talked about from ribosomes that created the machinery,
Starting point is 02:06:03 that created the machine, and then now the biological machine creating through digital fabrication, more complex artificial machines, all of that, this is a magic to that creative process. And we notice, we humans are smart enough to notice the magic. So it's, you haven't said the S word yet. Which one is that singularity? word yet. Which one is that singularity? I'm not sure if Ray Cursewiles listening if he is high-ray, but I have a complex relationship with Ray because a lot of the things he projects I find annoying, but then he does his homework and then somewhat annoyingly he points out how almost everything I'm doing fits on his roadmaps.
Starting point is 02:06:48 And so the question is, are we heading towards a singularity? So I'd have to say, I lean towards sigmoids rather than exponentials. We've done pretty well with sigmoids. Yeah, so sigmoids are where things grow and then they taper, and then there can be one after it, and one after that. So, you know, I'll pass on whether there's enough of them that they diverge, but you know, the selfish gene answer to the meaning of life is that the meaning of life is the propagation of life. And so it was a step for atoms to assemble into a molecule, for molecules to assemble into a protocell, for the protocell to form organelles, for the cells to form organs,
Starting point is 02:07:52 the organs to form an organism, then it was a step for organisms to form family units, then family units to form villages, you can view each of those as a stack in the levelist-to-form villages, you can view each of those as a stack in the level of organizations. So you could view everything we've spoken about as the imperative of life, just the next step in the hierarchy of that, in the fulfillment of the inexorable drive of the violation of thermodynamics.
Starting point is 02:08:22 So, you could view, I'm an embodiment of the will of the violation of thermodynamics. So, you could view, I'm an embodiment of the will of the violation of thermodynamics speaking. The two of us, having an old chat, yes. Yeah. And so continues, and even then the singularity is just a transition up the ladder. There's nothing deeper to consciousness than it's a derived property of distributed problem-solving.
Starting point is 02:08:49 There's nothing deeper to life than embodied AI in morphogenesis. So why so much of this conversation in my life is involved in these fab labs. And initially it just started as outreach, then it started as keeping up with it. Then it turned to it was rewarding. Then it turned to we're learning as much from these labs in as goes out to them. It began as outreach, but now more knowledge is coming back from the labs than it's going into them. And then finally, it ends with, you know, what I described as competing with myself at MIT, but a better way to say that is tapping the brain power of the planet.
Starting point is 02:09:39 And so, I guess for me personally, that's the meaning of my life. And maybe that's the meaning of the universe too: it's using us humans and our creations to understand itself. In a way, whatever creative process created Earth is competing with itself. Yeah. So you could take morphogenesis as a summary of this whole conversation, or you could take recursion, in that, in a sense, what we've been talking about is recursion all the way down. And in the end, I think this whole thing is pretty fun.
Starting point is 02:10:17 Short as life is, it's pretty fun. And so is this conversation. You know, I mentioned to you offline, I'm going through some difficult stuff personally, and your passion for what you do is just really inspiring. It just lights up my mood and lights up my heart. And you're an inspiration for, I know, thousands of people that work with you at MIT
Starting point is 02:10:36 and millions of people across the world. It's a big honor to you, so with me today. This was really fun. This was a pleasure. Thanks for listening to this conversation with Neil Gershonfeld. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Pablo Picasso.
Starting point is 02:10:54 Every child is an artist. The challenge is staying an artist when you grow up. Thank you for listening, and hope to see you next time.
