The Joy of Why - Can We Program Our Cells?
Episode Date: March 8, 2023. By genetically instructing cells to perform tasks that they wouldn't in nature, synthetic biologists can learn deep secrets about how life works. Steven Strogatz discusses the potential of this young field with researcher Michael Elowitz.
Transcript
Daniel and Jorge Explain the Universe is a podcast about, well, everything in the universe.
Do you want to understand what science knows about how the universe began and what mysteries remain?
Are you curious about what lies inside a black hole? And if we'll ever know,
Daniel is a physicist working at CERN who actually knows what he's talking about.
And Jorge asks all the questions that pop up in your mind as you listen to make sure
everything is crystal clear.
Listen to Daniel and Jorge Explain the Universe on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
I'm Steve Strogatz, and this is The Joy of Why,
a podcast from Quanta Magazine that takes you into some of the biggest unanswered questions in science and math today.
In this episode, we're going to be talking about synthetic biology.
What is synthetic biology and what are scientists trying to do with it?
Simply put, we could say that synthetic biology is a fusion of biology, especially molecular biology, and engineering.
The distinctive thing about it is that it treats cells as programmable devices.
It's a kind of tinker toy approach that builds circuits, but not out of wires and switches like we're used to, but rather out of biological components like proteins and genes.
Programming cells in this way isn't really all that different from programming computers,
except that the programming language isn't Python or C++,
it's the language of biology, the language of DNA.
The goal is to make proteins that will interact with each other in some clever ways. The potential medical applications of synthetic biology are huge, but the approach also holds promise for illuminating how life works at the deepest level.
It's one thing to strip cells apart to see how they work. That's the classic approach to molecular
biology. But it's another thing to tinker with cells to try to get them to perform new tricks,
which is something that my guest, Michael Elowitz, does.
For example, a while back, he engineered cells to blink on and off like Christmas lights.
And that's just the beginning.
Michael Elowitz is a professor of biology and biological engineering at Caltech and an investigator of the Howard Hughes Medical Institute.
Welcome, Michael.
Thanks, Steve.
It's great to be here.
So let's talk about the foundational idea of synthetic biology. I mentioned in the intro that we could think of living cells as programmable devices. In the field of synthetic biology, it seems like you guys have this philosophy that you can learn about cells by building functionality into cells yourself. Rather than taking things apart, you learn by creating, almost like a kid playing in a sandbox or something.
Yeah. So I guess, you know, I think in a way there are all these different routes to appreciating and understanding biological systems.
And the classic biological approach is to start with the incredible complexity
of an intact cell or organism
and really try and dissect it and take it apart.
And so you often ask questions like,
what genes are necessary for the cell
to do something interesting,
to find its food or to find a mate or anything like that.
And so you're taking apart a complex system and trying to figure out the function of its individual components and how they all work together to give you that function. That's, in a way, regular biology.
Synthetic biology kind of sort of flips it on its head and it sort of tries to get at the
similar questions, but from the opposite approach. It says, let's kind of strip everything down
and ask not what's necessary, but what's sufficient. What's the minimal set of components
and interactions that are sufficient to enable a cell to do something, to blink on and off,
to go transform into a different state, whatever you want. And so that's the kind of constructive
approach, sort of building up from the bottom. And it can answer different kinds of questions. Like one thing it allows you to do is to compare different sorts of designs
for the same function and ask, you know, are there advantages to one circuit design over another
circuit design, for example. That's hard to do in a natural system where you're starting with
one complicated circuit design. You don't really understand everything about it, you know, and you
can't really kind of completely replace it necessarily with a totally different design. That's what synthetic biology allows you to do.
Uh-huh. So I'm a little bit puzzled about your use of the word circuit. To me, a cell feels sort
of more like a city with a lot going on in there, all these different players. But what do you have
in mind when you speak of a circuit? Is it the cell itself, or is it that the control system of the cell is the circuit, or what? What's the analogy here?

Yeah, I guess the analogy is really that,
you know, what's inside the cell? There's DNA, your genome, which contains the sequences of all the different individual proteins. So your cells are full of a zoo of these different proteins, and different ones can carry out a chemical reaction, or they can form structures within the cell, or they can process information in many cases.
And the way they do that is that each of these proteins can effectively kind of modify,
in a particular way, another protein. So the analogy with the electronic circuit is really
that just as you have resistors and transistors and so forth that are connected by wires,
in the cell you have different kinds of proteins that do specific things to each other in very specific ways.
And the wire, the analogy of the wire is really like the molecular specificity.
It's the fact that one protein will kind of lock on to another protein species and affect it in a particular way.
Uh-huh. It's a really interesting thing for anyone who's ever tried to build something.
You, along the way, always encounter problems.
And, you know, there's one strategy, which is you could reverse engineer something that's already been built.
Like, say, if you wanted to build a radio, you could find a radio, take it apart,
look at all the parts and try to figure out what they're doing.
But it sounds like what you're saying is that there's this ground-up approach where you start.
I like what you said, sufficient rather than, or wait, did you say necessary rather than sufficient?
It's sufficient rather than necessary.
So, what's the minimal thing that's sufficient to do something rather than what's necessary?
But maybe I could add to that.
Like, I think what's weird about it a little bit is it's not totally like building
it from scratch. It's not artificial life. It's not creating a cell just out of molecules. We're
taking genes and proteins and we're putting them into the cell so that they'll carry out a
particular function that we designed. But to do that, they have to make use of all of the
capabilities that are already present in the cell.
For example, if I put a gene into that cell, that gene will be expressed, which means a protein will be made from it.
And that will be carried out by a whole set of molecular machines, the polymerase and things like that, that are basically just doing that all the time in the cell. So on the one hand, we're building the function that we care about, but we're doing it inside of a kind of operating system, in a way, that the cell provides, with all the infrastructure necessary for our circuit to run.
So you're piggybacking on what's already there, but you can actually get it to do,
as you say, new functions maybe at your, hopefully at your will.
Yeah. This is the mind boggling thing about
biology that, you know, it didn't in a way have to be true that the machinery of the cell supports
not only the functions that evolved with it naturally, but also new functions that you plug
in or transplant from other organisms. There's this kind of aspect of compatibility: a human insulin gene can be put into a bacterium and make insulin, even though the last common ancestor of humans and bacteria lived billions of years ago. So it's kind of amazing. You can't take your Windows app and run it on a Mac, but you can take a human gene and express it in a bacterium.
So that's a really interesting point, this whole thing that, let's see if I can,
maybe we'll unpack this a little bit together.
You're saying that by trying to get a cell to perform some new function, you might have thought that making those demands, having it do something that it hasn't evolved to do, might break the cell, right? Like it might cause everything to fall apart. A lot of the gadgets that human beings design, which are often optimized in one way or another, are very brittle. If you push them out of their comfort zone, they fall apart. You're saying the cell is not like that. Why does it have the flexibility to accommodate new kinds of programs that you put into it? It's not brittle, as you said.

And I can speculate on that a little bit.

Sure, do it.

Okay. You know,
I think it really has to do with the principle of evolvability. This is the idea that the systems
that we have in the cell are not just good at what they're doing in this cell.
They're systems that have continued to evolve over billions of years and allow the organism to evolve.
And to do that, they have to have the flexibility to support new kinds of functions that can arise through evolution.
And so the same thing that makes organisms evolvable may also make them engineerable.
Uh-huh.
Interesting.
Right.
So since evolution depends on the ability to change, I mean, that's what the word evolve is all about, is a kind of change over time.
So the cell has to be able to accommodate those kind of demands to be able to change incrementally or sometimes even in big jumps.
And you're saying that whatever it is that lets cells do that also lets you do it to them, with you imposing the changes on them rather than the outside world.
Exactly, yeah.
And so, you know, these are kind of classic ideas.
And actually there's a beautiful book by Gerhart and Kirschner about this idea of facilitated variation.
But I think it's kind of interesting to think about how, you know,
how obviously the cell did not evolve to enable synthetic biology as far as we know.
But, you know, these properties do enable it, at least to some extent.
Okay. But maybe we better get to the point of this.
Like, why do you want to program cells?
Why are you fiddling around and tinkering at the cellular
level? What are you trying to do? I think there's sort of two reasons to do synthetic biology,
two kinds of reasons. And one is there's a huge variety of applications. You know,
you can engineer cells to be little factories that produce drugs, materials, fuels,
other kinds of chemicals. And you can do that by bringing in
genes for enzymes and by kind of taking enzymes from other organisms and putting them together
in one organism, you can make the cell into kind of a little factory and it can make these chemicals
more cheaply often, more precisely and make different kinds of chemicals than you might
be able to do in a conventional chemical synthesis process.
So that's sort of just a very useful and powerful thing.
There's also potentially kind of environmental applications, you know, like people are
engineering bacteria or microbes that can fix nitrogen or fix carbon.
This is kind of really exciting and something that could have a lot of impact potentially
in the environment for sustainability.
And then there's applications
that are kind of on the therapeutic side.
And I think those are, I think, particularly exciting also
because, you know, a cell, if you could program it,
could be an amazing drug, right?
Much more powerful than a molecule.
So, you know, a molecular drug
can be very specific for its target,
but a cell could be programmed
to detect a lot of different information in the environment,
process that information, make decisions,
try to find specific target cells and change their behavior or kill them.
It has a lot of kind of programmable flexibility as a therapeutic device.
So all of those together, I think, are one good reason to try to figure out how to program cells.
The other side of it is this ability to kind
of learn about principles of biology this way, that just when you start playing around with these
components, you ask different questions about the cell than you would ask if you're taking apart
a really complex system. You know, maybe you learn about there's components of the cell
that degrade proteins, and you start to ask, okay, how can I use these to degrade any protein I want,
for example?
So you start to kind of take a user point of view.
It's a little bit like shifting from somebody who's using a computer to somebody who's like trying to hack around and program a computer.
It's just you ask different questions about the computer.
All right.
So those are three really interesting directions for us to explore.
Let me see if I can recap.
So synthetic biology in the service of making new kinds of factories to produce insulin or other useful molecules.
Synthetic biology, you mentioned for things that aren't necessarily medical, but maybe they could grab carbon out of the atmosphere or the ocean to maybe help us with climate change and that kind of thing.
Or maybe digest plastics and all that junk floating around in the ocean.
And then there's synthetic biology as a window into biology itself. Can you set the context for us in terms of the history, where the field itself has come from?
So, like, where did it begin? Is it a very old subject? Is it a very new subject?
To me, one of the great inspirations really comes from the classic era of molecular biology, when people like Jacob and Monod and others were trying to understand how cells regulate the expression of their genes. When a bacterium is exposed to a new sugar, for example, how does it know to turn on the genes that are necessary to metabolize that sugar?
And they discovered repressors, which are proteins that turn off the expression of a gene. And they
realized that in the presence of the sugar, the sugar could inactivate the repressor. So it would
no longer turn off the expression of the gene that was necessary to digest the sugar. So that was like
a very simple regulatory circuit. The sugar was actually lactose. And this became a paradigm of molecular biology for understanding gene regulation.
I just want to have you pause here because I feel like when we talk about gene regulation
or regulatory sequences or that kind of thing, I always feel like this is a sort of an abstract
idea that is really breathtaking and thrilling when understood, but just sounds like a lot of words when it's not understood.
So the proteins can do all kinds of things in cells.
They can be structural. Everyone knows that their hair is made of protein, right? So you can use proteins to make actual structures like hair, or they could play a role in your skin, like collagen, or whatever.
OK.
There's that.
So proteins can do that humble thing, just be part of the building.
Then they can also do things like help chemical reactions go faster than they would otherwise.
They could be enzymes.
They can be catalysts.
They can do that kind of stuff. But the really freaky thing that some proteins can do,
to my way of thinking as a naive person about biology, is that they can turn genes on or off.
Like you're talking about repressors. So a protein can actually help some other gene make more of
some other protein, including itself. I mean, that's where you really get into weird logical
loops. I'm a protein that's telling
my gene, make more of me or make less of me. That's the regulation part, right?
Yeah, absolutely. And I think that's where things get really fun because, you know, just like you
said, a protein can turn itself off or turn its own gene off or turn its own gene on, which really
means controlling its own level. And in fact, it's not just that it could do that, it does do that.
So I think that kind of circuit, which is really the simplest interesting circuit you can imagine,
a gene turning itself off, occurs much more often than you would expect by chance. You know,
it's happening all the time that genes are regulating themselves, and they can do it
negatively, repressing themselves. There are also proteins that turn on genes, and those can turn themselves on as well, and that has other kinds of functions. So if you start to kind of generalize that and
you imagine sort of a network where, you know, you have a bunch of these proteins and they're
all regulating each other in different ways, then, you know, just kind of imagine a graph with a
bunch of these genes and the arrows saying which ones are regulating which other ones, you start to kind of, you know, conceptualize the complexity of the regulation
of the cell, that there's all these different arrows between each of these regulators and
each of the other ones.
And they're kind of, what does that all do?
What kinds of behaviors does it generate?
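A small worked example may help make the simplest version of this concrete. Below is a minimal sketch, in Python, of a gene whose own protein represses its expression, compared with an unregulated gene tuned to reach the same steady-state level; all parameter values are illustrative assumptions, not measurements from any real gene. One well-known property of this negative autoregulation, not spelled out in the episode itself, is that the regulated gene approaches its steady state faster.

```python
# A minimal sketch (illustrative parameters, not measurements) of the simplest
# circuit mentioned above: a gene whose protein represses its own expression,
# compared with an unregulated gene tuned to reach the same steady-state level.
import numpy as np
from scipy.integrate import solve_ivp

beta, delta, K = 10.0, 1.0, 2.0   # max production rate, decay rate, repression threshold

# steady state of the autorepressed gene: beta / (1 + p/K) = delta * p
p_star = 0.5 * K * (-1.0 + np.sqrt(1.0 + 4.0 * beta / (delta * K)))

def autorepressed(t, y):
    p = y[0]
    return [beta / (1.0 + p / K) - delta * p]   # production falls as the protein rises

def unregulated(t, y):
    p = y[0]
    return [delta * p_star - delta * p]         # constant production, same steady state

t_eval = np.linspace(0.0, 5.0, 500)
for name, rhs in [("autorepressed", autorepressed), ("unregulated", unregulated)]:
    sol = solve_ivp(rhs, (0.0, 5.0), [0.0], t_eval=t_eval)
    t_half = t_eval[np.argmax(sol.y[0] >= 0.5 * p_star)]  # time to reach half of steady state
    print(f"{name:13s} reaches half its steady-state level at t ~ {t_half:.2f}")
```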
I mean, exactly.
That starts to sound to me like a computer, where there are switching circuits, you know, elements that turn each other on or off. And if I'm on, I make you go down, so now you're off. But your being off lets something else go on. It also starts to sound
like a brain, right? I mean, brains have all these neurons that inhibit each other or activate each
other. So you start to feel like the cell can, in a certain chemical sense, think.
Absolutely.
Or compute or something.
Yeah.
They absolutely do compute.
And you can see that in many different systems in biology, that it's not just sort of passively taking in information.
It's processing that information.
It's doing it through these molecular circuits.
What is the question or the set of questions at the heart of your research?
Well, I think, you know, for me, it's really like, what does it take to make biology programmable
so that we could predictively create, you know, almost any new function we want out of cells?
Okay. So that's kind of the large theme. That's a big, hard problem.

Yeah. So, you know, I think that the premise that
we work with is that just like for electronic circuits, you're not going to, at this point in
time, you're not going to just wire together random transistors and just see what happens.
You're going to use a bunch of pretty well-defined circuit design principles and electronics that
people have discovered over many decades. And so we kind of have a feeling
for what kinds of circuits are good for which kinds of functions. And so our premise is really
that there are analogous principles for biological circuit design, and many of them probably not yet
discovered, but they can be discovered. They should apply equally to natural circuits that have evolved, and they will also enable us to make circuits that operate more predictably in the cell. In
other words, we're trying to kind of figure out the principles of circuit design that are not
just imported from electrical engineering, but actually are kind of more appropriate for the way
the cell works. Because after all, these are molecular circuits, they're not electronic
circuits.
So that's kind of like the question.
But then how do you actually do that?
Okay, so you can't just sort of wait
for the principles to descend upon you.
You have to go out and try to look for them.
And right now, what we're really excited about
is that most synthetic biology, I would say so far,
has been focused on functions that you can
program in a single cell. So you can grow that cell into a population, but all the cells in that
population will be doing the same thing. But we know that a lot of the most exciting things in
biology take advantage of the fact that cells like to work together. So we are a giant multicellular
organism composed of trillions of cells. And even bacteria, which people think of as single-cell organisms, they rarely are alone.
You know, they're interacting with each other and with other microorganisms in complex environments.
And so a lot of the functionality that's really amazing about cells comes from their ability to act together as multicellular systems.
And we'd like to kind of bring synthetic biology to that multicellular level by building circuits that take advantage of multicellularity to do things that would be
hard to do for an individual cell.

I mean, that's such a rich story to be talking about the
collective behavior of the cells, especially when they have differentiated roles to play.
But I feel like we have to talk about the circuit aspect first. So I wonder if we could, before we get into collective behavior, just talk about how is a biological circuit the same or how is it different from an electronic circuit?
One of my favorite things that really gets at the fundamental difference between, you know, an electronic circuit and a biological circuit is something we call noise.
So a system can have intrinsic randomness to it. In other words,
the behavior may not be totally predictable. It's not deterministic. These are all just
different ways to say the same thing. And in an electronic circuit, there is noise, you know,
there is noise in the flow of electrons along a wire. It's just that we've designed electronic
circuits so that that magnitude of that noise virtually never affects
the behavior of the circuit. So an electronic circuit in your computer is going to behave
deterministically, totally predictably. Life does not seem to operate in that regime.
Inside the cell, the expression of a gene can and does fluctuate, and those fluctuations are
random. They're intrinsically random, and they're in a way
beyond the control of the cell. The cell can say, I want more of this gene or less, can control
things quantitatively. But the exact amount of protein that it generates, for example,
is not determined precisely. And when you think about that, coming from electrical engineering, it just starts to seem like, well, cells are just not very good.
They're not as good as our electronic devices.
But after a while, you start to realize that this is a feature and not just a bug in many ways.
So it's something that gives cells the ability to kind of, in a way, have their own little random number generators, to use strategies that are distributed.
So, for example, to divide labor, maybe I have a cell population where the cells are all nominally sort of equivalent
to each other. But by taking advantage of noise, I can say I want 30% of these cells to do this
thing and 70% to do this other thing.
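To make that intrinsic randomness concrete, here is a minimal Gillespie-style sketch, under the assumption of a simple birth-death model of gene expression with illustrative rates (not a model of any particular gene): a population of genetically identical cells ends up with a spread of protein counts purely by chance, and thresholding such a noisy level is one way a circuit could send a tunable fraction of a population down a different path.

```python
# A minimal Gillespie-style sketch of intrinsic noise in gene expression:
# identical cells with identical parameters still end up with different
# protein counts purely by chance. Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
k_make, k_decay = 10.0, 1.0   # production rate and per-molecule degradation rate

def simulate_cell(t_end=10.0):
    t, protein = 0.0, 0
    while t < t_end:
        rates = np.array([k_make, k_decay * protein])  # produce one / degrade one
        total = rates.sum()
        t += rng.exponential(1.0 / total)              # waiting time to the next event
        protein += 1 if rng.random() < rates[0] / total else -1
    return protein

counts = np.array([simulate_cell() for _ in range(1000)])
print(f"mean = {counts.mean():.1f}, std = {counts.std():.1f}")
# Thresholding such a noisy level is one way a circuit could send a tunable
# fraction of an otherwise identical population down a different path.
```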
So let's talk about some of the things that you've done in the lab, Michael. And actually, I hope I won't embarrass you, but you're kind of known for this.
You, for instance, created something called the repressilator, which is some kind of combination
of something that does repressing and oscillating.
Can you tell us about it?
What does a repressilator do?
The repressilator is a synthetic circuit that generates, it kind of acts like a little clock
inside of a
cell. So it generates oscillations in the levels of its own proteins. But the question is sort of,
I think, why would you build something like that? And so this is something that I was a graduate
student at Princeton working with Stanislas Leibler in his lab. And we started to get
interested really in questions about the
properties of different kinds of circuits. And what we started to wonder was whether you could
actually build a synthetic circuit that did something non-trivial from scratch. And the
question was, of course, what kind of circuit would you build? You could build lots of different
kinds of things, switches or almost anything you can imagine. You know, I think one of the most
fascinating things in biology are these clock circuits. So we have our own circadian clocks.
There's a cell cycle is a kind of a clock that kind of causes the cell to grow and divide and
grow and divide over and over again. There's all sorts of clocks in biology. And throughout science
and physics in particular, oscillators are always sort of important. And so we kind of
wondered, could we build something like that? And what we eventually came up with was a design that
we call the repressilator now. And it's a kind of rock, scissors, paper game of proteins inside
of a cell. So the idea of the circuit is that it's built out of three kinds of proteins of the sort we've talked about already: repressors.
And each of those repressors can specifically repress the next repressor in the circuit. So
repressor one represses repressor two, repressor two represses repressor three, and repressor three
represses repressor one again. So it's kind of the rock, scissors, paper topology, if you want.
And so if you kind of start
to imagine what that circuit could do, imagine I suddenly have a lot of repressor one, that's going
to turn down the expression of repressor two. So you'll start making less of it. Eventually,
repressor two will go away. If it goes away, that allows repressor three to come on. So you'll start
making a lot of repressor three, and that will turn off repressor one again. So a change in one
repressor eventually, after it propagates around this feedback loop, causes a kind of negative
effect on itself, but with a time delay. And that turns out to be sufficient to give you
self-sustaining oscillations that will just continue on and on and on as the cell grows
and divides.

And now, didn't you find some way to make this visible, like to the naked eye?
Yeah, so that's critical.
And that's also kind of interesting scientifically. You know, in the 90s,
there was this incredible transformation in biology because of the cloning of the green
fluorescent protein, which was a gene from jellyfish that makes a green fluorescent protein,
hence its name.
And people like Martin Chalfie kind
of cloned that out of the jellyfish and expressed it in bacteria and showed that that gene by itself
was sufficient to kind of make a green protein. So now you could look at things that are going on
in the cell without having to kill the cell or grow up a huge population of cells. So what we
did is we took three of the best characterized repressor genes, and we kind
of engineered them into this three-protein circuit.
And then we put in this green fluorescent protein and had one of the three proteins
controlling it.
And so the idea is that if the oscillations are going on inside the cell and we make a
movie of the cells, then you kind of can just look at the cells and you can watch them in
the movie and you'll see them get brighter and dimmer and brighter and dimmer over time.
And actually, the interesting thing for me is that while I was building it, I had really no idea whether it would work. So when we actually got that system engineered into the cell and started to do movies, and then I actually saw these cells kind of blinking on and off over time, that was really quite an extraordinary moment for me.

Do you remember anything you could
share with us about your personal, like, did you call your parents or, you know, best friend or
anything like that? I remember setting up the experiment, actually having no idea whether it
was going to work or not. And at the time to make the movies, the microscope we
had would lose focus. So I was taking naps on a little couch kind of adjacent to the lab. And
then I would get up and set an alarm every hour to go and refocus the microscope. So I was kind
of losing sleep. And then it's hard to kind of see it in real time because the oscillations are slow.
They have, you know, a period of several hours.
And so you can't just watch it in real time.
You can kind of tell yourself, I thought that cell was bright before.
I think now it's dim.
But only when you kind of, in the morning, kind of replay the whole movie and see it,
then you actually can see the cells kind of blinking on and off. So yes, I did tell people.
I told everybody around more than they wanted to know. So that
was very exciting. Thank you for sharing that story with us, because that is part of the fun
of being a scientist, that sometimes things do work and it's, you know, against all odds,
because as you say, biology is complicated. There's so many things that could go wrong.
But you did have one ally on your side in all this, which was a branch of mathematics.
The branch of mathematics that I happen to love, nonlinear dynamics, that I want to believe helped you in your work.
But I don't really know the back story here.
So is it true?
Did math help you design the repressilator?
Yeah.
I mean, I think it's really true.
I actually have the classic Steve Strogatz book, Nonlinear Dynamics and Chaos. And in fact, in terms of what it meant to design the circuit, there are kind of two levels of design.
One is building these pieces of DNA and figuring out like exactly what sequences to use.
But the other is the mathematical aspect.
And so, you know, if you think about this circuit that I described, the repressilator, it could oscillate the way I just said, you know, with
all of the proteins continually going up and down. But the same circuit could have a very boring
behavior as well, in which you just would make a little bit of each of the three proteins and
just enough so that you continue to make just enough of all of them. And it's all self-consistent
and nothing really interesting happens. Actually, the same circuit, in principle, could do both things. And the question was,
how do you ensure that it does the thing you want it to do, the oscillation, and not the kind of
boring thing? So what we did is we wrote down, exactly using the methods in your book, a set
of differential equations which describe how the rate of change of each of the three proteins
should depend on each of the other proteins in the circuit. And when you write down those equations
and you do linear stability analysis, you sort of figure out that, you know, this circuit has
just one steady state. In other words, one point where if you put the circuit at that point,
it would stay there. And the question about whether it's going to oscillate or not comes down to the question of whether that point is stable or unstable. So if it's stable,
it means if you push it away a little bit, it'll go right back. That's the boring solution.
But under some conditions, it becomes unstable. And then the system develops what you know very
well is called a limit cycle, where it has no choice but just to keep orbiting around and around
and around. And that's the only stable behavior, or not stable, but the only behavior it can do. And that's what we really wanted.
And that kind of mathematical property of a limit cycle would be really desirable because
if there are kind of uncontrollable things inside the cell that perturb the system that we put in,
we'd like it that it would have to go back to that limit cycle, have to keep oscillating,
and that the oscillation would not be degraded by all that stuff. So that's a beautiful thing about the repressilators. It has that kind
of limit cycle solution. And what we learned from the math is basically what you had to do to the
circuit to make it more likely to oscillate. And what we learned is we had to make sure that the
genes are expressed at a high level, but that the proteins are unstable, that they degrade rapidly
inside the cell.
And that required kind of engineering little tags onto the proteins that make the cell degrade them, which is counterintuitive.
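For anyone who wants to play with the math described here, below is a minimal sketch of a protein-only caricature of the repressilator; it is not the full mRNA-and-protein model from the original paper, and the parameter values are illustrative assumptions chosen so that the single fixed point is unstable and the trajectory settles onto a limit cycle.

```python
# A minimal sketch of a protein-only repressilator caricature. The parameters
# (alpha, n) are illustrative assumptions, not the values from the original
# paper, chosen so the symmetric fixed point is unstable and a limit cycle appears.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

alpha, n = 50.0, 3.0   # assumed maximal expression rate and Hill coefficient

def repressilator(t, p):
    p1, p2, p3 = p
    # each repressor is produced at a rate repressed by the previous one in
    # the ring, and decays at unit rate (time measured in protein lifetimes)
    dp1 = alpha / (1.0 + p3**n) - p1
    dp2 = alpha / (1.0 + p1**n) - p2
    dp3 = alpha / (1.0 + p2**n) - p3
    return [dp1, dp2, dp3]

# start slightly away from the symmetric state so the oscillation can develop
sol = solve_ivp(repressilator, (0.0, 100.0), [1.0, 1.5, 2.0], dense_output=True)
t = np.linspace(0.0, 100.0, 2000)
p1, p2, p3 = sol.sol(t)

for trace, label in [(p1, "repressor 1"), (p2, "repressor 2"), (p3, "repressor 3")]:
    plt.plot(t, trace, label=label)
plt.xlabel("time (protein lifetimes)")
plt.ylabel("protein level (arbitrary units)")
plt.legend()
plt.show()
```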
Oh, interesting.
Neat.
So because, you know, sometimes as mathematicians, we like to believe that we're being useful
to scientists in other fields, and especially biology as one of the most exciting subjects
out there.
But it's not always easy to find really good examples of where math has been helpful in the service of biology. So I appreciate hearing that this was one case where we did some good.
Can I also just add one little thing? It's not just useful, it's really essential. Because
even with the repressilator, which is only three genes, and as you start to get to more complex
circuits, this problem gets even worse. You can't intuitively reason about what the circuit's going to do.
It's too complicated.
You really need these mathematical tools.

Now I want to ask you about the question of how you can go from cells that all have the same genome and yet can end up becoming
such a myriad of different possible kinds of, they could have all these different fates. They
could become liver cells, immune cells, blood cells, any kind of thing, even with the same
genes. So this is, of course, a gigantic question in biology, the question of how you differentiate,
how do you get complexity?
What is it that you all did recently? I know it goes by the name of multi-fate,
but what's it all about?

Sure. Yeah. Just like you said, this is a foundational question in biology: why or how do cells generate discrete cell types, just like you said, liver,
blood, neuron, you know, and many, many different subtypes of all those things. And, you know, rather than some big mush of intermediate mixtures of all these things.
And one of the fundamental ideas is that even though all the cells have the same genome,
exactly like you said, the cells can exist in these different states and the states are stable.
In the language of dynamical systems, they're sitting in what are kind of stable attractors.
So that if there's a little perturbation or fluctuation in the concentration of one of the components, it's okay because it's stable and it'll go right back to that state.
So the question really is sort of like, what kinds of circuits can generate multiple attractors or stable points like that. And then if you think about how organisms evolve over evolutionary time scales, the complexity can increase, right? We have a lot
more cell types than a fly. And yet we use a lot of the same kinds of genes and proteins to generate
those cell types as the fly. So there's something about the cell fate control circuits, the natural ones, that is sort of
expandable or scalable.
It allows the organism to evolve new fates over time.
So we were kind of thinking about this as like a foundational problem for synthetic
multicellularity, right?
If we're going to make a synthetic multicellular system, we've got to be able to have a cell
that can go into a bunch of these different states and sit there stably.
And it would be nice if it had a bunch of other properties as well. I had this truly brilliant and creative student,
Ron Zhu. And, you know, he came to the lab and we were talking about these things. And he started
thinking about kind of what would it take to build a synthetic cell fate system from scratch?
So instead of trying to kind of figure out how the natural ones work, could we actually build one?
And we started looking at what people know about some of the best understood natural cell fate control systems.
For example, there's systems that control muscle fate that push certain kinds of stem cells to become a muscle.
And similarly, there's other circuits that push stem cells to become kind of a neuron, for example.
And there's kind of a weird thing about these circuits in nature, which is that the key proteins that control fate are the same kind of,
these proteins that control gene regulation, like the repressors and the activators that we talked about earlier. But they have this weird property, which is they tend to function not alone the way
they do in bacteria, but in a multicellular organism, they tend to function in combination.
So basically, you can have a protein that will, what we call dimerize, it'll stick to
another copy of itself, or it could stick to a copy of one of the other proteins.
And there's families of these proteins, and they stick together in all these different
combinations.
So that's a little weird.
Why is it like that?
And different pairs can do different things.
So there might be one pair that activates expression of one of the proteins in that pair, and another pair
that might do nothing at all. It might just stick to each other and not bind to DNA and not do anything.
So if you look at those circuits, there's kind of this theme of, the fancy word we use for this is
combinatorial dimerization, okay? And we see that as a theme in these circuits. And we tried to think of
what's the simplest design that kind of uses that theme, but can still be engineered into a mammalian
cell. And the circuit that we call, in the end, multi-fate is based on that theme. And the idea is
that it has a set of proteins. We can just call them A, B, C, and so forth. And each of those can pair with itself into a dimer, so like an AA dimer.
And that dimer can activate more expression of A.
And similarly, the B gene will make B, and B will pair with another B and make BB,
and BB will activate more expression of B and so on.
But the trick is that they can also pair with each other.
So you can also make an AB and a BC and so forth.
And those proteins in this design do nothing at all.
So the fact that they do nothing means that if you make a lot of A,
it can kind of soak up like a sponge some of the B and prevent B from doing anything and vice versa.
Okay, nice.
So that's what leads to sort of some interesting dynamics.
You can kind of analyze the system.
And what you realize is that it can generate multiple attractors, multiple stable states.
So, for example, if you take two of these proteins, just A and B, you can make three states.
Let me see if I get that.
So with an A and a B, I could make something that does what?
AA, BB, or AB.
Are those the three?
Yeah. You can have a state that only makes A, a state that only makes B,
or a state that makes just a specific ratio of A and B together.
So from two of them you get three. And we didn't use the word transcription factor; you're just calling them proteins, whatever.
We can call them transcription factors.
Okay. Call them whatever you want.
Yeah. They're transcription factors. Yeah.
So you had two of them and they could make three, can we say, cell types?
There are three states, and the point is that those are stable,
so that if you perturb the cell away from it, perturb the level of A or B a little bit, it'll go right back.
That's the point, each of those states.
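To make the dimerization logic concrete, here is a minimal caricature, in Python, of a two-factor circuit with this architecture: each factor's homodimer activates its own gene, the heterodimer is inert and only sequesters, and dimerization is modeled explicitly. This is a simplified sketch with assumed, illustrative parameters, not the published multi-fate model; with these values, starting the circuit in different corners should let it settle into an A-only, a B-only, or an A-and-B state.

```python
# A simplified caricature of a two-factor self-activating, mutually sequestering
# circuit (not the published model; all parameter values are illustrative).
import numpy as np
from scipy.integrate import solve_ivp

alpha, leak, delta = 10.0, 0.1, 0.1   # max production, basal production, decay/dilution
kon, koff = 1.0, 1.0                  # dimer association / dissociation rates
Km, n = 5.0, 2.0                      # Hill parameters for homodimer self-activation

def hill(x):
    return x**n / (Km**n + x**n)

def circuit(t, y):
    A, B, AA, BB, AB = y                          # free monomers and the three dimers
    dAA = kon * A * A - koff * AA - delta * AA
    dBB = kon * B * B - koff * BB - delta * BB
    dAB = kon * A * B - koff * AB - delta * AB    # inert heterodimer: pure sequestration
    dA = (leak + alpha * hill(AA)                 # the AA homodimer activates the A gene
          - 2 * kon * A * A + 2 * koff * AA
          - kon * A * B + koff * AB
          - delta * A)
    dB = (leak + alpha * hill(BB)
          - 2 * kon * B * B + 2 * koff * BB
          - kon * A * B + koff * AB
          - delta * B)
    return [dA, dB, dAA, dBB, dAB]

# push the circuit toward different corners and see where it settles
for label, y0 in [("A-biased", [20.0, 1.0, 0.0, 0.0, 0.0]),
                  ("B-biased", [1.0, 20.0, 0.0, 0.0, 0.0]),
                  ("balanced", [20.0, 20.0, 0.0, 0.0, 0.0])]:
    sol = solve_ivp(circuit, (0.0, 2000.0), y0, method="LSODA")
    A, B, AA, BB, AB = sol.y[:, -1]
    print(f"{label}: total A = {A + 2*AA + AB:.1f}, total B = {B + 2*BB + AB:.1f}")
```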
And when you say it goes back, well, the whole cell is expressing many, many other proteins than the ones we're talking about.
So you're saying there's a certain pattern of gene activation for all these other unnamed proteins,
and that's the thing that's stable.
Is that what you mean?
Not quite, because I think we're assuming that the rest of the cell is just providing a platform,
this is an assumption, for our circuit.
And these states are states of our multi-fate circuit.
And everything else in the cell is kind of homeostatically there, just providing all the machinery that our circuit needs.
I see.
So the rest of the cell is just keeping the lights on.
Exactly.
While your little synthetic circuit that you're playing with is doing its thing sort of in the background.
And the big complicated cell is like, never mind.
I don't even care about that and that.
It's such a funny thing that you're doing.
You're playing this game inside of a cell.
There's the milieu of the cell.
The cell is doing its thing.
Meanwhile, you're playing your game with this like tinker toy little artificial,
well, synthetic is the nicer word, circuit.
That's right.
And if we do it right, then our synthetic circuit does not
place too much of a burden on the rest of the cell. If we do it badly, then if we overexpress
these proteins too much, they can start to mess up the behavior of the cell. So that's always an
issue with synthetic biology is it's not quite accurate to say that it doesn't affect the rest
of the cell at all. But that's kind of the approximation that we try to reach.
Okay.
So you mentioned that with the two types of proteins, you were able to get three states.
And then you, what, you used some kind of the same trick with colors so that now you
could see them?
Or you also tried things with three proteins and got seven states?
Am I remembering right?
That's exactly right.
So one of the
beautiful things about this whole business is visualizing what goes on. And so we take A and
we attach a red protein to it. B, we attach a green protein to it. And then we can actually
watch the cells and see that they're in a particular state. And we can then see how
that state changes over time as the cells grow and divide by making time-lapse movies in a
microscope. But then what we really wanted to test was whether this system had the expandability property. That's
what makes it really interesting, is that you can start to get more and more states as you add more
factors. And the number of states grows exponentially with the number of these
proteins. So if we add a third protein, now in blue, we can go from three states to seven states.
And if we add a fourth protein, we can go to 15 states.
It kind of goes as two to the n minus one.
Oh, nice.
Yeah.
So it's kind of a more or less exponential growth, you know, until you reach a certain point,
which, you know, you can't exponentially grow forever, but you can in principle for a while.
And we've now taken this to the level of 15 states in the lab.
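As a quick check of that counting, if each stable state corresponds to a nonempty subset of the factors being on, which is consistent with the three states described above for two factors, the number of states is 2 to the n minus 1. A tiny sketch:

```python
# A tiny sketch of the state counting: assuming each stable state corresponds
# to one nonempty subset of factors being "on", the count is 2**n - 1.
from itertools import combinations

def states(factors):
    return [",".join(c) for r in range(1, len(factors) + 1)
            for c in combinations(factors, r)]

print(states(["A", "B"]))                  # ['A', 'B', 'A,B']  -> 3 states
print(len(states(["A", "B", "C"])))        # 7
print(len(states(["A", "B", "C", "D"])))   # 15, as in the episode
```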
Really? Oh, wow. That's amazing. Do you have a dream of what multi-fate could someday be?
Well, we're thinking of it as kind of like the foundation for engineering multicellular systems.
And one way to, again, if thinking about kind of the grand vision, we have the immune system
naturally. And our immune system is built out of a huge array, a big zoo of different cell types,
natural killer cells and T cells and B cells and all of those different cell types that
are interacting with each other.
And so, you know, you can imagine trying to engineer, what does it take to engineer a
synthetic system that acts like the immune system, that diversifies into different states,
provides different kinds of memory, and allows the cells to sort of specialize to carry out different
functions. So I think of multi-fate as providing kind of a foundation for some future engineered
system of that type. And then, you know, more generally, I think one of the really exciting
areas right now in synthetic biology is its ability to kind of enable other
sorts of therapeutic modalities. So, you know, there's something called engineered cell therapies,
which is, in a way, programming cells to act therapeutically. And one of the best examples
of that is what's called CAR T-cells. So this is an approach that many people may have heard of,
where you take a patient's T-cells and you add a synthetic, engineered receptor to those cells.
So T-cells, I should say, are very good at killing target cells that they recognize with their normal
receptor. But here you instead put in a new receptor that you've designed that takes that
awesome power that the T-cells have to kill and targets it directly at the tumor cells. This has been kind of really a revolutionary
advance in medicine. It's been very successful in certain B cell lymphomas. And the reason that
people are so excited about it is that in principle, it's a kind of platform that you could
expand to target all kinds of different cell types. And it's become
sort of a playground, I would say, for exploring what synthetic biology can do therapeutically.
Because the simplest versions of this just put in one new receptor. But you can also imagine
adding different kinds of logic to these cells so that they recognize, you know, maybe I'll attack
this tumor, but only when I'm in a certain environment and
only when I recognize a different protein that's on the cell as well, like doing combinatorial
logic, and gates, or gates, all sorts of things like that.
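As a toy illustration of the kind of combinatorial logic being described here, purely schematic and not any real therapy design, a decision to act might require two tumor-associated antigens to be present AND a healthy-tissue marker to be absent; the antigen names below are hypothetical.

```python
# A toy sketch of combinatorial target recognition (purely schematic; the
# antigen names are hypothetical, and this is not any real therapy design).
def act_on_cell(antigen_x: bool, antigen_y: bool, healthy_marker_z: bool) -> bool:
    # AND gate on two tumor-associated antigens, NOT gate on a healthy-tissue marker
    return antigen_x and antigen_y and not healthy_marker_z

for x, y, z in [(True, True, False), (True, False, False), (True, True, True)]:
    print(f"X={x}, Y={y}, Z={z} -> act: {act_on_cell(x, y, z)}")
```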
Let me see if I understand that last idea, because it's really wild, that we've grown up
with the idea of drugs that could target specific kinds of, let's say, tumor cells.
But now you're saying that there's hope for designing medicines, drugs that would have a kind of logical capacity to them, that they wouldn't
just necessarily look for a particular kind of cell to kill, which would already be a good thing,
but that they could maybe look for that cell to kill under certain circumstances, but not others.
That's part of the dream that people would like.
Because often it's like, you know, the target cell may not be, it may not have one magic
target protein on it that says, come and get me.
It may have like, it may be different in the expression levels of lots of proteins.
And you may have to kind of try to disentangle or try to discriminate that cell in a more
complex way from other cells that are not your target cells. So that's, in a way, a great challenge in medicine. It's not hard to kill a cell. It's
hard to kill the right cell. Another area that we've been really excited about is sort of
intermediate between kind of an engineered cell as a therapy and kind of something more simple,
like a gene therapy. So a gene therapy means you kind of just put in a gene to replace a defective
gene. But you could also imagine a circuit, an engineered circuit, that could try to give you specificity
by acting within different cells and trying to see what kind of cell am I inside of. So
if I could detect the activity of different proteins, I could say, okay, this is likely a
tumor cell because it has a very high level of activity of a bunch of oncogenic signaling pathways, for example.
And therefore, I'm going to now cause this cell to die selectively.
But if I'm in a normal cell, I can tell that it's a normal cell and I won't do anything.
We've been working on other approaches to try to realize that vision, and many groups are trying to do things like that.
So this has been kind of really fun trying to think about what kinds of
circuits can we make? How do we get them into cells? And how do we have them actually perform
this discrimination task of killing target cells, but not off target cells, or doing other things
for that matter. And one thing I'd really like to add is that because these synthetic biology
technologies are so powerful, it's really important that as we develop them, we're really mindful that
they're used safely, and that we make sure that their benefits are broadly distributed globally and that they're properly regulated.
Wow, Michael, thank you for describing this brave new world of synthetic biology.
It seems like the sky is the limit here.
Thank you very much for talking to us today.
Thanks, Steve.
This has been really fun.
So thanks a lot.
If you like The Joy of Why, check out the Quanta Magazine Science Podcast,
hosted by me, Susan Valot, one of the producers of this show.
Also, tell your friends about this podcast and give us a like or follow where you listen.
It helps people find The Joy of Why podcast.
The Joy of Why is a podcast from Quanta Magazine, an editorially independent publication supported by the Simons Foundation. Funding decisions by the Simons Foundation have no influence on the
selection of topics, guests, or other editorial decisions in this podcast or in Quanta Magazine.
The Joy of Why is produced by Susan Valot and Polly Stryker.
Our editors are John Rennie and Thomas Lin, with support by Matt Karlstrom, Annie Melchor, and Alison Parshall.
Our theme music was composed by Richie Johnson.
Special thanks to Bert Odom-Reed at the Cornell Broadcast Studios.
Our logo is by Jackie King. I'm your host, Steve Strogatz. If you have any questions or comments for us, please email us
at quanta at simonsfoundation.org. Thanks for listening.