The Joy of Why - How Is Science Even Possible?
Episode Date: June 20, 2024

The universe seems like it should be unfathomably complex. How then is science able to crack fundamental questions about nature and life? Scientists and philosophers alike have often commented on the “unreasonable” success of mathematics at describing the universe. That success has helped science probe some profound mysteries — but as the physicist Nigel Goldenfeld points out, it also helps that the “hard” physical sciences, where this progress is most evident, are in major ways simpler than the “soft” biological sciences.

In this episode, Goldenfeld speaks with co-host Steven Strogatz about the scientific importance of asking the right questions at the right time. They also discuss the mysterious effects of “emergence,” the phenomenon that allows new properties to arise in systems at different scales, imposing unexpected order on cosmic complexity.
Transcript
Albert Einstein once wrote, "The eternal mystery of the world is its comprehensibility."
It really is awesome when you think about it.
The laws of nature, at least in physics,
turn out to be amazingly simple.
So simple that we human beings can discover those laws
and understand them and use them to change the world.
But why is nature like this?
Why is it so comprehensible?
And why is math so uncannily effective at explaining it?
Not just in physics, but also
in chemistry, in astronomy, and even in some parts of biology?
In short, why is science even possible?
I'm Steve Strogatz, and this is The Joy of Why, a podcast from Quanta Magazine where
my co-host, Janna Levin, and I take turns exploring some of the biggest mysteries in math and science today.
In this episode, we'll be speaking with physicist Nigel Goldenfeld about the mystery of nature's comprehensibility.
Nigel holds the Chancellor's Distinguished Professorship in Physics at the University of California, San Diego, where his research spans condensed matter theory,
the theory of living systems, hydrodynamics, and non-equilibrium statistical mechanics.
Previously, he was a professor at the University of Illinois at Urbana-Champaign and a founding
member of its Institute for Genomic Biology, where he led the Biocomplexity Group and directed
the NASA Astrobiology Institute
for Universal Biology.
In addition to being a fellow of the American Physical Society, the American Academy of
Arts and Sciences, and the U.S. National Academy of Sciences, Nigel is also well-known for
authoring one of the standard, and I have to say terrific, graduate textbooks in statistical
mechanics.
Nigel, thanks so much for coming on the show.
Oh, it's a pleasure to be here, Steve.
Yes, it really is a pleasure for me.
I am curious where we're going to go with this.
It's such a profound philosophical question, this Einstein quote about nature's
comprehensibility, but I wonder what you think of it.
I mean, let's talk about both parts of it.
Is the world really comprehensible,
at least to some degree? And if it is, does that strike you as mysterious?
So I think it's a wonderful quote, and it's certainly one that inspired me and I'm sure other people thinking about the research that we do. And I think the reason it's important
is because we've grown physics to such an extent that it now starts to impinge on other
disciplines. You mentioned biology, but also, you know, I could mention economics and atmospheric
sciences, climate change, all these sorts of things. And as you start getting into these much more
complex and complicated areas of science, you wonder, how are we even able to do anything in physics,
let alone these other things. And in fact, the reason these other fields are difficult
is something that's also not clear. You could also ask what is the reason for the unreasonable
ineffectiveness of mathematics in biology. When you start to think about it, you realize that
when we talk about the effectiveness, we're talking about problems where we've been lucky to make an impact. We have a lot of successes in science. Some of the most accurate things that we know in science are in physics,
because, you know, we only talk about those problems because those are the ones that actually worked. All the many other things that we tried to do failed dismally, and we never ask about those.
And so our sample is somewhat biased. Well, that's great, this point that you're making,
that we're sort of assuming facts, not necessarily in evidence here in saying that the world is
comprehensible, because as you say, there are these parts of science that we still have yet to really figure out, economics, parts of atmospheric science, and so on. So for listeners
who aren't necessarily following what we're talking about here, think about the example
from the 1850s or 60s, James Clerk Maxwell figuring out the equations for how electricity
and magnetism work. It's just four little equations that nowadays fit on a T-shirt.
Physics and math nerds like me and Nigel and maybe even you like those T-shirts.
What's crazy is that you can really understand almost everything there is to know
about electricity and magnetism with the help of those equations and some clever math.
For instance, Maxwell himself figured out that a prediction from those equations
is that
something called electromagnetic waves could exist. And today, those are the basis for
wireless communication technology that we all use every day in our cell phones.
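To make this concrete (a sanity check of my own, not from the episode): the speed of the waves Maxwell predicted is fixed by the two measured vacuum constants that appear in his equations, and it comes out to the speed of light. That numerical coincidence is how Maxwell identified light itself as an electromagnetic wave.

```python
import math

# Vacuum constants appearing in Maxwell's equations (approximate CODATA values):
mu0 = 4 * math.pi * 1e-7       # permeability of free space, H/m
eps0 = 8.8541878128e-12        # permittivity of free space, F/m

# Maxwell's equations predict waves traveling at c = 1 / sqrt(mu0 * eps0):
c = 1 / math.sqrt(mu0 * eps0)
print(f"{c:.6e} m/s")          # ~2.998e8 m/s, the measured speed of light
```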
And so the question is, how is it possible that we with our puny primate brains can figure out
these four equations that are so marvelous? And is it, as you suggested, Nigel,
just that we're asking questions
whose answers are likely to be simple
and ignoring the really hard ones?
Or, I don't know, how should we think about this?
How is it possible Maxwell could have come up with these equations?
Let me take another example,
which is Einstein's prediction of gravitational waves,
which has been in the news a lot in the last couple of years.
All right.
The story is that Einstein had this idea of thinking about somebody falling in an elevator,
and they realized that the falling in an elevator is similar to what you get from gravity.
And so they came up with a principle of equivalence.
And from that very, very slender insight, translated into mathematics through Riemannian
geometry and tensor calculus and so on,
which Einstein had to learn in order to do that,
he was able to create this amazing mathematical edifice,
which we call the general theory of relativity today,
which is actually the theory of gravitation.
And it explains gravitation to a higher accuracy than Newton's law of gravitation, and makes numerous predictions
of which the gravitational waves are one of the most spectacular. So that's another fantastic
example. And it just boggles the mind that somebody could imagine that and create the
science that makes these predictions. And, you know, 100 years later using astonishing technology, we're able to actually observe these things. It is. It seems
almost like a miracle. It's something that the physicist Eugene Wigner, in a
famous essay in 1960, called the "unreasonable effectiveness of
mathematics in the natural sciences." And you already alluded to this phrase of
his. What is unreasonable about it?
So you and I have been talking about new qualitative phenomena that you predict,
for example, from Faraday's law and all these things that Maxwell had to work with. That's
one thing that's very important about science, that we can predict things that you would otherwise
not expect. But the unreasonable effectiveness that Wigner is talking about is the accuracy
with which it makes those predictions. So here's another example. You look at, say,
the quantum mechanics of an atom interacting with Maxwell's electromagnetic field. When you take
electromagnetic field, you apply quantum mechanics to the interaction of that with an atom,
you're able to make predictions to something like 10
decimal places of accuracy. And those agree with experiments to all significant figures that the
experiments and theory are applicable for. And that's astonishing. And I think Wigner and Einstein
wanted to know, how could it be that such very simple mathematics has such great
explanatory power? And people may say, well, what do you mean it's simple? Einstein's theory of
relativity, general theory of relativity, is one of the most complicated pieces of mathematical
physics that you can learn. And that's true. But the physical insight that goes into it is
very simple: acceleration is literally like a gravitational force.
And then being able to turn that into a mathematical equation, which you can then make simple predictions from, is really where the beauty and the amazingness lies.
So I think that's one aspect of it.
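The accuracy Goldenfeld mentions can be illustrated with a standard example (my own, not one he gives here): the first quantum correction to the electron's magnetic moment, computed by Schwinger from quantum electrodynamics, is simply α/2π, and already lands within about 0.15% of the measured value; the higher-order terms push the agreement out to roughly ten significant figures.

```python
import math

alpha = 1 / 137.035999        # fine-structure constant (approximate)
a_schwinger = alpha / (2 * math.pi)  # leading QED correction to the electron's magnetic moment
a_measured = 0.00115965218    # measured anomalous magnetic moment (approximate)

print(a_schwinger)            # ~0.0011614, within ~0.15% of experiment
```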
There's another thing that is not talked about very much, which is that this idea that mathematics and physics is so powerful in its explanations makes another
assumption. That assumption is reductionism. This goes back to another quote of another
founder of modern physics, Paul Dirac, who wrote down the relativistic wave equation,
an equation that describes quantum
mechanics connected with special relativity. It's called the Dirac equation. And he rather
arrogantly wrote that his equation describes most of physics and all of chemistry. And his idea is
that basically (and it's the same idea that motivates what today we call high energy physics, though in an era with more bravado it was called elementary particle physics) you can just find the elementary building blocks of matter. And then once you've got those, all you have to do is put them together and you've explained everything in the world. And we know that that's not true.
And that's the sort of fundamental insight that came out of physics around about 1950 or so. It led to the birth of what's known as condensed matter
physics, and is certainly operative on steroids when you look at biological phenomena, where just
knowing the basic forces between atoms doesn't explain, you know, why you can think. So when we talk
about the effectiveness, we're talking about the effectiveness on very simple problems.
Hmm. Interesting distinction. So just to review some of these examples again to make
sure I'm with you. With Maxwell, his equations, not so simple unless you know vector calculus
or something equivalent. But then once you know that math, as I tried to emphasize, it's just four little equations that can fit on a t-shirt.
So simple in that way and simple principles going into them. But then your point seems to be, yes,
but you can only predict simple phenomena like a propagating wave through a vacuum,
whereas really complicated stuff, say predicting patterns of thought in a
human mind... I mean, this is the tricky part. In principle, do we believe that it
is actually somehow in the physics, but we just can't figure out how to do the
math to show phenomena like consciousness and emotion and all that?
Or is there something else beyond what the physical laws imply?
Well, I think there is. And this goes back to the question of why it is that we can do science at all.
If you truly believe that to understand, say, the phenomena that we see in biology,
you can get all of that, say, from Dirac's equation or quantum mechanics and so on,
then every time you try to understand something quantitatively in biology or solid state physics, for example,
you know, you'd have to worry about the radiative corrections to the mass of the top quark.
And none of us think that all of those things that happen at such small scales inside a nucleon
at very high energies have anything to do with, you know, why a bird can fly or stuff like that.
The fact that we can do science tells us that somehow these scales get separated through
something which we typically call emergence. The great benefit of that is that we don't have to
solve everything all the way down in order to understand something. Interesting. So you're
saying worrying about quarks isn't going to tell us anything
about the behavior of the stock market tomorrow.
We can somehow, it's like,
as if different scales in nature
are insulated from each other
or something like that.
What's the language you would use?
You spoke of separation.
I talked of separation
and I talked about emergence.
I'd like to give you another example of that,
which was very different from the one
that people like Einstein and Wigner and Dirac and so on would have used, and they wouldn't
even have known about it.
So there's a phenomenon in nature called a phase transition.
A simple example is you take a lump of ice and heat it up, and eventually it'll melt
into liquid.
So it'll go from the solid phase into the liquid phase, and then from there, if you
heat it up further, it'll go into the gas phase. Another example would be if I took a magnet and I heated it up,
it turns out that above a certain temperature, a magnet will stop being magnetic. There is
a theory of that transition, the magnetic transition, and other transitions which are
like it, such as how materials become superconducting and very exotic
things like that. But the most interesting thing about the transition is that we can understand it
using a branch of physics called renormalization group theory. And I'm not going to go into the
technicalities of it, but what the theory predicts is that if you measure how magnetic something is
very close to the temperature where it first becomes a magnet,
whilst also applying a magnetic field, you get a certain magnetization that you can measure as a
function of temperature and external magnetic field. And you can do this for any magnet that
you'd like, but it doesn't really matter what the atoms are. And if you take the data and process
it in a certain way, what you find is the results
are the same for every single magnetic material. It doesn't matter what it is as you go just below
the temperature where it first becomes magnetic. You find that it obeys a certain equation,
and that equation is exactly the same for every material, and not just exactly the same. All the data
from all the different magnetic materials, they all lie on one curve.
And physicists call this universality. We completely understand that.
Now, the other thing that is amazing, though, is that we can make a theoretical
prediction about what that curve should be if you process the data in the way
that the theory tells you to do it.
And when you take the data and you take the theoretical curve, it falls exactly on the
experimental data. Okay, so that's fantastic. This model of what a phase transition is,
is very successful and obviously extremely accurate. Not only does it predict this
universality, but it also predicts exactly not just a number,
but a whole function.
And it's a whole relationship that you can measure experimentally.
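The flavor of this can be seen even in the simplest textbook magnet, the mean-field (Curie-Weiss) model — a toy illustration of my own, not the full renormalization-group treatment Goldenfeld is describing. Just below the critical temperature, the spontaneous magnetization follows a power law that doesn't care about material details.

```python
import math

def magnetization(t, iters=20000):
    """Solve the mean-field self-consistency equation m = tanh(m / t)
    by fixed-point iteration (Curie-Weiss magnet, critical temperature t = 1)."""
    m = 1.0
    for _ in range(iters):
        m = math.tanh(m / t)
    return m

t = 0.99                       # just below the critical temperature
m = magnetization(t)
# Mean-field theory predicts m ~ sqrt(3 * (1 - t)) near the transition:
print(m, math.sqrt(3 * (1 - t)))
```

Real magnets have different critical exponents than this toy model, but the logic is the same: near the transition, the curve depends only on broad features of the system, not on which atoms it is made of.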
So that's true.
And I like to say that it's not really true that the model has given a precise prediction
in agreement with experiment.
It's really a model of a model of a model of a model.
Okay, what? Yeah? You better explain that.
Yes, a model of a model of a model of a model. So why is that? Well, suppose you said to a
scientist, okay, make a theory for me of a magnet. So they'd say, well, a magnet is made out of
atoms. So in order to understand atoms and how they interact and become magnetic, I need to
worry about the electrons in the atoms. I need to worry about the magnetic moments of those atoms. And so I make a model of
the material based on quantum chemistry. That model is unimaginably complicated, and it gives
you no hint that there could be something that doesn't depend on atoms in it, because the model
itself is very specific to the particular atoms. So then you say,
well, really that's way too hard. Maybe a quantum chemist could simulate this and make a prediction.
And if they did that, they would see that the prediction did agree with what you see
experimentally and does agree with what the theory predicts. But that's a very huge computer
calculation. So then you say, let's simplify it. Let's just not worry about the atoms too much. Let's just worry about how the electrons move around in the material.
So you go ahead and do that, and you find you've got a complicated model of electronic structure.
Sorry, let me interrupt for a second, just to make sure this whole model-of-a-model thing is clear.
So there was the real magnet, then there was the quantum chemistry model of the magnet,
then there was the electronic structure model of the quantum chemistry model.
Yes.
Well, now we're going to go to the quantum Heisenberg model of the magnetic moments of
the electrons inside the electronic structure, which came from the quantum chemistry.
Okay.
And that model is too hard.
So you say, okay, well, let's throw away quantum mechanics.
We'll just make it classical.
So you do that.
And the model is still too complicated. So then you say, well, let's take the thermodynamics,
which is what everything depends upon in any case, and let's do some kind of expansion of that.
And that's a model where you can finally do a calculation. As you said, you've got one, two,
three, four, five models of a model of a model of a model of a model of this material. And each step along the way, you have made an approximation that would be rejected from every physics journal,
because everybody would say, that's an approximation you can't justify. There's no small quantity,
no idea what you're talking about. How can that possibly work?
I must also say that here in the math department, you know, people would be hysterical.
Oh, yes. They would be horrified.
But the joke's on them because at the end of the day, you do this whole procedure and
then you find you make a prediction with no adjustable parameters and it agrees precisely
with experiment.
Dun, dun, dun.
Every step along the way, the approximations you're making are not systematic and not justifiable, at least ahead of time.
And that, I think, is a fantastic way to articulate this mystery that you're alluding to.
That is a marvelous exposition.
I didn't imagine this ahead of time while preparing for this interview, but I love it.
And I think you're really capturing the mystery.
It's like we have no right for this to work as well as it does.
It's as if nature is somehow acting in a very forgiving or convenient or cooperative manner for us, like it's helping us get lucky or something.
Well, that's the thing.
This happens only under special circumstances. In this particular case, very close to a phase transition.
So we understand how it works there.
But these different levels of description that I alluded to,
all of these are different ways of describing something
at different length and time scales.
And as you go to each level, you kind of absorb all the complications of the level lower down
into some parameter that is in the description that you're talking about. And then once you've
done that, you don't need to worry about what happened below. That, I think, is why we can solve this particular problem and why it works so accurately.
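A textbook example of this absorption (my illustration, not one Goldenfeld gives here) is real-space renormalization of the one-dimensional Ising chain: summing out every other spin yields the same model again, with the finer scale surviving only through one renormalized coupling, tanh(K') = tanh(K)².

```python
import math

def decimate(K):
    """One renormalization step for the 1D Ising chain: summing out every
    other spin gives the same model with coupling tanh(K') = tanh(K)**2."""
    return math.atanh(math.tanh(K) ** 2)

K = 1.0                        # dimensionless coupling J / (k_B * T)
for _ in range(10):
    K = decimate(K)
print(K)  # flows toward 0: the 1D chain has no finite-temperature transition
```

Each decimation step packs everything about the discarded spins into the single number K', which is exactly the "absorb the level below into a parameter" move described above.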
We're going to take a short break, and we'll be right back.
All right, welcome back.
I'm speaking with Nigel Goldenfeld about how we can model complex phenomena
and how we manage to do it so accurately.
So when we talk about how we can do science at all, here is an example which says the
only reason you can do this sort of calculation is because there's these separations of scales
and energy and time and space.
When you start talking about physics being successful in biology or economics or social interactions and
things like that, can we expect to be able to do those things if there isn't any obvious way that
one can separate scales and make sure that what happens at very small scales doesn't affect what
happens at large scales? And it may be that there's some areas of science where that is not
true, and then you may
not be able to be successful in those things. That's an interesting point. I may be going off
the rails here, but I'm thinking of something like economics, which you might want to think of
as the byproduct of hundreds or thousands or millions of people and firms interacting through markets and so on,
that it's a kind of complex system, economics, where the smaller scale, the molecules or the
atoms or the quarks are people making individual decisions that then aggregate into an economy or
a market. In your example, the fussy behavior of the top quark doesn't affect what's
happening to the birds flying overhead. Here,
we might not have that separation, like individual decision makers can have an outsized impact on the
economy. Is that the issue that makes economics so difficult or one of the issues? Yeah, I don't
know about economics per se, but I've given this some thought in finance. So finance is a very
interesting example to think about emergence. So remember, in finance, we have data.
We know every single transaction that occurred.
We know when it occurred, how much.
We have every piece of information like that.
And now the question is, can you make predictions based on it?
So let me give you an example.
So first of all, of course, we know that you can't predict things very well.
And not only can you not predict things into the future, you can't even predict things into the past. So there was a wonderful example of this, which was an event
called the Flash Crash. Do you remember what that is?
You should remind us. I remember the term, but I'm not sure I remember when and what happened.
On May the 6th, 2010, there was a trillion dollar crash of the US stock market. The Dow Jones plunged like a
thousand points within a few minutes and eventually it came back up again. And this was an unexpected
event. And to this day, people aren't really 100% sure what triggered that. It certainly wasn't
something that people expected at the time.
What actually happened, I believe, is that you have a cooperative phenomenon where a lot of
people are doing algorithmic trading. They were more or less using the same signals to trigger
their computer-guided trades. And I think the whole system just synchronized and crashed. And
eventually people had to stop the thing happening by pulling from the network and things like this.
So this is an example of extreme sensitivity cascading through the system because of
collective properties of the whole financial system, properties that nobody even knew were
there. Yeah, it's interesting to hear you use the word cascade because that comes up
in connection with the power grid, where sometimes you'll have an event like a lightning storm
somewhere. And then because, as you say, there's this connectivity, in this case,
through high voltage transmission lines in the power grid, you can get propagating failures.
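Both examples, the stop-loss spiral and the grid failure, are threshold cascades. A toy model (entirely my own sketch, not from the episode) shows how the coupling strength flips the system's behavior: traders sell when the price falls past their trigger, and each sale pushes the price down further. Below a critical price impact per sale, a shock is absorbed; above it, the triggers synchronize and the whole market crashes.

```python
def cascade(impact, n=1000, shock=0.2):
    """Toy stop-loss cascade: n traders with sell triggers spread evenly over
    prices [90, 100); each triggered sale pushes the price down by `impact`."""
    thresholds = sorted(90 + 10 * i / n for i in range(n))  # trigger prices
    price = 100.0 - shock       # an initial shock starts things off
    sold = set()
    changed = True
    while changed:              # keep sweeping until no new stop-loss fires
        changed = False
        for i, th in enumerate(thresholds):
            if i not in sold and price < th:   # this trader's stop-loss fires
                sold.add(i)
                price -= impact                # the sale moves the market
                changed = True
    return price

print(cascade(impact=0.005))   # weak coupling: the shock is absorbed
print(cascade(impact=0.02))    # strong coupling: the triggers chain and the market crashes
```

The interesting point is that the crash is a collective property: no single trader's rule looks dangerous in isolation, and the critical coupling depends on the whole population of triggers.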
So this does seem to be another example where a small-scale event can propagate and have
consequences at a much broader scale. So is this the idea why maybe the hard sciences are the
easiest? Oh, I always say that the hard sciences are the easiest, yes. The reason physics is so
successful is because we only ask very simple questions. So the supposed soft sciences are,
in a certain sense, you would say, then the hardest. So you have to ask a question.
What is the purpose of science?
What do we want to be able to predict?
So let's go back to my example about the phase transition.
I talked about this example of how you can look at the behavior of a magnet very close
to the temperature where it becomes a magnet.
And there's a universal phenomenon there, and we understand it exquisitely,
and it's wonderful, and it's amazing. So the listener might get the impression that we understand everything about this, and there's nothing mysterious about it at all. But there
is still one thing that I didn't tell you. And that is this. There is a temperature where every
material becomes a magnet, but that temperature is different for each of the materials.
And we don't know how to predict that number very accurately. That number is not something that is universal, unlike the
curve that I alluded to that tells you the response of a magnet. That number depends on
everything. All of the levels of description that I swept under the rug in order to explain what
happens near a phase transition,
all of those things come back to bite you when you want to know what is that actual
critical temperature where the material first becomes a magnet.
Interesting.
So you have to ask the questions that you ask in science with an eye to saying,
first of all, let me ask the easy questions, the ones that don't depend on
too much. First, I understand those things. And then later on, we'll get to the other ones,
and maybe never. But there's a sort of rational order in which you would ask questions. And so
science, in some sense, has to be realistic in what its goals are.
So then the resolution to our earlier question
about why is science even possible,
if I'm hearing you right,
you're suggesting that some things in nature
could be described by the adage
that you hear people say all the time,
everything depends on everything else.
And some things in nature are not like that.
Not everything depends on everything else.
Am I on the right track there?
That the ones where everything does depend on everything else are really going to be hard. And there's no shortcut. And there's other things where if you ask the question in the right
way, you can get an interesting answer, which is useful and helps your understanding of the
phenomena and so on. But if you want to know what the actual number is in degrees Fahrenheit,
well, it's not going to tell you that.
So then it seems like we're coming to what some people might view as a disappointing
cop-out of an answer, which is that science is possible because we restrict ourselves to the
questions that have this kind of separation or an insulation that lets us do calculations
where what's happening here doesn't depend on what's happening out at Alpha Centauri.
So it's like we can answer the things that are easy in this sense, that they're well
separated.
The others are just going to be hopeless forever.
Is that the idea?
Well, I don't think it's a cop-out.
I think it's a great advance to be able to say, this question here, that's an example
of one of those things that you shouldn't ask, and this question here is an example of one that you should. So about 15 years or so ago, we came up with a theory that explains why there is one genetic code. It's a general theory about the ability to express genes and make proteins, and that's what the genetic code is for.
And also, by the way, it explains how life could have evolved so rapidly early on. So it's quite an interesting theory. And so often I'll
go and give a talk about this work, and people will ask me, well, why are there 20 amino acids
of life? Okay? And I'll say, I haven't a clue. And so I think that's an example of one of those
questions that you shouldn't ask. And I've got another reason for saying that. So the genetic code is literally
a code book that goes from DNA, or actually messenger RNA, to amino acid that then gets
linked into a protein. So it's a kind of grammar, the language of molecular biology.
So Francis Crick, who of course, with Watson, discovered the structure of DNA,
wanted to try to understand why there are 20 amino acids in life.
And he came up with an amazing, beautiful theory, which is mathematical.
Can I tell you what the theory is? I don't know if you know about it.
Yeah, so 20 amino acids, and there's a theory for why 20?
Yes, so the theory is very simple.
If you have a sequence of letters read in threes, A, C, G, T, A, C, whatever, and these correspond
to certain nucleotides, you don't know where the sequence starts.
So really, whenever you read the genome, you should put commas in to tell you where the
words start.
So Crick asked the question, well, can you make a code so that I've got four nucleotide
bases? What is the largest number of amino acids you can code for so that every string in this code
can make sense without you having to put in the commas? It's a very natural question,
a beautiful question. It's a beautiful question. And he came up with an answer. And the answer was
that the largest number of amino acids you can get is 20. Hence, 20 amino acids of life. So then you can enumerate all of these codes
without commas that Francis Crick had postulated. You can enumerate them. And when the actual
genetic code was discovered by Nirenberg and others five or six years later, the actual code
is not one of the ones that he had predicted.
It's completely wrong. So this is, if you like, the reasonable ineffectiveness of mathematics
in biology. Because in fact, the real code is a product of evolution. And there's nothing
special about the number 20. So this is an example of: you've got to ask the right question.
You thought you could do science, biology in this case, using the same sort of elegant mathematical principles that
are so powerful in physics, but you completely get egg on your face when you try them without
really understanding more about the scientific phenomena that are relevant in biology.
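The counting behind Crick's answer is simple enough to check directly; here is my sketch of the standard argument, not the episode's. A comma-free code can use at most one codon from each cyclic class of the 64 triplets, and none of the four repeated-letter codons (since XXX followed by XXX contains XXX again at a shifted position). That leaves (64 - 4) / 3 = 20 usable classes, hence at most 20 amino acids.

```python
from itertools import product

bases = "ACGU"
codons = ["".join(p) for p in product(bases, repeat=3)]  # all 64 triplets

def cyclic_class(w):
    """The set of cyclic rotations of a codon, e.g. ACG -> {ACG, CGA, GAC}."""
    return frozenset(w[i:] + w[:i] for i in range(len(w)))

classes = {cyclic_class(c) for c in codons}
# Repeated-letter codons (AAA, CCC, ...) form singleton classes and are excluded;
# every other class has exactly 3 members, and a comma-free code can pick at most one:
usable = [cl for cl in classes if len(cl) == 3]
print(len(codons), len(usable))  # 64 triplets, 20 usable classes -> at most 20 amino acids
```

The mathematics is airtight; as the conversation goes on to say, it just turned out not to be the question nature was answering.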
And so would you generalize then to say that the role of history or evolution or contingency,
those kinds of things, are another ingredient for why we might expect certain subjects to be difficult
or maybe not amenable to the elegance of math?
Is that the issue?
Well, it is, but it's not completely hopeless.
I mean, we did make a theory for the
evolution of the genetic code, which
did explain, you know, how
it is that, although the world started
4.6 billion years ago,
the last universal common ancestor
of all life on Earth today
was already around 3.8
billion years ago.
So that means that in less than a billion
years, life went from nothing
to the architectural complexity of the modern cell. And then after that, hardly evolved at all.
Okay. So that is staggering. I mean, I don't know, of course, the ultimate reason of how life
evolved and so on. But at least when we made our theory of this, it explained why it evolves so rapidly,
and it explained why the genetic code is so accurate and why there's only one of them.
So it explained some things, but not the other. So we definitely advanced in our understanding
of basic science, but we were able to do that because we fully recognized, here's a question
that's not going to be a good one to
go after. Here's a question that we might be able to do. And I think one of the jobs of a scientist
is to really ask the right questions in the right way. And that's harder than it looks.
Oh, that's a marvelous stopping point for us, in a way: that part of the secret of science
is the art of asking the right questions.
There's even a book with that name, isn't there?
Isn't that Peter Medawar's book, The Art of the Soluble?
That's right.
But yes, I mean, all science starts with asking questions.
And if you don't know how to ask questions, you can't do science.
Science is not the technology, the techniques of doing science.
Of course, that's how we're able to do it.
But fundamentally, it comes from asking questions.
Probably a lot of our listeners are thinking, what about everything that's going on today
with machine learning, artificial intelligence, the possible existence of quantum computers
that are supposed to solve all kinds of problems once they really start to get serious? Do you
think that those kinds of technologies will help us deal with these intricately interwoven
kinds of problems, where everything, or many things, depend on each other?
Well, what those things are, basically, are machines that can predict the next word.
Nobody expected that those machines could pass the bar exam or medical exams or help
people with their homework or help people write computer programs and so on.
The range of applications has been staggering and a surprise even to the people who built
these machines.
And in fact, nobody really knows how they work. In fact, if you look at the effectiveness of AI
in solving problems, it also exhibits power-law relationships very much like the ones that you see
in phase transitions. So one of the things I think is a great frontier for science is trying to
understand how these machines are able to do so much more than what they were designed to do.
The other thing where I think it's important is that what AI is very good at is discerning
patterns in data which are so complex that we don't perceive as well as these machines.
So I think there's great opportunity to use them to solve problems which are very, very hard.
And one ambitious problem that I think could only be solved
by using AI is trying to understand the origin of instinct.
So how is it that instincts are coded in biological organisms?
We understand the genetic code and we understand how the proteins that go into living organisms,
how they're coded and so on. But going from that level of description to the complexity of an organism like a fish
that knows where to swim to in order to go to its breeding ground and seagulls and things
like this, clearly we've somehow managed to encode very, very complex behavior. So this is something that
reaches across all scales of living systems. It's hard for me to see, in principle, how something
as complicated as instinct can be coded. But I think that AI would be able to perhaps be a tool that we could use to help us
make a scientific discovery and not just build amazing technological machines.
Well, that is fascinating, Nigel. I knew it would be provocative and stimulating to talk to you.
And you've just, I think, demonstrated how the art of science is asking good questions with that
question you've left us with. So thank you.
We've been speaking with physicist Nigel Goldenfeld. It has been a really great pleasure
to talk to you today. Thank you. Thank you.
Thanks for listening. If you're enjoying The Joy of Why and you're not already subscribed,
hit the subscribe or follow button where you're listening.
You can also leave a review for the show.
It helps people find this podcast.
The Joy of Why is a podcast from Quanta Magazine,
an editorially independent publication supported by the Simons Foundation.
Funding decisions by the Simons Foundation have no influence on the selection of topics,
guests, or other
editorial decisions in this podcast or in Quanta Magazine.
The Joy of Why is produced by PRX Productions.
The production team is Caitlin Faults, Livia Brock, Genevieve Sponsler, and Merritt Jacob.
The executive producer of PRX Productions is Jocelyn Gonzalez.
Morgan Church and Edwin Ochoa provided additional assistance.
From Quanta Magazine, John Rennie and Thomas Lin provided editorial guidance,
with support from Matt Karlstrom, Samuel Velasco, Nona Griffin, Arlene Santana, and Madison Goldberg.
Our theme music is from APM Music.
Julian Lin came up with the podcast name.
The episode art is by Peter Greenwood, and our logo is by Jackie King and Christina Armitage.
Special thanks to the Columbia Journalism School and Bert Odom-Reed at the Cornell Broadcast
Studios. I'm your host, Steve Strogatz. If you have any questions or comments for us,
please email us at quanta at simonsfoundation.org.
From PRX.