Theories of Everything with Curt Jaimungal - Donald Hoffman on the fundamental nature of consciousness (the most technical interview published with him)
Episode Date: July 30, 2020
Donald Hoffman is a cognitive psychologist and Professor in the Department of Cognitive Sciences at the University of California, Irvine. The interviewer is Curt Jaimungal, who has a background in mathematical physics, making this an eminently scholarly and technical talk (relative to what exists with Donald Hoffman). Familiarize yourself with the fundamentals of his theory before watching this in order to be maximally edified.
Transcript
Alright, hello to all listeners, Curt here.
That silence is missed sales.
Now, why?
It's because you haven't met Shopify, at least until now.
Now that's success.
As sweet as a solved equation.
Join me in trading that silence for success with Shopify.
It's like some unified field theory of business.
Whether you're a bedroom inventor or a global game changer, Shopify smooths your path.
From a garage-based hobby to a bustling e-store, Shopify navigates all sales channels for you.
With Shopify powering 10% of all US e-commerce and fueling your ventures in over 170 countries,
your business has global potential.
And their stellar support is as dependable as a law of physics.
So don't wait.
Launch your business with Shopify. Shopify has award-winning service and has the internet's best converting checkout. Sign up for a $1 per month trial period at shopify.com slash theories,
all lowercase. That's shopify.com slash theories. I mean, you're asking the right questions for anybody that really knows their math and science.
And it must be hard to know about.
You're asking exactly the right questions.
Some of the deepest questions I've gotten.
So I really appreciate it.
How's it going, man?
Great. How about you, Curt?
It's going well. How's the weather?
Been like in Irvine?
Oh, well, it's probably the best weather you could have in the world.
I mean, it's almost like we have a thermostat. It's just gorgeous here year round. So I have no complaints. It's warm.
With all the quarantine and lockdowns, are you okay? Is your mental health all right? Do you prefer it?
I'm okay. For me, it's great in the sense that I'm enjoying less distraction and I'm able to focus on studying some mathematics and physics that I'm interested in studying.
So it's good. It's, of course, very, very sad to see the millions infected and hundreds of thousands, 140,000 at this point that are dead.
So, yeah, that's horrific. And, you know, the threat of the pandemic expanding and so forth is, you know, not much fun.
So, and I've got family and relatives and so forth that I'm concerned about.
Yeah, I'm similar in that I'm ambivalent about it.
I like it because it removes any distractions.
And I, too, am studying math and physics on the side.
And this whole situation has been salutary for me,
but it's obviously not for everyone. Has it changed your views at all? Has COVID given you any insight
or any different ideas with regards to your theories, which we'll get into?
It just confirms what we learned from evolutionary psychology about human nature,
and in groups and out groups and the reasons we
evolved logic and reason. It wasn't in pursuit of truth, but to support ideas that we already
believe. So we just see this being played out in the big debates about masks and so forth that are
going on. And, you know, we see human nature being played out in a big way, and we see the results in terms of, you know, a big failure in our country right now.
The rates of infection are going up.
In deaths, we're number one. In the world's richest country, with the wonderful scientific establishment
that we have, you would think that there would be just no problem in us getting together,
organizing, and defeating this thing. But there's the whole aspect of human evolutionary psychology
that's playing itself out large. And so I study evolutionary psychology,
and what's going on does make sense.
It's tragic for us as a population,
but it makes sense in terms of human nature.
Do you meditate?
Yes, daily.
What kind of meditation? Mindfulness or transcendental?
It's silence.
Nothing else.
Pure silence.
No poses.
I do whatever feels comfortable,
but I spend probably at least three hours a day in pure silence.
And that's it.
Not looking at a screen or reading?
Nothing.
Not walking? Not walking. Just silence. Sitting, lying, the pose doesn't matter to me, but it's just pure silence, being aware of whatever I feel in the moment, being aware of thoughts as they come up. If they come up, then I'll let them go and go back to silence. It's very, very simple.
What I find tricky about meditation, and this is why I don't do it, is that as soon as I start, it's not that thoughts intrude, but that great productive thoughts come up, and then I have to write them down, because I find that when I don't, it's rare that I get them again. So do you sit with a pen and paper? If you have a particularly brilliant thought, do you just allow that to escape?
I let it go. Over the years, it's only maybe one or two times that I've actually written something down. I just let it come and I let it go. I have a fairly good memory for the stuff that comes up, if I want it later. So maybe if I was really concerned that I would forget something that I really wanted, I might be doing that, but I'm not too concerned about it. I can remember stuff later. What I find is that the meditation process for me
is what I can imagine it's like for a caterpillar to go through metamorphosis.
It turns out when a caterpillar is inside the chrysalis, the immune cells of the caterpillar
attack and try to kill them. In fact, they do kill the cells that are responsible for the
transformation into a butterfly. They attack and kill, attack and kill
until they're overwhelmed. And then most of the structure of the caterpillar liquefies,
and then it's reformed into the butterfly. From the point of view of the caterpillar,
that can't be fun. And the immune system indicates that it's not welcomed by the
caterpillar at all. It's fought to the death by the immune system
until the immune system is overwhelmed. But from the point of view of the butterfly, it's
a brand new creation. So it depends on whether you take the point of view of the caterpillar
or the butterfly. Most of the time I've been the caterpillar. But I'm starting a little bit to welcome the destruction of everything I know, the destruction of deeply held beliefs, the destruction of my fears and so forth, defense mechanisms, letting all that stuff slowly be destroyed just in pure silence and then watching whatever new might come up. I mean, it's hard to imagine that the caterpillar has any idea what it's going to be turned into. It's just fighting it to the death
until it can't fight anymore, and then it liquefies. So from the caterpillar's point of view, it's just
all loss and death and destruction, but there's a rebirth. And that's the way I feel about
meditation: I'm allowing a transformation to occur whose true nature I'm largely ignorant of.
And I can only tell you after I've taken the next step what it was like.
Are you aware of any studies that have been done as to the memory of a caterpillar when it enters its butterfly phase, whether it retains certain memories?
That's right.
I'm not an expert on it, but I do recall stories about that,
that if you give some kind of aversive training to the caterpillar,
the reactions might be there in the butterfly, that kind of thing?
I was watching a neuroscientist say that some memories might be non-local, or not localized in the neuronal formations of the brain. The reason why he thinks so, which is controversial and extremely rudimentary in his research right now, is the Arctic squirrel. When the Arctic squirrel goes into hibernation, and I believe most creatures that hibernate experience this, its brain degenerates to such a degree that it's not clear that certain memories should survive, yet they do with the Arctic squirrel.
Interesting. Do you eat a certain kind of breakfast or follow
a diet?
I try to have a low-acid diet. So I have lots of oatmeal. I have oatmeal for breakfast and try to do low acid, low fat. So I'll have an omelet with mostly egg whites, maybe one egg yolk, and fresh fruit. My wife is a vegetarian. I'm not. But she makes up vegetables all the time, so I eat her vegetables, and then I'll cook some fish or chicken. I don't do beef that often, but I don't avoid it either. I tend to focus on fish and chicken, but, you know, I'll have a little bit of red meat. I eat three times a day. I don't snack at all, and after dinner I don't eat snacks. So I don't eat, usually, from 6:30 in the evening until at least 7:30 the next day. I never eat a bite. And that's just been my habit for decades. I don't eat at all for like 12 hours, 13 hours.
See, I intermittent fast, but I do so because I'm filled with avarice, and I just want to eat whatever I see. So I fast. Like when I spoke to Eric Weinstein, I think at that point during the interview I had not eaten for 65 hours or so. I do that before I conduct my podcasts or interviews generally.
Do you have any rituals or habits that you do before you speak or before you study
besides meditation? No, I'm pretty comfortable talking. I was a professor at the University
of California for 37 years and had to give lectures all the time to big audiences. And so
for me, talking is not an issue at all.
So, you know, I was working out just a couple minutes before we got onto this podcast.
I had an hour window, so I got a little, you know, bench and dumbbells that I use.
And I weight train three times a week and do aerobics three or four times the other three or four days of the week.
So I try to have a regular thing. So I was doing that just before the podcast.
So, yeah, no, I don't usually have to. I mean, I usually just go and talk and enjoy it.
I was looking at your background and it said you initially studied computational psychology. What is computational psychology?
Yeah, so computational psychology was the name of my PhD at MIT. And there I was in the artificial intelligence laboratory at MIT and in what's now the brain and cognitive sciences department. And so what I was doing was a combination of studying just artificial intelligence. And at the time it was, you know, good old fashioned
artificial intelligence. It wasn't the neural network stuff at that point, because that was
from 1979 to 1983. Neural networks came into their own a little bit, a few years later.
So I was doing more of the good old-fashioned AI
stuff, but still it was, you know, I was studying human vision and computer vision, and the idea was
you had a good theory if you could build a computer that could work. So if you have a theory
about how we see in 3D, build it. If you can't build it, then I'm not sure you have a theory.
So it was that really hard-nosed attitude. So then on the other side, I was looking, in the brain and cognitive science department, at human vision
and the neurophysiology of human vision,
psychophysical experiments about how we see in 3D,
object recognition, colors, and so forth.
So the idea was to reverse engineer biological vision systems,
which are the only ones that worked at the time,
and take the insights and try to build computer vision systems that implemented what we learned.
That way we would know we weren't just, you know, fooling ourselves into thinking we understood.
If it doesn't work, then you don't understand. Go back and
scratch your head a little bit and figure out what's really going on.
So it was...
Sorry, around what year was this?
I was there from 1979 till 1983 at MIT. And I was very fortunate to have two wonderful advisors,
David Marr and Whitman Richards. So both of them had their foot in the AI lab and in the
brain and cognitive science department.
Cool. You know, I see a few different strands to your theories. Just for the people listening or watching: this is going to be extremely technical, or much more so than most of the other interviews that you have online, which I think stay at a cursory level, particularly because you're conversing with people who don't have a mathematical background or a physics background.
So as far as I can delineate, you have the conscious agent model, interface theory of perception.
That's number two.
Bayesian decision theory, which I believe you extended to computational evolutionary perception.
I could be wrong about that.
And then number four, you have some new papers on eigenforms, which I haven't studied much,
because I don't know much about modular forms and Hecke algebras, so I didn't get a chance to read them.
But are those, is that a fair summary of your work or are there more?
Yeah, so evolutionary game theory I work on.
Yeah, Bayesian models of perception, Markovian models of the dynamics of consciousness, yeah.
Out of curiosity, and this is getting somewhat out of left field: why did you choose G to represent the measurable space for decisions? For those listening, there's a relatively simple though non-trivial structure called a sigma-algebra, and then there are Markov kernels. And then you have W, which is the world state. Then you have X, which is experience. I imagine you chose X because of experience. World, W. Okay, great. But G, what does that have to do with decisions? Why did you choose that? Did you just run out of letters?
Yeah, we're running out of letters. In some sense, G is a set of actions that you're going to take, and we already used A for the action kernel, so I used G for the group of actions that you could take. I was thinking at the time that it might have a group structure as well as a measurable structure. So that's why I used G: I'd already taken A for the kernel itself. But the easiest
way to think about the definition of a conscious agent is that it's just three kernels. There's a
perception kernel. So you get the world influences you through a perception kernel. In other words,
whatever the state of the world is, it probabilistically affects what your experiences
are. That's what a kernel P does. So whatever the state of the world is, you get a probabilistic
effect on your experiences. Then the D kernel is whatever my experiences are, they probabilistically
affect the actions that I choose. And then my action kernel is whatever actions I choose,
they probabilistically affect the state of the world. Very, very simple triangle: the state of the world affects my experiences, my experiences affect the choices I make, and the choices I make affect the state of the world.
Right. Okay. Now, please forgive me if I jump around. I have 90 questions here, and what you're saying right now brings me to question number 80, let's say, so it's going to go all over the place. Why did you have X when you could just go from W to G directly by composing D after P?
So that is to say, you can explain what I'm saying to the people.
Right.
So the idea, when I was working on this model of perception, decision, and action, the idea is, first, I'm trying to model the process of observation.
So, you know, I studied computational vision at MIT and then at UC Irvine.
And it turns out when you look at all the specific mathematical models that we have,
they all have a similar structure, and I was trying to capture that similar structure,
sort of a Bayesian inference kind of structure, but I wanted to actually capture the fact that
we do have these conscious experiences, like I'm perceiving a three-dimensional object,
like an apple, and I'm seeing its three-dimensional shape, its colors, its textures, and so forth. So I needed a space X of experiences to capture what it was that I was perceiving. But then, you know, perception doesn't occur in a
void. There's a perception-action loop, right? This is a point that enactivist approaches
highlight, and that is that perception and action sort of work together. And so I want a model in
which I have experiences, but those are tied to my actions, and those actions affect the world
and my experiences in a loop. And so that's why there's this loop structure to it. And to leave
out the experiences would actually be to leave out the whole point, which is to understand my perceptions.
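For readers who want the three-kernel picture concretely, here is a minimal numerical sketch of the loop Hoffman describes. It is an illustrative assumption on my part, not code from Hoffman's papers: the state spaces are made finite, the kernels are random row-stochastic matrices, and kernel composition is ordinary matrix multiplication. It also shows, numerically, what composing D after P does: the experience space X is marginalized away.

```python
import random

# A toy version of Hoffman's "conscious agent" triangle: three Markov
# kernels over finite state spaces. All sizes and matrices here are
# illustrative, not taken from Hoffman's papers.
#   W: world states, X: experiences, G: actions
# A kernel is a row-stochastic matrix: row i is a probability
# distribution over the target space, given source state i.

def random_kernel(n_from, n_to, rng):
    """Build a random n_from x n_to row-stochastic matrix."""
    rows = []
    for _ in range(n_from):
        weights = [rng.random() for _ in range(n_to)]
        total = sum(weights)
        rows.append([w / total for w in weights])
    return rows

def compose(k1, k2):
    """Kernel composition (matrix product): apply k1, then k2."""
    return [[sum(k1[i][j] * k2[j][k] for j in range(len(k2)))
             for k in range(len(k2[0]))]
            for i in range(len(k1))]

rng = random.Random(0)
nW, nX, nG = 4, 3, 2
P = random_kernel(nW, nX, rng)  # perception kernel: W -> X
D = random_kernel(nX, nG, rng)  # decision kernel:   X -> G
A = random_kernel(nG, nW, rng)  # action kernel:     G -> W

# Composing D after P marginalizes out the experience space X:
# you get a direct W -> G kernel, but the experiences themselves,
# the thing the model is meant to capture, no longer appear.
DP = compose(P, D)

# One full trip around the perception-decision-action loop is a
# W -> W kernel; iterating it gives the dynamics of the loop.
loop = compose(DP, A)

for row in DP + loop:  # compositions of kernels are still kernels
    assert abs(sum(row) - 1.0) < 1e-9
```

Note that DP is a perfectly valid W-to-G kernel, which is exactly the force of Curt's question; Hoffman's answer above is that keeping X explicit is the point of the model, since X is the space of experiences being studied.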
Okay.
When I watch your interviews with people who most of the time aren't physicists, they seem to be resistant to the idea that space-time is emergent. Like Michael Shermer.
And he was saying, well,
you probably talk to the minority of physicists that think that space-time
is emergent.
That's not in fact true. Most physicists, as far as I know, actually believe that space-time is emergent. Nima is one of them, Lee Smolin. So why do you think that there's this aversion to monism, to theories predicated on the primacy of consciousness rather than the primacy being material? Why is it that that's the sticking point?
Right.
And even among physicists who recognize that space-time isn't fundamental,
you won't find very many of them arguing that, therefore,
consciousness is a good candidate for what is fundamental.
I've been to an FQXI meeting of physicists where they were looking at the role of the observer in physics at Banff a few years ago. And even in that context where they were
looking at the role of the observer in physics, I didn't see any of the physicists there interested in taking consciousness as fundamental.
And I think the attitude is,
why take such a huge leap?
I mean, letting go of space-time is one thing,
but jumping into consciousness,
I mean, I think most physicists would say,
that's a much bigger leap than we need to take.
Let's try to find just some deeper structure beyond space-time.
For example, this is, I think, the brilliant work that Nima Arkani-Hamed is doing,
where he's saying, look, we can look at these so-called positive geometries behind space-time,
the amplituhedron, associahedron, and other things that he's finding,
the positive Grassmannians, the bigger structure that includes many of these. And he doesn't know what that
realm is about, which is fine, but he's finding the mathematical structure. So he's finding that
there are these symmetries, like in the amplituhedron, that can't be captured in spacetime.
They also have found that the mathematical description of the scattering amplitudes,
the computations of the scattering amplitudes,
become much simpler when you let go of space-time.
If you use Feynman diagrams and have virtual particles in space-time,
then you could have hundreds of thousands of pages.
But in this deeper space that he's finding, this pure math,
it collapses to a few terms that you can compute by hand. And so I think that the physicists are saying, let's not jump further than we have to. If we have to let go of space-time, let's let the mathematics guide us first.
And we can try things like, well, let's just try quantum bits and quantum gates.
Seth Lloyd, I think, at MIT took an approach.
Quantum bits and quantum what?
Quantum bits and quantum gates.
Gates, right. Okay.
Right. So if spacetime isn't fundamental, let's posit something that's not conscious,
just quantum bits and quantum gates.
And what Seth Lloyd was able to
show is that you could, in some sense, get little patches of spacetime, general relativistic
spacetime, where the curvature of this little patch was proportional to the action of the gate.
And so instead of starting with spacetime as fundamental, you start with quantum bits and
quantum gates. Where those bits and gates come from, again, that's now your new assumption. It's not space-time, it's just bits and gates. And you can ask, you know,
who ordered that? Why should the universe be fundamentally that? So the reason
why I go there, even though I think most physicists don't, is that as a cognitive
neuroscientist, I'm very interested in the problem,
what's called the hard problem of consciousness.
We have lots of correlations between brain activity
and conscious experiences.
I like to use the example of area V4 of cortex,
left hemisphere is back over here somewhere.
If you take a magnet and just touch it to your skull there,
a transcranial magnetic stimulator, and inhibit V4, you will lose all color experiences in
the right visual field, everything to the right of where you're looking, you will lose color
experiences. You turn off the magnet, color experiences come back. And we have dozens,
maybe hundreds of correlations like that between specific kinds of neural activity
and specific conscious experiences. But we have no scientific theories that can explain that correlation. None. There's not a scientific theory that can explain even one specific conscious experience, like the taste of chocolate or the smell of garlic, that says, for example, this pattern of brain activity or this functional aspect of brain activity must be the taste of chocolate. It couldn't be the smell of, you know, a rose. And these are
the precise reasons why there's just nothing on the table. And so I'm trying to solve two problems.
I'm trying to say the reason why space-time is not fundamental is because something else is, consciousness.
And if I start with consciousness, I can perhaps get a theory that explains this correlation between conscious experiences and brain states, but I don't start with something in space-time, namely brain states.
I don't start with brain states and figure out how they cause consciousness. I go the other way around. I start with consciousness and show how
it creates space-time as a data structure and brains and neurons as
particular objects within that data structure. So the idea, the big
picture idea is if space-time isn't fundamental, as the physicists are now recognizing, then objects in space-time are not fundamental.
And just so people are clear, this is not new, that space-time isn't fundamental.
It's not like a kooky theory.
This has been going around since the 80s,
just so that they don't think you're glomming onto some fad.
That's right.
And I think, as you said, most first-rank theoretical physicists now just recognize that, of course, space-time isn't fundamental, and of course our job is to try to figure out what's the next step.
And it's not such a crazy idea either.
Right. With Michael Shermer, he was just blown away: there's no way that space-time can't be fundamental, there's a cup in front of me, I see the cup. But that's beside the point. There have been many times where we have an effective theory, and then we find one that's more fundamental that produces it. There's no reason to think that our current theories aren't merely effective theories.
Right. And my colleagues in the cognitive neurosciences can be forgiven if they don't understand the state of the play
in physics, right? I mean, they were taught some physics in their backgrounds, and they
have just absorbed the idea that space-time is fundamental, and that's perfectly fine in a
Newtonian universe, and even in special relativity and in general relativity. It's when you put Einstein's theory of gravity together with quantum theory that you get the problem.
And that's sort of beyond what you could expect a normal cognitive neuroscientist to really have studied.
And so, you know, I don't blame my colleagues there
for not understanding this.
But for any real professional physicist, it's just obvious, you know, that general relativity and quantum field theory don't play well together.
I see there's two problems. One, if you encounter the general public, and when I say general public I mean the rationally minded skeptics like Michael Shermer or maybe even Sam Harris, their sticking point might be the space-time-is-fundamental aspect. But when you speak to a physicist, their sticking point would be consciousness. You're saying consciousness is fundamental? That's funny. Like, I agree with you, space-time is emergent, but consciousness is complicated. That's an emergent phenomenon for a neuroscientist to figure out. Or, for some people like Penrose, there's a quantum mechanical aspect to it, but it's still somewhat complicated. It's not simple. Why would you start with consciousness?
Right, okay. I have a question about...
Sure. Can I just respond to why I started with consciousness?
The idea is very, very simple. If space-time isn't fundamental, then objects in space-time aren't
fundamental either. And if they're not fundamental, then they aren't the true source of causal power
in the universe. Physical objects have no causal powers. It's a useful fiction to think that when the eight ball is hit by the cue ball there's a causal interaction, or that neurons have causal effects, but it's just a fiction. And so that's why I tie
the two together. Neurons, being objects in space-time, have no causal powers.
My brain causes none of my behavior.
It causes none of my experiences.
But that means that, therefore, any attempt to show how we solve the hard problem of consciousness,
starting with brain activity and trying to boot up consciousness, will fail.
So my idea is simple.
Let's start with the theory of consciousness and show how space-time is not fundamental. It's just a data structure within consciousness. That's why space-time isn't
fundamental. It's merely a data structure, a visualization tool that consciousness uses to
interact with other consciousnesses. So this is all tied together. The space-time being not
fundamental and the move to make consciousness fundamental
is all part of one big move,
recognizing that brain activity
could not possibly cause our conscious experiences.
So why don't we try the other way?
Let's start with conscious experiences
and show how they cause space-time and objects
to arise merely as data structures within consciousness.
Okay, getting to the neural-correlates aspect. You mentioned that to say, hey, because our brain activity is correlated with certain conscious experiences, that's one thing, but then to say that the brain causes them is another. It would be akin to saying there's a train track, there's a subway station, and people form a line right before the train comes, but it doesn't mean that the people themselves caused the train to come. However, you just mentioned that there's transcranial magnetic stimulation, where we can in fact perturb neurons and then see. It would be one thing if we saw a conscious experience, then looked at the neurons and saw that there's some association; that's correlation. But if we can perturb the neurons to elicit a conscious experience, why is that not evidence for the causal power going from neurons to conscious experience?
Great question. And that's what my colleagues would all point to is that, look, they'd say,
look, we can intervene. I can literally manipulate the brain and get a change in conscious experience.
The fact that I can intervene means I'm showing you the true causal structure of what's going on there. And that's a
logical error. And it's very easy to see the logical error. Imagine you're playing a virtual
reality game and you're driving a car. You're driving a Mustang in a virtual reality game,
right? And you have a steering wheel. And it turns out you can intervene in that. If I turn the
steering wheel to the left, my car actually turns left in the game.
If I turn the steering wheel to the right, the car turns right.
Therefore, there must be a real steering wheel, and it must have real causal powers.
No, there's no real steering wheel.
That's not the claim.
Well, that may be their claim.
My claim, or my question, is: is it correct to say that the turning of the steering wheel causes turning in the game? Now, it might not be that the turning of the steering wheel is akin in causal power to the way that we would think the turning of the steering wheel in a real car changes the wheels and the axle. But all that we need for causation to be shown is that A implies B. It doesn't matter that there are intermediate steps. Do you understand what I'm saying, or am I not explaining it correctly?
The thing about the steering wheel that you see in the virtual reality game is that if you turn your headset to the side, the steering wheel ceases to exist. There is no steering wheel, because I'm not looking at it anymore. When I look back over to where the steering wheel should be, I see a steering wheel.
The steering wheel is something that I create and destroy as I need.
It's gone.
Now it's back.
It has no causal powers.
The steering wheel literally has no causal powers.
It informs me as the player about actions that I can take to interact with the game. And
the way I see myself, the way I visualize what I'm doing is I visualize myself turning the steering
wheel. But that's all just a useful fiction. And the steering wheel itself literally has no causal
powers. It's literally only in my head. There is no external steering wheel
with causal powers. And I'm saying that that's true, not just in virtual reality,
but in everyday life, because space-time is not fundamental. It's just your headset.
When you go around in everyday life, think virtual reality. I create the moon when I look
up in the sky. I delete it when I look away.
There is no moon, just in the same way that there is no steering wheel in the virtual reality game.
So when Einstein, you know, asked someone that was with him, do you really believe the moon is
only there when you look, and it's not there when no one looks? My answer is, yeah, it's not there when no one looks.
There is no moon because there is no space-time.
And see, that's the part that Einstein wouldn't like, right?
Because his theory was the theory of space-time.
Space-time is not fundamental.
When we really understand what that means, nothing inside space-time is fundamental.
Nothing.
And therefore, nothing inside space-time has genuine causal powers.
It emerges from something deeper. It's a useful fiction that our species has to think that
physical objects have causal powers. I throw a rock, it hits a window, the rock caused the
window to break. That's a useful fiction. I inhibit area V4, color disappears. Aha, therefore, V4
causes color. It's a useful fiction, but it's
just a fiction, just like the steering wheel in the virtual reality game. Okay, let's remove the
virtual reality. Let's just pretend it's a regular game with a TV screen and you have a controller
in front of you. Right. And then you move your analog stick to the left and the character moves
to the left. Then would you say that you moving the analog stick to the left causes the character to move to the left?
In that metaphor, you would say that the icon on the screen wasn't the thing that caused things to move, right? So if I see myself, or I think a paddle hit a ball, like in Pong, no, in that metaphor it would be the joystick that was the real thing. And I'm saying, once you recognize that the world around you is just the screen, then even the joystick in your hand is itself in your head.
See, that's the real point about what physics has discovered: space-time is not fundamental. It's hard for us to really grasp that. We so deeply believe that of course space-time is fundamental, of course the moon is there when no one looks. But when you really grasp that space-time is not fundamental, and that it's actually our construction, then physical objects don't even exist when they're not perceived.
They don't have causal powers.
They don't even exist.
You can think of what I'm doing right now as partly an exercise in convincing others, because I already have a predilection towards your ideas.
I already like you and I like your ideas.
So you don't need to convince me too much,
although I do have some objections,
but I'm demarcating where I feel people like Michael Shermer
and so on have problems
because they take that virtual reality metaphor.
And there are some problems with it because,
well, like I said, forget the virtual reality.
Just imagine a screen and then you have an analog stick
and then those two get conflated.
Right. Okay. There's another problem that I see with the virtual reality metaphor: when you take off the headset, there is still the computer rendering it to the LCD.
now there are some monitors there are some virtual reality headsets that detect when your
forehead is there and then just shut it off. But let's imagine that's not there.
Then they're like, well, it still persists, even if, when I look to the left, it renders what's on the left, or when I look to the right, it renders what's on the right.
But I can even simulate someone going through a game without a conscious perceiver.
And then I can set my video camera up to record that.
And that person, that character is moving through the game.
There is no perceiver right then and there.
That's why I see this virtual reality metaphor as having some
limitations, but what would you say to that?
The objection is: look, even without a perceiver, I put the headset down, it's still rendering.
Well, in terms of physics, that is the question of whether physical objects literally exist and have definite values of their properties when they're not perceived. That's the question of local realism, and that's what we're really addressing here. It's a technical question in physics.
Do physical objects really exist and have definite values of physical properties like position, momentum, and spin when they're not observed?
And is it true that those properties have effects that propagate no faster than the speed of light?
And you might intuitively think, well, of course the moon has a position, it has a momentum, and an electron has a position, a momentum, a spin value, a spin axis, when it's not observed. Of course. But when we do the experiments... When this question was first raised, Einstein was one of the ones who asked it, back in the EPR paper of 1935, right? And a number of physicists, including Wolfgang Pauli, thought
that this was ludicrous, that Einstein was asking a question like, you know, that was no more
interesting than how many angels can dance on the head of a pin. How could you possibly answer
Einstein's question, does an electron have a definite value of position or momentum when it's not observed? That seems silly. I mean, by definition, if you're not observing, how could you tell whether it has a position or momentum or not? But it turned out, in 1964 or so, the physicist John Bell discovered that we can answer that
question. There is a series of experiments that you can do, measurements that you can do,
that will give you certain statistics that can allow you to decide whether or not local realism is true.
And that was a real stunning achievement, one of the greatest intellectual achievements in human history,
John Bell's inequalities.
And then a year later,
John Bell, and Kochen and Specker,
apparently independently,
also came up with a way
to test non-contextual realism.
Non-contextual realism is the claim
that objects exist
and have definite values of their properties
when they're not observed.
That's the realism.
And the values of those properties don't depend on the way you measure them.
That's the non-contextual part.
And once again, quantum theory entails that non-contextual realism is false.
It also entails that local realism is false.
And it's been tested, and it holds up. Local realism is false, and therefore either realism is false, that is, the claim that an electron has a position when it's not observed, or locality is false, the claim that influences propagate no faster than the speed of light, or both. And I think that both are false. Local realism is just plain false. Non-contextual realism is false.
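The logic of the Bell test just described can be sketched numerically. This is a minimal illustration, assuming the textbook singlet-state correlation E(a, b) = -cos(a - b) and the standard CHSH angle choices, none of which come from the conversation itself:

```python
import math

# CHSH inequality: any local-realist model obeys |S| <= 2, where
#   S = E(a, b) - E(a, b') + E(a', b) + E(a', b').
# Quantum mechanics, for spin measurements on a singlet pair,
# predicts the correlation E(a, b) = -cos(a - b).

def E(a, b):
    """Quantum correlation for detector angles a and b (singlet state)."""
    return -math.cos(a - b)

# Standard angle choices that maximize the quantum violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, above the local-realist bound of 2
```

Experiments measure these same statistics and find the quantum value, which is what rules out local realism.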
And once we understand that, that means that there are...
Sorry, did you say that non-local realism is also false?
Yeah, because space-time is not fundamental. So the...
Right, right, right. Okay.
Space-time itself is just... I mean, the big take-home is space-time is not fundamental.
Therefore, particles and their properties inside space-time are not fundamental.
For a physicist, this is obvious because, you know, as soon as you say space-time is not fundamental, it's emergent from something else.
Well, that takes particles with it, because particles are nothing but irreducible representations of the symmetries of space-time, the Poincaré group.
So particles are irreducible representations of the Poincaré group, which are just the symmetries of space-time.
If space-time is not fundamental, neither are particles.
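For context, "particles are irreducible representations of the Poincaré group" is Wigner's classification: particle types are labeled by mass and spin, the eigenvalues of the group's two Casimir invariants. In standard notation (not from the conversation itself):

```latex
% Wigner (1939): irreducible unitary representations of the Poincare
% group are labeled by the eigenvalues of its two Casimir operators.
\begin{align}
  P^\mu P_\mu \,\lvert m, s \rangle &= m^2 \,\lvert m, s \rangle, \\
  W^\mu W_\mu \,\lvert m, s \rangle &= -m^2\, s(s+1) \,\lvert m, s \rangle,
\end{align}
% where W^mu = (1/2) eps^{mu nu rho sigma} P_nu J_{rho sigma} is the
% Pauli-Lubanski vector built from translations P and Lorentz generators J.
```

So the very labels "mass" and "spin" presuppose the symmetries of space-time, which is the sense in which particles go away if space-time does.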
So my colleagues in the cognitive neurosciences are still back thinking that, oh, we have this reductionist view: space-time is fundamental.
You start off with protons and neutrons, or if you're more sophisticated, quarks and leptons and gluons and gravitons.
And those are the elementary particles.
The laws that govern them are the fundamental laws of all of reality.
And we can, in a reductive fashion, build up everything from that.
Well, that whole foundation is gone.
Space-time is not fundamental.
Those particles are not fundamental.
Reductionism is dead.
Space-time reductionism is dead.
We need a completely new framework.
Now, brilliant physicists like Nima Arkani-Hamed,
going after the amplituhedron and so forth,
are looking for new foundations for physics outside of space-time, but it's not a
reductionist kind of approach. Does Nima say that consciousness is fundamental, or are you just
saying, well, Nima has a theory that simplifies space and time down to something else, and maybe
with your theory, the conscious agent theory, you can imply the amplituhedron,
and therefore you can imply whatever Nima's implying?
Or does Nima think that consciousness is fundamental?
I have no idea what Nima thinks about consciousness.
I haven't seen him talk about it myself in any of his writings or lectures.
I mean, I would be surprised if it turned out he thinks consciousness is
fundamental. I would be very, very surprised. But the right answer is I don't know what he
thinks about that. I would bet good odds that he doesn't think consciousness is fundamental,
but I don't know. I certainly would not want to ascribe that point of view to him.
All I've seen from his lectures and writing is that there are these beautiful mathematical structures, these positive geometries, amplituhedra, associahedra, that are more fundamental than space-time, and that in magical ways seem to give out the scattering amplitudes: those scattering amplitudes turn out to be volumes of these geometric objects he's finding, these positive geometries.
But what this new deeper realm is about, I haven't seen him speculate about, you know,
what is this telling us is the realm behind space-time?
What is that realm about?
I haven't seen him speculate that.
He might have some speculations, but I haven't seen him. He just goes with the math.
As for Bell's inequality, I'm sure you're aware that there are some ways around it. So, for example, superdeterminism, as well as a couple of others that I can't recall right now. Edwin Jaynes didn't particularly think Bell's inequalities, or the experiments that validate them, demonstrate that non-realism is the way to go.
What do you say to those objections?
Well, I would say that the deeper problems between general relativity and quantum field theory that force us to recognize that space-time isn't fundamental are the ones that
clinch it for me against realism. Because realism about entities that are merely
representations of space-time, when space-time itself is not fundamental, is now beside the point.
What made you choose Nima's amplituhedron as opposed to the other, perhaps simpler, models that demonstrate that space-time is emergent? We mentioned a few others before this call, like causal dynamical triangulation; there are also spin foam networks.
I've been intrigued and I'm studying Nima's work because he explains in ways that I really understand
why we can't really even have a vestige of space-time as being fundamental.
So we can't take one part of a time or one part of a space
and try to boot up the other.
And Nima recognizes that we also can't expect that quantum theory is fundamental.
He's saying, look, it's deeper than just space-time.
We're going to have to have a deeper framework in which space-time and quantum theory together arise joined at the hip.
And so many of the approaches are saying, look, general relativity is not a quantum theory. It's a classical theory; it's continuous space and so forth. So maybe what we need to do is let quantum be fundamental, and then we'll try to somehow quantize space-time. Nima is going far deeper. He's saying we need a new framework in which locality in space-time and unitarity of quantum theory arise together from something that's neither local in space-time nor unitary.
So I'm going after Nima partly because I think that he makes a good case that space-time and quantum theory both have to emerge from something deeper, and that we need to bite the bullet and recognize we can't go halfway; we need a really radical new foundation. The constraint on the new foundation, of course, is that it has to give rise to space-time and quantum theory. In asymptotically flat space-times like ours, the only observable there is, is the scattering amplitudes. That's the only observable. There are no local observables, and Nima explains that. So when we look at how to get the only observable in asymptotically flat space-time, his framework does exactly that.
What I'm asking is this: there are several contenders for a fundamental theory of everything.
But there are several. And I'm glad that you're pursuing Nima's amplituhedron, because I'm also interested in how consciousness can give rise to some of those other potential theories of everything. And I only have a finite amount of time, so I'm glad that you're doing it. But I'm curious as to why you chose that one. The criterion you just gave is that QFT and general relativity have to come from something else, and you would prefer that that not be embedded within space-time, but in fact have some other structure that gives rise to space-time. But there are quite a few. For example, I just have a list here, and I'm just curious why you chose the amplituhedron, other than that you have to start somewhere. So, for example, there's Wolfram's theory of everything, which starts with computation and which seems so similar to yours, actually; we can get to that later. And there's Pati-Salam, there's SO(10), there's E8 as well. And there's the strand model from Christoph Schiller. There's also one more, Frieden's, his name is Bernard Roy Frieden. He believes that the laws of physics, I believe, can be unified under Fisher information, so they're purely informational at their base. I'm curious, why Nima's as opposed to those that I just listed?
Well, there's a couple reasons.
One is, I'm convinced by Nima's argument that scattering amplitudes are the only observable, and his is the only theory that predicts from first principles the exact scattering amplitudes and shows deep symmetries behind those scattering amplitudes, right? And it also, you know, explains why, when we look
at these deeper symmetries outside of space-time, hundreds or thousands of pages of computation
that you get from Feynman diagrams for the scattering amplitudes turns into two or three terms. So he's got the beef. He actually starts with something outside of space-time, gives rise to quantum theory, unitarity and locality, and precisely to phi-cubed theory, phi-to-the-fourth theory, Yang-Mills, super Yang-Mills; he's getting one after another. He's getting these scattering amplitudes. He's got the beef. These other theories are saying we can do this, we can do this. He's doing it.
I see. In other words, he's the closest so far.
Yeah, he's the closest.
And then there's one other thing that also is making me pursue this. And that is, I'm starting with this theory of Markovian kernels, this dynamical system of conscious agents.
Now, Markovian kernels, in general, are not associated with positive geometries, right?
And Nima is saying this deeper structure is positive geometries. But, and this is what I'm working with my mathematician
colleague Chetan Prakash and others on, I claim, and we'll see if I'm right, I claim
that the asymptotic and invariant behavior of Markovian systems is identified with positive geometries.
So even though the step-by-step dynamics of...
Repeat that one more time.
So when you have a Markovian system, right, you have the step-by-step dynamics.
But you can also ask, what happens long-term, if you take the broad view? Technically, I have a Markovian kernel P. I'm looking at P to
the n, which is P composed with P composed with P n times as n goes to infinity. Those kernels,
I claim, are invariably associated with positive geometries.
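The long-run behavior being claimed can be seen in a toy example. This is a sketch with a hypothetical two-state kernel (the matrix is my choice, not Hoffman's); the limiting rows land on the probability simplex, which is the simplest example of a positive geometry in Arkani-Hamed's sense:

```python
# Iterate a Markovian kernel P and watch P^n settle down as n grows.
# The limiting kernel is rank one: every row equals the stationary
# distribution, a point on the probability simplex (the simplest
# example of a positive geometry).

def matmul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical 2-state kernel (rows sum to 1).
P = [[0.9, 0.1],
     [0.2, 0.8]]

Pn = P
for _ in range(200):          # compute P^201; convergence is geometric here
    Pn = matmul(Pn, P)

print(Pn)  # both rows ≈ [2/3, 1/3], the stationary distribution of P
```

The step-by-step dynamics wander, but the asymptotic kernel collapses onto the simplex, which is the flavor of the claim about P to the n.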
Without adding more structure?
That's all you need. If you take kernel P, then P to the n will be associated with a positive geometry as n goes to infinity.
Okay, that's interesting. I've got to think about that. One of your papers with your friend, I love this guy, Chetan Prakash, right?
Yeah. I haven't had a chance to look at any of his papers that weren't co-authored with you,
though. Either way, one of the papers, a recent one, was showing that the payoff functions
that are homomorphic to totally ordered sets, as well as permutation groups, cyclic groups,
and one other, if you take their number and divide it by the total number admissible, it tends to zero.
And for the people listening, what I hear from you is saying that,
therefore, the probability that we see reality as it is, is zero.
But I don't know if that was shown, and I see that as a leap. So I must be missing a step, because the leap is that there's a hidden claim. You can take the number of functions that are homomorphic to certain groups and certain structures, with the total size of the set of admissible functions as the denominator. That's fine; you can say that that ratio goes to zero. But then you have to say that there's a uniform probability distribution, that we could have gotten any one of them, for you to say that the probability, as n goes to infinity, of us seeing truth, reality as it is, or something homomorphic to its structure, is also zero.
Excellent point. So this is a very technical point, but it's an important
one. So in evolutionary theory, the fitness payoff functions depend on the state of the world.
Whatever objective reality might be, it depends on that. But it also depends on the organism,
its state, its action, and the competitors and so forth. So it's a complicated function with a very
complicated domain, including the state of the world.
And the range of the function are the payoff values.
You may like from one to N if there are N payoff values.
And of course, evolutionary theory, evolutionary game theory,
is saying that our senses will be tuned to the payoffs, right? In other words,
organisms that better perceive how to get high payoffs are the ones that are going to survive.
So our senses will be tuned to the payoffs. So the question then, the technical question is,
what is the probability that the payoff functions preserve information about the structure of the world? Say the world has a group structure or total order.
And I ask, what is the probability that a payoff function will preserve that structure?
And we looked at two kinds of symmetry groups and total orders and also measurable structures.
And what we did is we assumed a discrete set of states on the world,
a finite set of states and a finite set of payoff values.
So, like, you know, N states in the world, M payoff values. And then what we can do is, literally using combinatorial arguments, count the number of payoff functions that preserve the structure,
like a group structure, and divide, as you say, by the total number, which is a finite number.
This is a finite number of total payoff functions. And there, of course, counting measure, literally counting the number of functions, is the canonical measure. And then we take the limit as the number of states in the world and the number of payoff values goes to infinity. Now, someone could raise the question that you just raised: why is counting measure the measure that we used? Why shouldn't there be some other measure? And the answer is that evolutionary theory gives us no argument for any other measure. So if someone
wants to say we use the wrong measure, then the burden of proof is on them to say what is the new
measure, and on principled evolutionary grounds, what is the measure that we should use instead of
counting measure, and why? And that will be a brand new addition to the theory of evolution by natural selection. Good luck.
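For total orders, the combinatorial count Hoffman describes can be sketched concretely. This is a toy calculation with equal numbers of world states and payoff values; the closed form C(2n-1, n) for weakly increasing functions is standard stars-and-bars, not taken from the paper itself:

```python
import math
from itertools import product

def monotone_fraction(n):
    """Fraction of all payoff functions f: {1..n} -> {1..n} that preserve
    a total order (x <= y implies f(x) <= f(y)), i.e. weakly increasing."""
    homomorphic = math.comb(2 * n - 1, n)   # stars-and-bars closed form
    total = n ** n                          # all payoff functions, counting measure
    return homomorphic / total

# Sanity-check the closed form by brute-force enumeration for n = 5.
brute = sum(1 for f in product(range(5), repeat=5)
            if all(f[i] <= f[i + 1] for i in range(4)))
assert brute == math.comb(9, 5)             # 126 weakly increasing functions

for n in (5, 10, 20):
    print(n, monotone_fraction(n))
# Fractions fall toward zero: ~4e-2 at n=5, ~9e-6 at n=10, ~7e-16 at n=20.
```

Under counting measure, the homomorphic payoff functions become a vanishing fraction as the number of states grows, which is the shape of the theorem being discussed.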
Well, what if the extra condition doesn't come from evolutionary theory, but instead from physics, as people like Nima or even Michael Shermer might think? That is, saying, okay, there is a probability distribution that favors a certain structure-preservingness, for whatever reason, much like we don't know why the fundamental laws are the way they are. Now, of course, you're saying, well, I can derive the fundamental laws; forget that, because I'm going to another model. They're saying that the fundamental laws are fundamental, and one of them is that the probability distribution over your perceptions is not uniform.
Absolutely. And if someone can do that, fabulous, that would destroy our theorem.
So that's the challenge. You could think of our theorem as putting out that challenge.
Come up with some new principle that explains why there's a bias toward payoff functions that are homomorphic to the structure of reality.
What is the conspiracy in nature that makes that happen? Evolution by natural selection,
as it's currently formulated, and physics as it's currently formulated, give us no reason to believe that there's a conspiracy going on to make the homomorphic functions more probable. But if someone can come up with a theory that explains why nature conspires
that way, I'm, I'll be the first to listen very intently.
By the way,
do you also have a measure of structure-preservingness that isn't a strict homomorphism? Because, okay.
Just for people listening, in the way that it's written, that I saw from the paper that you published recently, it's like you have one, two, three, four, five. Imagine that's a totally ordered set. One, two, three, four, five, and that's how the world is. But if someone sees the world as one, two, three, five, four, you'd be like, nope. No, you're not seeing the world as it is, but yet it's still similar.
Right.
So you can use things like the L2 norm or the L-infinity norm to look at how many are close. You could say, suppose that here's a payoff
function that's purely homomorphic. We can ask how many payoff functions are within a certain
distance, like an L2 distance or an L infinity distance of that payoff function. And we can,
for each distance that we look at, we can ask how many
payoff functions, even if they're not exactly homomorphic, are that close to homomorphism.
And then we can look at the ratio of those to all payoff and see, again, if they go to zero or not.
And so in the case of total orders, it turns out that if you allow a very, very generous width, I think it was like 20% variation, the set of payoff functions that were within 20% variation of any homomorphic function still went to zero.
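The "window around the homomorphic functions" idea can be brute-forced for a toy total order. This is a sketch with hypothetical small sizes (5 states, 5 payoff values, a window of 1 payoff step, roughly 20% of the range); the result in the paper is an analytic limit, not this finite check:

```python
from itertools import product

N = 5                       # hypothetical toy size: functions {0..4} -> {0..4}
funcs = list(product(range(N), repeat=N))
monotone = [f for f in funcs
            if all(f[i] <= f[i + 1] for i in range(N - 1))]

def linf(f, g):
    """L-infinity distance between two payoff functions."""
    return max(abs(x - y) for x, y in zip(f, g))

TOL = 1                     # window: each value may be off by 1 (20% of range)
near = sum(1 for f in funcs
           if min(linf(f, g) for g in monotone) <= TOL)

exact_frac = len(monotone) / len(funcs)
near_frac = near / len(funcs)
print(exact_frac, near_frac)  # the window enlarges the set; the question is
                              # how the ratio behaves as N grows
```

The exactly homomorphic set sits inside the near-homomorphic set; the claim in the paper is that, for total orders, even the enlarged ratio still goes to zero in the limit.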
In the case of symmetric groups, that was not true.
If you allowed just a small window around the homomorphic functions, then most of them would have full weight.
So most payoff functions were very close to a function which was, you know, a homomorphism of the symmetry.
Same thing for measurable structures.
In measurable structures, almost every function was close to a payoff function that is homomorphic.
But notice that that won't buy you anything, because there are countless structures that nature might have. Countless. If a random function is close to something that's a homomorphism, a symmetry-preserving function, and also close to a measurable function, you could now give me thousands of different kinds of homomorphisms that it's close to. Well, which one is natural selection choosing to shape you toward? A random function is close to all of these. Where is the selection pressure to push you toward one or the other?
In other words, all you get is that, yeah, with a randomly chosen payoff function, you could show that it's close to lots of different things that are homomorphic to something else, but where's the selection pressure there? There's no selection pressure there at all.
Now, one more critique is that gravity is not a cyclic group. It's not a totally ordered set. It's not a permutation group. It's not a measurable structure, maybe, but either way, the structure of fundamental physics, coming from a different frame, one where consciousness is not fundamental, doesn't follow from what I've seen so far. I think you've done totally ordered sets, permutation groups, cyclic groups, and measurable spaces. But it's not as if the fundamental laws, or the fundamental space of being from the materialist point of view, are those. So what you're counting isn't necessarily reality per se. It could be. But it's not, as far as we know, and the Lagrangian is a condition on that. And it's not as if the Lagrangian is a cyclic group.
So, fair point. Okay. So why does what you've studied demonstrate anything about reality?
Right. So, yeah. I think a good argument against our paper is we've looked at
four different structures and we showed in each case that the probability is zero, that a randomly
chosen payoff function would be a homomorphism. And what we want is a paper that proves that
for any possible structure, right? Not just the four structures that we looked at. We don't have that theorem yet. So someone could come along and say, look, we have independent
evidence that this is the structure of the world, and we can prove that for that structure of the
world, almost every payoff function is a homomorphism of that structure. So we're
throwing down that gauntlet. We're saying: for someone to argue that natural selection will, of course, favor veridical perceptions, seeing the truth of the true structure of the world, it's not obvious that that's the case. Here are four counterexamples; we give four counterexamples, right? Total orders, and so on. So it's not just obviously true that that's going to happen for the structure of the world. So now, to convince me that natural selection favors veridical perceptions, you have to tell me exactly the structure of the world and prove that, in fact, the homomorphisms have full measure.
So we've thrown the gauntlet down. Almost all of my colleagues in cognitive neuroscience who study perception have just assumed that, of course, natural selection favors veridical perception. We gave four counterexamples, and we're basically throwing down the gauntlet to our field and saying, if you believe that, you need to come up with the beef.
Where is the beef? What is the structure of the world that would have full measure for the homomorphisms? If you
can't do that, then what is the conspiracy in the laws of nature? So in other words,
what we've done is thrown down the gauntlet. In some sense, we're saying, hey, the burden of proof
is now on the person who claims that evolution is going to shape veridical perceptions. Here are four counterexamples.
Give us an example of where it possibly could.
And good luck trying.
Because here's why they're not going to succeed.
To be a homomorphism, a function has to satisfy some strict equations.
Most functions don't satisfy them.
End of proof.
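That counting argument can be made concrete for the cyclic-group case mentioned earlier. A sketch (the brute-force check and the choice of Z_n are mine; the closed form, n homomorphisms out of n^n functions, follows because a homomorphism of Z_n is fixed by where it sends 1):

```python
from itertools import product

def count_homs(n):
    """Count functions f: Z_n -> Z_n satisfying the homomorphism equations
    f((x + y) mod n) == (f(x) + f(y)) mod n for all x, y, by brute force."""
    homs = sum(1 for f in product(range(n), repeat=n)
               if all(f[(x + y) % n] == (f[x] + f[y]) % n
                      for x in range(n) for y in range(n)))
    return homs, n ** n

for n in (2, 3, 4, 5):
    homs, total = count_homs(n)
    assert homs == n          # a homomorphism of Z_n is fixed by f(1)
    print(n, homs / total)    # n / n**n: the homomorphic fraction vanishes fast
```

The strict equations cut the homomorphisms down to a set of measure zero in the limit, which is the one-line version of "most functions don't satisfy them."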
It's just that simple. Good luck. So we've thrown down
the gauntlet. The argument I just gave you is going to be the heart of a deeper theory, a deeper
paper that we hope to get, in which we just prove that no matter what... Yeah, I'm excited about that.
Right. Have you pursued category theory? That's like the most abstract of all the mathematical theories.
That's where we're headed, exactly right. Category theory, exactly.
Okay, cool, cool.
As I just said, with category theory, right, what about partially ordered sets? It's not as if, even if you proved the partially ordered set case, it's like, okay, that's a slam dunk.
Yeah, we'd like to do partial orders and also topologies, right? So those are some obvious big ones. If we can't get the full category, I mean, we'd like to get the full category-theoretic proof that just basically says: you've got a structure in the world, your payoff functions won't preserve it, almost surely, end of story. That's what we want. We've got only four examples, and then we've got this high-level argument that I gave you: that to be a homomorphism, you have to satisfy these equations. So good luck, most won't be. So that suggests to me that there's going to be probably
a very simple category theoretic proof of this thing. And that will take care of it.
Or perhaps you find the one structure, maybe there's even one or two structures, a finite number, that satisfy veridical perception. And then from that, you're like, okay, that's interesting.
That's uniqueness among all the structures.
And therefore there's something special.
Maybe we can imply the laws of physics from that.
Either way, it'd be interesting.
Oh, absolutely.
So, you know, I don't care how it comes out.
I just want to pursue it and see what does come out.
Right.
But what I am saying is: this is way too fast, my colleagues thinking, oh, of course, natural selection favors true perception.
Oh, that's way, way too fast.
Why do they think that if they come from an evolutionary background?
Because evolution just is all about what works.
And what works, who cares about what's true?
True in the materialist sense.
I agree.
And I should say, there are many of my colleagues and friends, like Steve Pinker, a good colleague and longtime friend, who clearly understands that natural selection does not, in general, favor true beliefs.
And he's got a brilliant paper published called, So How Does the Mind Work? It's a wonderful paper. So if you look up Steve Pinker's
So How Does the Mind Work? He gives in that paper five good reasons for why evolution will not lead
us to have true beliefs in general. And also, Robert Trivers has argued, you know, brilliantly on evolutionary grounds, that there are reasons why we should be
deeply self-deceived. The best, the most convincing liar in social situations is the one who believes his own lie. And so there are selection pressures for us to be deeply deceived.
Now, in Trivers' case, however, even though he understands that aspect of evolution leading to false beliefs, when it came to perception, Trivers still believed and wrote that seeing the structure of reality as it is would make you more fit.
And that's the argument that most people believe.
The argument goes like this.
Those of our ancestors who saw reality more accurately had a competitive advantage over those who saw the world less accurately.
A competitive advantage in the basic activities of life,
feeding, fighting, fleeing, and mating, right?
Therefore, those who saw more accurately were more likely to
pass on their genes that code for the more accurate perceptions. And therefore, we can be quite
confident that we may not see all of the truth, but we see those aspects of the truth that we need
to survive. And that kind of argument completely persuades most of my colleagues. It apparently persuaded Trivers, because he wrote that, you know, yeah, it's more fit to see reality as it is. It didn't persuade Pinker, although Pinker, in that brilliant paper that I mentioned, So How Does the Mind Work?, does, at the end of his discussion of the five reasons why evolution wouldn't lead to true beliefs, make an exception for everyday middle-sized objects, and for the everyday, quotidian beliefs of our friends.
So he doesn't explain why he makes those exceptions, but he does make those two exceptions.
And so I'm really focusing on that exception about everyday middle-sized objects. I'm saying that our perception of everyday middle-sized objects is also not an example of a true belief. It's just an example of something, as you said, that's fit enough to keep you alive.
Okay, well, this is a question
more about evolutionary modeling in general.
And please forgive me as I verbally fumble through this because my thoughts on this aren't fully developed.
Are fitness payoffs as simple as just a one-dimensional case? So, for example, an issue that I have with utilitarians is that they will project pleasure and pain down to a one-dimensional axis,
which to me neglects that there are different kinds of
pleasures or that you can combine pleasure and pain or that there are different kinds of pain.
Now, this means to me that projecting onto the real line is either not possible or it's not
trivial or it's not useful. And I've always wondered the same about fitness payoffs. I see
something similar happening. So perhaps fitness is a complex multidimensional space and the choice
between any given two points in order for you to make it,
you need to employ a metric or norm,
and then that would need to be justified. In other words, do you see any problems with fitness payoffs being real-valued, mapping down to R1?
Well, yes, I do.
And the reason is that the domain need not be R1.
The domain could be as high dimensional as you want,
or the very notion of dimension may not even apply.
But sticking with the domain: it could be a 50-dimensional space.
Mapping down to R1.
Yeah, going down to R1. Now, fitness in terms of evolution is very, very simple.
Do you have more kids than the competition? End of story. That's R1.
But what you were talking about, though, the pain and pleasure and so forth, the kinds of emotions that we may evolve to guide our behaviors in an evolutionary theory, could be as complicated as you want. So the payoff is reproductive success. That's it. That's the only payoff that there is. We will talk about payoff functions as
like points in the game, but ultimately the idea is that the number of points you get in playing
the game is really translated into how many kids do you have? How many genes do you pass on? So
that's why in evolutionary theory, the domain can be as complicated as you want. The range is R1.
Kids. It's not the absolute number of kids. All that matters is that you have more than the
competition, right? So it's not like you have to be perfect or anything like that. It's just
a satisficing solution. So we have to distinguish the payoff value, which is reproductive success versus all the complicated emotions that will
guide our behaviors to get enough payoff value. Right? So, so,
so we don't want to conflate those two things.
So evolutionary theory allows you to have as complicated a set of emotions and
motivations as you want. Very, very complicated. The payoff value is R1.
I got it. All right. Thanks, man. I appreciate that.
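Hoffman's point, that the domain of a fitness payoff can be arbitrarily complex while the range stays one-dimensional, can be sketched in a few lines. The 50-dimensional state, the weights, and the sigmoid below are hypothetical, purely for illustration; this is not Hoffman's actual payoff function.

```python
import math
import random

# Hypothetical 50-dimensional world/agent state (nutrition, threats,
# mates, ...); the dimensionality is arbitrary, for illustration only.
random.seed(0)
state = [random.uniform(0.0, 1.0) for _ in range(50)]

def fitness_payoff(state):
    """Map a high-dimensional state down to one scalar (R^n -> R^1).
    The weighted-sum-plus-sigmoid form is an illustrative assumption."""
    weights = [1.0 / (i + 1) for i in range(len(state))]
    score = sum(w * (s - 0.5) for w, s in zip(weights, state))
    return 1.0 / (1.0 + math.exp(-score))  # squash into (0, 1)

payoff = fitness_payoff(state)
# However complicated the domain, the payoff is a single real number:
assert isinstance(payoff, float) and 0.0 < payoff < 1.0
```

The emotions and motivations guiding behavior can live in as rich a space as you like; only the scalar on the right-hand side is compared against the competition.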
We're going to have to schedule another call if you don't mind.
No, it's fun for me, because this is far more technical than most questions, and it's enjoyable. I just hope that your audience will be able to follow this.
Some of them won't be able to, but that's fine. I feel like one of the reasons why I've been successful to the degree that I have is that people are watching more because, as an interviewer, I'm engaged. I'm asking you questions that I want to know the answer to, rather than what I think the audience cares about. Otherwise there's a bit of a facade there, a manufacturing.
Good.
Well, I think for a certain audience,
then these are the right questions.
Absolutely.
I mean, you're asking the right questions
for anybody that really knows their math and science.
You're asking exactly the right questions.
Thanks, man.
I appreciate that.
You know, like it gets a bit more technical. Like, why are you choosing the geometric algebra with signature (1,3) and then saying that implies space-time? Because that, to me, is putting in extra...
I don't recall exactly where we left off, but it's all right.
I'll put intermission and people will understand if there's a change in topic.
Sure.
Okay. How's it going, Donald?
I'm doing great. How about you, Kurt?
Good, man. Okay. You were just mentioning that some of your practices of meditation are intense, and they usually are. Was today particularly intense, or average intensity?
Just average, but it's always pretty intense.
Okay, what do you see? What happens during it?
Well, it's going into the unknown, letting go of all thoughts, and there's an innate fear of the unknown.
And so facing that fear and letting it go can be very, very intense.
So if you try just sitting for 10 minutes with absolutely no thoughts,
just being in the moment,
you'll find that thoughts just come up and they invade.
And if you let them go and go back into quiet,
you'll see the thoughts just keep coming back up.
But when you actually go into a space where you do let go of thoughts, and it's utter silence, then it starts to appear that those thoughts are part of a defense mechanism, that hiding behind them is a good deal of fear of the unknown. One thing about our species is that we build models of our environment, and we rely on our models to protect us, right? We predict. Some creatures don't build models, or don't build very sophisticated models, of their environment. Their strategy is, like with some bacteria, you just multiply and multiply and multiply, and that's how you survive, just vast multiplication. Our species is different. We have very few kids, and we have a big cerebral cortex and frontal lobe that's building models of our environment, and we're simulating what will happen. So if in the simulation something goes wrong, we see, oh, I shouldn't do that. Instead of sacrificing our bodies, we sacrifice our virtual body in the simulation, right? And we learn. So when you start letting go of all of your thoughts, you're letting go of this very deeply ingrained protective mechanism that we have of building models and analyzing things and looking into the future and running through scenarios and so forth. It's one of the great strengths of our species. It's all a big defense mechanism.
Do you feel like all our thoughts are defense mechanisms?
Well, no.
But it's pretty much like this. If we look at eating, we all have to eat. It's a necessary part of life to eat. But if you decide that you've had lunch and you're just going to do something else, and you find that you can't stop thinking about food and you keep walking over to the refrigerator, well, that's different. That's now something healthy that's turned into an obsession. It's a problem, right? And it's the same thing with thoughts.
Thoughts are great. We use them to go to the moon. We use them to build
all this technology. We use them to cure disease. They're wonderful. But if you find that when you
say, okay, I'm done, you know, thought is a great tool, but for the next 10 minutes, I would just
like to relax without any thoughts. And then you find you can't do it. You find that these thoughts invade and that when you let go of the thoughts
and you just go into pure silence, that you're afraid and you retreat back into the thoughts.
Well, that suggests that it's just like in the case of the food, right? There's a useful use of food, and it's very, very necessary.
But there's a case where it becomes an obsession and there's a problem.
Same thing with thought.
It's a great tool.
But when you can't let go of thought for 10 minutes and the thoughts invade
and you find that going into the silence triggers an innate fear,
then you know that there's something else that's going on.
And so in the meditation, it's a matter of, for me, facing the silence and the fears that come up without judgment, without condemnation, without trying, just being with that and just watching that whole amazing process. It's sort of finding out, wow, I didn't know that this was part of me.
I just was living in my thoughts.
I had no idea that it would be so hard to let go of thoughts.
So there's nothing wrong with thoughts, but when they're obsessive, then that's a different thing.
And in the case of the analogy with the fridge and the food, it's an obsession when it becomes maladaptive? Is that what you're defining as obsession?
Right, yeah. Your thoughts are always about the refrigerator and getting the next food, and you find yourself gaining tons and tons of weight, to the point where you have heart disease and things like that. At some point you realize, okay, food is necessary, but too much food and too much thinking about food is a sign of something maladaptive.
Okay. You also mentioned in The Case Against Reality that illusions are a failure to guide adaptive behavior. I don't know if you mentioned that quoting someone else or because you believe it. Either way, do you agree with that statement or disagree?
Yeah, I do agree with that. And it was sort of a statement that I was making in contrast to what most of my colleagues in perception science would say. Most of my colleagues would say that an illusion is a perception that fails to match reality. So I see a stick in the water, it looks bent, but the stick isn't bent, so my perception fails to match the reality. That kind of thing. And I'm suggesting on evolutionary grounds, well, I'm claiming
that the theory of evolution entails mathematically that the probability that any of our perceptions
in any way match the structure of reality is zero. In other words, none of our perceptions
tell us truths about the structure
of objective reality according to evolutionary theory. That's just a theorem.
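The flavor of this claim can be illustrated with a toy simulation, not the theorem itself. If the payoff is non-monotonic in a resource (a moderate amount is best), an agent that perceives payoffs outcompetes one that perceives the true resource quantities. The Gaussian payoff and the two-patch setup are assumptions of this toy, made in the spirit of the argument.

```python
import math
import random

random.seed(1)

def payoff(resource):
    """Non-monotonic fitness payoff: a moderate amount of the resource is
    best (think water or salt). The Gaussian shape is an assumption of
    this toy example."""
    return math.exp(-((resource - 0.5) ** 2) / 0.02)

truth_total = fitness_total = 0.0
for _ in range(10_000):
    a, b = random.random(), random.random()  # true resource in two patches
    # "Truth" perceives the resource quantities and takes the larger one.
    truth_total += payoff(a if a > b else b)
    # "Fitness" perceives only the payoffs and takes the larger payoff.
    fitness_total += max(payoff(a), payoff(b))

# Tuning perception to payoffs beats seeing the true quantities:
assert fitness_total > truth_total
```

Whenever the patch with more resource has the lower payoff, the truth-seer loses ground, which is the intuition behind the fitness-beats-truth claim.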
Are you aware of the various notions of truth such as correspondence and deflationary and
pragmatic? Sure, absolutely. So most of my colleagues are taking a correspondence notion. They're saying that I see a red apple because, in fact, in objective reality, there is something with that shape and that color that would continue to exist even if no one were looking. It's not perfect correspondence, we don't see all of the structure of objective reality, but the idea of truth that I'm talking about is this: reality, whatever it is, has certain structures to it, topologies perhaps, group structures, metrics, whatever it might be. Whatever the structures of reality are, our perceptions then tell us truths about
those structures. Some of the structures of our perceptions are matching the structures of
objective reality. The structure of the shape of the apple that we perceive is homomorphic to the
structure that the apple really has, and so forth. And that's what I'm saying evolutionary theory contradicts.
Now, are you taking more of a pragmatic stance on the definition of truth?
I'm not. So what's interesting is when I talk about...
If you don't mind, just for the audience, do you mind defining the difference between correspondence and pragmatic, and then where you differ from pragmatic? As far as I can tell, it sounds like yours is pragmatic because it's about adaptive behavior. So I'm curious to know where the difference is.
Right. So in the correspondence theory of truth, the idea is that there is an objective reality that has a definite state, a definite structure.
And to see truly, to perceive truly, is to have the structures of your perception be homomorphic to at least some of the structures in objective reality.
So to preserve those structures.
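What "homomorphic" means here can be shown with a standard textbook example of a structure-preserving map (the logarithm; this is a generic illustration, not Hoffman's own mathematics):

```python
import math

# A homomorphism is a structure-preserving map. Standard example: log
# carries the positive reals under multiplication to the reals under
# addition, so combining-then-mapping equals mapping-then-combining.
a, b = 3.7, 12.5
lhs = math.log(a * b)            # combine in the source structure, then map
rhs = math.log(a) + math.log(b)  # map, then combine in the target structure
assert math.isclose(lhs, rhs)    # the map preserves the operation
```

The correspondence theorist's hope is that perception preserves reality's structure in this sense; Hoffman's claim is that evolution gives no reason to expect such a map.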
A pragmatist would say that's not possible. There are no true perceptions of that kind.
All we can ask of our perceptions is: do they work? Do our concepts work? Do our perceptions work? And what I'm saying is that, from an evolutionary point of view, the theory of evolution
gives us no reason to believe that our cognitive or perceptual systems were shaped to give us the correspondence
kinds of truths. No reason. Now, that doesn't mean that evolutionary theory is true.
My attitude about scientific theories is very, very pragmatic in the following sense.
I don't think that any of our scientific theories are true,
including mine.
But they're the best tools that we have so far.
And so what we do as scientists is to very carefully study these tools.
We study evolution by natural selection.
We study general relativity.
We study quantum field theory.
And we see what these theories entail.
Not because they're the final answer. They're not. I don't think any
good scientist thinks they're the final answer. They're the answers we've got so far, the tools
that we have so far. So I'm saying that evolution by natural selection absolutely entails that none
of our perceptions correspond to a true structure of an objective reality.
Now, does that mean that I'm a pragmatist?
No, I'm just telling you what evolutionary theory entails.
That's all.
So I'm just being very, very hard-nosed.
This is what the theory entails.
We've got theorems about it.
If someone wants to argue, I can show the theorem, and we can argue about the theorems.
But now, once I've said that about evolutionary theory, I can step back and say, is there a deeper point of view that I would like to take, in which I still argue for a
correspondence theory of truth? And I'm absolutely open to a deeper point of view, in which there is
a correspondence theory of truth. Whatever that deeper point of view is, though, and this is the interesting and tricky part about it, whatever deeper point of view I take, it better be the case that I can explain why, within space and time, evolution by natural selection and those theories entail only a pragmatic view of perceptions, not a truth view. So if I'm going to take a deeper point of view in which I say there is a correspondence
theory of truth, then for that to work I have to be able to explain why such a
successful theory as evolution by natural selection, and it's spectacularly
successful, why that theory entails a pragmatic view of our perceptions
and our cognitive abilities.
They're all pragmatic, not about truth.
And so that's sort of the subtlety of this point of view.
A lot of people will contradict or critique what I'm saying on two counts.
They'll say first,
how in the world can you say that we don't see the truth about the world?
Do you know what the truth about reality is?
If you don't know what the truth about reality is,
then how could you ever know that we don't see it?
Okay.
That's the first argument. And the reply is,
and this is the power of mathematical models like evolution by natural selection,
that mathematical model entails that whatever the structure of objective reality is,
we don't need to propose that we know what the structure of objective reality is.
Whatever it is, the probability is zero that natural selection would shape us to see it.
So the nice thing about that answer is I don't need to guess or know what
objective reality is to know that evolution by natural selection entails
that whatever the reality is, we don't see it. We don't perceive it. Okay. So the first objection is a very natural objection, by the way: how could you know that we don't see reality, since you don't know what reality is? Well, the answer is the mathematics is so sophisticated. It's allowing you to say that, whatever reality is, you don't see it, without ever claiming to
know what reality is. The second point then is that, you know, often people will say, well, you know, so you've just
said that none of our perceptions, none of our cognitive capacities are about the truth. And
then you try to come up with a model of objective reality, right? This theory of conscious agents
and so forth. Well, and indeed I am. And that's because I don't take evolution by natural selection as the final word. It's just a
tool. That tool entails a pragmatic view. Great. That's what that tool entails. I would like to
get a deeper view in which consciousness is fundamental. And from that deeper view, show how
space-time emerges as an interface representation of this deeper reality and show within that interface why
the dynamics of this deeper realm looks, at least in certain cases, like evolution by natural
selection and why it looks like the kind of structure that we see in evolutionary theory.
So evolution by natural selection will be a constraint on any deeper theory of reality that I propose.
Any deeper theory, when it projects into space-time, it better look like general relativity, quantum field theory, evolution by natural selection, when you project it into space-time.
Or improvements on those theories.
But not less than those theories.
You know, it has to explain everything those theories explain and more within space-time.
So, but, and this is maybe subtle for those who aren't scientists, but this is just standard.
I mean, what I'm saying here is just the standard view of scientists.
We take our theories not as the truth,
just the best accounts that we have so far,
the best tools that we have so far.
A good theory will tell you where it stops,
where it fails.
But no, and by the way,
general relativity and quantum field theory
very, very clearly tell us that they fail,
that space-time itself is only an emergent concept. It cannot be fundamental. And so the very foundations of quantum field theory, which is fields on space-time, and the very nature of general relativity, which is the dynamics of space-time, can't be fundamental.
So those theories are good enough to tell us that despite their successes,
they're not the final answer.
There's something deeper.
But they can't tell us what that deeper thing is. And that's where the creativity of the scientific endeavor kicks in.
It's up to the scientists to take a leap of
imagination. What deeper notion of reality, what deeper theoretical
structures could we come up with that would project down into what we call
space-time? And in that projection they would look like general relativity, they
would look like quantum field theory and evolution by natural selection. What's the deeper dynamics
in this deeper realm that would project into that? And so there, our current theories can only
give us very, very loose guidance. There's a lot of creative imagination that then is required to go into this deeper realm. And that's where the most creative scientists, of course, want to go. It's great fun
when we find a hole in our current theories and get the chance to discover something new.
And so that's the fun part of it. And when it comes to space and time,
I've heard you describe
it as a data compression tool for us conscious agents. Why do we need, as conscious agents,
data compression? Why is there a limit to how much we can process? Well, that's going to be
a very interesting thing for this deeper theory to try to explain. Why is there this limit?
It may be that all conscious systems are necessarily finite.
And that's one theory that I'm playing with,
that the realm of conscious agents may be such that
there is no bound to the complexity that a conscious agent can have, but it's always finite. So it's as big as you wish, but always finite. In which case, no matter how complex you are, compared to all the possible complexity, you're measure zero. Your probability is zero, right? But there may be a really, really deep point of view in which consciousness may be dividing itself into sort of subconsciousnesses, which are sort of different perspectives that are looking at the whole.
And these different perspectives would necessarily then have limited information about the whole.
They'll only be able to see through an interface, but necessarily, right?
It's sort of the kind of thing like, right, in the Twitterverse, there's tens of millions of users, billions of tweets. I could talk with one or two Twitter
users extensively and follow all of their tweets. Maybe if I'm really into it, maybe
a hundred of them and follow all their tweets, but there's 10 million or more Twitter users. So there's no way,
there's absolutely no way. So if I want to get a feel for what's happening in general in the
Twitterverse, I'm going to need to have some kind of data compression tool, some visualization tool.
Right. We see this all the time. Whenever there's big social media data, we use visualization tools
to see what's going on. And that's what I think space-time is. Space-time is our headset.
We've mistaken the headset for reality. We've assumed that space and time are fundamental, or space-time is fundamental. We've assumed that that's objective reality, and it's just a rookie mistake. It's our headset, it's our visualization tool, and what's behind it is this vast network of other conscious agents. Now, it's one thing for me to say this. It sounds, you know,
interesting, but where's the beef? And where the beef is going to have to come from
is a precise mathematical description
of the dynamics of conscious agents. So it's for those interested in the math, it's dynamics on
networks. So it's network theory. So the kind of stuff that you use when you're studying like the
internet and the dynamics of information flow on the internet and so forth. It's that kind of mathematics.
So it's proposing a specific mathematical dynamics on a network of conscious agents.
Then the way I think it'll go is that
the asymptotic behavior of that dynamics,
which is, that just means,
you can't see every single tweet,
you can't interact with every single user,
but you can see the long-term trends.
What are the things that are trending long-term?
What are the big picture kinds of,
that's what I mean by the asymptotics.
What happens if you look at it over a long period of time?
What are the patterns over a long period of time?
And I think that that is what is behind space-time and modern physics. What they're seeing, what the physicists right
now, when they try to get structures beyond space-time, you know, they're trying to take
the next step beyond space-time, they are seeing some interesting structures called positive
geometries. Amplituhedra, sociahedra, and so forth. These positive geometries.
The direction I'm going is to show that those positive geometry structures that they're
coming up with are representations of the asymptotic behavior of this dynamics of conscious
agents.
And if I can do that, then I will be able to use their work and pull the thread all
the way from a theory of conscious agents and their dynamics through the asymptotics of that into these
positive geometries that they found and then they tell you how to go from that to the actual
prediction of scattering events like at the large hadron collider so that's my goal the beef will be
if I can start with a theory of consciousness,
look at its asymptotics and show that the asymptotics of that theory of consciousness through this route that the physicists have found already leads to the scattering amplitudes
that we can then test at the large hadron collider. That's where we have the beef.
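The idea of reading off the long-run trend of a dynamics on a network can be sketched, in a heavily simplified form, as the stationary behavior of a small Markov chain. The three-node chain and its transition probabilities below are invented for illustration; this is not Hoffman's actual conscious-agent dynamics.

```python
# Row-stochastic transition matrix over three "agents"/states.
# The numbers are invented purely for illustration.
P = [
    [0.5, 0.4, 0.1],
    [0.2, 0.5, 0.3],
    [0.3, 0.3, 0.4],
]

def step(dist, P):
    """One step of the dynamics: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Start all the mass on one node and iterate to find the long-run trend.
dist = [1.0, 0.0, 0.0]
for _ in range(1000):
    dist = step(dist, P)

# Start somewhere else entirely: the asymptotics come out the same.
other = [0.0, 0.0, 1.0]
for _ in range(1000):
    other = step(other, P)

assert all(abs(x - y) < 1e-9 for x, y in zip(dist, other))
```

The individual steps (single tweets, single interactions) wash out; what survives is the stationary pattern, which is the sense of "asymptotic behavior" in play here.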
Space and time aren't fundamental in your theory. Is causality fundamental?
Certainly not causality in space and time, right? So when we think about causality, we think about, for example, the cue ball hitting the eight ball and making it careen into the corner pocket, or we think about neurons in the brain causing our behaviors and causing our experiences. And so I flat out deny that anything in space-time has any causal powers.
Can you help me? Something I'm working on is a definition of causality.
It turns out it's extremely elusive, even though it seems like it's intuitive.
The closer you look, the more slippery it gets.
How are you defining causality in your theory?
So you're absolutely right.
In fact, there's a handbook of causality.
A bunch of philosophers and others talking about causality,
the Oxford Handbook of Causality or something like that.
And if you read through it,
there are many, many different views on causality.
And there is no agreed upon universal definition of causality.
Mathematically, in computer science, when we talk about causality, some of the best work there is based on the notion of directed acyclic graphs. So Judea Pearl, professor at UCLA, and his students and many collaborators have been modeling causality, whatever it is, as directed acyclic graphs. But I think that even Judea Pearl doesn't hazard a definition of causality, right? It's much like, if you're in geometry, how do you define a point? Well, it's sort of a primitive concept. But two of them define a line, right? So for many scientific theories, the notion of causality is a primitive. It's a place where
explanation stops. And that's not a unique problem to the notion of causality. Every scientific theory must stop at some point.
It can't explain everything.
There will be some primitive notions, and this is unavoidable.
So there are no scientific theories that explain everything.
Every scientific theory will have some handful,
hopefully as small as possible, of primitive
notions where you just say, you know, like if you have a little kid that says, but
why is that?
And you tell them and say, but how come that?
And so at some point you just say it just is, right?
Your theory just stops and says, that's just the way it is.
And so for causality, you know, we tend to think of causality, well, if I can intervene in some system and then things change as a result of my intervention, that's a proof of causality.
But that turns out not to work.
You can get counterexamples to that.
What about causes always precede their effects?
Well, you can set up thought experiments where the effects precede the causes, but where there's...
Yes, right. So that's the reason why, I forget the person's name, Pearl, you mentioned?
Judea Pearl, right.
That's why he said acyclic, because he doesn't want the effects to precede the causes, or because he doesn't want it to get to some place where you cause yourself.
Right.
Yeah, once you get cycles, it gets pretty complicated, and the notion of
causality would start to slip through your fingers.
Things where in everyday life you see cycles, you can unwrap as an ever-extending directed acyclic graph. You can just take the time parameter: instead of cycling back, you go to new states downward. So there's a way to avoid cycles, even in what we in everyday life think of as cyclic behavior. So I would say that causality, you're right to pick on it. The best and brightest, when I look at, I think it's the Oxford Handbook of Causality, wonderful articles, brilliant thinkers, and they disagree. They don't know. So there's no received opinion about what causality is.
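The "unwrapping" move Hoffman describes, turning an everyday cycle into an ever-extending DAG by indexing each state with a time step, can be sketched as follows. The tiny two-node feedback loop is invented for illustration.

```python
# A feedback loop between two variables: not a DAG as it stands.
cyclic_edges = [("A", "B"), ("B", "A")]

def unroll(edges, steps):
    """Each edge X -> Y becomes X_t -> Y_{t+1}: time only moves forward,
    so the unrolled graph is acyclic by construction."""
    return [(f"{x}_{t}", f"{y}_{t+1}") for t in range(steps) for x, y in edges]

dag = unroll(cyclic_edges, steps=3)
# e.g. [('A_0', 'B_1'), ('B_0', 'A_1'), ('A_1', 'B_2'), ...]

def is_acyclic(edges):
    """Kahn's algorithm: a graph is a DAG iff it can be fully
    topologically sorted."""
    nodes = {n for e in edges for n in e}
    indeg = {n: 0 for n in nodes}
    for _, y in edges:
        indeg[y] += 1
    frontier = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while frontier:
        n = frontier.pop()
        seen += 1
        for x, y in edges:
            if x == n:
                indeg[y] -= 1
                if indeg[y] == 0:
                    frontier.append(y)
    return seen == len(nodes)

assert not is_acyclic(cyclic_edges)
assert is_acyclic(dag)
```

This is the same trick used in Pearl-style dynamic causal models: the cycle disappears once causes are stamped with the time at which they act.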
Is there one that you feel like
is getting close to the truth?
I'll tell you why I ask.
All the ones I saw there, I think that they all were touching different parts of the elephant
in interesting ways, right?
But it may be that ultimately causality is one of these primitives, like points in geometry.
Okay, when you say that reality is innately objective, so there's the W, the world state, and X, G, the experiences, so that's like the conscious agent, then I assume what you're saying is objective reality is the W, the world state, and the agents, right? Then later you do this, I love this, you then take, well, W could also be composed of different conscious agents, and then we can have a string of them. Why are you calling that objective? To me, and this just might be semantics, I hear that as subjective, given that it's a conscious agent, that is, it's a subject all the way through the chain, right? If you were like, hey, there's one world, or there's a few, but W is not dependent on other causal, sorry, conscious agents, then I could see using the term objective. Why is it that you still use the term objective when we can extend the graph to just conscious agents?
Right. So
the proposal is that what the universe is fundamentally is a vast network of
interacting conscious agents.
So when I say that that's the objective truth,
of course,
each agent is a subject,
right?
Each agent is a conscious entity. But what
I'm saying is that even when I'm asleep, the network still exists. That's all. So even if I'm sound asleep and I'm not having any conscious experiences or anything like that, the I that's talking to you right now, that network of conscious agents, is just like a Twitter user. Whether or not I happen to be reading tweets or sending tweets, the Twitterverse exists. It's out there, even though really the Twitterverse is just a bunch of other subjects, people tweeting.
Intersubjectivity implies the objectivity?
That's right. And specifically the fact that it still exists even if I don't perceive it.
Okay, we're going to get philosophical. I was going to get to this later, but now that we're
here, what's the I in this conscious agent model?
Right.
So in the conscious agent model, the notion of a self is not fundamental.
So conscious agents are just these mathematical structures that have experiences,
make decisions, and then act on those decisions.
But the notion of a self is not fundamental.
A self is something that a network of conscious agents can build.
So it can be an experience that a network...
Okay, I might stop you every once in a while to clarify just for myself.
You're saying that a self is a conglomeration of conscious agents?
It's a data structure that they build. Does that mean that right now I'm talking to Donald, you're not one conscious
agent? I'm speaking to quite a few? Probably a countless collection of conscious agents.
Uncountable. Well, I don't know if it's uncountable, but I certainly couldn't
count them. But as you know, uncountable is a technical term in mathematics. So I just want
to make sure people who know that, I'm not saying that it's uncountable in that sense. It may be,
but I'm not claiming that. But it's certainly not countable by me. So the idea is that, for example, we know that there are at least two different subjects in me.
We can see that if I take a knife and cut the corpus callosum. I have a left hemisphere and a right hemisphere. You do too. There's a band of fibers called the corpus callosum.
Hopefully I do. Sometimes I wonder.
225 million fibers.
And if you cut those fibers, it turns out you can actually get clear evidence, in the few people this has been done with, that there's a different personality in the left hemisphere than the right hemisphere. In one case, the left hemisphere believed in God; the right hemisphere was an atheist. The left hemisphere wanted to have a desk job; the right hemisphere wanted to be a race car driver. And they can even fight. The left hemisphere controls the right hand; the right hemisphere controls the left hand. And they can fight: the left hand can be fighting what the right hand is doing. Trying to cook, the left hand might destroy an omelet that you're making. You've tried to put on some clothes, and the left hand might be taking off the clothes the right hand is trying to put on. So these are two different consciousnesses, and I'm saying that's the tip of an iceberg. There's this whole lattice of conscious agents that together cooperate to create the conscious agent that's me. But the self that I have, the sense of myself with my personal history and so forth, is a data structure that I create.
Okay. You know, I feel like I'm slipping up, because I don't know. And this is another one, by the way: causality is elusive. Identity is also one of those where the closer you look, the more liquid it gets. Are you using self and identity synonymously?
Well, so no. I'm thinking about self in the colloquial sense that most of us think about: I am so-and-so, here's my name, this is my personal history, this is what's important to me, these are my goals. That kind of self that we experience ourselves as, the thing that we're afraid of losing in death and so forth. That is a data structure. Now, again...
That's interesting.
You're saying that the self is also a data structure.
Yes, that's right.
Okay, sorry, continue.
At least in this theory that I'm working on,
and I can explain why we're going that direction if you want,
but the notion of a self...
Right, so there are certain views of the self on which there's no such thing as a persisting self. This is called the pearl view: it's like a number of pearls on a string, so there's a different pearl, a different instantiation of the self, at each moment along the string.
There are certain Eastern mystical traditions, but also certain Western philosophers, who take this point of view: that you don't have one persisting self, but a sequence of selves. That was William James' point of view.
But the pearl analogy would imply that there's a continuum. There's a string.
There's a string. And for James, James said the thought itself is the thinker, which is interesting.
I don't know exactly what James means by that, but he was saying that there's no thinker, no self beyond the occurring thought right now.
But the thought itself is creating the illusion of a persisting thinker, of a persisting self.
Now, many Western religions, on the other hand, will take just the opposite point of view.
They'll say there is, of course, a self and it's eternal.
And what you do here determines whether that self will go to heaven or to hell or something like that.
So you can see there's a lot riding on this notion
of self, and there are widely differing points of view. And the reason, in this theory that I've been working on... By the way, when I say "my theory," of course, I'm not alone. I've got a bunch of wonderful collaborators: Chetan Prakash, Bruce Bennett, Manish Singh, Chris Fields, Robert Prentner, Federico Faggin, a bunch of really brilliant people. So when I say my theory, you know, that's in quotes.
And also when you say "my," you're implying a self.
A self, that's right.
But you're questioning, at the same time, the self itself as well.
Yeah.
Sorry, continue.
It's so tricky to talk about this, but anyway.
That's right. But it's fun when our everyday assumptions, that we have never questioned or in many cases just assumed... I think I find it really refreshing to have everything that I thought I knew blown away. I mean, it's really quite fun. It's maybe an acquired taste, you know, but it's an interesting thing to be surprised. What I thought I knew, I don't know. But in the case of the self, when I was trying to come up with the theory of
consciousness, as a scientist, what you try to do when you're creating a theory is you're trying to
put as few assumptions as possible up front. Because every assumption you build in is something you're
not explaining. It's something you're assuming. So you want to assume as little as possible.
So in the theory of conscious agents, I don't, I mean, of course I could have said, okay,
they also have a self and they have an eye and so forth. But I decided not to do that because
it turned out if I just said there are these entities,
I'll just call them formal structural entities called conscious agents,
they may not at all be aware of themselves as entities.
So they may not have any self-awareness at all.
They just have conscious experiences.
That's all they have.
They make decisions and they take actions, but those actions,
all they do is affect
the conscious experience of the other entities. That's it. There's no self. There's nothing like
that. There's no memory, by the way. There's no intelligence, nothing. It's just experiences
leading to decisions.
At the fundamental, atomic level.
That's right. The very foundational level. And I do this on purpose.
The goal is to make the foundations of your theory as sparse as possible.
That's just what we do as scientists.
As few assumptions as possible.
Sure, I could have thrown in a self.
I could throw in the kitchen sink.
Let me throw in intelligence.
Let me throw in just, well, there's nothing left to explain. I've just, I've just assumed it all right there. There we go.
Well, there's no theory there. So the reason I stopped where I stopped was a very nice theorem. By writing down a set of experiences, conscious experiences, and a so-called Markovian kernel, which is a way of talking about the probabilities of the different decisions that you could take, these structures are computationally universal. Anything that can be simulated by any computer, quantum or otherwise, I believe, can be simulated by this network of conscious agents. So even though my foundations are incredibly sparse,
there are agents which have experiences and make choices based on those experiences. That's it.
Networks of agents can provably compute anything that's computable at all, period. So because I
know in cognitive neuroscience, for example, that we have mathematical models that are very, very good at problem solving, intelligence,
creating all sorts of structures. I can build networks of conscious agents that can do anything
that cognitive neuroscience can do right now. So that's why I stopped. I don't need anything more.
I can build models of the self out of networks of conscious agents.
So there's no reason for me to assume them. I should force myself to build them from networks
of conscious agents. And so that's the fun part of this. So what we've got is essentially the
tinker toys of consciousness. Now we can build up any structure that we want, provably.
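The structure Hoffman has just described, agents with an experience space and Markovian kernels for decisions and actions, can be sketched as a toy simulation. This is only an illustrative discretization; the agent names, spaces, and probabilities below are invented for this sketch, not taken from the formal papers:

```python
import random

# Toy sketch of a conscious agent: a finite experience space X,
# a decision kernel D (experience -> distribution over actions), and
# an action kernel A (action -> distribution over the partner's next
# experience). All values here are illustrative.

def sample(dist):
    """Draw one outcome from a {outcome: probability} dict."""
    r, acc = random.random(), 0.0
    for outcome, p in dist.items():
        acc += p
        if r < acc:
            return outcome
    return outcome  # guard against floating-point rounding

class Agent:
    def __init__(self, decision_kernel, action_kernel, experience):
        self.D = decision_kernel
        self.A = action_kernel
        self.x = experience  # current experience

    def step(self):
        """Decide based on the current experience, then act: the action
        induces a distribution over the other agent's next experience."""
        action = sample(self.D[self.x])
        return self.A[action]

# Two "one-bit" agents: experiences {0, 1}, actions {"a", "b"}.
D = {0: {"a": 0.9, "b": 0.1}, 1: {"a": 0.2, "b": 0.8}}
A = {"a": {0: 0.7, 1: 0.3}, "b": {0: 0.1, 1: 0.9}}
alice, bob = Agent(D, A, 0), Agent(D, A, 1)

random.seed(0)
for _ in range(5):
    # Each agent's action perturbs only the other's experience.
    bob.x = sample(alice.step())
    alice.x = sample(bob.step())
    print(alice.x, bob.x)
```

Nothing in the sketch has a self, memory, or intelligence: each agent only has experiences, decisions, and actions, matching the sparse foundation described above.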
Going back to this D and A and P model, let's forget about the P, because it's just conscious agents. The decisions and the actions are stochastic, so they're Markov kernels. Where is the free will there?
Right, so of course this raises a deep question about the relationship between mathematics and the thing that you're modeling with the mathematics, right?
So all I've got are measurable structures that I say are conscious experiences, spaces of conscious experiences, right?
And then I've got these Markovian kernels, which I say are modeling the choices.
And you can say, well, you know, I just see, I see probabilities there, but I don't see any free will.
And so it's a matter of how I interpret those probabilities.
One could interpret those probabilities as pure objective chance.
And I think many of my physicalist colleagues would say,
if you see probability somewhere,
then if it's not due to ignorance,
if the probabilities are not due to my ignorance,
then it's due to an objective chance in nature.
But I want to be a monist in my theory. I don't want to be a dualist. If I'm going to say that consciousness is fundamental, then I don't also want to propose, as a dualist, that there are other sources of novelty coming into the universe outside of consciousness. I want consciousness to be the only source of novelty in this universe. Objective chance: what do we mean when we say objective chance as a physicalist?
So I suppose I'm a physicalist.
I believe space-time, say, is fundamental,
and I believe that there are things called objective chance
that are not just due to my ignorance.
With these probabilities of objective chance, I'm saying there is a source of novelty coming into my universe that I cannot explain.
There is where a miracle is happening. So I'll be very, very clear. Objective chance is a miracle. It's in the following sense. You're
saying my theory has no resources to explain those outcomes. I cannot predict those outcomes.
That's why I'm calling it objective chance. Well, that, reinterpreted and re-explained in the language of your theory, means you have no resources to explain what's happening. So in terms of your theory, that's a miracle. It's unexplainable. Now, every theory, and this is not a put-down, every theory has its miracles. We just have to own that. Every scientific theory.
Right, right. And you're using miracles as a synonym for axioms?
That's right.
It's a miracle for your theory.
If your theory has to assume it, it can't explain it.
Okay.
Right?
Okay.
Now, when you have probabilities, there are two kinds of miracles that you can imagine.
If it's not just due to lack of knowledge, right?
So sometimes you have probability just because there was a true state of affairs, but you don't have enough knowledge. That's a different kind of thing. But when it's a fundamental probability,
you can either ascribe it to the miracle of objective chance or free will.
They're both equally miracles. But I, since I'm trying to have a monist theory,
I interpret probability not as an objective chance, but as free will choices. And then it's,
you know, up to me then in the mathematics to unpack what does that free will choice idea
amount to. And it's really quite interesting, because this is a network dynamics. Each agent at each level of the dynamics is making its own contributions to free will choices. And there are bottom-up influences: the free will choices of lower-level agents influence higher agents, and vice versa. And the whole system of conscious agents is itself an agent, by the mathematics. The mathematics says collections of agents also form an agent. And so what you get is this really interesting analysis-and-synthesis approach, where I can get a dynamical model of free will, bottom-up and top-down at the same time, and that would be my model of the free will choice of the one single agent. So it's again forcing me to do the work of saying what, ultimately, do I mean by free will, by assigning it only to local agents and then saying what do you mean by free will when you have agents interact?
Right, right, right.
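Hoffman's claim that a collection of agents is itself an agent can be illustrated in the same toy spirit: the joint experience space is the Cartesian product of the component spaces, and a joint Markovian kernel lives on that product. For simplicity this sketch assumes the components evolve independently, which is far weaker than the interacting combination in the actual mathematics; all structures below are invented for illustration:

```python
from itertools import product

# Two component agents with experience spaces X1, X2 and transition
# kernels T1, T2 (experience -> distribution over next experience).
X1, X2 = [0, 1], [0, 1]
T1 = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
T2 = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.5, 1: 0.5}}

def combine(T1, T2, X1, X2):
    """Joint kernel on X1 x X2 (independent components assumed)."""
    return {
        (x1, x2): {
            (y1, y2): T1[x1][y1] * T2[x2][y2]
            for y1, y2 in product(X1, X2)
        }
        for x1, x2 in product(X1, X2)
    }

J = combine(T1, T2, X1, X2)

# Every row of the joint kernel sums to 1, so the pair of agents is
# again a Markovian "agent" on the product experience space.
for row in J.values():
    assert abs(sum(row.values()) - 1.0) < 1e-9
```

The point of the sketch is only that the combination is again the same kind of object: an experience space with a Markovian kernel on it.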
So a question I have for you, let's imagine we have this graph
and it's extended and they're all conscious agents.
Okay, we pick one of them, and there's free will associated,
and they influence one another.
Does that mean that, from sheer will, one can drastically change the game for the rest?
There may be cases where that can happen, but it looks like
every agent makes its contribution.
And it's almost like a symphony, right?
Every member of the symphony makes their contribution to the whole.
There may be a conductor, but it's an organic whole.
Where would the conductor be in this model?
Well, that's the thing.
There would only be like relative conductors that the agents above would have sort of top down influence,
but the agents below have really strong bottom up influence on the free will decisions. So
it's hard for me to say at this point that one is more important than the other.
Yeah. And I'm having a difficult time when you say that one is above another, because as far as I know, just from reading the papers, and I could be mistaken, it's not as if one node is privileged over another.
Right, so that's right. When I say above, it's like my two hemispheres in my brain. When the corpus callosum is intact, there seems to be a unified me that I call Don, right? But if you cut the corpus callosum, you could easily, it turns out, and this is how strange it can get, they were able to get these split-brain patients to play 20 questions with each other, with themselves.
The right hemisphere has a word in mind. The left hemisphere
doesn't know what it is. And then the left hemisphere, which can talk, can start asking
questions. Is it an animal? Is it a vegetable, a mineral, things like that. And the right hemisphere,
which can't talk, but can understand language, can use, say, the left hand to thumbs up for yes, thumbs down for no. And so these, a split brain patient
has two separate spheres of consciousness, so separate that they can play 20 questions and the
left hemisphere doesn't know the answer, right? The left hemisphere does not know, and sometimes
will fail to guess in 20 questions, what's in the mind of the right hemisphere. Now you might,
some people argue that there are no separate personalities, no separate consciousnesses. Good luck making that point when you know the left hemisphere can fail to win a 20 questions game with the right hemisphere, even giving it its level best. And I'm saying that's just the tip of an iceberg,
but it goes all the way down to what I call these one-bit agents that have just basically two conscious experiences, say red or nothing or something like that.
And so it's in that sense when I talk about agents being above.
And whatever agent is me that's talking to you seems to be a combination of my left hemisphere and my right hemisphere agents. If we split my corpus callosum, that higher-level agent apparently would be gone. And there would be two different agents
that could now play 20 questions with each other and lose, lose the game of 20 questions with each
other. Whereas I couldn't play 20 questions with myself right now and lose. If you told me something,
I would know, my left hemisphere would know, and I would know.
So that's the sense in which I talk about agents being higher and lower.
Agents can combine to create new agents.
Are you familiar with Douglas Hofstadter's work?
A bit.
What do you think of his theory of consciousness? It sounds similar to what you're describing, in the sense of a strange loop. Well, you can do a much better job of describing it than I can.
So there's a fundamental difference in the
approach that we're taking. Hofstadter is brilliant, right? He's a
world-class genius. He's going after the notion of self-referential systems.
And those have really interesting properties, right? If I have a sentence of the form,
this sentence is false, that the sentence is referring to itself by saying this sentence
is false, the sentence has got a strange
loop. It's got a self-referential loop in it. And then you get all sorts of weird things that
happen because is that sentence true? I mean, the sentence says the sentence is false. Well,
if it's true, then it's false. But if it's false, then it's true. And you get caught in this weird thing.
And so, I mean, Hofstadter has a lot of brilliant ideas,
and I don't want to claim that I'm doing justice to them, but the key is that there's an essential role
to this self-referential kind of structure
that we see in that kind of sentence, "this sentence is false," and that you get in the work of Kurt Gödel when he proves his incompleteness theorems and so forth.
There's this really clever use of mathematics to refer to itself and get self-referential
kinds of loops. And his idea is that these kinds of self-referential loops are somehow critical,
perhaps, to booting up consciousness,
to booting up what we call a self, and so forth.
And there's a fundamental difference in philosophy in the following sense.
You mean between you and Hofstadter?
That's right, and many of my colleagues who are taking a similar kind of approach, saying that consciousness is some property of computational systems, like this ability to have self-reference and so forth. But it could be other properties, some kind of other computational properties of systems. This is called the computational theory of mind and so forth. But the idea is that you start with systems that are not conscious.
But if they have certain interesting properties,
like self-reference or other computational properties,
for example, certain kind of integrated information,
like Tononi and Koch are looking at.
So they have certain kinds of computational properties
that they call high integrated information.
Then these unconscious systems give rise to consciousness.
Now, that's a fundamentally different point of view than the one I'm taking. I'm saying,
look, I'm not starting with a universe that's unconscious. I'm proposing at the foundation
that conscious experiences are fundamental.
That's where explanation stops.
I'm not explaining how conscious experiences arise.
I'm saying that they are what exists.
Just like a physicalist who believes in space-time and takes it as fundamental says,
I don't know where space-time comes from.
I'm assuming it.
I'm assuming that space-time and fundamental particles exist.
Every theory is going to make its own assumptions.
So we're all equal footing in that regard.
We all have miracles at the start.
The question is, what miracles do you pick?
And the reason I don't go with a physicalist starting point,
like a computational system that most of the time is going to be
unconscious, but if it has the right integrated information or it has the right kind of self
reference, then the magic of consciousness somehow emerges. The reason I don't go after that is
a couple of reasons. One, no one has been able to use that approach to predict exactly even how one
specific conscious experience could arise, right?
So how could self-reference or integrated information or
orchestrated collapses of microtubule states in certain neurons, be or cause my experience of the taste of vanilla?
What specific patterns of activity of the brain or computational system must be or must cause the taste of vanilla?
And why is it that it could not be the taste of chocolate or the smell of garlic?
Those theories have utterly failed to explain even one specific conscious experience. In other words, there's not
one success for even one specific conscious experience on the table. That is to say,
why does this brain state or this computational model correspond to this qualia and not another?
And does your theory deal with that?
So I choose a different miracle. Their miracle is: grant me computational
systems in space and time. I will predict qualia. Well, they can't predict qualia. So I've granted
them what they want and they can't give me what they've promised. They can't give me even one
specific qualia. It's a failure. And by the way, these are brilliant friends. They're brilliant. They're doing great work.
Absolutely, but I think that they've given themselves an impossible task.
You can't start with unconscious ingredients and boot up consciousness.
It can't be done.
So what I'm doing is I'm saying, look, they can't explain qualia.
No one's been able to explain conscious experiences.
I'm going to start with qualia.
I'm going to start with conscious experiences, the taste of garlic, the smell of chocolate,
all of these things. Those are the things I'm going to start with. So those are my miracles,
right? I'm not explaining them. I'm assuming them. Now, my goal is to say, grant me this model of that, my conscious agent model.
Then I will show you how space-time and quantum mechanics and general relativity and evolution by natural selection come out as interface representations of the dynamics of consciousness.
So, instead of starting with space-time as my miracle and booting up consciousness,
I'm going the other way.
Start with conscious experiences as my miracle.
I will show you how to boot up space-time.
Now, the winner will be whoever can explain the most with the least assumptions.
Right?
Right now, my colleagues, my good friends
who are physicalists are assuming space-time and computational systems, and they have yet failed
to explain even one conscious experience. I'm starting with conscious experiences that no one
can explain. I can't explain them either. I'm assuming them. But if I can explain how space-time
and quantum mechanics and general relativity come out, then I have fewer miracles than them, so that wins. Whichever theory has the fewer miracles is the one we should prefer.
That's sort of Occam's razor. In your theory, why is it that we would, let's say, taste vanilla? Why is it that we would have one experience, the taste of vanilla, versus the sight of tic mielmo, which is another conscious experience?
Well, I don't have an answer to that right now.
And the very existence of conscious experiences
is something that I don't explain.
It's something I assume, right?
So my colleagues, my physicalist
colleagues would like to say
just the opposite of what I'm saying. They would like to
say, we don't assume that conscious experiences
are fundamental. In fact, at
the Big Bang, there were no conscious experiences.
There was just space, time, and matter
and energy.
We will tell you how conscious experiences
emerge from that, but they fail
to do so.
I'm saying you guys assume space, time, and matter and energy.
I'm going to assume conscious experiences.
What I'm asking for is, I understand that you're assuming the various qualia, but why is it that one is chosen versus another? It sounds to me like a similar problem that the physicalists have, which is to explain why one qualia over another.
And I don't have an answer to that. That's a very, very good question.
And to really make that question intense, we can look at people with synesthesia, right? About 4% of the human population has synesthesia, and it may be 100% if you take LSD.
I want to talk about psychedelics, by the way. Sorry, continue.
Right. So there's this one guy, Michael Watson.
His synesthesia was that everything he tasted with his tongue,
he felt as a three-dimensional shape in space.
That's your friend?
He wasn't a friend of mine, but he was studied extensively by a neurologist.
He died.
He's dead now.
Okay. But he was a great cook. Everything that he tasted, he felt with his hands. So mint was a tall, cold, smooth column of glass. He could feel it. He could feel the weight of it. He could feel the coldness of the surface, the smoothness of the surface, with his hand. Angostura bitters, which is something you put in drinks, felt like a basket of ivy.
He could feel the leaves, the sponginess, the texture, the warmth.
So it was a very rich sensory experience that he had.
And so you and I, most of us, don't have those experiences when we cook. He did, and it allowed him to be an exceptional cook. He had this other way of dealing with complex tastes when he was putting foods together. So that raises your question really, really sharply.
Why is it that we experience, when we put something on our tongue, why do we experience it as what we call taste?
Saltiness, sugar, sweetness, and so forth.
Why don't we all experience it like Michael Watson did, as shapes that we feel with our hands in space?
Now, what's interesting is that mathematically, there is probably some kind of homomorphism that you could write down between the shapes and the textures and the weights and so forth that he felt with his hands, and the experiences of taste that we all feel. There's probably some homomorphism, some mapping, some dictionary, between what he felt with his hands and what we experience with our taste. And that's why your question is so difficult. In many cases, from a functional point of view, the details of exactly why you have that experience don't matter. Lots of other kinds of qualitative experiences could have the same functional structure as those experiences.
In fact, you count them in one of your papers. You count the number of homomorphisms.
That's right.
Which makes me wonder, like you raised, why this and not another, given that there are so many homomorphisms?
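The counting being referred to, how many structure-preserving maps (homomorphisms) exist between two structures, can be made concrete with a brute-force count on toy directed graphs. The graphs below are invented for illustration and are not the structures analyzed in the paper:

```python
from itertools import product

# A homomorphism f maps vertices of G to vertices of H so that every
# edge (u, v) of G lands on an edge (f(u), f(v)) of H.
def count_homomorphisms(G_vertices, G_edges, H_vertices, H_edges):
    count = 0
    for image in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, image))
        if all((f[u], f[v]) in H_edges for u, v in G_edges):
            count += 1
    return count

# A directed 3-cycle mapped into the complete loopless digraph on 3
# vertices: the edge constraints force all three images to differ,
# so the 6 permutations are exactly the homomorphisms.
G_v, G_e = [0, 1, 2], {(0, 1), (1, 2), (2, 0)}
H_v = [0, 1, 2]
H_e = {(u, v) for u in H_v for v in H_v if u != v}
print(count_homomorphisms(G_v, G_e, H_v, H_e))  # prints 6
```

Even these tiny structures admit several distinct structure-preserving maps, which is the point of the question: functional structure alone does not pin down which qualitative experience realizes it.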
So here's the best I've got so far on this, right? First off, I just wanted to say, hats off, you asked a really good question. It's a really tough question, and this synesthesia thing really makes it clear why your question is so tough, right?
The only idea I've got right now that I think is interesting enough
is based on Gödel's incompleteness theorem.
Great. Yeah, I'm going to ask you about that later.
Right. So Gödel's incompleteness theorem intuitively, basically, says that there's no end to the exploration of mathematical structure.
And if we take consciousness as fundamental, as I'm proposing,
then there's only one thing that mathematical structure could be about,
which is consciousness,
conscious experiences.
And so in that context, Gödel's incompleteness theorem I would interpret as saying that there's
an infinite variety of kinds of conscious experience that are out there, of possible
structures of conscious experience.
And for each kind of structure that there is,
there's an infinite variety of conscious experiences
that all share that structure.
Right?
So I would call this Gödel's candy store.
This infinite variety of possible structures
of conscious experiences.
And for each structure,
probably an infinite variety of conscious experiences
that share that structure.
So it's very, very rich.
And the goal of consciousness then on this theory would be to explore Gödel's candy store.
That's what consciousness is up to.
Gödel basically, his incompleteness theorem is saying, no matter how much mathematics you have, you haven't even begun.
No matter how much structure you've explored, you've not even begun, because there's endless, endless exploration.
In some sense, God could never know it all.
So God has endless, whatever you think about God, there's an endless structure for even God to explore, right? And so that endless exploration
of all the rich varieties, and it's, here I don't know, here I don't know how to think about it yet.
Is it an exploration of a pre-existing candy store, or is it an invention of all the new candies in this endless store? My feeling right
now is it's an invention that Gödel is telling us. Gödel's incompleteness theorem is saying
there's an infinite variety of structures to be invented. And then in the context of
consciousness, I interpret it as saying an infinite variety of conscious experiences to be invented.
I think it's invented, not explored in terms of pre-existing conscious experiences.
But there, I must say, my intuitions at this point fail me.
I don't know.
Okay, let's go even further off the deep end.
Sure.
You're completely allowed to speculate.
I'm not Michael Shermer. I'm not going to throw skepticism at you straight out of the gate.
Where in your model does God lie?
Well, the word God is something that has been used in various forms for thousands of years by human beings and has never been precisely defined, right? Never been precisely defined.
There's quite a few of those words: causality, free will, identity, God. Slippery, slippery. They all might be related.
But we don't kill each other over our differences about the word causality. We do kill each other over our differences about the word God.
And so that's a particularly troublesome problem that we have,
that we think that's a very, very important word.
And in many cases in human history,
we've proven that we're willing to kill millions of people
who disagree with us about what we mean by that word.
And the remarkable thing is that's when there's no precise definition.
So I would like to propose a precise definition of God. All right, let's hear it. And of course,
I'm wrong. Of course, I'm wrong. But the idea is to have something on the table so that we can now
begin to figure out why it's wrong. So I have this definition of a conscious agent.
I'll just propose that God is any conscious agent
that has an infinite set of possible conscious experiences.
Start there.
Now, of course, I'm wrong.
But now the fun part is where?
What's wrong with it?
That to me sounds like there could be multiple gods,
because you could have multiple agents with infinite sets.
That's exactly right.
So now, by the way, with that definition,
it may end up being a theorem that polytheism is true, not monotheism.
But notice all of a sudden that we get this thing where now, I gave you a definition of God, and you immediately saw an implication.
Try that with most other religious views of God.
They're not precise enough to have these kinds of inferential implications where we can actually…
What if someone said, okay, I can save monotheism by saying that God is the amalgam of all the conscious agents?
And that would be another way to go at it. And that might be the critique that you give of my first definition of God. My critique of my first definition of God would be to say, well, no, let God be the top conscious agent, the one that's the unit, you know, the combination of all the other agents that exist. But then it may turn out, as we look at the dynamics of conscious agents and their combinations, that at any moment there may not be a single maximum. So it may not be a lattice where there's one peak to the whole network. There may be multiple, and no single maximal agent at any one moment in the dynamics.
So again, it may turn out to be a property of the kind of dynamics of conscious agents that you propose: in these kinds of dynamics it's polytheism, and in this one it's monotheism.
So that's what's so fun about this, is now all of a sudden, we're already engaging, by the way, like this, on the word God, in a way that we could never before engage.
This is now a technical term.
And again, of course, everything that we've said so far is almost surely deeply wrong, but it's precise.
And so we can find out precisely where it's wrong.
And by the way, notice now we're in a very different space. I'm not tempted to kill you or hurt you because you disagree with me. I'm rather tempted to listen to your ideas and go, oh, wow, that's really cool. I hadn't thought about that. Let me rethink my position. It's a very, very different kind of thing. And so can we... I mean, it seems that the word God is very, very important to us.
Instead of having it be an ambiguous term that we fight to the death over and hate each other over,
why not have it be, since we think it's so important, it's important enough to be precise.
If it's that important to us, then it's important enough that we should be precise about what we
mean. And if we're precise about what we mean, then we can start to have a dialogue and try to
refine it and come to some kind of mutual understanding. So that's sort of the scientific attitude of
be precise so that we don't get stuck in the same thoughts that we've had for thousands of years.
We don't end up being dogmatic defenders of something that's not even well defined.
Rather, it's a sense of humility. These are the best ideas that I, in my tradition, have had so far about God.
There's probably a lot of insight in those ideas, and there's probably some mistakes.
Let me listen to your ideas.
Let's have you listen to my ideas.
And can we then get to a new and deeper and more precise notion of God, since we agree that this seems to be an important concept for us?
Can we together work in humility and both admit up front that maybe we don't have it quite right?
That's sort of the scientific attitude about this thing. Being precise is an ultimate act of humility on the one hand, because you're making yourself vulnerable to being shown wrong.
But it's also the ultimate act of true inquisitiveness: this is precisely the best idea I've got so far, but I really want to understand. So I'm
stating it precisely so I can figure out where I'm wrong, and then we can move on into a deeper
inquiry. So that's what I would like to see in the discussion of God. I would love to see,
in this sense, a scientific spirituality, which doesn't mean a dry, desiccated, you know, academic kind of discipline. It's rather taking these very important notions and giving them the respect that they
deserve, a precise definition and a precise inquiry into what we mean, that could eventually
then lead to a deeper understanding of right and
wrong, of morals. What is a good life? What is the meaning of life? And so forth. But if we can't
even define our basic terms, how will we ever make progress on the deeper questions?
Okay, we're in the deep end already. Let's go to the Mariana Trench. What happens when you die?
Well, if you're a physicalist, it's quite clear, right? Your consciousness is entirely the product of your brain, or your embodied brain. I mean, some
of my colleagues will emphasize, from sort of an enactive point of view,
that it's not just the brain, it's the brain and the body in the environment, in that loop.
That loop. Fine. But when you die, that loop stops. And therefore your consciousness stops.
And so from that point of view, the prediction is quite clear. There's nothing that survives death.
From the view that I'm proposing here, space-time itself is not fundamental.
The brain does not cause anything, including our conscious experiences.
Instead, space-time is like our headset.
It's a virtual reality headset. So consciousness does not have to die when the body dies.
But as I said, the self is not fundamental in the theory that I'm developing.
The self is just a data structure, like space-time is a data structure.
So from the point of view that I'm working on right now,
my death does not mean that my consciousness disappears,
but my self may.
I don't know yet.
I mean, that's going to be something I want to work on
in the next stage of this theory.
The self just being a data structure, maybe that data structure dissolves.
Or maybe it's like what happens in metamorphosis.
A caterpillar becoming a butterfly, right?
In some sense, there's not much recognizable left of the caterpillar when the butterfly emerges.
There is a continuity, but the butterfly has only six legs.
The caterpillar had quite a few more.
The caterpillar eats leaves.
The butterfly can't eat leaves.
The butterfly drinks nectar.
The caterpillar never drank nectar.
The butterfly has wings.
The caterpillar has no concept of wings.
They're the same creature at different stages, but they're utterly alien.
And maybe the thing that survives death is as different for me as a butterfly is from a caterpillar.
So I just don't know.
So I'm still on that.
I'll be very interested to work on the mathematics and see.
But I'll give you an analogy that may be helpful here.
So suppose you go to a virtual reality arcade with some friends
to play, say, virtual volleyball.
You put on your headset and bodysuit, and you see yourself immersed in a beach volleyball scene. You see palm trees
and a net and sand, and the avatars of your friends across the net. And so you serve the
virtual volleyball, and you guys play for a while. And then one of your friends, Bob, says, excuse me,
I'm thirsty. I'm going to go get a drink. And he takes off his
headset and his bodysuit to go get a drink. And his avatar collapses on the sand. Well, within
that virtual reality, he's effectively dead. But of course, Bob isn't dead. He just stepped out of
the interface. And so there's this sense in the theory that I'm developing
that your body is just an avatar. The death of your body, just like the death of Bob in the
virtual reality game, is not the death of Bob. Bob just was out getting a drink. So death may just
be stepping out of the interface. But even if consciousness survives, the consciousness that
survives maybe is different from the consciousness that you now have, as you could imagine a
butterfly being different from a caterpillar, even more. In the analogy of Bob going to get a drink,
there's a retention of memory in your model. I recall you saying earlier in the conversation that
you don't have a theory for
memory or there is no memory. I could be wrong. You might, but regardless, does memory survive
death? Is it possible? Memories are data structures as well. So those data structures
may or may not be preserved. So that will be another aspect of this whole thing.
But right now I have no reason to say up front that I would claim that they necessarily are preserved.
So I can't claim that your current notion of self would be preserved
or your current memories would be preserved.
And the issue would then be,
could I build models of conscious agents
in which I first model what the death
process is, in terms of stepping out of a particular interface, and can I find models?
But notice what I'm saying here now. I'm actually trying to model the death process
with a mathematical model in terms of conscious agents, and then I'm asking a technical question,
could certain data structures,
the self data structure, memory, my own personal episodic memory data structures, be retained
in this other process that is leading to the extraction of a conscious agent out of a particular interface? That, I don't know the answer to it, but for me, it's a technical question and
a really interesting one, and one that I don't think is beyond our scope. I think it's one that
we can address. We can try to find out. What I like about your approach is it reminds me of
Newton's method of approximation, where you just guess at what one of the roots is,
knowing full well that it's likely to be wrong.
But because of that guess,
you can get closer and closer to the truth.
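Curt's analogy can be made concrete with a few lines of code. This is a generic sketch of Newton's root-finding method, not anything from Hoffman's theory: the initial guess is badly wrong, and each iteration corrects it toward a root.

```python
# Newton's method: start from a guess that is almost surely wrong,
# and let each iteration pull it toward a root using the local slope.

def newton(f, df, x0, steps=20):
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)  # correct the current guess
    return x

# Find sqrt(2) as a root of f(x) = x^2 - 2, starting from a bad guess.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=10.0)
print(round(root, 6))  # → 1.414214
```

The point of the analogy: the starting guess contributes nothing but a place to begin; the refinement procedure does all the work.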
That's how science works, right?
You guess at the foundations.
You guess at what the right assumptions are.
Newton did a brilliant job.
Probably one of the smartest men that ever lived,
maybe the smartest.
Incredible brilliance.
And where he was wrong, the new theories give Newton as an approximation.
Like Einstein gives Newton as a limiting case as the speed of light goes to infinity.
Newton is wrong, but he's deeply right too.
He had his hands on something. So we would like to have assumptions in this area of consciousness
as minimal as possible,
but that then turn all the kinds of questions you're asking
into technical problems that we can in principle solve. And all the questions you're asking so far are ones I think that we can pose as technical questions and try
to solve them. Let's get back to Gödel's incompleteness theorem. People like Penrose and
Lucas would say that any system that's computational in nature, which yours sounds like,
or at least it's algorithmic or rule-based,
would necessarily fail at a model of consciousness because within that system,
there are truths that that system cannot see as being true, but we can.
How do you deal with that? What do you think of that?
Well, so there are non-computable functions, right?
And this is something that Penrose knows quite well and is focused on in some of his work.
So in fact, it's quite remarkable.
If you just look at the integers, or even just the natural numbers, let's say the integers, and you ask, look at all the functions from the integers to the integers, right?
Mathematicians can show that there's an uncountable set of functions from the integers to the integers, because the set of such functions has the cardinality of the power set of the integers, so it's of higher cardinality than the integers.
So it's an uncountable set.
But Turing proved that the set of all computable functions is countable.
Therefore, when we look at just the collection of functions from the integers to the integers, which is uncountable,
what that means is that almost no function is computable.
When you look at the set of all functions,
the set of computable functions is probability zero for some interesting measure. So the reason we focus on computable functions
is partly because of our lack of imagination. It's hard for us to imagine uncomputable functions,
right? If you've taken a class in computer science where
you actually have to like deal with uncomputable functions, you can get
there. You can study a non-computable function, but boy is it hard. It really
strips all the gears in your head. It's just so hard for us. But so most
functions are not computable. Our brains have a hard time even understanding one of them.
So it's a real limitation of our brains.
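The counting argument here can be written out explicitly. This is standard computability theory, not specific to conscious-agent theory:

```latex
\[
\bigl|\{\, f : \mathbb{Z} \to \mathbb{Z} \,\}\bigr|
  \;=\; \aleph_0^{\aleph_0}
  \;=\; 2^{\aleph_0}
  \quad\text{(uncountable)},
\]
\[
\bigl|\{\, f : \mathbb{Z} \to \mathbb{Z} \mid f \text{ computable} \,\}\bigr|
  \;\le\; \aleph_0
  \quad\text{(each computable $f$ is named by a finite program)}.
\]
```

So under any measure that spreads weight over all functions, the computable ones form a measure-zero subset, which is the sense in which "almost no function is computable."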
So in my field, cognitive neuroscience,
we're giving computational theories of mind.
There is a way to look at this,
at what my field is doing,
and say, what are you guys up to? You're trying to
give computational theories of cognition. If you're saying that cognition is only based on
computation, you're claiming that of all the functions that are available, there's this
probability zero subset of functions that are solely responsible for our minds and for consciousness, right? Like the Tononi and Koch approach of integrated information.
It's a computational function.
So why should we assume that cognition in general and consciousness
is restricted to this probability zero subset of functions?
What's the principled reason for that?
And most of my colleagues don't even understand
that this is an issue
and certainly don't have an answer for it.
But of course, Penrose is an exception.
Penrose is saying he understands all this.
But I still don't agree with where Penrose goes on this.
Penrose is saying he still wants to start with a physicalist framework
and get consciousness to emerge by some kind of objective reduction.
Forget about that. I'd say it's more fundamental. The conscious agent model
looks to me to be algorithmic or rule-based and the statement
from Penrose, independent of whether or not consciousness emerges from physicality,
just forget about that, is that a rule-based system can't account for our conscious behavior,
given that we have an understanding of what's true, but not provable within that rule-based
system, any rule-based system.
And because we have that quality, our consciousness can't be relegated to an algorithm,
which yours seems like.
Now, it could just be this is rudimentary form, like I'm at step one,
or there might be another counter, and I'm just curious to know what you think.
Well, so again, the framework of what I'm doing is different from what he's trying to do, right?
I'm not trying to explain how consciousness arises.
I'm assuming it.
So consciousness is given.
Now there's a question, do consciousnesses interact only by rules? Right now I'm studying rules, but I'm completely open to non-computational modes of
interaction among these consciousnesses. I see, I see.
That has no bearing because, see, I'm not trying to say how consciousness arises. I'm saying it
exists. It's fundamental. But as for the kinds of interactions that consciousnesses have,
I don't want to exclude rule-based interactions,
but I may find that I don't want to restrict to rule-based interactions. So I'm starting with the rule-based ones, partly because of my background and my limitations, but eventually I'd love to
go beyond. So I would say point well taken. Non-computational approaches may be very,
very interesting in this. Absolutely.
There's a cognitive scientist named John Vervaeke. And he says that
there are four, I don't know if they're orthogonal, they're definitely related,
but there's four different modes of knowledge. And one is propositional. And that's what,
since the development of the scientific method, we've relegated our knowledge to: the propositions are what's true.
But he says, well, there are others.
There's participatory knowing.
There's procedural knowledge, obviously.
Like that one anyone can get.
And then there's perspectival.
And I'm curious, it's like, how would one even,
if your theory were to incorporate them,
then how the heck could you even write them down,
because this notion of writing down is a proposition? How would one model the other three fundamentally?
Well, so in current neural network models, the way that the network learns is not by acquiring new propositions, it's by updating connection weights in the network.
That's procedural knowledge. So, the history of AI: I was in the artificial intelligence lab at
MIT. I did my PhD there, in the brain and cognitive science department at MIT, from '79 to '83. At the time, AI was mostly about propositional, right?
It was good old fashioned AI.
So we were,
it was explicit algorithms with specific things that you could write down as
propositions and so forth.
But with neural networks,
you have this kind of thing where the system learns,
but it's not learning propositions.
You're just updating weights and you're getting behaviors.
And that turns out to be very, very effective
for modeling procedural memories, like motor memories and so forth.
And in fact, at Boston Dynamics, I believe they've made great progress
by doing that, by having robots that fall down and try things and they
walk, fall down, but their neural networks are just updating and updating and updating.
And they don't get any list of propositions, do this, do this actuator, then do that actuator.
No, it's just this procedural memory.
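The weight-update idea can be sketched with a toy perceptron. This is my illustration, not a model from the conversation: the trained behavior is never stored as a proposition, only as numbers in the connection weights.

```python
# Purely "procedural" learning: a perceptron learns logical AND by
# nudging connection weights. No proposition like "fire only when both
# inputs are 1" is ever stored; the behavior lives in the weights.

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

def train_and(epochs=20):
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = [0, 0], 0
    for _ in range(epochs):
        for (x1, x2), target in data:
            err = target - predict(w, b, x1, x2)
            # Perceptron rule: shift the weights toward the target.
            w[0] += err * x1
            w[1] += err * x2
            b += err
    return w, b

w, b = train_and()
print([predict(w, b, x1, x2) for x1, x2 in
       [(0, 0), (0, 1), (1, 0), (1, 1)]])  # → [0, 0, 0, 1]
```

After training, the only "knowledge" of AND is the final weight vector, which is the contrast with good old-fashioned AI's explicit symbolic rules.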
So there's been this big debate between propositional and procedural that's gone on between the neural network modelers and the good old-fashioned AI kinds of modelers and so forth.
And it's a profitable debate.
And then there's the enactive or embodied cognition kind of thing that comes into it as well.
I'm taking a perspectival thing as well, right?
So I'm saying that in some sense conscious experiences and perspectives, therefore, are fundamental. But I
see within the network of conscious agents being able to get both procedural
and explicit propositional kinds of representations coming out. But I think also, as you said,
I should be looking at non-computational approaches to this.
And I've got some mathematician friends working with me who are expert in category theory.
So category theory is a great way to explode this into a very, very general mathematical framework.
And we intend to go there.
I know you've got to go soon.
I'm going to just get to the best of questions or the ones that I like answered, which might be more technical.
Okay.
At some point you use Landauer's limit. You know what I'm, you recall what I'm referring to.
Okay. Why do you use that when Landauer's limit presupposes space and time?
Why do you use that as a limit for the amount of ticks or the energy that goes into a tick?
We've used that, and this is largely the brilliance of my friend Chris Fields, who's a physicist who's been working on that.
So he likes to look at the information theoretic things using Landauer's principle.
So most of his analyses are in terms of quantum theory and what are the limits of information transfer in quantum systems.
So that's usually the context in which we bring that up.
Ultimately, if we get a theory of space-time, that model will be capturing features of the information dynamics of conscious agents.
So it will come there as a property of the interface, but not as a fundamental limit of consciousness.
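For reference, the Landauer limit mentioned here is the standard thermodynamic bound on the energy dissipated when one bit of information is erased at temperature $T$ (textbook physics, not specific to this theory):

```latex
\[
E_{\min} \;=\; k_B\, T \ln 2
\]
```

where $k_B$ is Boltzmann's constant; at room temperature ($T \approx 300\,\mathrm{K}$) this comes to roughly $2.9 \times 10^{-21}$ joules per bit, which is the scale relevant to the information-transfer analyses described here.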
Okay, okay. And does that give a bound for the amount of time for each tick? So just so people
are aware, there was that model that I'll show on the screen. There's the WXG. I was simplifying it.
There's a T that's in the middle that counts the amount of revolutions in the system. I could be
wrong. Just correct me. But either way, that to me seems akin to time. And then in the paper,
it seemed like you were giving a lower limit to time saying that
the amount that a conscious agent can experience is something like 100 femtoseconds.
Please correct me if I'm incorrect here.
And I thought that, well, that sounds like a theoretical prediction too, that if we can
get a camera or conscious experience of a quicker time, then that would invalidate it.
Or if we can't, then that would be evidence for it.
Right. So I don't go that direction with it,
and I don't think that Chris Fields goes that direction with it either.
And for the following reason:
there is no fundamental pixelation of space-time, right?
So space-time is doomed, not because we run into pixels at the Planck scale, but rather because space-time itself ceases to be well-defined.
It's not an empirically observable construct at that level.
The attempts to think about spacetime as being pixelated at roughly the Planck scale
run into a very serious problem. They're not Lorentz invariant. The pixels, what's tiny for
you may not be tiny for someone else. So far,
no one has a way to make the pixelation of space-time Lorentz invariant.
And so I wouldn't want to go there, and I don't think Chris Fields would want to go there.
Nevertheless, we may find, I mean, the Landauer limit does play a role in information transfers
in space-time. So we will want to understand where that limit comes from, from our theory of consciousness and our theory of emergent spacetime.
But I don't see us getting any predictions about a pixelation of time or a pixelation of space, because it would violate Lorentz invariance.
Okay. On the interface theory, when I'm not looking at,
let's say, the moon, it doesn't necessarily exist, but you're saying that what doesn't exist is our idea of the moon, or that our idea is representing something and that something still persists,
it's just our idea of the moon doesn't exist. Right. So when I say the moon doesn't exist,
I'm saying it in the same way that if I look at an image
and see an illusory cube,
and sometimes the cube I see one face in front
and sometimes it flips and I see another face in front.
And then when I look away,
I say that there is no such thing as the cube, because the cube only exists when I look at it. And then I see the
cube and it's flipping. The same thing with the moon. The moon is like that cube. Now,
there is some reality that I'm interacting with and it continues to exist whether or not I look.
But that reality is not inside space and time at all, much less is it
the moon. Space and time themselves are not a fundamental reality. They are data structures
that we create. So anything that's in space-time is in some sense no more real than the illusory
cube that I look and then destroy when I look away. Space-time itself is created and destroyed by us.
It only exists in the same way that that illusory cube exists, which isn't to say that there isn't
something that exists, but it's not even in space and time. Whatever exists, it's outside of space
and time. It's outside of my headset. Space-time is mine. I see. I see. So let's imagine there's
Tickle Me Elmo and there's a cat, and I
see the Tickle Me Elmo, but you're like, okay, when you don't look, it's gone, it's destroyed, it's not
rendered anymore. But a camera that I just press record on can record the Tickle Me Elmo. Then I
can go and press play. How do you account for that? Well, notice, you only see the Elmo on the camera
when you look. You only see the camera when you look. So the camera itself is, again, just something that exists when you perceive it. It's pointing to another reality. There is a
reality that exists independent of you, but the reality is not the camera and it's not the digital
recording on the camera. Those are just, again, your own icons. So the camera itself is just a
symbol. And how does your theory, I know you've covered this before, but just for the people who are listening, how does your theory account for intersubjective agreement?
Now that is one of our current projects.
So one of the papers we're working on right now is exactly that problem because it's critical to science, right? The way scientists typically talk about it,
about what science is doing, is that what makes science special in some sense is that
one scientist can look at a physical system like this animal or this planet. And another scientist can make their own independent measurements of that
animal, that specific animal or that planet. And then we can go and compare notes. In other words,
we have this notion of public physical objects, objects that are really there, whether or not
we exist, like dinosaur bones, right? The dinosaur bones have been there for 200 million years.
They were there.
If no one had ever been around, they would still be there.
I can look at the dinosaur bones.
You can look at the dinosaur bones.
They're public physical objects.
We can make our measurements, get the DNA and so forth.
That's gone in my point of view.
That whole point of view is gone because space-time itself is doomed.
Space-time isn't fundamental.
There are no public physical objects.
So how do we rethink what we
mean by objectivity in science, where objectivity has to do with, again, having independent
scientists do their own experiments and come to some agreement? So the framework that I'm working
on right now poses that as a really serious question. And so we're writing a paper on
that right now and we're gonna be running simulations. But here's the
kind of idea that I'm playing with. I think that we have to take the
possibility of synesthesia very seriously. We actually know synesthesia occurs. It could be easily in the cards that your
visual experiences that you described with the words red and round and square and so forth,
and I described, if I could get inside your head, for all I know, I would go, whoa,
that's not what I would call red and round. That's what I would call the smell of garlic and the taste of butter or a headache.
Those are, just like Michael Watson, he tastes mint, but he feels a tall, cold, smooth column of glass.
I don't feel that.
If all Michael Watson ever experienced were the feelings in his hand, he would have a perfectly good taste sensation.
He could cook.
He could figure out what foods are.
So in other words, you and I, as scientists, could absolutely agree, believe that we're talking the same language, and yet your experiences could be synesthetically utterly different from mine and we would never know and I think
that that's going to turn out to be a fundamental and unbreachable limit to
what we can know and to the degree of objectivity that we can have in science. The technical thing would be that we can only agree up to, effectively, a homomorphism.
A homomorphism of your set of perceptual representations and experiences and mine.
The homomorphism could be as radical as the difference between the taste of mint
and a tall, cold, smooth column of glass.
It could be that different.
And we would never know.
The only reason we know in Michael Watson's case is because he had both,
and he could tell us about both.
We could say, oh, I understand the one, but gee, I never thought about the other.
That's incredible.
So we're going to have to understand what is the most confirmation and what's the greatest level of agreement that
scientists can have given this possibility of synesthetic complete divergence between us
And I would love to, I was talking with my colleagues on this, in the paper that we're writing.
We may actually do this,
not just as a thought experiment paper.
We may actually do this.
I would love to create this really interesting virtual reality game,
two player game or multiplayer,
but for one player,
they're immersed in what seems like, say, Grand Theft Auto.
The other player is immersed in a completely different game, like Super Mario Brothers.
But I've arranged it so they both think that they're playing the same game, and they are actually playing, and they win and lose.
But they have no idea that the other person is in a completely different virtual world.
But they're using language they seem to agree.
You take them out of their headsets, have them swap,
and blow their minds when they find out what the other person was doing.
I think we can do that.
And so we have to be, if that's true, if that's possible,
in category theory that would be the idea that you might have this one category
for your representation,
that other scientists might have another category, but if there's some functor,
if there's some functor that maps the objects and relations that you have in your interface
to the objects and relations that the other scientist has in their interface,
if you have the right functor, then the scientists will have the appearance of agreement and they will never know that
they're separated by this functor. And so I think that there's going to be
category theoretic analyses ahead on this that are going to be really quite
stunning, but I think that's going to be the future of the philosophy of science,
is this category theoretic fundamental limitation
on the agreement that scientists can have.
There are no public physical objects,
so what does it mean for us to get laboratory-independent verification?
My laboratory gets it, your laboratory gets it.
What are we really doing in that case? It's going to be up to some functor.
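That "agreement up to a functor" idea can be caricatured in a few lines of code. This is a toy of my own construction, not Hoffman's formalism: two observers attach utterly different experiences to the same hidden states, yet a structure-preserving translation makes their public reports identical.

```python
# Two observers label the same hidden states with different internal
# experiences. A translation map (the "functor" of the discussion)
# makes their reports agree, so neither can detect the difference.

hidden_states = ["s1", "s2", "s3"]

# Observer A experiences colors; observer B experiences tastes.
experience_a = {"s1": "red", "s2": "green", "s3": "blue"}
experience_b = {"s1": "mint", "s2": "garlic", "s3": "butter"}

# The translation: maps B's experiences onto A's vocabulary.
translate = {"mint": "red", "garlic": "green", "butter": "blue"}

def report_a(state):
    return experience_a[state]

def report_b(state):
    # B reports through the translation, never noticing that the raw
    # experiences differ from A's.
    return translate[experience_b[state]]

agreement = all(report_a(s) == report_b(s) for s in hidden_states)
print(agreement)  # → True
```

From the outside, every report matches; the synesthetic divergence between the two observers is invisible in the public data, which is the limit on objectivity being described.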
Right. It sounds to me, and I only have three more questions, and they might be long, but
I have three questions. It sounds to me like you're saying that there are some homomorphic
functions between us, and that's responsible for why we feel like we share a similar world.
However, can we not use the same argument that you use to show that the probability that we see the truth is zero, that is, that the admissible functions grow much larger than the homomorphic functions to the structure?
Can we not use that to say that the probability that we share any interface is also zero, including homomorphic aspects? Right. Not at least within the theory of conscious agents and the dynamics that I'm
talking about now, because what we can do is we can ask, are there certain strategies of
interaction that conscious agents can have where they would converge in the limit to homomorphic
things? So that's different from evolutionary game theory, where the rules have been spelled out. I mean, in that,
I don't get to play with evolution. But there are constraints, so it's not just uniform probability.
That's right. I see, I see. So in the case of conscious agents, the question is, can we find
dynamics in which the agents can converge up to homomorphisms. And I think we can.
Okay. Here's one that's super philosophical. If what you're saying is that we're not tuned for
truth because truth is deleterious, at least compared to fitness, which is salubrious.
However, let's imagine your theory is correct. Then we're starting to see the truth. Does that
not mean that it's deleterious?
We shouldn't do that.
We're not designed for that.
Absolutely not.
And so here's where we have to be very, very careful to, again, see the game that I'm playing.
The game that I'm playing is I take evolution on its own terms,
and I see what its implications are.
I don't say I believe evolution.
I'm just playing the game.
This is what the math of evolution says. It says very clearly we don't see the truth. Okay, that's what evolution says.
Probability is zero that we see the truth. That's not what Hoffman says. Hoffman doesn't care about that, except to understand that that's what evolution says. But evolution says that doesn't
mean I believe it. So I'm free, on the one hand, to prove that evolution entails we don't see the
truth. I'm free to prove that, and then to step back and say, I still believe we see the truth.
Here's my theory in which we see the truth.
But the constraint will then be, okay, then, if you have this theory in which we see the truth, you have to explain how it is in that framework where you see the truth that you end up arriving at the theory of evolution by natural selection,
which says otherwise. If you can't do that, then you're wrong. So this is a really, and by the way,
it's a great question, because I get this question in emails quite a bit. So the key thing is this,
Hoffman doesn't say we don't see the truth. Evolution by natural selection says we don't see the truth.
I don't say that. Evolution by natural selection says that.
I just, I'm just the messenger.
That's what evolution is.
Don't shoot me, I'm just the messenger.
I'm just pointing out what one of our best theories entails.
Now I'm stepping back and I'm going,
I would like to see the truth.
Of course, maybe I can't.
I'm not saying I know that we can see
the truth. I'm just saying, as a scientist, I would like to come up with a theory which entails
that we see the truth. And that's why I'm looking at consciousness, because I am conscious. So if
I'm conscious and the reality is consciousness, there is some connection between me and reality.
I might actually be able to understand that reality because I'm conscious. But then the
burden is on me
to then explain how my theory where I see the truth of these conscious agents leads to an
interface of space-time, and in that interface, this theory of evolution of natural selection,
which says you don't see the truth. Well, that won't be so strikingly strange because evolution is telling you that what you see
in the interface isn't the truth. Well, I'm also saying that from this bigger picture.
I'm saying that there's this realm of conscious agents outside the space-time interface that's
the truth, and you can't see it within the interface. Well, it's not going to be a surprise
that evolution by natural selection says that inside the interface. So it'll all work out.
So that's – I hope that gives – that's a great question.
Okay, let me quick – let me rephrase it so that I get it across because I don't think I portrayed it correctly.
You're saying that – and I'm sorry, evolutionary theory is saying that we're not designed for the truth.
We're designed for fitness.
Right.
And those are two completely separate phenomena.
Okay.
If we were designed for truth, if we were able to see the truth, that would be horrible for us.
But at the same time, you're like, I want to look at the truth.
So let's say you're actually uncovering the truth.
Is that not harmful given the presupposition that we should be pursuing
fitness and not truth? No, it's not because by stepping outside of the entire framework of
evolution, I'm stepping outside of its assumptions. I'm saying there's a deeper framework in which
the assumptions of evolution aren't true. I see, I see. Only true within the headset.
The headset is just a headset. There's this deeper reality outside the headset.
It's like if I see the Twitterverse only through my headset, and there are certain rules about how I perceive in the headset,
and I assume that those rules of what I see in the headset must apply to all the Twitter users.
Well, that's stupid. The Twitter users are doing their own thing.
My headset is just smashing all this complexity into this trivial little headset.
And that's why evolution has to tell us that it's not
seeing the truth. Because what's happened from this bigger framework is space-time and objects
in space-time is this incredible smashing down, information-losing map from the reality, this vast
social network, into this trivial little space-time interface. So it's no surprise that evolution by natural selection is saying,
you know what? You're not going to see the truth. You're just going to see little interface symbols.
That's what it's telling us. And I say, I agree. We're only seeing interface symbols, but
that's only in space-time. We're not stuck in space-time. You and I are not in space-time.
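Hoffman's claim that selection favors payoff-tuned rather than veridical perception comes from his "Fitness Beats Truth" simulations. Here is a minimal sketch of that idea, with an entirely invented payoff function and two-category perceptual channels; all numbers are hypothetical, chosen only to show the shape of the argument:

```python
import random

# Hypothetical, non-monotonic payoff: moderate resource quantities are best.
# The peak at 50 is invented for illustration; non-monotonicity is the
# ingredient that decouples payoff from true quantity.
def fitness(q):
    return max(0.0, 1.0 - ((q - 50.0) / 30.0) ** 2)

def choose_truth(a, b):
    # Veridical but coarse percept: two categories ordered by true quantity
    # (q < 50 vs q >= 50). Because the payoff is symmetric about 50, both
    # categories have equal expected payoff, so the percept gives no usable
    # guidance and the agent can only guess between the two options.
    return random.choice((a, b))

def choose_fitness(a, b):
    # Same channel capacity (two categories), but coded for payoff: the
    # percept marks whether an option clears a payoff threshold.
    high_a, high_b = fitness(a) >= 0.5, fitness(b) >= 0.5
    if high_a and not high_b:
        return a
    if high_b and not high_a:
        return b
    return random.choice((a, b))

random.seed(1)
trials = 50_000
truth_total = fit_total = 0.0
for _ in range(trials):
    a, b = random.uniform(0, 100), random.uniform(0, 100)
    truth_total += fitness(choose_truth(a, b))
    fit_total += fitness(choose_fitness(a, b))

print(round(truth_total / trials, 3), round(fit_total / trials, 3))
```

With these made-up parameters the payoff-coded perceiver consistently outscores the equally band-limited veridical one; the actual theorem generalizes this comparison across perceptual maps of equal complexity.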
We create space-time. The rules of the game apply within the game, but you can step out of the game.
That's right. Okay, okay. So we're the authors of the game. Okay, cool, cool, cool.
Instead of asking you, I'm going to list the last questions and then you just choose one, because I know you've got to go.
Yeah, okay.
So I was going to say, where do people like Deepak Chopra take your message too far?
So that's an option. You can answer that.
Another one is, when Jesus says, I am the way, the truth, and the life,
what truth do you think he was referring to?
And then the other question, what determines the initial world state?
Well, those are all tough.
With Deepak, Deepak and I have been good friends for several years now. He's a spiritual teacher, and we respect each other.
I respect him as a seeker, and he's learning to be careful about how he throws around words like quantum mechanics and so forth.
He's taken hits in the past, I'm aware of that. But this is really important:
Science and spirituality do need to have a dialogue.
And whenever there's a dialogue, even within members of two different scientific disciplines,
you have to have a lot of patience with the other person.
So, you know, someone who is not a cognitive scientist coming to talk to me, say he's a
chemist.
Well, when I start talking chemistry with him, I'm going to feel like and be really
stupid compared to him.
When he talks cognitive science to me, he's going to be really pedestrian to me.
It's going to be the same thing in science and spirituality.
So if I want productive dialogue, of course, I'm going to be saying things that, to someone who's spiritually trained, are going to sound like, you know, scientific nonsense, immature and not very deep, and vice versa.
So, of course, I'll have said things that Deepak thinks are spiritually stupid,
and then he says things in quantum mechanics and so forth that most scientists would say
that's not what a really trained scientist would say.
Fine.
But if we want to have a dialogue that's really productive, we have to cut each other some slack.
We have to listen past the barriers and understand.
You know, it took me many, many years to understand what I know about cognitive science.
I can't expect someone to just pop into my field and sound like they know what they're talking
about. If I'm not willing to cut them some slack, we'll never get a dialogue. So I learn
from my dialogue with Deepak, he learns from his dialogue with me,
and we've learned to have this respect of what each other knows and trying to learn from the other. So that's what I hope we mirror, and of course it's my job to gently correct him
when I see things that he says that might be wrong, and it's his job to
gently correct me when I misstate his tradition, for example. So that's
perfectly fine. On Jesus, I don't know what to say. I mean,
my father was a fundamentalist
Christian minister, so I've
read the Bible cover to cover more than once.
I know what the standard
interpretation of the
Bible is
among Protestant fundamentalist Christians.
What Jesus meant when he said, I am the way, the truth, and the life, I don't know.
I'll just say, I don't know.
I will say that I think that there are many beautiful insights in the Bible,
and I think there's also nonsense.
And I think that just like in scientific theories,
there are many beautiful insights,
and then there are places where we know we're absolutely wrong.
And my attitude for any scripture and for any scientific theory is the same.
Let's treasure the insights and let's recognize humbly that we can be wrong.
Any religious tradition, any scripture, any scientific theory,
if we're not willing to be wrong,
to open up to the possibility that something we deeply believe could be deeply wrong,
we can't grow.
That's the place where we stop growing.
So it's a matter of humility, both to listen to other traditions, other points of view,
to be open that they may have insights I've never had.
I may have insights that they've never had. I may have nonsense that they can point out. They may have nonsense that
I can point out. If we can do it in this respectful way, again, like the kind of thing I'm trying to
do with Deepak. I don't want to have him say something wrong about quantum mechanics and go,
oh, you said that about quantum mechanics. Forget it. I'm not going to talk to you anymore. You got that wrong. That's not the way to do it. It's to have a respectful dialogue. If others
are willing to humbly also put out their ideas and listen, that, I think, is the heart of true
religion. When we talk about loving each other, what deeper love can you have than to respectfully
and carefully listen to the thoughts and opinions
of others and be willing to entertain the possibility that they may be right in some
places and you may be wrong. That's a deep, respectful, and humble kind of interaction.
And that's how we will learn. And that's how we'll stop killing each other too.
Thank you. Don, where can the audience find out more about you?
If you Google Donald Hoffman, H-O-F-F-M-A-N, my homepage at University of California comes up,
and I've got links to my papers. And I have a book, The Case Against Reality,
published by Norton. So The Case Against Reality is a great place to get
these ideas for a
broad audience.
It's not a technical book.