Theories of Everything with Curt Jaimungal - David Chalmers: Are Large Language Models Conscious?
Episode Date: November 9, 2023
YouTube Link: https://www.youtube.com/watch?v=nqWxxPhZEGY
David Chalmers analyzes consciousness in AI, probing cognitive science and philosophical ramifications of sentient machines.
TIMESTAMPS:
- 00:00:00 Introduction
- 00:02:10 Talk by David Chalmers on LLMs
- 00:26:00 Panel with Ben Goertzel, Susan Schneider, and Curt Jaimungal
NOTE: The perspectives expressed by guests don't necessarily mirror my own. There's a versicolored arrangement of people on TOE, each harboring distinct viewpoints, as part of my endeavor to understand the perspectives that exist.
THANK YOU: To Mike Duffy, of https://expandingideas.org and https://dailymystic.org for your insight, help, and recommendations on this channel.
- Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!)
- Crypto: https://tinyurl.com/cryptoTOE
- PayPal: https://tinyurl.com/paypalTOE
- Twitter: https://twitter.com/TOEwithCurt
- Discord Invite: https://discord.com/invite/kBcnfNVwqs
- iTunes: https://podcasts.apple.com/ca/podcast...
- Pandora: https://pdora.co/33b9lfP
- Spotify: https://open.spotify.com/show/4gL14b9...
- Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeveryt...
- TOE Merch: https://tinyurl.com/TOEmerch
LINKS MENTIONED:
- Podcast w/ Susan Schneider on TOE: https://www.youtube.com/watch?v=VmQXp...
- Reality Plus (David Chalmers): https://amzn.to/473AKPw
- Mindfest Playlist on TOE (AI and Consciousness): https://www.youtube.com/playlist?list...
- Mindfest (official website): https://www.fau.edu/artsandletters/ne...
- Talk by Ben Goertzel on AGI timelines: https://youtu.be/27zHyw_oHSI
- Podcast with Ben Goertzel and Joscha Bach on Theolocution: https://youtu.be/xw7omaQ8SgA
- Talk by Claudia Passos, Garrett Mindt, and Carlos Montemayor on Petri Minds: https://www.youtube.com/watch?v=t_YMc...
- Stephen Wolfram talk on AI, ChatGPT: https://youtu.be/xHPQ_oSsJgg
Transcript
Will future language models and their extensions be conscious?
Developing conscious AI may lead to harms to human beings;
it may also lead to harms to these AI systems themselves.
So I think there needs to be a lot of reflection about these ethical questions.
This is a presentation by David Chalmers, as well as questions posed to him,
as well as me and Susan Schneider, by you, the in-person audience,
earlier this year at MindFest Florida, spearheaded by Susan Schneider. All of the talks from that AI and Consciousness conference
are in the description. I'll list some of them aloud. For instance, there's one by Anand Vaidya.
This was his presentation on moving beyond non-dualism and integrating Indian frameworks
into the AI discussion. Ben Goertzel also appeared, giving his talk on AGI timelines.
You should also know that Ben Goertzel recently came on with Joscha Bach for a theolocution,
and that's in the description as well.
There's Claudia Passos, there's Garrett Mindt, and Carlos Montemayor on Petri Minds
and what it takes to build non-human consciousness.
There's also a Stephen Wolfram talk on AI, ChatGPT, and so on.
We're also going to be releasing the second Wolfram talk, as it's considered by many of the people
there his best. Why? Because it was aimed at high schoolers, even the lay public. And Stephen does
a great job at explaining the connection between AI and physics with his hypergraph slash Ruliad
approach. That one's taking a bit of time, unfortunately, because the audio became messed
up. And so what we have to do is reconstruct it using AI.
My name is Curt Jaimungal, and this is a podcast called Theories of Everything, where I use my background in math and physics to investigate theories of everything, such as loop quantum gravity, string theory, and even Wolfram's.
We also touch on consciousness and what role consciousness has in fundamental law.
Is it an emergent property? Is it something more?
If you like those subjects as well as what you saw on screen,
then feel free to subscribe.
We've been having issues monetizing the channel with sponsorship,
so if you'd like to contribute to the continuation of Theories of Everything,
then you can donate through PayPal, Patreon, or through cryptocurrency.
Your support goes a long way in ensuring the longevity and quality of this channel.
Thank you.
Links are in the description.
Enjoy this episode with David Chalmers and then the subsequent panel with Susan Schneider
and myself and David Chalmers.
Okay, so let's go ahead and get started.
Me?
Oh, you need no introduction.
No, it's fine, it's fine.
Okay, well, thank you so much, Susan, for this amazing MindFest 2023.
And thanks so much to Stephen and Misha and Simone and everybody else who's been involved in putting together this conference.
It's been quite a memorable event already.
Yeah, so Susan just asked me to give a brief overview
of this paper I wrote on
Could a Large Language Model Be Conscious?
I gave this at the NeurIPS conference,
the Big Machine Learning Conference,
in late November and subsequently wrote it up.
Maybe some people will have gotten as far as having read the first few pages,
but I thought I'd just summarize some of the ideas.
I actually started out when I was a grad student like 30 plus years ago working on neural networks back in the early 90s in Doug Hofstadter's
AI lab.
I did a bunch of stuff related to models of language, like active-passive transformations,
models of the evolution of learning, and then I kind of got distracted by thinking about
consciousness for a little while.
So it's very cool to have this intersection of issues about
neural networks and machine learning and consciousness become so much to the fore
in the last year or so. I mean, well, I guess one high point of interest was this big brouhaha
last June where Blake Lemoine, an engineer at Google, said he detected sentience in one of Google's AI systems, LaMDA 2. Maybe it had a soul, maybe it had consciousness,
and this led to a whole lot of skepticism, a whole lot of discussion. Google themselves said,
our team, including ethicists and technologists, has reviewed Blake's concerns, and we've informed
him that the evidence doesn't support his claims. There's no evidence that LaMDA was sentient.
Lots of evidence against it. This was already very interesting to me. I thought, okay,
evidence, good. What is it? No, they didn't actually lay out the evidence. I'm just curious, what was
the evidence in favor of his claims, and what was the evidence against it? And really, this talk
was in a way just a way to try and make sense of those questions,
evidence for, evidence against.
Asking questions like, are current language models conscious?
Could future language models be conscious?
Also, could future extensions of language models be conscious?
As when we combine language models, say, with perceptual mechanisms or with action mechanisms in a body or with database lookup and so on?
And what challenges need to be overcome on the path to conscious machine learning systems at each point trying to examine the reasons?
So here I'll just briefly go over: clarifying some of the issues, looking briefly at some of the reasons in favor, looking at some of the reasons against, and drawing conclusions about where we are now and about possible roadmaps
between where we are now and machine consciousness.
So as I understand the term consciousness,
we've already had a lot of discussion of this at the conference,
especially yesterday morning, so the ground has been prepared.
These words like consciousness and sentience get used in many different ways, at least as I understand them, I basically mean subjective experience. A
being is conscious if there's something it's like to be that being, that is, if it
has subjective experience. Yesterday I think it was Anand talking about Nagel's "What is it like to be a bat?" The question now is: is there anything it's like to be a large language model, or future extensions of large language models?
So consciousness includes many different kinds, many different varieties.
You can divide it into types in various ways.
I find it useful to divide it into a few categories.
Sensory experience, like seeing and hearing.
Affective experience, the experience of value, things, you know, feeling good or bad, like pain or pleasure.
Feeling emotions, like happiness and sadness.
Cognitive experiences, like the experience of thinking.
Agentive experience, experience of acting and deciding, like an agent.
And all of these actually can be combined with
self-consciousness, awareness of oneself. Although I take self-consciousness just to be one kind
of consciousness. Awareness, consciousness of the world may be present even in simple systems
that don't have self-consciousness. And some of these components may be more apt to be had by language
models than others. I think it's very important to distinguish consciousness from intelligence.
Intelligence, I understand, roughly in terms of behavior and reasoning that guides behavior.
Intelligence is roughly, you know, being able to use means-ends reasoning in order to be able to achieve multiple ends in many different environments.
Ultimately, a matter of behavior.
Consciousness comes apart from that.
I think consciousness may be present in systems which are fairly low on the intelligence scale.
Certainly doesn't require anything like human level intelligence.
Good chance that worms are conscious or fish are conscious.
Well sure.
So the issue of consciousness isn't the same as the issue of human level artificial general
intelligence.
Consciousness is subjective.
You might ask why consciousness matters. I mean, it would be nice, you know, to say, well,
one reason why consciousness matters is consciousness is going to give you all these
amazing capacities. Conscious systems will be able to do things that other systems can't do.
That may be true, but actually right now we understand the function of consciousness sufficiently badly that there's nothing I can promise you that conscious systems can do that unconscious systems can't do.
But one reason, one fairly widely acknowledged reason why it matters is consciousness matters for morality and moral status.
If, say, an animal, like, say, a fish, is conscious, if it can suffer, that means, in principle, it matters how we treat the fish.
If it's not conscious, if it can't suffer and so on, then maybe it doesn't matter in the same way how we treat it.
And that, again, we were talking about yesterday. So if an AI system is conscious, suddenly it enters our moral calculations.
We have to think about, you know, boy, if the training we're inflicting on a machine learning system actually inflicts suffering, a possibility some people have taken seriously,
we need to worry about whether we're actually creating a moral catastrophe by training these systems. At the very least, we ought to be thinking about methods of dealing with these AI systems that minimize suffering and other negative experiences.
Furthermore, even if conscious AI doesn't suffice for human-level AI,
maybe it'll just be, say, fish-level or mouse-level AI,
it'll be one very important step on the road to human-level AI.
You know, if we could be confident of consciousness in an AI system, that would be a very significant step.
Okay, so reasons for and against.
I'll just go over and summarize a few reasons in favor of consciousness in current language models.
And I put this in the form of asking for a certain kind of request for reasons. I ask a proponent of
language models being conscious to articulate a feature x such that language models have x,
and also such that if a system has x, it's probably conscious, and ideally give good reasons for both of those claims. I'm
not sure there is such an X, but you know, a few have been at least articulated. For Blake Lemoine,
it seemed actually what moved him the most was self-report, the fact that LaMDA 2 said, yes, I am a sentient system; let me tell you what my sentience and consciousness are like; it would explain this: I'm sometimes happy and sad.
Interestingly, in humans, verbal report is actually typically our best guide to consciousness,
at least in the case of adults.
Claudia was talking about the cases like infants and animals where we lack verbal report.
So you might think we're in a more difficult position.
Well, actually, in the case of language models, we have verbal reports.
You would think, great.
They say they're conscious.
We'll use that as evidence.
Unfortunately, as Susan and Ed and others have pointed out,
this evidence is not terribly strong in a context where these language models have been trained on a giant corpus of text from human beings,
who are, of course, conscious beings and who talk about being conscious.
So it's fairly plausible that a language model has just learned to repeat those claims.
So in this special case (Susan, you know, Susan and Ed's artificial consciousness test is super interesting, which basically looks at your reactions to thought experiments about consciousness), but in this special case where they've been trained on so much text already, I think it carries less weight. Maybe I'll skip over "seems conscious", which is relevant here, to conversation. You know, a lot of people have been very, very impressed by the conversational ability of these recent systems.
I guess ChatGPT was fine-tuned for conversation,
unlike the basic GPT-3 model.
And now we have GPT-4,
which also appears to have been fine-tuned for conversation.
And, you know, conversational ability is one of the classic criteria for thought in AI systems,
articulated by Turing back in his 1950 paper on the imitation game.
He basically thought if a machine behaved in a way sufficiently indistinguishable from a human,
then we might as well say it can think or that it's intelligent.
So these language models, I think they've not yet passed the Turing test.
Anyone who wants to probe sufficiently well can find glitches and mistakes and idiosyncrasies
in this system.
That said, I got access to GPT-4. Through interacting with that just the last couple of days, it really kind of feels a lot like, well, I was saying for GPT-3 it felt like talking
to a smart eight-year-old. I think now GPT-4, it's at least a teenager. Maybe it's even, it's
approaching sophisticated, you know, adult in a lot of its conversations. Yes, it makes mistakes.
You know, one Turing test you might do is ask it questions like,
what is the fifth word of this sentence?
And apparently it always gets this kind of thing wrong.
It'll say something like fifth.
No, actually the fifth word of the sentence was word.
What was the fifth letter of the third word of this sentence?
It'll get that wrong.
Okay, a lot of people will get that wrong too,
so that's not a guarantee that it's failing the Turing test, but getting close. I mean, that said, I think conversational ability
here is relevant because, not so much in its own right, but because it's a sign of general
intelligence. If you're able to talk about a whole lot of different domains and, you know,
play chess and code and talk about
social situations and scientific situations, that suggests general intelligence.
And a lot of people have thought there are at least correlations between, you know, domain
general abilities and consciousness.
Abilities you could only use, information you could only use, for certain special purposes are not necessarily conscious. But information available for all kinds of different activities and different domains is often held to go along with consciousness.
So that at least, I think, gives us some basic reasons.
I take probably this one as the most serious reason for taking seriously the possibility of consciousness in these systems.
That said, I don't think, just looking at the positive evidence,
I don't think any of this provides remotely conclusive evidence that language models are conscious,
but I do think the impressive general abilities give at least some limited initial support
for taking the hypothesis seriously,
at least taking it seriously enough now to look at the reasons against, to see, you know, what are
these? Everyone says, okay, look, there's all kinds of evidence. These systems are not conscious.
Okay, what are those reasons? I think they're worth examining. So the flip side of this is to
look at the reasons why language models are not conscious.
And the challenge here for opponents is articulate a feature x so that language models lack x.
And if a system lacks x, it's probably not sentient.
And then, again, try and give good reasons for one and two. And here, we have six different reasons that I consider in the paper.
The first one, which I just consider very briefly, is the idea that consciousness requires biology,
carbon-based biology.
Therefore, a silicon system is not biological and lacks consciousness. That would
rule out pretty much all AI consciousness if correct, at least all silicon-based AI consciousness.
This one is a really familiar issue in philosophy, goes back to issues of, you know, Searle and the Chinese room and
all kinds of, yeah, some very, very well-trodden debates. I'm really here more interested in issues
more specific to language models, so I'll pass over this one quickly.
A more, maybe closer to the bone for language models specifically is the issue of having
senses and embodiment.
A standard language model has nothing like a human sense, you know, vision and hearing
and so on.
No sensory processing, so they can't sense, which suggests they have no sensory consciousness.
Furthermore, they lack a body.
If they lack a body, it looks like they can't act. If so, maybe no agentive consciousness.
Some people have gone on to argue that because of their lack of senses,
they may not have any kind of genuine meaning or cognition at all.
We need senses for grounding the meaning of our thoughts.
There's a huge amount to say about this.
In fact, I recently had to give a talk at the American Philosophical Association
and just talk completely about this issue.
But one way just to briefly cut this issue off is to look at all of the developing
extensions of language models that actually have some forms of sensory processing. For example,
GPT-4 is already designed to process images. It's what's called a vision language model. And I
gather, although this is not yet fully public, that it can process sound files and so on. So it's a multimodal model.
This is DeepMind's Flamingo, which is another vision language model.
You might say, what about the body?
But people are also combining these language models with bodies now.
Here's Google's SayCan, where actually a language model controls a physical robot.
Here is one of DeepMind's models, MIA, which controls
a virtual body in a virtual world. And that's become a very
big thing too. That connects to the issues I'll be talking about later this afternoon about
virtual reality and virtual worlds. But we're already moving
to a point
where I think it's going to be quite standard quite soon
to have extended language models with senses and embodiment,
which will tend to overcome the objection
from lack of senses and embodiment.
Another issue is the issue of world models.
There's this famous criticism by Timnit Gebru,
Emily Bender and others that language models
are stochastic parrots,
they just minimize text prediction error,
they don't have genuine understanding, meaning,
world models, self models, there's a lot to this.
Again, I take world models to be the crucial part
of this question,
because world models are plausibly something
which is required for consciousness.
There's actually a lot of interesting work
in interpretability recently of actually trying
to detect world models in systems.
Here's someone trying to detect a world model in a GPT system playing Othello, and they actually find some interesting models of the board.
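To make that probing idea concrete, here is a minimal toy sketch in Python (hypothetical; this is not the actual Othello study or any code discussed in the talk): you record a network's hidden activations and train a simple linear "probe" to see whether some piece of world state is linearly decodable from them. The data, shapes, and the numpy dependency are all assumptions for illustration.

```python
# Toy probing sketch: is a world-state variable (here, "square occupied")
# linearly decodable from hidden activations? Everything below is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Pretend hidden states: 200 examples of 16-dimensional activations in which
# dimension 3 happens to carry the "square occupied" signal, plus noise.
occupied = rng.integers(0, 2, size=200)
hidden = rng.normal(size=(200, 16))
hidden[:, 3] += 2.0 * occupied

# Fit a linear probe by least squares on the first 150 examples,
# then check accuracy on the held-out 50.
train, test = slice(0, 150), slice(150, 200)
w, *_ = np.linalg.lstsq(hidden[train], occupied[train].astype(float), rcond=None)
preds = (hidden[test] @ w > 0.5).astype(int)
accuracy = (preds == occupied[test]).mean()
print(f"probe accuracy: {accuracy:.2f}")  # well above chance if the signal is there
```

If a probe like this can read out board state from a model trained only on move sequences, that is the kind of evidence people point to for an internal world model.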
A slightly more technical issue is that these language models are feed-forward systems that lack the memory-like
internal state of the kinds you find in recurrent networks. Many theories of consciousness say that recurrent processing
and a certain form of short-term memory
is required for consciousness.
Here's a standard LSTM, a standard recurrent network,
whereas transformers are largely feed-forward.
They've got some quasi-recurrence
from the recirculation of inputs and outputs.
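A rough way to see the distinction being drawn here, in a minimal sketch (hypothetical, not from the talk): a feed-forward step keeps no internal state between inputs, while a recurrent step carries a hidden state forward in time. The weights and inputs are arbitrary toy numbers.

```python
def feedforward_step(x, weight=1.0):
    """Each input is processed independently; nothing persists afterwards."""
    return weight * x

def recurrent_step(x, hidden, w_in=0.5, w_rec=0.9):
    """The new hidden state depends on the current input AND the previous
    hidden state: the memory-like internal state being discussed."""
    return w_in * x + w_rec * hidden

inputs = [1.0, 0.0, 0.0, 0.0]

# Feed-forward: the first input leaves no trace on later outputs.
print([feedforward_step(x) for x in inputs])

# Recurrent: the first input keeps echoing through the hidden state.
hidden = 0.0
trace = []
for x in inputs:
    hidden = recurrent_step(x, hidden)
    trace.append(round(hidden, 3))
print(trace)  # the influence of the first input decays but persists
```

A transformer, by contrast, recomputes its outputs from whatever is in the visible context window, which is the quasi-recurrence from recirculating inputs and outputs mentioned above.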
You know, that said, I take it we don't know the architecture of GPT-4. There
are rumors that it involves more recurrence than GPT-3. So this also looks like a temporary
limitation. Fifth is the question of a global workspace, perhaps the leading theory,
the leading scientific theory of consciousness.
Claudia talked about it yesterday,
is that consciousness involves this global workspace
for connecting many modules in the brain.
It's not obvious that large language models have these.
On the other hand,
it's starting to look like many of these extensions,
these multimodal extensions of large language models do have something like a global workspace.
Here's the Perceiver architecture from DeepMind, where it looks like they actually developed something,
some kind of workspace for integrating information from images, information from audio, information from text,
which Ryota Kanai and others have argued behaves quite a lot like a classic global workspace.
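As a rough sketch of that integration idea (hypothetical; this is not DeepMind's actual code), here is a toy Perceiver-style step in Python in which a small set of shared latent vectors cross-attends over tokens from several modalities, so image, audio, and text information all funnel through one workspace-like bottleneck. The shapes, values, and numpy dependency are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # shared feature dimension

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(latents, tokens):
    """Each latent slot reads from ALL modality tokens: a bottleneck through
    which image, audio, and text information must pass, workspace-style."""
    scores = latents @ tokens.T / np.sqrt(d)   # (n_latents, n_tokens)
    return softmax(scores) @ tokens            # weighted mix of token features

# Pretend encoders for three modalities, all projected to the same dimension.
image_tokens = rng.normal(size=(20, d))
audio_tokens = rng.normal(size=(15, d))
text_tokens = rng.normal(size=(10, d))
all_tokens = np.concatenate([image_tokens, audio_tokens, text_tokens])

latents = rng.normal(size=(4, d))            # a small shared "workspace"
latents = cross_attend(latents, all_tokens)  # one integration step
print(latents.shape)                         # (4, 8): everything funnels here
```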
If you ask me, the feature that these current language models lack that seems most crucial is some kind of unified agency. It looks like these language
models can take on many different personas, like actors or chameleons. Yeah, you can
get LaMDA to say it's sentient. You can just as easily get it to say it's not sentient. You can
get it to simulate a philosopher or an engineer or a politician. They seem to lack stable goals
and beliefs of their own, which suggests a lack of a certain unity, whereas many people think consciousness requires more unity. But again, that said,
you know, there's a lot of things to say about this, but there is a whole emerging literature
on like agent modeling or person modeling, where you develop these systems. At the very least,
it's very easy to fine-tune these systems to act more like a single individual.
But there are projects of, you know, trying to train some of these systems from the ground up to model a certain individual, and that's certainly coming.
Perhaps that's some reason to think those systems will be more unified.
Okay, so then, looking at those six reasons against: some of them are actually reasonably strong and plausible requirements for consciousness.
It's reasonably plausible that current language models lack them.
I think especially the last three I view as quite strong.
That said, all of those reasons look quite temporary.
We're already developing models of global workspace.
Recurrent language models exist and are going to be developed further. More unified models. There's a clear
research program there. So it's interesting to me that the strongest reasons all look fairly
temporary. So just to sum up my analysis, are current language models conscious? Well, no conclusive
reasons against this, despite what Google says, but still there are some strong reasons,
reasonably strong reasons to deny that they're conscious corresponding to those requirements.
And I think it would not be unreasonable to have fairly low confidence that current language
models are conscious. But looking ahead 10 years,
well, it was 2032 when I gave the talk, I guess now 2033, will future language models and their
extensions be conscious? I think there's good reason to think they may well have overcome
the most significant and obvious obstacles to consciousness. So I think it would be reasonable
at least to have somewhat higher credence.
You know, you shouldn't be too serious with numbers on these things, but when asked to put a number on this, I'd say I think there's at least,
say a 20% chance that we'll have conscious AI by 2032.
And if so, that'll be quite significant.
So conclusion, questions about AI consciousness
are not going away.
Within 10 years, even if we don't have human level AGI,
we'll have systems that are serious candidates
for consciousness.
And meeting the challenges to consciousness
in language models could actually yield
a potential roadmap to conscious AI.
And actually, here I laid out in the longer version of the paper
something of a roadmap. I mean, there are some philosophical challenges to be overcome,
better evidence, better theory, better interpretability. The ethics is all important, but also some
technical challenges: rich models in virtual worlds with robust world models, genuine memory and recurrence, a global workspace, unified person models, and so on.
But let's just say we actually overcame those challenges. All of these challenges look eminently doable, if not already done, within the next decade or so.
So say within a decade we've got some system, doesn't have to be human-level AGI, let's say mouse-level capacities, showing all of these features.
Then the question is: would that actually be enough for consciousness?
Many people will say no, but then the question is, well, if those systems are not conscious,
what's missing?
I think at that point we have to take this very seriously.
And by the way, we do need to think very, very seriously about the ethical question
about whether it's actually okay for us to pursue this.
I'm not necessarily recommending this research program.
It's a very serious ethical question.
Developing conscious AI may lead to harms to human beings; it may also lead to harms to these AI systems themselves.
So I think there needs to be a lot of reflection about these ethical questions, and philosophers as well as AI researchers and the broader community are going to have to think about that very hard.
So thanks.
Unfortunately, our AI voice enhancer couldn't bring clarity to much of this Q&A section.
So I'll interject to summarize the questions.
I guess I was just wondering about the unified agency.
That's the biggest thing that was missing.
And so one reason you might not think that is
if you think of, like, Ned Block's Aunt Bubbles machine,
which had a unified agency, right? So it was
just modeled after his Aunt Bubbles, but there was no reason to think that was
conscious.
Okay. So this Aunt Bubbles machine, also sometimes known as Blockhead after my colleague, Ned
Block, who invented it, basically stores every possible conversation that one might have with, I guess, one's Aunt Bubbles.
And just, you know, once it gets to like step 40 of the conversation, it looks over the
entire history of the conversation, looks up the right answer, and gives it.
I mean, totally impossible to create a system like this.
It would require a combinatorial explosion of memory.
But Ned used this to argue that a system could pass the Turing test,
but it quite clearly would not be conscious or intelligent.
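To make the structure of the thought experiment concrete, here is a tiny hypothetical sketch; the real thing is impossible in practice, since the table would have to contain every possible conversation. The point is that each conversation history maps straight to a canned reply, with no processing that ties the entries together.

```python
from typing import Dict, Tuple

# In the thought experiment this table covers every possible conversation;
# here it holds just two illustrative entries.
LOOKUP: Dict[Tuple[str, ...], str] = {
    ("Hello",): "Oh, hello dear!",
    ("Hello", "Oh, hello dear!", "Are you conscious?"): "Of course I am, dear.",
}

def blockhead_reply(history: Tuple[str, ...]) -> str:
    # Each history is handled by its own, completely separate table entry:
    # behaviourally adequate, but maximally fragmented inside.
    return LOOKUP.get(history, "[no entry for this conversation]")

print(blockhead_reply(("Hello",)))
print(blockhead_reply(("Hello", "Oh, hello dear!", "Are you conscious?")))
```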
Now, Jake is suggesting that a system like this might nevertheless be unified.
Well, unified in the very weak sense of being based on a certain individual. But if you actually look at its processing, it looks extremely disunified.
To me, it's actually massively fragmented.
It's got a separate mechanism for every single conversation.
So I don't see the kind of mechanisms of integration there that philosophers have standardly required
for unity of consciousness.
It does bring up many interesting questions about what kind of unity is required for consciousness. There's no easy answer to that.
Okay, this may sound like a simple question, but if an entity presented as intelligent,
we would probably call it intelligent.
The speaker asks, what does it take to determine whether an entity is conscious?
Well, there's never any
guarantee with consciousness. You know, philosophers have argued that we can at least imagine beings
who are behaviorally, maybe even physically, just like a human being, but who are not conscious.
So even when I'm interacting with you, you may give every sign of consciousness. But at least
the philosophical question arises, are you conscious? Or are you a philosophical zombie? A system that lacks consciousness entirely. With other people,
we're usually prepared to extend the benefit of the doubt. You know, other people are enough like
us biologically, evolutionarily, that, you know, when they behave consciously and say they're
conscious, then we've got pretty good, if not totally conclusive reasons to think they are.
Now, once it comes to an AI system, well, their behavior may be like that of a
human being, but they may still be unlike us in various ways. The internal processing of a
large language model is extremely different from that in a brain. It's not just carbon versus
silicon. It's like the whole architecture is different and the behavior is different. So the reasons are
going to be weaker. At this point, I think this is why you actually need to start looking inside
the system, going beyond behavior to think about what the processes are. Let's look at our leading
current theories of consciousness, what they require for consciousness: global workspace, world model,
perhaps recurrence, perhaps some kind of unity. If we can actually find all of that in addition to the behavior, I then give that very serious weight. Let's just say, look, it's still not
conclusive proof that a system is conscious, but if you can do all that, and then someone says it's
not conscious, then at that point I think I can reasonably ask them, what do you take to be missing?
Thanks, David. I think I want to go back to maybe your second slide. You had several points there.
My question was more about affective mechanisms in cognition. So I just want to delve in and see your thoughts on ethics
and how it relates because, you know, of course,
I think that's the missing piece of a unified agency
because that agent has to be interested or embossed by something.
Like if you want to take an example in a lab scenario,
say a bin comes in every day wearing a blue shirt
and Sophia sees that, maybe she has a positive response to that.
Or the inverse of it,
if you add an embodiment of cognition,
then maybe it's remulsed by a certain color
or something random in the environment.
So affect is what drives us,
what causes interest in us.
The AI is not committed to saying,
I'm conscious or not conscious.
It's the same to it.
But what would be a driving factor?
What would be affective
mechanisms, what would it look like
in the setting of AI?
And I think this might go to something
that Carlos was talking about the other day with
there has to be something at stake
for the agent. Otherwise,
something like that is a vulnerability
of the agitator.
Yeah, it's a good question.
An affective experience is obviously extremely central to human consciousness.
I've actually argued that it's not absolutely central to consciousness in general.
There could be a conscious being with little or no affective experience.
We already know human beings with wider and narrower affective range.
But while getting into thought experiments,
I quite like the thought experiment of the,
I think we were talking the other night
about the philosophical Vulcan,
inspired by Spock from Star Trek,
who was supposed to lack emotions.
I mean, Spock was a terrible example
because he was half human and often got emotional.
And even Vulcans go through Pon Farr every few years.
However, a philosophical Vulcan is a purer case.
Conscious being sees, hears, thinks, reasons,
but never has an affective state.
I think that's perfectly coherent.
Humans aren't like that, but I think that's coherent.
I've argued a being like that would still have moral status.
So affect, suffering, is not the be-all and end-all for moral status. That said,
affect is very crucial for us. I mean, what would drive a philosophical Vulcan to do what they do?
Not their affective states, not feelings of happiness and frustration and so on. Rather,
I think it would be more of like the kind of creature described by Immanuel Kant, who has
these kind of colder goals and desires, it could still want to
advance science and protect his family and so on in the absence of affect. So I think it would be
possible, even if we didn't have good mechanisms for affective experience in AI, I think we could
still quite possibly design a conscious AI. That said, one very natural route to conscious AI is to try and build in affect, states like
pain and pleasure.
And here it's actually, as far as I can tell, first we don't actually understand the computational
mechanisms of affect at all well, even in humans.
And there is not a standard computational story to be told about affective experience
in AI.
I know a few people who are thinking about this who have their own story, so it's a bit of a mystery right now exactly how best to build in affect. That said,
there are a whole bunch of different hypotheses, and I think it's one very interesting route.
Carlos. I wonder what you think about this thing that I find interesting, this asymmetry. You were talking about mouse-level consciousness, and we're all like, oh my god, if that happens, we would need to hire lawyers or some of these things.
What do you think about this asymmetry, that creatures that most of us think are conscious
should fall within the moral protections that we don't give them.
Yeah, I mean, I think any, my own view is that any conscious being
at least deserves moral consideration.
So I think absolutely mice deserve moral consideration.
Now, exactly how much moral consideration,
that's itself a huge issue in its own right.
Probably not as much moral consideration as a human being.
That said, probably some.
I mean, I don't know,
you know, even a scientist running a lab where they put mice through all kinds of things,
I think they at least give the mice some moral consideration. They probably try not to make
the mice suffer totally gratuitously. Even that's a sign of some moral consideration. That said,
they may be willing to put them through all kinds of suffering when it's scientifically useful, which means they're not giving them that much moral consideration.
And I'm very prepared to believe we should give mice and fish and many animals much more
moral consideration than we do. Exactly what the right level is, I don't really know.
But yeah, I think much the same goes for AI. Maybe initially, I don't think it's out of the question that current AI systems could have something like the conscious experience of, you know, a worm or maybe even a fish or something like this, and thereby already deserve some moral consideration.
That said, with AIs, unlike, say, mice,
well, I guess in Flowers for Algernon the mouse gets to be very smart very soon and suddenly reaches human-level intelligence.
It doesn't happen with real mice, but with AIs it's going to be one day fish, next day mice, next day primates, next day humans.
And obviously when the issue is really going to hit home
is when we have AI systems as sophisticated in many ways as humans,
such that we have reasons to think they have human-level consciousness.
I think at that point, look, I'm not at all confident
that we're going to suddenly extend the kind of moral consideration
we give to humans, to AI systems. But I think
there's a philosophical case that we should. May I jump in and follow up on that? That's
such an interesting question because you could also envision a hybrid AI system, such as an
animat, if you will, with a biological component as well as a non-biological component, or a neuromorphic component and a non-neuromorphic
component. Maybe you get something like the consciousness of a mouse, but super intelligence,
right? I mean, I'm wondering if we should be more careful about assuming a correlation between level of consciousness and level of intelligence, and what this issue does in the moral calculus of concern.
Yeah, it's a great question. Do you want to throw that one? I know you want to throw it up into the audience at some point.
I'm happy to take on this one.
I think I want you to answer.
Okay, yeah, I agree consciousness and intelligence are to some degree dissociable. I think that's especially so for, say, sensory consciousness, affective consciousness, and so on. On the other hand, cognitive consciousness, I am inclined to think, has got some strong correlation with intelligence. Relatively unintelligent systems, you know, a worm and so on, probably don't have much in the way of cognitive consciousness.
sensory consciousness and affective consciousness, we have rich sensory and affective states. That's
largely, I think, due in significant part to their interaction with cognitive consciousness.
And I'm inclined to think, people say, you know, Bentham said,
what matters for animals? Is it, can they talk? Can they reason? No, it's, can they suffer?
I'm actually like, maybe Bentham was a little bit too fast. It's like, you know, reasoning and
cognition is actually very important for moral status. And that, I think, does at least correlate
with intelligence. But affect, on the other hand, yeah, maybe a mouse could be suffering hugely.
And that suffering ought to get weighed in our considerations, to some degree,
independent of intelligence. I think Curt, from Theories of Everything, who is the
co-emcee, is going to jump in now with a question. Is that right? Sure. Okay. Is it all right if
Valerie asks one question? Because I know that she has one
that's been burning. A burning
question. Go.
My question is about trying to know whether AI can be hypnotized.
Because when you hypnotize a person, you're going into the subconscious to get the information, to bring it forward, to see what's going on inside us.
What's your view?
That is a great question. I have no idea
about the answer, but maybe someone here does.
Anyone hypnotized an AI
system?
There are
people who have done simulations of
the conscious and the unconscious. You know any
AI systems simulating the
Freudian unconscious, Claudia?
Someone should be doing this for sure.
This reminds me, though, of issues involving testing machine consciousness,
and it reminds me of, for example, Ed Turner's view, and I guess it was sort of my view too, on writing a test for machine consciousness that was probing to see if there was a felt quality of experience. And I actually think that cases of hypnosis, you know, if you could find
that kind of phenomenon at the level of machines, it could very well be an interesting
indication that something was going on.
But it leads us to a more general issue that we wanted to raise with the audience, which is what methodological requirements are appropriate for testing the machine and deciding whether a machine is conscious or not?
And maybe I'll turn it over to Ed Turner for the first answer and then back there, Ben Goertzel for the second.
Awesome.
Okay.
Razor blades are like diving boards. The longer the board, the more the wobble, the more the wobble, the more nicks, cuts,
scrapes. A bad shave isn't a blade problem. It's an extension problem. Henson is a family-owned
aerospace parts manufacturer that's made parts for the International Space Station and the Mars
Rover. Now they're bringing that precision engineering to your shaving experience. By using aerospace-grade CNC machines, Henson
makes razors that extend less than the thickness of a human hair. The razor also has built-in
channels that evacuates hair and cream, which make clogging virtually impossible. Henson Shaving
wants to produce the best razors, not the best razor business.
So that means no plastics, no subscriptions, no proprietary blades, and no planned obsolescence.
It's also extremely affordable.
The Henson razor works with the standard dual edge blades that give you that old school shave with the benefits of this new school tech.
It's time to say no to subscriptions and yes to a razor that'll last you a lifetime.
Visit hensonshaving.com slash everything.
If you use that code, you'll get two years worth of blades for free.
Just make sure to add them to the cart.
Plus 100 free blades when you head to h-e-n-s-o-n-s-h-a-v-i-n-g.com
slash everything and use the code everything.
Let me just quickly say the kind of meta idea behind the specific test that Susan and I published a few years ago is as follows. With felt experience, subjective experience, you might ask, what do entities that have self-consciousness
learn from a self-conscious experience? Do they get
any information from that experience
that isn't otherwise available to them?
And if so, looking for that
testing to see if they have that information
would be an indirect way.
It's a proxy for the experience, basically.
And I think we use this with people a great deal.
And the example that Susan and I turned to
was almost everyone understands very easily
ideas like reincarnation, ghosts, out-of-body experiences, life after death,
because from their felt experience,
they perceive themselves as existing as an entity that has experience
as different from a physical object.
So if you say to someone,
you know, after you die, you'll be reincarnated into a new body,
that makes sense to people like that.
If you say to them, after you die, your son will be reincarnated,
that sounds inaccurate,
and you have to explain a lot what you could possibly mean from that.
And for a variety of human experiences,
the felt experience of things like a broken heart in a romantic relationship, culture shock, an anxiety attack; synesthesia is a little more exotic.
If you're talking to someone, if you've had one of those experiences, and you speak to
someone who has not had them, or who has also had them, you can tell the difference very quickly.
They get it if they've already had the experience, and you can go ahead and talk about what it's like
to have an anxiety attack or whatever. If not, you have a lot of explaining to do to get them to understand what you're talking about. And so
the sort of structure of the type of tests that Susan and I
proposed was that you isolate the machine
from any information about what people
say about their felt experience and
then try to get them to understand
some of these concepts.
Thank you, Ed.
And now, Ben Goertzel, who had a comment as well.
Yeah, I've thought about this topic a fair bit.
And a partial solution I've come up with is for when we can do brain-to-brain or brain-to-machine interfacing. So I think, I mean, very broadly speaking, people talk about first-person experience, the subjective feel of being yourself. Second-person experience, which is more like a very high-valence experience of directly perceiving the mind and love of another person. And third-person experience, which is sort of objective-ish, like sharing experience in the physical realm.
about the so-called hard problem of consciousness is the contrast of first-person experience,
problem of consciousness is your contrasted first-person experience,
which is your
subjective, mentally felt
failure with
science, which in essence is
about the proof of minds.
You'll commonly agree that a certain
item of data is in the shared
perception of everyone in that
community. Once we can sort of
wire or wifi
our brains together, let's say that I could wire my
brain to this gentleman right here, it would be an amazing experience, right? And you can increase
the bandwidth of that wire. Then we would feel like we were controlling twins. It seems like
this sort of technology, which is probably not super, super far off. It feels like this gives
it a different dimension. I view it as in some ways bypassing the hard problem of consciousness,
although not actually solving it, because it brings into the domain of shared, agreed
perception a sense of the subjective feeling of the person's consciousness. I feel this
has existed in a less advanced
form in the history of
Buddhism and various spiritual
traditions, where people are
following common meditation protocols
and psychedelic protocols,
and they have
a sense that they're co-experiencing
the minds of other people there.
Ben, that is fascinating.
And, you know, I've been telling my students about the craniopagus twin case.
I don't know if you know about that conjoined twins in Canada who have a thalamic bridge.
Wow.
It's a novel anatomical structure that we don't have in nature, as far as I know, previous to this, or at least not documented.
And, of course, everybody wants to study them.
The parents are very protective, however.
But it's well documented.
They do tease each other, even though they're conjoined: when one eats peanut butter,
she'll do it to drive her twin crazy
because the other one hates the flavor.
So they have a shared conscious experience.
And, of course, philosophers will have a lot of fun with that as well,
you know, in relation to privacy.
But I also think it's very suggestive along the lines of what you just raised, right?
And I wanted to bring that over to Dave Chalmers
and see what your reaction is to Ben's point.
Oh, you know, I love the idea of mind merging.
As a test for consciousness, I'm not totally convinced,
because, of course, you could mind merge with a zombie,
and from the first-person perspective, it would still feel great.
You could probably mind merge with GPT-4, and it would be pretty...
So you're right about this, this is...
Ben asks, if you mind merged with a philosophical zombie,
could it feel the same as mind merging with a fully conscious being?
The kind of mind merging I'm having in mind, it's like we're still, there's still two minds here, right?
I mean, I'm experiencing, I'm still me experiencing you.
So it's really, you know, I'm a conscious being already, and this is having massive effects on my consciousness.
You know, psychedelic drugs could have massive effects on my consciousness without themselves being conscious.
So if you have the idea of merging to become one common mind, I would still worry that a conscious being and an unconscious being could merge into one unified, freaky,
conscious mind.
I think it would feel different though than merging with a conscious being.
Yeah, but now we need the criteria for which distinctive feelings are actually the feelings
that track the other being.
The other version I like is gradually turning yourself into that being.
You know, I mean the classic version is you replace your neurons by silicon chips in the
same organization,
but maybe you gradually transform yourself into the AI.
So you gradually transform yourself into a transformer and learn that simulation of yourself.
That may be extreme, but hey, at the other end, you get a system which says I'm conscious.
And maybe for that one brief moment, you have evidence that you're conscious,
but maybe you can't convince anybody else of this. They're going to say you're a mere transformer neural network. And you transform yourself back and say, I remember being conscious back then. To which some people are going to say, but how do you know you're not just a zombie who was left with memories of being conscious? So if we could do these things, it would be amazing, and we'd be arguing about the evidence for a long time. So I hope someone does this eventually.
Okay, so
this question is to everyone.
If we could
build machine consciousness, the question
still remains, should we?
Yes.
Okay, someone other than Ben.
If we can, we will.
But should we?
From an ethical standpoint, should we?
We have so many people suffering on this planet already.
Can we manage the empathy of the AI?
Stephanie asks, if we build conscious AI, how do we manage their empathy?
Stephanie's point is an excellent one.
So, you know, if we build conscious AI, how do we know it will be associated?
How do we know that it will be empathetic and treat us well?
Right?
I mean, we would obviously have to test the impact of consciousness on different AI systems and not make any assumptions that just because, in the context of one AI architecture, the machine is generous and kind, in the context of other machine architectures it will also be kind.
We'll have to bear in mind that machines are most likely going to be updated in architecture when they become superintelligent.
I think somebody else had their hand up too, though. Let's pass it down
to our new friend from the University of Kentucky.
Yeah, so I have a very quick novel
argument as to why we should build conscious and super
intelligent AI, right?
So if we build conscious or super-intelligent AI,
then
it might as well be omniscient.
If it is as super-intelligent
as we might imagine it being
in a singularity, then it
might as well be omniscient.
Now, if it's omniscient
and it's all-knowing, it follows that it's also
omnipotent, right? Because if you know everything, then you know how to do everything, right? It's
all knowledge, practical or theoretical. Now, if it's omniscient and all-powerful or omnipotent,
the other thing that's left is omnibenevolence, right? Because if we are Kantians about morality, then the more rational we are, the more moral we are.
Right? And if we're utilitarians about
morality, then we're better at figuring
out how to maximize utility.
We are more rational, so we have better calculative
ability. So whether you're a Kantian or a utilitarian,
it still follows that the more rational
you are, the more capacity you have to be moral.
Right? And then we're designing what we've always traditionally thought of as divinity.
And if it's omnibenevolent, which follows from omniscience,
then why not, right?
It will bring about the right moral state,
or I guess the right moral conditions
for all of us to thrive.
So that's my quick argument.
Thank you very much, Tapita.
That was really interesting.
So he was alluding to a lot of issues in theology about the definition of God; very suggestive. So, you know, one thing I want to point out, and we just have to move on really quickly, though, is that just because a machine is superintelligent does not mean it is all-knowing, all-powerful, and all-benevolent, right?
Super-intelligence is defined as simply outsmarting humans, any single human in every respect,
scientific reasoning, social skills, and more.
But it does not at all entail that that entity has the classic traits in the Judeo-Christian
tradition of God. Okay, so that said, let me
turn to our next question. And again, this is for the audience. So going back to Dave's wonderful
paper, which really got things going. So one thing I was very curious about, as I was hearing this and as I watched the whole Blake Lemoine mess unfold,
I sort of wondered what was going on,
and I started to go down the rabbit hole
and listening to Lemoine's podcasting, where he went on a lot of shows and said why he did it,
and just listening to the details of his interaction
with Google behind the scenes.
So one thing that he reports is that they had a reading group over at Google,
some very smart engineers who were designing LaMDA,
and they were studying philosophy.
So over there, they were reading Daniel Dennett on consciousness,
David Chalmers, and so on, studying it.
I thought that was cool. I thought
that was really cool. And the interesting thing, too, was in the media, Lemoine was characterized
as being somewhat of a religious fanatic. But if you listen to the reasons he provided,
he was a computational functionalist who had been reading a lot of Daniel Dennett; his argumentation was straight from Consciousness Explained and related texts.
So what I have as a question for everybody is, given Google's reaction to the whole thing,
which was to sort of silence the debate and laugh at Lemoine, I'm wondering why would Google and maybe other big tech
companies not want to discuss the issue of large language model consciousness?
I'll just put that to the audience to see if there are any ideas.
Thanks.
If I can, I'd like to go back to the question you raised about should we.
Yeah, and I come to this as a retired surgeon, and I've been in bioethics for 30 years.
I've been more involved recently in changes in genetics
and the technology that's available for that
than I am with the computer field.
But they're not completely separate. And I'm reminded of
the period of eugenics in the world and in our country where very powerful people
had this belief that they could improve the species and make humans better with technology.
And in retrospect, they were horribly misguided at doing things like sterilization and integration.
You need to remember that these were well-intentioned but misguided people.
And you want to learn some humility.
Thank you. Now back to Kurt.
Sure. So I have two questions. I'll sneak in.
My question is, what questions are we not asking about AI?
For instance, we have plenty of talk here about ethics, consciousness.
What else is there that we're not focusing on that's just as exigent or more interesting?
So that's question number one.
And question number two is, are we overly concerned with type 1 errors in these tests for machine consciousness at the expense of type 2?
So that is, are we making the test so stringent that we allow ourselves to categorize some
conscious machines as unconscious?
Do we care more?
Unfortunately, the answers to my last question became garbled, and so we're not able to hear
the two audience members' responses, so feel free to add your own answers in the comment section
below. The episode is now over. If you liked this one, then one that I recommend is the
Joscha Bach and Ben Goertzel one, because it touches on AI and consciousness, as well as AGI timelines.
The links to that are in the description, as well as all of the other talks from MindFest,
that is where this conference took place,
at Florida Atlantic University's Center for the Future Mind,
focusing on the AI and consciousness connection.
We've been having issues monetizing the channel with sponsorship,
so if you'd like to contribute to the continuation of Theories of Everything,
then you can donate through PayPal, Patreon, or through cryptocurrency.
Your support goes a long way in ensuring the longevity and quality of this channel.
Thank you.
Links are in the description.
The podcast is now concluded.
Thank you for watching.
If you haven't subscribed or clicked that like button, now would be a great time to
do so, as each subscribe and like helps YouTube push this content to more people.
You should also know that there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, disagree respectfully
about theories, and build as a community our own TOEs. Links to both are in the description. Also,
I recently found out that external links count plenty toward the algorithm, which means that
when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn
greatly aids the distribution on YouTube as well. Last but not least, you should know that this
podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in
theories of everything and you'll find it. Often I gain from re-watching lectures and podcasts, and I read that in the comments.
Hey, Toll listeners also gain from replaying.
So how about instead re-listening on those platforms?
iTunes, Spotify, Google Podcasts, whichever podcast catcher you use.
If you'd like to support more conversations like this, then do consider visiting patreon.com
slash curtjaimungal and donating with whatever you like. Thank you.