Lex Fridman Podcast - #103 – Ben Goertzel: Artificial General Intelligence
Episode Date: June 22, 2020

Ben Goertzel is one of the most interesting minds in the artificial intelligence community. He is the founder of SingularityNET, designer of the OpenCog AI framework, formerly a director of the Machine Intelligence Research Institute, and Chief Scientist of Hanson Robotics, the company that created the Sophia robot. He has been a central figure in the AGI community for many years, including in the Conference on Artificial General Intelligence.

Support this podcast by supporting these sponsors:
- Jordan Harbinger Show: https://jordanharbinger.com/lex/
- MasterClass: https://masterclass.com/lex

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 - Introduction
03:20 - Books that inspired you
06:38 - Are there intelligent beings all around us?
13:13 - Dostoevsky
15:56 - Russian roots
20:19 - When did you fall in love with AI?
31:30 - Are humans good or evil?
42:04 - Colonizing Mars
46:53 - Origin of the term AGI
55:56 - AGI community
1:12:36 - How to build AGI?
1:36:47 - OpenCog
2:25:32 - SingularityNET
2:49:33 - Sophia
3:16:02 - Coronavirus
3:24:14 - Decentralized mechanisms of power
3:40:16 - Life and death
3:42:44 - Would you live forever?
3:50:26 - Meaning of life
3:58:03 - Hat
3:58:46 - Question for AGI
Transcript
The following is a conversation with Ben Goertzel, one of the most interesting minds in
the Artificial Intelligence community.
He's the founder of SingularityNET, designer of the OpenCog AI framework, formerly a director
of research at the Machine Intelligence Research Institute, and Chief Scientist of Hanson
Robotics, the company that created the Sophia robot.
He has been a central figure in the AGI community for many years, including in
his organizing and contributing to the Conference on Artificial General Intelligence, the 2020
version of which is actually happening this week, Wednesday, Thursday, and Friday. It's
virtual and free. I encourage you to check out the talks, including by Joscha Bach from episode
101 of this podcast.
Quick summary of the ads, two sponsors,
the Jordan Harbinger Show and Masterclass.
Please consider supporting this podcast
by going to JordanHarbinger.com slash Lex
and signing up at masterclass.com slash Lex.
Click the links, buy all the stuff.
It's the best way to support this podcast
and the
journey I'm on in my research and startup.
This is the Artificial Intelligence Podcast.
If you enjoy it, subscribe to my YouTube, review it with five stars on Apple Podcasts,
support it on Patreon, or connect with me on Twitter at Lex Fridman, spelled without
the E, just F-R-I-D-M-A-N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break
the flow of the conversation.
This episode is supported by the Jordan Harbinger Show.
Go to JordanHarbinger.com slash Lex, it's how he knows I sent you.
On that page, there's links to subscribe to it on Apple Podcasts, Spotify, and everywhere
else.
I've been binging on his podcast.
Jordan is great.
He gets the best out of his guests, dives deep, calls them out when it's needed, and makes
the whole thing fun to listen to.
He's interviewed Kobe Bryant, Mark Cuban, Neil deGrasse Tyson, Garry Kasparov, and many
more.
His conversation with Kobe is a reminder of how much focus and hard work is required for
greatness in sport, business and life.
I highly recommend the episode if you want to be inspired.
Again, go to jordanharbinger.com slash lex.
It's how Jordan knows I sent you.
This show is sponsored by Masterclass.
Sign up at masterclass.com slash Lex to get a discount
and to support this podcast. When I first heard about Masterclass, I thought it was too
good to be true. For 180 bucks a year, you get an all-access pass to watch courses from,
to list some of my favorites, Chris Hadfield on space exploration, Neil deGrasse Tyson
on scientific thinking and communication, Will Wright, creator of the greatest city-building game ever, SimCity, and The Sims, on game design,
Carlos Santana on guitar, Garry Kasparov, the greatest chess player ever, on chess, Daniel
Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being
launched into space alone is worth the money. Once again, sign up at masterclass.com slash
Lex to get a discount and to support the podcast. And now, here's my conversation with Ben Goertzel.
What books, authors, ideas had a lot of impact on you in your life in the early days?
You know, what got me into AI and science fiction and such in the first place wasn't a book,
but the original Star Trek TV show, which my dad watched with me, like, in its first run.
It would have been 1968, 69 or something, and that was incredible, because every
show visited a different alien
civilization with a different culture and weird mechanisms. But that got me into
science fiction, and there wasn't that much science fiction to watch on TV at that stage. So that
got me into reading the whole literature of science fiction, you know, from the
beginning of the previous century until that time, and there were so many
science fiction writers who were inspirational to me. I'd say if I had to pick two, it would have been
Stanisław Lem, the Polish writer. Yeah, well,
Solaris, and then he had a bunch of more obscure writings on superhuman AIs that were engineered. Solaris was sort of a superhuman,
naturally occurring intelligence.
Then Philip K. Dick, who ultimately my fandom
for Philip K. Dick is one of the things
that brought me together with David Hanson,
my collaborator on robotics projects.
So, you know, Stanisław Lem was very much an intellectual, right? So he had a very broad view of intelligence
going beyond the human and into what I would call, you know, open-ended superintelligence. The
Solaris superintelligent ocean was intelligent in some ways more generally intelligent than people, but in a complex and
confusing way so that human beings
could never quite connect to it, but it was still probably very, very smart. And then the
Golem XIV supercomputer in one of Lem's books. This was engineered by people,
but eventually it became very intelligent in a different direction than humans and decided that humans were kind
of trivial and not that interesting. So it put some impenetrable shield around itself,
shut itself off from humanity and then issued some philosophical screed about the pathetic
and hopeless nature of humanity and all human thought and then disappeared. Now Philip K. Dick, he was a bit different. He was human focused, right? His main thing was, you know, human compassion and the human heart and soul are going to be the constant that will keep us going through whatever aliens we discover or telepathy machines or super AIs or whatever it might be.
So he didn't believe in reality, like the reality that we see may be a simulation
or a dream or something else we can't even comprehend, but he believed in love and compassion.
It's something persistent through the various simulated realities.
So those two science fiction writers had a huge impact on me. Then, a little older than that, I got into Dostoevsky and Friedrich Nietzsche and Rimbaud
and a bunch of more literary type writings.
Let's talk about some of those things.
So on the Solaris side, Stanisław Lem, this kind of idea of there being intelligences
out there that are different than our own.
Do you think there are intelligences maybe all around us that we're not able to even detect?
So this kind of idea of maybe you can comment also on Stephen Wolfram thinking that there's
computations all around us and we're just not smart enough to kind of detect their intelligence or appreciate their intelligence.
Yes, so my friend Hugo de Garis, who I've been talking to about these things for many decades, since the early 90s,
he had an idea he called SIPI, the search for intra-particulate intelligence.
So the concept there was, as AIs get smarter and smarter and smarter,
you know, assuming the laws of physics as we know them now are still what these
superintelligences perceive and are bound by, as they get smarter and smarter, they're
going to shrink themselves littler and littler, because special relativity makes it, you know, sort of, slow to
communicate between two spatially
distant points.
So they're going to get smaller and smaller.
But then ultimately, what does that mean?
The minds of the super, super, super intelligences, they're going to be packed into the interaction
of elementary particles or quarks or the partons inside quarks or whatever it is.
So what we perceive as random fluctuations
on the quantum or sub quantum level
may actually be the thoughts of these micro-
miniaturized superintelligences.
Because there's no way we can tell random
from structured when it has algorithmic information
more complex than our brains, right?
We can't tell the difference.
So what we think is random
could be the thought processes of some really tiny super minds. And if so, there is not a damn thing we can
do about it, except, you know, try to upgrade our intelligence and expand our minds so
that we can perceive more of what's around us.
But if those random fluctuations, like even if we go to quantum mechanics, if that's actually superintelligent
systems, aren't we then part of the superintelligence?
Aren't we just like a finger of the entirety of the body of the superintelligent system?
It could be. I mean, a finger is a strange metaphor.
I mean, we...
Well, a finger is dumb is what I mean.
I think we're dumb as well.
I mean...
But a finger is also useful and is controlled with intent by the brain, whereas
we may be much less than that, right?
I mean, yeah, we may be just some random epiphenomenon that they don't care about too much.
Like, think about the shape of the crowd emanating from a sports stadium or something, right? There's some emergent shape to the crowd. It's there. You could take a picture
of it. It's kind of cool. It's irrelevant to the main point of the sports event or where the
people are going, or what's on the minds of the people making that shape in the crowd, right? So we
may just be some semi-arbitrary higher-level pattern popping out of a lower-level
hyperintelligent self-organization.
And I mean, so be it, right?
I mean, that's one thing that's still fun, right?
Yeah, I mean, the older I've gotten, the more respect I've gained for our fundamental
ignorance.
I mean, mine and everybody else's.
I mean, I look at my two dogs, two beautiful
little toy poodles, and they watch me sitting at the computer typing. They just think I'm sitting
there wiggling my fingers to exercise, and maybe regarding the monitor on the desk. They have
no idea that I'm communicating with other people halfway around the world, let alone creating
complex algorithms running in
RAM on some computer server in St. Petersburg or something, right?
Although they're right there in the room with me.
What things are there right around us that we're just too stupid or close-minded to comprehend?
Probably quite a lot.
Your very poodle could also be communicating across multiple dimensions
with other beings, and you're too unintelligent to understand the kind of communication mechanism
they're going through.
There have been various TV shows and science fiction novels positing that cats, dolphins,
mice, and whatnot are actually superintelligences here to observe us. I would guess, as one
or the other of the quantum physics founders said, those theories are not crazy enough to be true;
the reality is probably crazier than that. Beautiful. So on the human side,
with Philip K. Dick and in general, where do you fall on this idea that love
and just the basic spirit of human nature
persists throughout these multiple realities?
Are you on the side, like, the thing that inspires
you about artificial intelligence,
is it the human side, of somehow persisting
through all of the different systems we engineer, or
has AI inspired you to create something that's greater than human, that's beyond human,
that's almost non-human?
I would say my motivation to create AGI comes from both of those directions, actually.
So when I first became passionate about
AGI, when I was, it would have been two or three years old, after watching robots on Star Trek,
I mean, then it was really a
combination of intellectual curiosity, like, can a machine really think, how would you do that, and, yeah,
just ambition to create something much better than all the clearly limited and fundamentally defective humans I saw around me.
Then as I got older and got more enmeshed in the human world, got married, had children, saw my parents begin to age,
I started to realize, well, not only will AGI let you go far beyond the limitations of the human, but it could also, like, stop us from dying and suffering and feeling pain and tormenting ourselves mentally.
So you can see AGI has amazing capability to do good for humans as humans alongside with its capability to go far, far beyond the human level. So, I mean, both aspects are there,
which makes it even more exciting and important.
So you mentioned Dostoevsky and Nietzsche.
What did you pick up from those guys?
I mean, that would probably go beyond the scope
of a brief interview, certainly.
But both of those are amazing thinkers
who one will necessarily have a complex
relationship with, right? So, I mean, Dostoevsky, on the minus side, he's kind of a religious
fanatic, and he sort of helped squash the Russian nihilist movement, which was very interesting,
because what nihilism meant originally, in that period of the mid-to-late 1800s in Russia, was not
taking anything fully 100% for granted. It was really more like what we'd call Bayesianism now, where
you don't want to adopt anything as a dogmatic certitude and always leave your mind
open. And how Dostoevsky parodied nihilism was a bit different. He parodied it as people who believe absolutely nothing,
so they must assign an equal probability weight to every proposition,
which doesn't really work.
So on the one hand, I didn't really agree with Dostoevsky
on his sort of religious point of view.
On the other hand, if you look at his understanding
of human nature and sort of the human mind
and heart and soul, it's really unparalleled.
And he had an amazing view of how human beings
construct a world for themselves
based on their own understanding
and their own mental predisposition.
And I think if you look at The Brothers Karamazov in particular, the Russian literary
theorist Mikhail Bakhtin wrote about this as a polyphonic mode of fiction, which means
it's not third person, but it's not first person from only one person, really.
There are many different characters in the novel, and each of them is sort of telling part of the story
from their own point of view.
So the reality of the whole story is an intersection
like synergetically of the many different characters'
worldviews and that really, it's a beautiful metaphor
and even a reflection, I think of how all of us
socially create our reality.
Like each of us sees the world in a certain way. Each of us in a sense is making the world,
as we see it, based on our own minds and understanding, but it's polyphony. Like in music,
where multiple instruments are coming together to create the sound. The ultimate reality that's
created comes out of each of our subjective understandings, you know, intersecting with each other. And that
was one of the many beautiful things in Dostoevsky.
So, maybe a little bit of a tangent, but you mentioned you have a connection to Russia and Soviet culture.
I mean, I'm not sure exactly what the nature of the connection is, but there's at least
the spirit of your thinking there.
Oh, yeah.
Well, my ancestry is three-quarters Eastern European Jewish.
So, I mean, three of my great-grandparents emigrated to New York from Lithuania and sort of border regions of Poland,
which were in and out of Poland around the time of World War I. And they were
socialists and communists as well as Jews, mostly Mensheviks, not Bolsheviks, and they
fled at just the right time to the US for their own personal reasons. And then almost
all, or maybe all, of my extended family that remained in Eastern Europe was killed either by Hitler's or Stalin's minions at some point. So the branch of the family that
emigrated to the US was pretty much the only one.
So how much of the spirit of the people is in your blood still? Like, when you look in the mirror, what do you see?
I see a bag of meat that I want to transcend by uploading into some sort of superior reality.
Well put.
I mean, yeah, very clearly,
I mean, I'm not religious in a traditional sense, but clearly the Eastern European Jewish tradition
was what I was raised in.
I mean, my grandfather, Leo Zwell, was a physical chemist
who worked with Linus Pauling
and a bunch of the other early greats in quantum mechanics.
I mean, he was into x-ray diffraction.
He was on the material science side,
experimentalist rather than a theorist.
His sister was also a physicist. My father's
father, Victor Goertzel, was a PhD in psychology who had the unenviable job of giving psychotherapy
to the Japanese in internment camps in the US in World War II, like, to counsel them on
why they shouldn't kill themselves, even though they'd had all their stuff taken away and been imprisoned for no good reason. So, I mean, yeah, there's a lot of
Eastern European Jewish tradition in my background. One of my great uncles was, I guess,
conductor of the San Francisco Orchestra, so there's
a bunch of music in there also. And
clearly this culture was all about learning
and understanding the world,
and also not quite taking yourself too seriously
while you do it, right?
There's a lot of Yiddish humor in there.
So I do appreciate that culture,
although the whole idea that
the Jews are the chosen people of God never resonated with me too much.
The graph of the Goertzel family, I mean, just the people I've encountered, just doing some research and just knowing your work through the decades, it's kind of fascinating. Just the number of PhDs.
Yeah, yeah. I mean, my dad is a sociology professor who recently retired from Rutgers University,
and clearly that gave me a head start in life.
I mean, my grandfather gave me all these quantum mechanics books when I was like seven or eight
years old.
I remember going through them, and it was all the old quantum mechanics,
like Rutherford atoms and stuff. So I got to the part about wave functions, which I didn't
understand, although I was a very bright kid. And I realized he didn't quite understand
it either, but at least, like, he pointed me to some professor he knew at UPenn nearby
who understood these things, right? So that's an unusual opportunity for a kid to have, right?
My dad, he was programming Fortran when I was 10 or 11 years old,
on, like, HP 3000 mainframes at Rutgers University.
So I got to do linear regression in Fortran
on punch cards when I was in middle school,
because he was doing, I guess, analysis of demographic and sociology
data. So, yes, certainly that gave me a head start and a push towards science
beyond what would have been the case in many, many different situations.
When did you first fall in love with AI? Is it the programming side of Fortran? Is it maybe
the sociology and psychology that you picked up from your dad?
I fell in love with AI when I was probably three years old,
when I saw a robot on Star Trek.
It was turning around in a circle going,
error, error, error, error,
because Spock and Kirk had tricked it
into a mechanical breakdown by presenting it
with a logical paradox.
And that was just like, well, this makes no sense.
This AI is very, very smart.
It's been traveling all around the universe,
but these people could trick it with a simple logical paradox.
Like, if the human brain can get around that paradox,
why can't this AI?
So I felt the screenwriters of Star Trek
had misunderstood the nature of intelligence.
And I complained to my dad about it.
And he wasn't going to say anything one way or the other.
But you know, before I was born, when my dad was at Antioch
College in the middle of the US, he led a protest movement
called SLAM, Student League Against Mortality.
They were protesting against death, wandering across the campus.
So he was into some futuristic things, even back then, but whether AI could confront
logical paradoxes or not, he didn't know.
But, you know, 10 years after that or something, I discovered Douglas Hofstadter's book, Gödel, Escher, Bach,
and that was sort of to the same point of AI and paradox and logic, right?
Because he goes over and over Gödel's incompleteness theorem, and can an AI really
fully model itself reflexively, or does that lead you into some paradox?
Can the human mind truly model itself reflexively, or does that lead you into some paradox? So I think that book, Gödel, Escher, Bach, which I think I read when it first came out,
I would have been 12 years old or something. I remember it was like a 16-hour day.
I read it cover to cover and then reread it. I reread it after that because there were a lot of weird things with
little formal systems in there that were hard for me at the time. But that was the first book I read
that gave me a feeling for AI as, like, a practical academic or engineering discipline that people
were working in. Because before I read Gödel, Escher, Bach, I was into AI from the point of view of
a science fiction fan. And I had the idea, well, it may be a long time before we can
achieve immortality and superhuman AGI, so I should figure out how to build a
spacecraft traveling close to the speed of light, go far away, and then come back
to the Earth in a million years when technology is more advanced and we can build
these things. Reading Gödel, Escher, Bach, while it didn't all ring true to me, a lot
of it did, and I could see, like, there are smart people right now
at various universities around me who are actually trying
to work on building what I would now call AGI,
although Hofstadter didn't call it that.
So really, it was when I read that book,
which would have been probably middle school,
that I started to think, well, this is something
that I could practically work on.
Yeah, as opposed to flying away and waiting it out, you can actually be one of the
people that actually does it.
Yeah, exactly.
And if you think about it, I mean, I was interested in what we'd now call nanotechnology and
human immortality and time travel, all the same cool things as every other science
fiction loving kid.
But AI seemed like, if Hofstadter was right, you just have to figure out the right program, sit there and type it. You don't need to
spin stars into weird configurations or get government approval to cut people up and fiddle with their DNA or something.
It's just programming, and then, of course, that can
achieve anything else. There's another book from back then, which was by
Gerald Feinberg, who was a physicist at Princeton. And that was The Prometheus Project.
And this book was written in the late 1960s, though I encountered it in the
mid 70s. But what this book said is in the next few decades, humanity is going to create
superhuman thinking machines, molecular nanotechnology and human immortality. And then the
challenge we'll have is what to do with it. Do we use it to expand human consciousness in a
positive direction? Or do we use it just to further vapid consumerism
and what he proposed was that the UN should do a survey on this and the UN should send
people out to every little village in remote Africa or South America and explain to everyone
what technology was going to bring the next few decades and the choice that we had about
how to use it and let everyone on the whole planet vote about whether we should develop, you know, super AI, nanotechnology and immortality
for expanded consciousness or for rampant consumerism.
And needless to say, that didn't quite happen.
And I think this guy died in the mid-80s, so he didn't even see his ideas start to become
more mainstream.
But it's interesting, many of the themes I'm engaged with now,
from AGI and immortality
even to trying to democratize technology,
as I've been pushing forward with SingularityNET
in my work in the blockchain world,
many of these themes were there in, you know,
Feinberg's book in the late 60s even. And of course, Valentin Turchin, a Russian writer
and a great Russian physicist who I got to know when we both lived in New York in the
late 90s and early aughts, I mean, he had a book in the late 60s in Russia, which was The Phenomenon
of Science, which laid out all these same things as well.
And Val died in, I believe, 2004 or 2005 or something, of Parkinson's.
So, yeah, it's easy for people to lose track now of the fact that the futurist and
singularitarian advanced technology ideas
that are now almost mainstream and are on TV all the time,
I mean, these are not that new, right?
They're sort of new in the history of the human species,
but I mean, these were all around in fairly mature form
in the middle of the last century,
written about quite articulately
by fairly mainstream people who were professors
at top universities. It's just until the enabling technologies got to a certain point, then
you couldn't make it real. So even in the 70s, I was sort of seeing that and living through
it right from Star Trek to Douglas Hofstater,
things were getting very, very practical
from the late 60s to the late 70s.
And the first computer I bought,
you could only program with hexadecimal machine code,
and you had to solder it together.
And then, like, a few years later, there were punch cards,
and a few years later, you could get, like, an Atari 400
or a Commodore VIC-20, and you could
type on the keyboard and program in higher-level languages alongside the assembly language.
So these ideas have been building up a while, and I guess my generation got to feel them
build up, which is different than people coming into the field now, for whom these things have just been
part of the ambiance of culture for their whole career, or even their whole life.
Well, it's fascinating to think about, you know, there being all of these ideas kind of swimming,
you know, almost in the noise, all around the world, in all the different generations, and then some kind of
nonlinear thing happens where they
percolate up and capture the imagination of the mainstream.
And that seems to be what's happening with AI now.
I mean, Nietzsche, who you mentioned, had the idea of the Superman, right?
But he didn't understand enough about technology to think you could physically engineer a Superman
by piecing together molecules in a certain way. He was a bit vague
about how the Superman would appear, but he was quite deep at thinking about what the state of
consciousness and the mode of cognition of a Superman would be. He was a very astute analyst of
how the human mind constructs the illusion of a self,
how it constructs the illusion of free will, how it constructs values like good and evil
out of its own desire to maintain and advance its own organism.
He understood a lot about how human minds work.
Then he understood a lot about how post-human minds would work. I mean, this
Superman was supposed to be a mind that would basically have complete root access to its
own brain and consciousness and be able to architect its own value system and inspect
and fine-tune all of its own biases. So that's a lot of powerful thinking there, which then fed into and sort of seeded all of
postmodern continental philosophy, and all sorts of things that have been very valuable in the development of culture and
indirectly even of technology. But of course, without the technology there, it was all
some quite abstract thinking. So now we're at a time in history when a lot of these ideas can be made real,
which is amazing and scary, right?
It's kind of interesting to think,
what do you think Nietzsche would do
if he was born a century later or transported through time?
What do you think he would say about AI?
I mean, if he's born a century later or transported through time,
well, he'd be on, like, TikTok and Instagram,
and he would never write the great works he's written.
So yeah, I mean, maybe Also sprach Zarathustra
would be a music video, right?
I mean, who knows?
Yeah, but if he was transported through time, do you think...
That'd be interesting, actually, to go back. You just made me realize that it's possible to go back and read Nietzsche with an eye of,
is there some thinking about artificial beings?
I'm sure he had inklings.
I mean, with Frankenstein before him, I'm sure he had inklings of artificial beings
somewhere in the text. It'd be interesting to try to read his work to see if
the Superman was
actually an AGI system, like if he had inklings of that kind of thinking.
He didn't.
He didn't?
No, I would say not. I mean, he had
a lot of inklings of modern cognitive science, which are very interesting. If you look in, like, the third part of the collection that's been titled The Will to
Power, I mean in book three there, there's very deep analysis of thinking processes, but
he wasn't so much of a physical tinkerer type guy, right?
It was very abstract.
What do you think about the will to power?
What do you think drives humans?
Is it... Oh, an unholy mix of things.
I don't think there's one pure simple
and elegant objective function driving humans
by any means.
What do you think?
If we look at, I know it's hard to look at humans in an aggregate, but do you think overall
humans are good?
Or do we have both good and evil within us that, depending on the circumstances, depending
on whatever, can percolate to the top?
Good and evil are
very ambiguous, complicated, and in some ways silly concepts, but we could dig into your question from a couple of directions. So I think if you look at evolution,
humanity is shaped both by individual selection and what biologists would call group selection,
like tribe level selection, right?
So individual selection has driven us in a selfish DNA sort of way, so that each of us
does to a certain approximation what will help us propagate our DNA to future generations.
That's why I've got four kids so far,
and probably that's not the last one.
On the other hand, I like the ambition.
Tribal group selection means humans in a way
will do what will advocate for the persistence
of the DNA of their whole tribe or their social group.
And in biology, you have both of these, right?
Like, and you can see, say, in an ant colony or a beehive, there's a lot of group selection
in the evolution of those social animals.
On the other hand, say, a big cat or some very solitary animal
is a lot more biased toward individual selection.
Humans are an interesting balance.
And I think this reflects itself in what we would view as selfishness versus altruism to some extent.
So we just have both of those objective functions contributing to the makeup of our brains. And then
as Nietzsche analyzed in his own way, and others have
analyzed in different ways.
I mean, we abstract this as well.
We have both good and evil within us, right?
Because a lot of what we view as evil
is really just selfishness.
A lot of what we view as good is altruism,
which means doing what's good for the tribe.
And on that level, we have both of those just baked into us,
and that's how it is.
Of course, there are psychopaths and sociopaths,
and people who get gratified by the suffering of others.
And that's a different thing.
Yeah, those are exceptions.
But I think at core, we're not purely selfish,
we're not purely altruistic, we are a mix.
And that's a nature of it.
And we also have a complex constellation of values
that are just very specific to our evolutionary history. We love waterways and
mountains, and the ideal place to put a house is on a mountain overlooking the water. We care
a lot about our kids and we care a little less about our cousins and even less about our fifth
cousins. I mean, there are many particularities to human values,
which, whether they're good or evil, depends on your perspective. I spend a lot of time in Ethiopia,
in Addis Ababa, where we have one of our AI development offices for my SingularityNET project.
When I walk through the streets in Addis, there's people lying by the side of the road,
just living there by the side of the road, dying probably of curable diseases,
without enough food or medicine. And when I walk by them, you know, I feel
terrible. I give them money. When I come back home to the developed world, they're
not on my mind that much. I do donate some, but I mean, I also
spend some of the limited money I have enjoying myself in frivolous ways rather than donating it
to those people who are right now, like, starving and dying and suffering on the roadside. So
does that make me evil? I mean, it makes me somewhat selfish and somewhat altruistic,
and we each balance that in our own way,
right?
So whether that will be true of all possible AGIs is a subtler question.
That's how humans are.
So you have a sense, you kind of mentioned, that there's the selfish...
I'm not going to bring up the whole Ayn Rand idea
of selfishness being the core virtue.
That's a whole interesting kind of tangent
that I think we'll just distract ourselves on.
I have to make one amusing comment.
Sure.
Comment that has amused me anyway.
So, yeah, I have extraordinary negative respect for Ayn Rand.
Negative? What's a negative respect?
But when I worked with a company called
Genescient,
which was evolving flies to have extraordinarily long lives, in Southern California,
so we had flies that were evolved by artificial selection to have five times the lifespan of normal fruit flies,
the population of super-long-lived flies
was physically sitting in a spare room
at an Ayn Rand elementary school in Southern California.
So that was just like,
well, if I saw this in a movie, I wouldn't believe it.
Yeah.
Well, yeah, the universe has a sense of humor in that kind of way.
The humor fits in somehow into this whole absurd existence.
But you mentioned the balance between selfishness and altruism as kind of being innate.
Do you think it's possible that's kind of an emergent phenomenon, those peculiarities
of our value system?
How much of it is innate?
How much of it is something
we collectively, kind of like a Dostoevsky novel, bring to life together as a civilization?
I mean, the answer to nature versus nurture is usually both. And of course, it's nature versus
nurture versus self-organization, as you mentioned. So clearly, there are evolutionary roots to individual and group
selection leading to a mix of selfishness and altruism. On the other hand, different cultures
manifest that in different ways. Well, we all have basically the same biology. And if you look
at sort of pre-civilized cultures, you have tribes like the Yanomamo in Venezuela,
whose culture is focused on killing other tribes,
and you have other Stone Age tribes that are mostly peaceable and have big taboos against
violence.
So you can certainly have a big difference in how culture manifests these innate biological characteristics, but still
you know, there's probably limits, because they're given by our biology. I used to argue this with my
great-grandparents, who were Marxists, actually, because they believed in the
withering away of the state. Like, they believed that, you know, as you move from capitalism to socialism to communism,
people would just become more social-minded, so that a state would be unnecessary and everyone
would just give everyone else what they needed.
Now, setting aside that that's not what the various Marxist experiments on the planet
seemed to be heading toward in
practice,
just as a theoretical point, I was very dubious that human nature could go there.
At that time, when my great-grandparents were alive,
I was just, like, you know, a cynical teenager.
I think humans are just jerks.
The state is not going to wither away. If you don't have some structure keeping people from screwing
each other over, they're going to do it. Now, I actually don't quite
see things that way. I mean, I think my feeling now,
subjectively, is that the culture aspect is more significant than I thought it was
when I was a teenager. And I think you could have a human society that was dialed dramatically further toward
self-awareness, other-awareness, compassion, and sharing than our current society.
And of course, greater material abundance helps.
But to some extent, material abundance is a subjective perception also, because many
Stone Age cultures perceived themselves as living in great
material abundance: they had all the food they wanted and more, they lived in a beautiful place, they had
sex lives, they had children. I mean, they had
abundance without any factories, right? So I think
humanity probably would be capable of a fundamentally more
positive and joy-filled
mode of social existence than what we have now. Clearly, Marx didn't quite have the right idea about how to get there. I mean, he missed a number of key aspects of human society and its evolution.
If we look at where we are in society now, how to get there is a quite different question
because there are very powerful forces pushing people in different directions than a positive,
joyous, compassionate existence, right?
So if we were to try to, you know... Elon Musk dreams of colonizing Mars at the moment,
so maybe he'll have a chance to start a new civilization
with a new governmental system. And certainly there's quite a bit of chaos. We're sitting now,
I don't know what the date is, but this is
June. There's quite a bit of chaos, in all different forms, going on in the United States and all
over the world.
So there's hunger for new types of governments, new types of leadership, new types of systems.
And so what are the forces that play and how do we move forward?
Yeah, I mean, colonizing Mars, first of all, it's a super cool thing to do.
We should be doing it.
So you're, you love the idea.
Yeah, I mean, it's more important than making chocolatier
chocolates and sexier lingerie and many of the things
that we spend a lot more resources on as a species, right?
So I mean, we certainly should do it.
I think the possible futures in which
a Mars colony makes a critical difference for humanity are very few. I mean, I think,
assuming we make a Mars colony and people go live there in a couple of decades, I mean,
their supplies are going to come from Earth, the money to make the colony came from Earth, and whatever powers are supplying the goods
there from Earth are going to, in effect, be in control of that Mars colony. Of
course, there are outlier situations where, you know, Earth gets nuked into oblivion.
And somehow Mars has been made self-sustaining by that point
and then Mars is what allows humanity to persist. But I think that those are very, very, very
unlikely.
You don't think it could be a first step on a long journey?
Of course, it's a first step on a long journey, which is awesome. I'm guessing the colonization of the rest of the physical universe will probably be done by
AGIs that are better designed to live in space than by the meat machines that we are.
But I mean, who knows, we may cryopreserve ourselves in some superior way to what we know now
and, like, shoot ourselves out to Alpha Centauri and beyond. I mean, that's all cool. It's very interesting,
and it's much more valuable than most things that humanity is spending its resources on.
On the other hand, with AGI, we can get to a singularity before the Mars colony becomes
self-sustaining for sure, possibly before it's even operational.
And so your intuition is that, if we really invest resources, we can get
to AGI faster than a legitimate, full, like, self-sustaining colonization of Mars.
Yeah, and it's very clear to me that we will, because there's so much economic value in getting from narrow AI toward AGI, whereas
with the Mars colony, there's less economic value until you get quite far out into the future.
So I think that's very interesting.
I just think it's somewhat off to the side.
I mean, just as I think, say, you know, art and music are very, very interesting,
and I want to see resources go into amazing art and music being created, and I'd rather see that
than a lot of the garbage that society spends its money on. On the other hand, I don't think Mars colonization or inventing amazing new genres of music is one of the things
that is most likely to make a critical difference in the evolution of human or non-human life
in this part of the universe over the next decade.
Do you think AGI is really?
AGI is by far the most important thing that's on the horizon,
and then technologies that have direct ability
to enable AGI or to accelerate AGI are also very important.
For example, say quantum computing,
I don't think that's critical to achieve AGI,
but certainly you could see how the right quantum computing
architecture could massively accelerate
AGI, similar other types of nanotechnology, right? Now, the quest to cure aging and end
disease while not in the big picture as important as AGI, of course, it's important to all of us as individual humans. And if someone made a super longevity pill
and distributed it tomorrow, I mean, that would be huge and a much larger impact than a
Mars colony is going to have for quite some time.
But perhaps not as much as an AGI system.
No, because if you can make a benevolent AGI, then all the other problems are solved.
I mean, then the AGI can be, once it's as generally intelligent as humans, it can rapidly
become massively more generally intelligent than humans.
And then that AGI should be able to solve science and engineering problems much better
than human beings.
As long as it is, in fact, motivated to
do so. That's why I said a benevolent AGI; there could be other kinds.
Maybe it's good to step back a little bit. I mean, we've been using the term AGI. People
often cite you as the creator, at least the popularizer of the term AGI, artificial general
intelligence. Can you tell the origin story of the term?
Sure.
So yeah, I would say I launched the term
AGI upon the world first, for what it's worth,
without ever fully being in love with the term.
What happened is I was editing a book,
and this process started around 2001 or two.
I think the book came out 2005, finally.
I was editing a book which I provisionally was
titling real AI.
I mean, the goal was to gather together
fairly serious academic-ish papers on the topic of
making thinking machines that could really think in
the sense like people can or even more broadly than people can, right?
So then I was reaching out to other folks that I had encountered here and there who were interested in that,
which included some other folks who I knew from the transhumanist and singularitarian world,
like Peter Voss, who has a company, AGI Incorporated, still in California, and included Shane Legg,
who had worked for me at my company,
WebMind, in New York in the late 90s, who by now has become rich and
famous. He was one of the co-founders of Google DeepMind.
But at that time, Shane was, I think he may have just started doing his PhD with Marcus Hutter, who
at that time hadn't yet published his book Universal AI,
which sort of gives a mathematical foundation for artificial general intelligence.
So I reached out to Shane and Marcus and Peter Voss and
Pei Wang, who was another former employee of mine and had been Douglas Hofstadter's PhD student, who had his own approach to AGI, and a bunch of Russian folks. Real AI was my provisional title, but I never loved it, because in the end, you know, I was doing some of
what we would now call narrow AI as well, like applying machine learning to genomics data or
chat data for sentiment analysis. I mean, that work is real. And in a sense,
it's really AI. It's just a different kind of AI. Ray Kurzweil wrote about narrow AI versus strong AI,
but that seemed weird to me because, first of all, narrow and strong are not
antonyms. But secondly, strong AI was used in the cognitive science literature to mean the hypothesis
that digital computer AIs could have true consciousness like human beings.
So there was already a meaning to strong AI, which was complexly different but related.
So we were tossing around on an email list what the title should be. And we talked about narrow AI, broad AI, wide AI, general
AI. And I think it was either Shane Legg or Peter Voss, on the private email discussion we
had, who said, why don't we go with AGI, artificial general intelligence. And Pei
Wang wanted to do GAI, general artificial intelligence, because in Chinese it goes in that order, right?
But we figured GAI wouldn't work in US culture at that time, right?
Yeah.
So we went with AGI and used it for the title of that book. And part of Peter
and Shane's reasoning was, you have the g factor in psychology, which is IQ, general intelligence,
right? So you have a meaning of GI, general intelligence, in psychology.
So then you're looking at artificial GI.
So then that makes a lot of sense.
Yeah, we used that for the title of the book.
And so I think maybe both Shane and Peter think they invented the term.
But then later, after the book was published, this guy Mark Gubrud came up to me, and he's like, well, I published an essay
with the term AGI in it in, like, 1997 or something.
And so I'm just waiting for some Russian to come out and say they published it in 1953.
But I mean, that term is not dramatically innovative or anything.
It's one of these obvious-in-hindsight things, which is also annoying in a way, because
Joscha Bach, who you interviewed, is a close friend of mine.
He likes the term synthetic intelligence, which I like much better, but it hasn't actually
caught on, right? Because, I mean, artificial
is a bit off to me, because an artifice is like a tool or something, but not all AGIs are
going to be tools. I mean, they may be now, but we're aiming toward making them agents
rather than tools. And in a way, I don't like the distinction between artificial and natural, because, I mean,
we're part of nature also, and machines are part of nature.
You can look at evolved versus engineered, but that's a different distinction.
Then it should be engineered general intelligence.
And then general. Well, if you look at Marcus Hutter's book, Universal AI, what he argues
there is,
within the domain of computation theory, which is limited but interesting,
so if you assume computable environments
and computable reward functions,
then he articulates what would be a truly general intelligence,
a system called AIXI, which is quite beautiful.
AIXI.
AIXI, and that's the middle name of my latest child, actually.
What's the first name?
First name is Qorxi, which my wife came up with, but that's an
acronym for Quantum Organized Rational eXpanding Intelligence.
And his middle name is Xiphenes, actually, which
refers to the formal principle underlying AIXI.
But in any case, you're giving Elon Musk's new child a run for his money.
I did it first. He copied me with his new freakish name.
But now, if I have another baby, I'm going to have to outdo him.
It's becoming an arms race of weird, geeky baby names.
We'll see what the babies think about it.
But I mean, my oldest son, Zarathustra, loves his name,
and my daughter, Scheherazade, loves her name.
So far, basically, if you give your kids weird names,
they live up to it.
Well, you're obliged to make the kids weird enough
that they like the names, right?
So it directs their upbringing in a certain way.
But yeah, anyway, I mean, what Marcus has shown in that book is that a truly general intelligence theoretically is possible, but would take infinite
computing power. So then the "artificial" is a little off; the "general" is not really
achievable within physics as we know it. And, I mean, physics as we know it may be limited,
but that's what we have to work with now.
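[Editor's aside, not part of the conversation: for readers curious what "truly general but needing infinite computing power" means concretely, a standard rendering of Hutter's AIXI action-selection rule, written in LaTeX, is roughly

a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where the a's, o's, and r's are actions, observations, and rewards, U is a universal Turing machine, q ranges over environment programs, and \ell(q) is program length. The agent weighs every computable environment consistent with its history, shorter programs counting more, and picks the action maximizing expected future reward up to horizon m. The sum over all programs is what makes it incomputable, which is the point being made here: the "general" is not physically achievable.]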
By infinitely general, you mean, like, from an information processing perspective?
Yeah.
Intelligence is not very well defined either.
I mean, what does it mean?
I mean, in AI now, it's fashionable to look at it as maximizing an expected reward over
the future.
But that sort of definition is pathological in various ways, and my friend David Weinbaum, aka Weaver,
he had a beautiful PhD thesis on open-ended intelligence,
trying to conceive of intelligence without a reward. Yeah, he's just looking at it differently.
He's looking at complex self-organizing systems and looking at an intelligent system as being one that, you know,
revises and grows and improves
itself in conjunction with its environment, without necessarily there being one objective function
it's trying to maximize, although over certain intervals of time, it may act as if it's optimizing
a certain objective function. It's very much Solaris from Stanisław Lem's novels, right? So the point is,
artificial, general, intelligence...
They don't work.
They're all bad.
On the other hand, everyone knows what AI is,
and AGI seems immediately comprehensible to people
with a technical background.
So I think the term has served a sociological function.
Now it's out there everywhere, which...
It stuck.
It baffles me.
It's like KFC.
I mean, that's it.
We're stuck with AGI, probably for a very long time,
until AGI systems take over and rename themselves.
Yeah.
And then we'll be... we're stuck with GPUs too,
which mostly have nothing to do with graphics anymore,
right?
I wonder what the AGI systems will call us humans.
Maybe Grandpa.
GPUs. Grandpa Processing Units. Biological Grandpa Processing Units.
Okay. So maybe also just a comment on AGI representing, before even the term existed,
a kind of community. Now, you've talked about this in the past: sort of, AI has come in waves,
but there's always been this community of people who dream about creating general, human-level,
superintelligent systems. Can you maybe give your sense of the history of this
community as it exists today, as it existed before this deep learning revolution, all throughout the winters and the summers of AI?
Sure.
First, I would say, as a side point, the winters and the summers of AI are greatly exaggerated
by Americans.
And if you look at the publication record of the artificial intelligence community since, say, the 1950s, you would find a pretty
steady growth and advance of ideas and papers.
And what's thought of as an AI winter or summer was sort of how much money the US military
was pumping into AI, which was meaningful.
On the other hand, there was AI going on in Germany, UK, Japan, and
Russia, all over the place, while the US military got more and less enthused about AI.
I mean, that happened to be, just for people who don't know, the US military happened to be the
main source of funding for AI research. So another way to phrase that is, it's the up and down
of funding for artificial intelligence research. And I would say the correlation between funding and intellectual
advance was not 100 percent, right? Because, I mean, in Russia, as an example, or in Germany, there was
less dollar funding than in the US, but many foundational ideas were laid out; it
was more theory than implementation, right?
And the US really excelled at sort of breaking through from theoretical papers to working implementations,
which did go up and down somewhat with US military funding. But still, I mean, you can
look in the 1980s: Ernst
Dickmanns in Germany had self-driving cars on the Autobahn, right? And I mean, it was
a little early with regard to the car industry, so it didn't catch on such as has happened
now. But I mean, that whole advancement of self-driving car technology in Germany was
pretty much independent of AI military
summers and winters in the US. So there's been more going on in AI globally than
not only most people on the planet realize, but than most new AI PhDs realize, because they've
come up in a certain sub-field of AI and haven't had to look so much beyond that. But I would say,
when I got my PhD in 1989, in mathematics, I was interested in AI already.
In Philadelphia?
Yeah, I started at NYU and then transferred to Philadelphia,
to Temple University. Good old North Philly.
North Philly, yeah. The pearl of the US.
You never stopped at a red light then,
because you were afraid if you stopped at a red light,
someone would carjack you.
So you drive through every red light.
Yeah.
Every day, driving or bicycling
to Temple from my house was like a new adventure.
But yeah, the reason I didn't do a PhD in AI
was that what people were doing in the
academic AI field then was just astoundingly boring and seemed wrongheaded to me. It was really
rule-based expert systems and production systems. And actually, I loved mathematical logic. I had
nothing against logic as the cognitive engine for an AI, but the idea that you could type in the
knowledge that the AI would need to think seemed just completely stupid and wrongheaded to
me. I mean, you can use logic if you want, but somehow the system has got to be learning,
right? It should be learning from experience. And the AI field then was not interested
in learning from experience.
I mean, some researchers certainly were. I mean, I remember in the mid-80s, I discovered a book
by John Andreae, which was about a reinforcement learning system called PURR-PUSS,
P-U-R-R, P-U-S-S, which was an acronym that I can't even remember what it was for, but PURR-PUSS anyway.
But, I mean, that was a system that was supposed to be an AGI.
And basically, by some sort of fancy, like, Markov decision process learning,
it was supposed to learn everything just from the bits coming into it,
learn to maximize its reward and become intelligent, right?
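[Editor's aside, not part of the conversation: PURR-PUSS itself predates modern formulations, but the idea described here, an agent learning to maximize its reward purely from the stream of observations and rewards coming in, is what would now be written as reinforcement learning over a Markov decision process. A minimal, hedged sketch in Python follows; the env object and its reset()/step()/n_actions interface are assumed, Gym-style, purely for illustration.

import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    # Q[(state, action)] estimates the long-run discounted reward of taking
    # `action` in `state`; everything is learned from experience alone.
    Q = defaultdict(float)

    def greedy(state):
        # Pick the action with the highest current estimate.
        return max(range(env.n_actions), key=lambda a: Q[(state, a)])

    for _ in range(episodes):
        state = env.reset()  # assumed interface: returns an initial observation
        done = False
        while not done:
            # Mostly exploit what has been learned, occasionally explore.
            if random.random() < epsilon:
                action = random.randrange(env.n_actions)
            else:
                action = greedy(state)
            next_state, reward, done = env.step(action)  # assumed interface
            # Temporal-difference update: nudge the estimate toward the observed
            # reward plus the discounted value of the best next action.
            best_next = 0.0 if done else Q[(next_state, greedy(next_state))]
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q

The only point of the sketch is the shape of the loop being described: act, observe a reward, update the value estimates, repeat.]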
So that was there in academia back then, but it was like isolated, scattered, weird people.
But all these isolated, scattered, weird people in that period, I mean, they laid the intellectual
grounds for what happened later.
So you look at John Andreae at the University of Canterbury with his PURR-PUSS reinforcement learning Markov system.
He was the PhD supervisor for John Cleary in New Zealand.
Now, John Cleary worked with me when I was at Waikato University in 1993 in New Zealand.
And he worked with Ian Witten there, and they launched WEKA, which was the first open-source machine learning toolkit,
which was launched in, I guess, 93 or 94,
when I was at Waikato University.
Written in Java, unfortunately.
Written in Java, which was a cool language back then.
I guess it's still, well, it's not cool anymore,
but it's powerful.
I find, like most programmers now,
I find Java unnecessarily bloated. But back then it was, like, Java or C++, basically, and Java was
object-oriented, so it was easier for students.
Amusingly, a lot of the work on WEKA when we were in New Zealand was funded by a US, sorry, a New Zealand government grant
to use machine learning to predict the menstrual cycles of cows. So, in the US, all the grant funding for AI was about how to kill people or spy on people.
In New Zealand, it's all about cows or kiwi fruits, right?
Yeah.
So yeah, anyway, I mean, John Andreae had his probability-theory-based reinforcement learning proto-AGI, and John Cleary was trying to do much more ambitious probabilistic AGI systems. Now, John Cleary helped do WEKA, which is the first open source machine learning toolkit, sort of a predecessor to TensorFlow and Torch and all these things. Also, Shane Legg was at Waikato working with John Cleary and Ian Witten and this whole group. And then Shane worked with my own company, Webmind, an AI company I had in the late 90s, with a team there at Waikato University, which is how Shane got his head full of AGI, which led him to go on and, with Demis Hassabis, found DeepMind. So what you can see through that lineage is,
you know, in the 70s and 80s, John Andreae was trying to build probabilistic reinforcement learning AGI systems. The technology, the computers, just weren't there to support it. His ideas were very similar to what people are doing now. But, you know, although he's long since passed away and didn't become that famous outside of Canterbury, the lineage of ideas passed on from him to his students to their students. You can trace it directly from there to me and to DeepMind, right? So there was a lot going on in AGI that did ultimately lay the groundwork for what we have today, but there wasn't a community, right?
And so when I started trying to pull together an AGI community, it was in, I guess, the early aughts, when I was living in Washington, DC, and making a living doing AI consulting for various US government agencies. And I organized the first AGI workshop in 2006. And I mean, it wasn't like it was literally in my basement or something; it was in the conference room at a Marriott in Bethesda. It's not that edgy or underground, unfortunately, but still.
People attended?
Yeah, 60 or something. That's not bad. I mean,
DC has a lot of AI going on. Probably until the last five or 10 years, much more than Silicon Valley,
although it's just quiet because of the nature of what happens in DC. Their business isn't driven by PR. Mostly when something starts to work really well, it's taken black and becomes even more quiet.
Right.
But yeah, that workshop really had the feeling of a group of starry-eyed mavericks, like huddled in a basement, like plotting how to overthrow the narrow AI establishment.
And, you know, for the first time in some cases coming together with others who shared their
passion for AGI and a technical seriousness about working on it, right?
And that, I mean, that's very, very different than what we have today.
I mean, now, now it's a little bit different.
We have AGI conference every year and there's several hundred people rather than 50.
Now it's more like this is the main gathering of people who want to achieve AGI and think
that large-scale non-linear regression is not the golden path to AGI.
So I mean it's...
AKA neural networks?
Yeah, yeah, yeah.
Well, certain architectures for learning using neural networks.
So yeah, the AGI conferences are sort of now the main
concentration of people not obsessed with deep neural net
and deep reinforcement learning, but still interested in AGI.
Not the only ones.
I mean, there's other little conferences
and groupings interested in the human level AI
and cognitive architectures and so forth.
But it's been a big shift.
Like, back then, you couldn't really... it would have been very, very edgy then
to give a university department seminar
that mentioned AGI or human level AGI.
It was more like you had to talk about something more short-term and immediately practical. Then, in the bar after the seminar,
You could bullshit about AGI in the same breath as time travel or the simulation hypothesis
or something, right?
Whereas now AGI is not only in the academic seminar room,
like you have Vladimir Putin, who knows what AGI is, and he's like, Russia needs to become the leader in AGI, right? So national leaders and CEOs of large corporations. I mean, the CTO of Intel, Justin Rattner, this was years ago, at the Singularity Summit conference, 2008 or something, he's like, we believe, like Ray Kurzweil, the singularity will happen in 2045, and it will have Intel inside. So, I mean, it's gone from being the pursuit of, like, crazed mavericks, crackpots, and science fiction fanatics to being a marketing term for large corporations and national leaders, which is an astounding transition.
But yeah, in the course of this transition,
I think a bunch of sub-communities have formed.
And the community around the AGI conference series
is certainly one of them.
It hasn't grown as big as I might have liked it to.
On the other hand, you know,
sometimes a modest sized community can be better
for making intellectual progress.
Like, look at the Society for Neuroscience conference.
You have 35 or 40,000 neuroscientists.
On the one hand, it's amazing.
On the other hand, you're
not going to talk to the leaders of the field there if you're an outsider.
Yeah, in the same sense, the AAAI, the main kind of generic artificial intelligence conference, is too big. It's too amorphous, like it doesn't quite work. And NeurIPS has become a company advertising outlet, on the whole.
So yeah, so I mean, to comment on the role of AGI in the research community, I'd still say, if you look at NeurIPS, if you look at CVPR, if you look at ICLR, AGI is still seen as the outcast.
I would say in these main machine learning, in these main artificial intelligence conferences
amongst the researchers, I don't know if it's an accepted term yet.
What I've seen as brave is, you mentioned Shane Legg, DeepMind, and then OpenAI are the two places that are, I would say, unapologetic. So far, I think it's actually changing, unfortunately, but so far they've been pushing the idea that the goal is to create an AGI.
Well, they have billions of dollars behind them. So in the public mind, that certainly carries some weight, right?
I mean, they also have really strong researchers, right?
They do, they're great teams. I mean, DeepMind in particular. And they have, I mean, DeepMind has Marcus Hutter walking around. I mean, there's all these folks whose full-time position basically involves dreaming about creating AGI.
I mean, Google Brain has a lot of amazing AGI-oriented people also. And I mean, so I'd say from a public marketing view, DeepMind and OpenAI are the two large, well-funded organizations that have put the term and concept of AGI out there sort of as part of their public image. But they're certainly not the only ones; there are other groups that are doing research that seems just as AGI-ish to me, including a bunch of groups in Google's main Mountain View office. So yeah, it's true, AGI is somewhat away from
the mainstream now, but if you compare it to where it was, you know, 15 years ago, there's
there's been an amazing mainstreaming. You could say the same thing about super longevity
research, which is one of my application areas that
I'm excited about.
I mean, I've been talking about this since the 90s, but working on it since 2001. And back then, really, to say you're trying to create therapies to allow people to live hundreds of thousands of years, you were way, way, way out of the industry and academic mainstream. But now, you know, Google has Project Calico, Craig Venter has Human Longevity Incorporated, and once the suits come marching in, right, I mean, once there's big money in it, then people are forced to take it seriously, because that's the way modern society works.
So it's still not as mainstream as cancer research,
just as AGI is not as mainstream as automated driving
or something, but the degree of mainstreaming that's happened
in the last 10 to 15 years is astounding
to those of us who've been at it for a while.
Yeah, but there's a marketing aspect to the term,
but in terms of actual full force research
that's going on under the header of AGI,
it's currently, I would say dominated,
maybe you can disagree, dominated by neural networks research
the nonlinear regression, as you mentioned. Like, what's your sense, with OpenCog, with your work,
but in general?
Logic-based systems and expert systems, for me, always seemed to capture a deep element of intelligence that needs to be there. Like you said, it needs to learn, it needs to be automated somehow, but that seems to be missing from a lot of research currently.
So what's your sense?
I guess one way to ask this question,
what's your sense of what kind of things
will an AGI system need to have?
Yeah, that's a very interesting topic
that I've thought about for a long time.
And I think there are many, many different approaches that can work for getting to human
level AI.
So I don't think there's like one golden algorithm, one golden design that can work.
And I mean, flying machines is the much-used analogy here, right? Like, I mean, you have airplanes, you have helicopters, you have balloons, you have stealth bombers that don't look like regular airplanes, you've got blimps.
Birds, too.
Birds, yeah, and bugs, right?
Yeah.
And there are certainly many kinds of flying machines. There's a catapult that you can just launch, and there's bicycle-powered flying machines, right?
Nice.
Yeah.
So now, these are all analyzable by a basic theory of aerodynamics, right? Now, one issue with AGI is we don't yet have the analog of the theory of aerodynamics. And that's what Marcus Hutter was trying to make with AIXI and his general theory of general intelligence. But that theory, in its most clearly articulated parts, really only works for either infinitely powerful machines or insanely impractically
powerful machines. So I mean, if you're going to take a theory-based approach to AGI, what you
would do is say, well, let's take what's called AIXItl, which is Hutter's AIXI machine that can work on merely insanely much processing power rather than infinitely much.
What does the tl stand for?
Time and length.
Okay.
So it's basically constrained somehow.
Yeah, yeah, yeah.
So how AIXItl works, basically, is for each action that it wants to take, before taking that action, it looks at all its history, and then it looks at all possible programs that it could use to make a decision. And it decides which decision program would have let it make the best decisions, according to its reward function, over its history. And it uses that decision program to make the next decision.
Right. So it's not afraid of infinite resources. It's searching through the space of all possible computer programs in between each action and each next action.
Now, AIXItl searches through all possible computer programs that have runtime less than t and length less than l, which is still an impractically humongous space, right?
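As a rough illustrative sketch of the bounded brute-force idea being described here (this is not Hutter's actual formalism, which involves expectimax over environment models; the helper functions `enumerate_programs` and `run_with_budget` are hypothetical stand-ins):

```python
# Toy sketch of the AIXItl flavor: before each action, score every candidate
# decision program of length <= l and runtime <= t against the agent's history,
# then act using the best one. In reality this space is impractically huge.

def aixitl_step(history, enumerate_programs, run_with_budget, l_max, t_max):
    """history: list of (observation, action_taken, reward) tuples.
    enumerate_programs(l_max): hypothetical generator of decision programs.
    run_with_budget(program, obs, t_max): hypothetical bounded-runtime runner."""
    best_program, best_score = None, float("-inf")
    for program in enumerate_programs(l_max):          # astronomically many in reality
        score = 0.0
        for obs, action_taken, reward in history:      # replay the whole history
            if run_with_budget(program, obs, t_max) == action_taken:
                score += reward                        # credit programs whose choices earned reward
        if score > best_score:
            best_program, best_score = program, score
    latest_obs = history[-1][0] if history else None
    return run_with_budget(best_program, latest_obs, t_max)
```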
So what you would like to do to make an AGI and what will probably be done 50 years from
now to make an AGI is say, okay, well we have some constraints. We have these processing power
constraints and you know, we have space and time constraints on the program. We have energy
utilization constraints and we have this particular class environments,
class of environments that we care about,
which may be, say, manipulating physical objects
on the surface of the Earth,
communicating in human language,
and, you know, not annihilating humanity, whatever our particular requirements happen to be.
If you formalize those requirements
in some
formal specification language, you should then be able to run an automated program specializer on AIXItl, specialize it to the computing resource constraints and the particular environment and goal. And then it will spit out, like, the specialized version of AIXItl for your resource restrictions and your environment, which will be your AGI, right?
And that, I think, is how our super AGI will create new AGI systems, right?
But that's very rough. It seems really inefficient.
That's a very Russian approach, by the way, like the whole field of program specialization
came out of Russia.
Can you backtrack so what is program specialization?
So it's basically...
Take sorting, for example.
You can have a generic program for sorting lists.
But what if all your lists you care about are length 10,000 or less?
You can run an automated program specializer on your sorting algorithm, and it will come up with an algorithm that's optimal for sorting lists of length 10,000 or less, right?
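As a minimal sketch of the idea (a real program specializer, or partial evaluator, transforms the code itself; this just bakes the stated assumption into a handwritten specialized version):

```python
# Program specialization sketch: take a generic routine plus a known constraint
# ("all lists have length <= 10,000") and produce a version with the generality
# stripped out, e.g. a preallocated buffer and no general-purpose machinery.

def generic_sort(items):
    return sorted(items)                 # fully general

def specialize_sort(max_len):
    buffer = [None] * max_len            # preallocate once, since the bound is known

    def specialized_sort(items):
        n = len(items)
        assert n <= max_len, "only valid under the specialization assumption"
        buffer[:n] = items
        for i in range(1, n):            # simple in-place insertion sort, no allocation
            key, j = buffer[i], i - 1
            while j >= 0 and buffer[j] > key:
                buffer[j + 1] = buffer[j]
                j -= 1
            buffer[j + 1] = key
        return buffer[:n]

    return specialized_sort

sort_10k = specialize_sort(10_000)
print(sort_10k([3, 1, 2]))   # [1, 2, 3]
```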
It's kind of like, isn't the process of evolution a program specializer to the environment? So you're evolving human beings, or you're...
Literally. I mean, your Russian heritage is showing. So with Alexander Vityaev and Pyotr Anokhin and so on, I mean, there's a long history of thinking about evolution
that way also, right?
So, my point is that what we're thinking of
as a human level general intelligence,
if you start from narrow AIs, like the ones being used in the commercial AI field now, then you're thinking, okay, how do we make it more and more general? On the other hand, if you start from AIXI or Schmidhuber's Gödel machine, or these infinitely powerful, but practically infeasible, AIs, then getting to a human-level AGI is a matter of specialization.
It's like how do you take these maximally general learning processes and how do you
specialize them so that they can operate within the resource constraints that you have,
but will achieve the particular things that you care about.
Because we humans are not maximally general intelligences, right? If I ask you to run a maze in 750 dimensions, you'd probably be very slow. Whereas in two dimensions, you're probably way better, right? So, I mean, we're special because our hippocampus has a two-dimensional map in it, right? And it does not have a 750-dimensional map in it. So, I mean, we are, you know, a peculiar mix of generality and specialization.
We probably start quite general at birth. Not obviously, still narrow, but, like, more general than we are at age 20 and 30 and 40 or 50.
I don't think that's quite right. I think it's more complex than that,
because I'm in some sense a young child is less biased
and the brain has yet to sort of crystallize
into appropriate structures for processing aspects
of the physical and social world.
On the other hand, the young child
is very tied to their sensorium,
whereas we can deal with abstract mathematics,
like 750 dimensions, and the young child cannot,
because they haven't grown what Piaget
called the formal capabilities.
They haven't learned to abstract yet, right?
And the ability to abstract
gives you a different kind of generality than what a baby has. So there's both more
specialization and more generalization that comes with the development process, actually.
I mean, I guess just the trajectories of the specialization are most controllable at
a young age, I guess, is one way to put it.
Do you have kids?
No.
They're not as controllable as you think.
So you think... it's interesting.
I think, honestly, a human adult is much more generally intelligent than a human baby. Babies are very stupid. I mean, they're cute.
They're cute, yeah, which is why we put up with their repetitiveness and stupidity. And they have what the Zen guys would call a beginner's mind, which is a beautiful thing, but that doesn't necessarily correlate with a high level of intelligence.
On the plot of, like, cuteness and stupidity, there's
a process that allows us to put up with
their stupidity.
Yeah, until you get older and uglier.
Oh man, like me. Then you've got to get really, really smart to carry the conversation.
Okay.
But yeah, going back to your original question. So the way I look at human-level AGI is: how do you specialize, you know, unrealistically inefficient, superhuman,
brute force learning processes to the specific goals that humans need to achieve, and the
specific resources that we have.
And both of these, the goals and the resources and the environment, all of this is important. And on the resources side, it's important
that the hardware resources we're bringing to bear
are very different than the human brain.
So the way I would want to implement
AGI on a bunch of neurons in a vat that I could rewire
arbitrarily is quite different than the way I
would want to create an AGI on, say, a modern server farm of CPUs and GPUs, which in turn may be quite different than the way I would want to implement an AGI on whatever quantum computer we'll have in 10 years, supposing someone makes a robust quantum Turing machine or something, right?
So I think there's been co-evolution of the patterns of
organization in the human brain and the physiological
particulars of the human brain over time.
And when you look at neural networks, that is one powerful class of learning algorithms, but it's also a class of learning algorithms that evolved to exploit the particulars of the human brain as a computational substrate.
If you're looking at the computational substrate of a modern server farm, you won't necessarily
want the same algorithms that you want on the human brain.
And, you know, from the right level of abstraction, you could look at maybe the best algorithms on
the brain and the best algorithms on a modern computer network as implementing the same abstract
learning and representation processes, but finding that level of abstraction is its own
AGI research project then.
So that's about the hardware side and the software side, which
follows from that. Then regarding one of the requirements, I wrote a paper years ago on what I called the embodied communication prior, which was quite similar in intent to Yoshua Bengio's recent paper on the consciousness prior, except I didn't want to wrap up consciousness in it, because to me the qualia problem and subjective experience
is a very interesting issue also, which we can chat about.
But I would rather keep that philosophical debate distinct from the debate of what kind
of biases do you want to put in a general intelligence to give it human-like general intelligence. And I'm not sure Yoshua Bengio is really addressing that kind of thing. He's just using the term.
I love Yoshua to pieces. Like, he's by far my favorite of the lions of deep learning. He's such a good-hearted guy.
He's a great speaker.
Yeah, for sure. But I'm not sure he has plumbed the depths of the philosophy of consciousness.
No, he's using it as a sexy term.
Yeah, yeah. So what I called it was the embodied communication prior.
Can you maybe explain it a little bit?
Yeah, yeah. What I meant was,
you know, what have we humans evolved for? You can say being human, but that's very abstract, right? I mean, our minds control individual bodies, which are
autonomous agents moving around in a world that's composed largely of solid objects, right? And
we've also evolved to communicate via language with other solid object agents that are going
around doing things collectively with us in a world of solid objects.
These things are very obvious, but if you compare them to the scope of all possible intelligences
or even all possible intelligences that are physically realizable, that actually constrains
things a lot. So if you start to look at, you know,
how would you realize some specialized or constrained version
of universal general intelligence in a system
that has, you know, limited memory and limited speed of processing,
but whose general intelligence will be biased toward controlling a solid-object agent, which is mobile in a solid-object world, toward manipulating solid objects and communicating via language with other similar agents in that same world, right? Then, starting from that, you're
starting to get a requirements analysis for human-level general intelligence. And then that leads you into cognitive science, and you can look at, say, what are the different types of memory that the human mind and brain have. And this has matured over the last decades, and I got into this a lot.
So after I got my PhD in math, I was an academic for eight years. I was in departments of mathematics, computer science, and psychology. When I was in the psychology department at the University of Western Australia, I was focused on cognitive science of memory and perception. Actually, I was teaching neural nets and deep neural nets, and it was multilayer
perceptrons.
For psychology students?
Right, most psychology. Yeah, cognitive science, it was cross-disciplinary among engineering, math, psychology, philosophy, linguistics, computer science. But yeah, we were teaching psychology
students to try to model the data from human cognition experiments using multilayer perceptrons,
which was the early version of a deep neural network. And yeah, recurrent backprop was very, very slow to train back then, right?
So this is the study of these constrained systems that are supposed to deal with this world.
So if you look at cognitive psychology,
you can see there's multiple types of memory,
which are to some extent represented
by different subsystems in the human brain.
So we have episodic memory, which takes into account our life history and everything that's happened to us.
We have declarative or semantic memory which is like facts and beliefs abstracted from the particular situations that they occur in.
There is sensory memory which to some extent is sense modality specific.
And to some extent is unified across sense modalities.
There's procedural memory, memory of how to do stuff, like how to swing the tennis racket, right?
Which is, there's motor memory, but it's also a little more abstract than motor memory; it involves cerebellum and cortex working together. Then there's memory linkage with emotion, with the linkages of cortex and limbic system. There are specifics of spatial and temporal modeling connected with memory, which has to do with hippocampus and thalamus connecting to cortex. There's the basal ganglia, which influence goals, so we have specific memory of what goals, sub-goals, and sub-sub-goals we wanted to pursue in which context in the past.
Human brain has substantially different subsystems for these different types of memory,
and substantially differently tuned learning, like differently tuned modes of long-term potentiation, to do with the types of neurons and neurotransmitters in the different parts of the brain, corresponding to these different types of knowledge. These different types of memory and learning in the human brain, I mean, you can map these all back to embodied communication for controlling agents in worlds of solid objects. Now, if you look at building an
AGI system, one way to do it, which starts more from cognitive
science and neuroscience, is to say, okay, what are the types of memory that are necessary
for this kind of world?
Yeah, yeah, necessary for this sort of intelligence.
What types of learning work well with these different types of memory, and then how do
you connect all these things together, right?
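As a tiny illustration of that "requirements analysis" framing (the pairings below are illustrative in the spirit of what is being described, not OpenCog's actual module names):

```python
# Enumerate the memory types a human-like mind needs and pair each with a
# learning process that plausibly suits it; the design question is then how
# these stores and learners interconnect rather than live in separate boxes.

memory_requirements = {
    "declarative": "probabilistic logic / uncertain inference",
    "procedural":  "evolutionary program learning or reinforcement learning",
    "sensory":     "deep neural pattern recognition",
    "episodic":    "temporally indexed storage with associative retrieval",
    "attentional": "importance spreading / attention allocation",
    "intentional": "goal and subgoal management tied to the other stores",
}

for memory_type, learner in memory_requirements.items():
    print(f"{memory_type:12s} -> {learner}")
```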
And of course, the human brain did it incrementally through evolution, because each of the subnetworks of the brain (and it's not really the lobes of the brain, it's the subnetworks, each of which is widely distributed) co-evolved with the other subnetworks of the brain, both in terms of its patterns of organization and the particulars of the neurophysiology.
So they all grew up communicating and adapting to each other.
It's not like they were separate black boxes that were then glommed together, right?
Whereas as engineers, we would tend to say, let's make the declarative memory box here and the procedural memory box here and the perception box here, and wire them together.
And when you can do that, it's interesting.
I mean, that's how a car is built, right?
But on the other hand, that's clearly not how biological systems are made.
The parts co-evolved so as to adapt and work together.
That's, by the way, how every human-engineered system that flies, that we were using in the analogy before, is built as well.
So do you find this at all appealing?
Like, there's been a lot of really exciting work, which I find strange that it's ignored, in cognitive architectures, for example, throughout the last few decades.
Do you find that?
Yeah.
I mean, I had a lot to do with that community.
And you know, Paul Rosenbloom and John Laird, who built the Soar architecture, are friends of mine. And I learned Soar quite well, and ACT-R and these different cognitive architectures.
How I was looking at the AI world about 10 years ago
before this whole commercial deep learning explosion was, on the one hand,
you had these cognitive architecture guys who were working closely with psychologists
and cognitive scientists who had thought a lot about how the different parts of a human
like mine should work together.
On the other hand, you had these learning theory guys who didn't care at all about the architecture,
but were just thinking about like how do you
recognize patterns in large amounts of data. And in some sense, what you needed to do was to
get the learning that the learning theory guys were doing and put it together with the architecture that the cognitive architecture guys were doing, and then you would have what you needed. Now, unfortunately, when you look at the details, you can't just do that without totally rebuilding what is happening on both the cognitive architecture and the learning side. So I mean, they tried
to do that in Soar, but what they ultimately did is, like, take a deep neural net or something for perception and include it as one of the black boxes.
It becomes one of the boxes. The learning mechanism becomes one of the boxes, as opposed to fundamental.
Yeah, that doesn't work. Now, you could look at some of the stuff DeepMind has done, like the differentiable neural computer or something. That sort of has a
neural net for deep learning perception. It has another neural net, which is like a memory matrix.
that stores, say, the map of the London subway or something. So probably Demis Hassabis was thinking about this like part of cortex and part of hippocampus, because hippocampus has a spatial map. And when he was a neuroscientist, he was doing a bunch of work on cortex-hippocampus interconnection.
So there, the DNC would be an example of folks
from the deep neural net world trying to take a step in the cognitive architecture direction,
by having two neural modules that correspond roughly to two different parts of the human
brain that deal with different kinds of memory and learning. But on the other hand, it's
super, super crude from the cognitive architecture view, right? Just as what John Laird and Soar did with neural nets was super, super crude from a learning point of view, because the learning was, like, off to the side, not affecting the core representations, right? You weren't learning the representation; you were learning abstractions of perceptual data to feed into a representation that was not learned, right?
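To make the DNC point mentioned a moment ago a bit more concrete, here is a minimal sketch of the kind of content-addressed read an external memory matrix supports alongside an ordinary neural mapping (a toy illustration, not DeepMind's actual DNC):

```python
import numpy as np

rng = np.random.default_rng(0)
memory = rng.normal(size=(16, 8))        # 16 slots, 8-dim contents ("the map")

def content_read(memory, key, sharpness=5.0):
    # Cosine similarity between a query key and each memory slot,
    # turned into softmax read weights; the read is a weighted blend of slots.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(sharpness * sims)
    weights /= weights.sum()
    return weights @ memory

key = rng.normal(size=8)                 # query produced by a controller network
print(content_read(memory, key).shape)   # (8,) -- a differentiable memory lookup
```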
So yeah, this was clear to me a while ago,
and one of my hopes with the AGI community
was to sort of bring people from those two directions together.
That didn't happen much in terms of...
Not yet.
Or, what I was going to say, it didn't happen in terms of bringing, like, the lions of cognitive architecture together with the lions of deep learning. It did work in the sense that a bunch of younger researchers have had their heads filled with both of those ideas.
This comes back to a saying, my dad, who was a university professor, often quoted to me,
which was that science advances one funeral at a time, which I'm trying to avoid.
Like I'm 53 years old and I'm trying to invent amazing,
weird-ass new things that nobody ever thought about,
which we'll talk about in a few minutes.
But there is that aspect, right?
Like the people who have been AI a long time,
and have made their career at developing one aspect,
like a cognitive architecture or a deep learning approach.
It can be hard once you're old
and have made your career doing one thing.
It can be hard to mentally shift gears.
I mean, I try quite hard to remain flexible, my-
Have you been successful somewhat in changing? Maybe, have you changed your mind on some aspects of what it takes to build an AGI, like technical things?
The hard part is that the world doesn't want you to.
In what sense, the world or your own brain?
Well, one point is that your brain doesn't want to. The other part is that the world doesn't want you to, like the people who have followed your ideas get mad at you if you change your mind. And, you know, the media wants to pigeonhole you as an avatar of a certain
idea. But, yeah, I've changed my mind on a bunch of things. When I started my career,
I really thought quantum computing would be necessary for AGI. And I doubt it's
necessary now, although I think it will be a super major enhancement. But I'm also now in the middle of embarking on the complete rethink and rewrite from scratch of our OpenCog AGI system, together with Alexey Potapov and his team in St. Petersburg, who are working with me in SingularityNET. So now we're trying to, like, go back to basics, take everything we learned from working with the current OpenCog system, take everything everybody else has learned from working with their proto-AGI systems, and design the best framework for the next stage.
And I do think there's a lot to be learned from the recent successes with deep neural nets
and deep reinforcement systems.
I mean, people made these essentially trivial systems work much better than I thought they
would.
And there's a lot to be learned from that, and I want to incorporate that knowledge appropriately in our OpenCog 2.0 system. On the other hand, I also think current deep neural net architectures as such will never get you anywhere near AGI. So I think you want to avoid the pathology of throwing the baby out with the bathwater and, like, saying,
well, these things are garbage because foolish journalists overblow them as being the path to
AGI and a few researchers overblow them as well. There's a lot of interesting stuff to be learned
there, even though those are not the golden path. So maybe this is
a good chance to step back. You mentioned OpenCog 2.0, but let's go back to OpenCog, I guess, 1.0, which is...
Yeah, yeah.
Maybe talk through the history of OpenCog and your thinking about these ideas.
I would say OpenCog 2.0 is a term we're throwing around sort of tongue
in cheek, because the existing OpenCog system that we're working on now is not remotely close to what we'd consider a 1.0. I mean, it's been around, what, 13 years or something, but it's still an early-stage research system,
right?
And actually, we are going back to the beginning in terms of theory and implementation,
because we feel like that's the right thing to do.
But I'm sure what we end up with is going to have a huge amount in common with the current system.
I mean, we all still like that the general approach. So first of all, what is OpenCog?
Sure. OpenCog is an open source software project that I launched together with several others in 2008.
And probably the first code written toward that was written in 2001 or 2002 or something. That was developed as a proprietary code base within my AI company, Novamente LLC. Then we decided to open source it in 2008, cleaned up the code, threw out some things, and added some new things.
What language is it written in?
It's C++.
Primarily. There's a bunch of Scheme as well, but most of it is C++.
And it's separate from, something we'll also talk about, SingularityNET. So it was born as a non-networked thing?
Correct.
Correct.
Well, there are many levels of networks involved here.
No connectivity to the internet?
Oh, no.
At birth?
Yeah.
I mean, singularity net is a separate project
and a separate body of code.
And you can use singularity net as part of the infrastructure
for a distributed open-cog system.
But they are different layers.
Yeah.
So open-cog, on the one hand, as a software framework,
could be used to implement a variety of different AI
architectures and algorithms.
But in practice, there's been a group of developers,
which I've been leading together with
Linas Vepstas, Nil Geisweiller, and a few others,
which have been using the OpenCog platform and
infrastructure to implement certain ideas about how to make an AGI.
So there's been a little bit of ambiguity about OpenCog the software platform versus OpenCog the AGI design, because in theory, you could use that software to make a neural net. You could use it to make a lot of different things.
So what kind of stuff does the software platform provide, in terms of utilities? Like, what?
Yeah, let me first tell you about OpenCog as a software platform, and then I'll tell you the specific AGI R&D we've been building on top of it. So the core component of OpenCog as a software platform is what we call the AtomSpace, which is a weighted labeled hypergraph.
Atom space?
AtomSpace, yeah. Not Adam like Adam and Eve, although that would be cool too.
Yeah. So you have a hypergraph, which is, like... a graph in this sense is a bunch of nodes with links between them. A hypergraph is like a graph, but links can go between more than two nodes, so you can have a link between three nodes. And in fact, OpenCog's AtomSpace would properly be called a metagraph, because you can have links pointing to links, or you can have links pointing to whole subgraphs. So it's an extended hypergraph or a metagraph.
Is metagraph a technical term?
It is now a technical term.
Interesting.
But I don't think it was yet a technical term when we started calling this a generalized hypergraph. But in any case, it's a weighted labeled generalized hypergraph, or weighted labeled metagraph. The weights and labels mean that the nodes and links can have numbers and symbols attached to them. So they can have types on them, and they can have numbers on them representing, say, a truth value or an importance value for a certain purpose.
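A rough sketch of that structure, assuming a simplified toy representation rather than OpenCog's actual API:

```python
# Weighted, labeled metagraph in the spirit of the AtomSpace: atoms are nodes
# or links, links can point at other links (not just nodes), and every atom
# carries a type label plus numeric values such as truth and importance.

class Atom:
    def __init__(self, atom_type, name=None, targets=(), truth=1.0, importance=0.0):
        self.type = atom_type            # label, e.g. "ConceptNode", "InheritanceLink"
        self.name = name                 # only nodes have names
        self.targets = list(targets)     # links point at other atoms (nodes OR links)
        self.truth = truth               # weight: strength of the relationship
        self.importance = importance     # weight: attention value

cat    = Atom("ConceptNode", "cat")
animal = Atom("ConceptNode", "animal")
inh    = Atom("InheritanceLink", targets=[cat, animal], truth=0.95)

# A link pointing at a link: attaching an evaluation to the inheritance
# relationship itself is what makes this a metagraph rather than a plain graph.
doubt  = Atom("EvaluationLink", targets=[Atom("PredicateNode", "doubted_by"), inh])
```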
And of course, like with all things, you can reduce that to a hypergraph and then a hypergraph
to a graph.
You can reduce the hypergraph to a graph and you can reduce the graph to an adjacency matrix.
So, I mean, there's always multiple representations.
But there's a level of representation that seems to work well here.
Right, right, right.
Got it.
And so similarly, you could have a link to a whole graph, because a whole graph could
represent say a body of information.
And I could say, I reject this body of information.
Then one way to do that is make that link go to that whole sub graph representing the
body of information.
Right.
I mean, there are many alternate representations. But anyway, what we have in OpenCog is an AtomSpace, which is this weighted labeled generalized hypergraph knowledge store. It lives in RAM; there's also a way to back it up to disk, and there are ways to spread it among multiple different machines. Then there are various utilities for dealing with that. So there's a pattern matcher, which lets you specify a sort of abstract pattern and then search through a whole AtomSpace, this weighted labeled hypergraph, to see what sub-hypergraphs may match that pattern, for example.
So then there's something called the CogServer in OpenCog, which lets you run a bunch of different agents or processes in a scheduler. And each of these agents basically reads stuff from the AtomSpace and writes stuff to the AtomSpace. So this is sort of the basic operational model of OpenCog as a software framework.
Right.
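A toy sketch of that operational model, pairing with the Atom sketch earlier (OpenCog's real CogServer, scheduler, and pattern matcher are far more involved; the agent here is purely illustrative):

```python
# Shared atom store plus a scheduler that repeatedly gives each agent a chance
# to read from and write to that store.

class AtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, atom):
        self.atoms.append(atom)

    def match(self, predicate):
        # Stand-in for the pattern matcher: return atoms satisfying a test.
        return [a for a in self.atoms if predicate(a)]

class AttentionAgent:
    def step(self, atomspace):
        # Read: find important atoms; write: bump importance of their targets.
        for atom in atomspace.match(lambda a: getattr(a, "importance", 0) > 0.5):
            for target in getattr(atom, "targets", []):
                target.importance = min(1.0, getattr(target, "importance", 0) + 0.1)

def run_scheduler(atomspace, agents, cycles=10):
    for _ in range(cycles):
        for agent in agents:
            agent.step(atomspace)        # agents mostly read and write the shared store
```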
And of course, that's, there's a lot there
just from a scalable software engineering standpoint.
So you could use this... I don't know if you've, have you looked into Stephen Wolfram's physics project recently, with the hypergraphs and stuff? Could you theoretically use, like, the software framework to play with it?
You certainly could, although Wolfram would rather die than use anything but Mathematica for his work.
Well, that's... yeah, but there's a big community of people who are,
you know, would love integration. Like you said, the young minds love the idea of integrating
of connecting. Yeah, that's right. And I would add on that note, the idea of using hypergraph type
models in physics is not very new. Like, if you look, the Russians did it first.
Well, I'm sure they did.
And a guy named Ben Dribus, who's a mathematician, a professor in Louisiana or somewhere, had a beautiful book on quantum sets and hypergraphs and algebraic topology for discrete models of physics, and carried it much farther than Wolfram has. But he's not rich and famous, so it didn't get in the headlines.
But yeah, Wolfram aside, certainly that's a good way to put it. The whole OpenCog framework, you could use it to model biological networks and simulate biology processes. You could use it to model physics on discrete graph models of physics.
So you could use it to do, say, biologically realistic
neural networks, for example.
And so that's a framework. What do agents and processes do? Do they grow the graph? What kind of computations, just to get a sense, are they supposed to do?
So in theory, they could do anything they want to do.
They're just C++ processes.
On the other hand, the computation framework is designed for agents where most of their
processing time is taken up with reads and writes to the AtomSpace. And so that's a very different processing model than, say, the matrix multiplication based model that underlies most deep learning systems, right?
So, you could create an agent that just factored numbers for a billion years.
It would run within the OpenCog platform, but it would be pointless, right?
I mean, the point of doing OpenCog
is because you want to make agents that are cooperating
via reading and writing into this weighted labeled
hypergraph, right?
And so that has both cognitive architecture importance
because then this hypergraph is being used
as a sort of shared memory among different cognitive
processes,
but it also has software and hardware implementation implications
because current GPU architectures are not so useful for OpenCog,
whereas a graph chip would be incredibly useful, right?
And I think Graphcore has those now,
but they're not ideally suited for this.
But I think in the next, let's say three to five years, we're going to see new chips where like a graph is put
on the chip. And, you know, the back and forth between multiple processes acting SIMD and MIMD on that graph is going to be fast. And then that may do for OpenCog-type architectures what GPUs did for deep neural architectures.
As a small tangent, can you comment on your thoughts about neuromorphic computing? So, like, hardware implementations of all these different kinds of... I mean, are you interested? Are you excited by that possibility?
I'm excited about graph processors, because I think they can massively speed up OpenCog, which is a class of architectures that I'm working on. In principle, neuromorphic computing should be amazing. I haven't yet been fully sold on any of the systems that are out there. Memristors should be amazing too, right? So a lot of these things have obvious potential, but I haven't been able to put my hands on a system that seemed to manifest it. The systems should be amazing, but the current systems have not been great. I mean, for example, if you wanted to make a biologically realistic hardware neural network, like making a circuit in hardware that emulated the Hodgkin-Huxley equations or the Izhikevich equation, like the differential equations for a biologically realistic neuron, and putting that in hardware on the chip, that would make it more feasible to make a large-scale, truly biologically realistic neural network. Now, what's been done so far is not like that.
So I guess, personally as a researcher, I mean, I've done a bunch of work in computational neuroscience, where I did some work with IARPA in DC, the Intelligence Advanced Research Projects Activity.
We were looking at how do you make a
biologically realistic simulation
of seven different parts of the brain cooperating
with each other using like realistic non-linear
dynamical models of neurons,
and how do you get that to simulate what's going on
in the mind of a geoint intelligence analyst
while they're trying to find terrorists on a map, right?
So if you wanna do something like that,
having neuromorphic hardware that really let you simulate a realistic model of the neuron would be amazing, but that's sort of with my computational neuroscience hat on, right?
With an AGI hat on, I'm just more interested
in these hypergraph knowledge representation
based architectures, which would benefit more from various types of graph processors,
because the main processing bottleneck is reading and writing to RAM, it's reading and writing to the graph in RAM. The main processing bottleneck for this kind of proto-AGI architecture
is not multiplying matrices. And for that reason, GPUs, which are really good at multiplying matrices,
don't apply as well. There are frameworks like Gunrock and others that try to boil down graph
processing to matrix operations and they're cool, but you're still putting a square peg into a round hole in
a certain way. The same is true. I mean, current quantum machine learning, which is very
cool. It's also all about how to get matrix and vector operations in quantum mechanics.
And I see why that's natural to do. I mean, quantum mechanics is all unitary matrices
and vectors, right? On the other hand, you could also try to make graph-centric quantum computers
which I think is where things will go. And then we can take that OpenCog implementation layer and implement it in an uncollapsed state inside a quantum computer. But that may be the singularity squared, right? I'm not sure we need that to get to human level.
Human level... that's already beyond the first singularity. But can we just go back to OpenCog, and the hypergraph in OpenCog?
That's the software framework, right? So the next thing is our cognitive architecture
tells us particular algorithms to put there.
Got it.
Can we backtrack a little on how this graph is designed? Is it in general supposed to be sparse, and do the operations constantly grow and change the graph?
Yeah, the graph is sparse.
But is it constantly adding links and so on?
Yeah, it is a self-modifying hypergraph.
So the write and read operations you were referring to, this isn't just a fixed graph to which you change the weights.
It's a growing graph.
Yeah, that's true.
So it is a different model than, say, current deep neural nets, which have a fixed neural architecture and you're updating the weights. Although there have been, like, cascade correlation neural net architectures that grow new nodes and links, the most common neural architectures now have a fixed neural architecture and you're updating the weights. And in OpenCog, you can update the weights, and that certainly happens a lot. But adding new nodes, adding new links,
removing nodes and links is an equally critical part
of the system's operations.
Yeah. So now, when you start to add these cognitive algorithms on top of this OpenCog architecture, what does that look like?
Yeah, so that within this framework,
then creating a cognitive architecture
is basically two things.
It's choosing what type system you want to put on the nodes and links in the hypergraph,
what types of nodes and links you want.
And then it's choosing what collection of agents, what collection of AI algorithms or processes
are going to run to operate on this hypergraph.
And of course those two decisions are closely connected to each other.
So in terms of the type system, there are some links that are more neural-net-like, that just have weights that get updated by Hebbian learning, and activation spreads along them.
There are other links that are more logic-like
and nodes that are more logic-like.
So you could have a variable node.
And you can have a node representing a universal
or existential quantifier, as in predicate logic or term logic.
So you can have logic-like nodes and links
or you can have neural-like nodes and links.
You can also have procedure-like nodes and links, as in, say,
combinatory logic or lambda calculus, representing programs. So you can have nodes and links representing
many different types of semantics, which means you could make a horrible ugly mess,
or you could make a system where these different types of knowledge all interpenetrate and synergize with each other beautifully, right?
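To give a flavor of how several families of atom types can coexist in one store, here is a toy illustration (type names are in the spirit of OpenCog but simplified, and the process lists are illustrative):

```python
# One store, several families of atom types, each with its own semantics and
# its own processes that act on it.

ATOM_TYPE_FAMILIES = {
    # logic-like: carry truth values, processed by inference rules
    "InheritanceLink": "logic", "ImplicationLink": "logic",
    "VariableNode": "logic", "ForAllLink": "logic",
    # neural-like: carry weights updated by Hebbian learning, spread activation
    "HebbianLink": "neural",
    # procedure-like: lambda-calculus / combinator style executable structure
    "LambdaLink": "procedure", "SchemaNode": "procedure",
}

def processes_for(atom_type):
    family = ATOM_TYPE_FAMILIES.get(atom_type, "unknown")
    return {
        "logic": ["probabilistic deduction", "unification"],
        "neural": ["Hebbian weight update", "attention spreading"],
        "procedure": ["execution", "rewrite / optimization"],
    }.get(family, [])

print(processes_for("InheritanceLink"))   # ['probabilistic deduction', 'unification']
```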
So the hypergraph can contain programs?
Yeah, it can contain programs, although in the current version it is a very inefficient way to guide the execution of programs, which is one thing that we are aiming to resolve with our rewrite of the system now.
So what, to you, is the most beautiful aspect of OpenCog? Just to you personally, some aspect that captivates your imagination, from beauty or power?
What fascinates me is finding a common representation that underlies
abstract
declarative knowledge and
sensory knowledge and movement knowledge and procedural knowledge and episodic knowledge. Finding the right level of representation where all these types of knowledge are stored
in a sort of universal and interconvertible, yet practically manipulable way.
So that's to me, that's the core, because once you've done that,
then the different learning algorithms can help each other out.
Like what you want is, if you have a logic engine that helps with declarative knowledge,
and you have a deep neural net that gathers perceptual knowledge,
and you have say an evolutionary learning system
that learns procedures, you want these
to not only interact on the level of sharing results
and passing inputs and outputs to each other,
you want the logic engine when it gets stuck
to be able to share its intermediate
state with the neural net and with the evolutionary learning algorithm so that they can help each
other out of bottlenecks and help each other solve combinatorial explosions by intervening
inside each other's cognitive processes.
But that can only be done if the intermediate state of a logic engine, the evolutionary
learning engine and a deep
neural net are represented in the same form. And that's what we figured out how to do by putting
the right type system on top of this weighted labeled hypergraph. So is there, can you maybe elaborate
on what are the different characteristics of a type system that can coexist amongst all these different kinds of knowledge that needs to be represented.
And I mean, like, is it hierarchical? Just any kind of insights you can give on that kind of
type system?
So this gets very nitty-gritty and mathematical, of course, but one key part is switching from predicate logic to term logic.
What is predicate logic? What is term logic?
So, term logic was invented by Aristotle,
or at least that's the oldest recollection we have of it. But term logic breaks down
basic logic into basically simple links between nodes. Like an inheritance link between node A and node B.
So in term logic, the basic deduction operation is A implies B, B implies C,
therefore A implies C. Whereas in predicate logic,
the basic operation is modus ponens, like A, A implies B, therefore B. So there,
there's a slightly different way of breaking down logic. But by breaking down logic into term logic, you get a nice way of breaking logic down into nodes and links. So your
concepts can become nodes, the logical relations become links. And so then inference is like, so if this link is A implies B,
this link is B implies C,
then deduction builds a link A implies C.
And your probabilistic algorithm
can assign a certain weight there.
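A tiny sketch of that deduction step (the strength-combination formula here is a naive placeholder for illustration, not PLN's actual deduction rule):

```python
# Term-logic deduction over weighted links: given A->B and B->C with strengths,
# build A->C with a derived strength.

links = {("A", "B"): 0.9, ("B", "C"): 0.8}

def deduce(links, a, b, c):
    if (a, b) in links and (b, c) in links:
        strength = links[(a, b)] * links[(b, c)]   # naive product, for illustration
        links[(a, c)] = strength                   # the new inheritance link gets a weight
        return strength
    return None

print(deduce(links, "A", "B", "C"))   # builds the A->C link with strength 0.72
```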
Now, you may also have, like, a Hebbian neural link from A to C, which is the degree to which A being the focus of attention should make C be the focus of attention.
So you could have both the Hebbian neural link and the symbolic logical inheritance link in your term logic, and they have separate meanings, but they could be used to guide each other as well.
If there is a large amount of neural weight on the link between A and B, that may
direct your logic engine to think about, well, what is the relation? Are they similar?
Is there an inheritance relation? Are they similar in some context? On the other hand, if
there's a logical relation between A and B, that may direct your neural component to think,
well, when I'm thinking about A, should I be directing some attention to B also, because there's a logical relation. So in terms of logic, there's a lot of thought that went into how do you break down logic relations, including basic sort of propositional logic relations, as Aristotelian term logic deals with, and then quantifier logic relations also; how do you break those down elegantly into a hypergraph?
Because, I mean, you can boil a logic expression down to a graph in many different ways.
Many of them are very ugly, right?
We tried to find elegant ways of sort of hierarchically breaking down complex logic expressions into
nodes and links so that if you have say different nodes representing,
you know, Ben, AI, Lex, interview, or whatever, the logic relations between those things are compact in the node and link representation. So then when you have a neural net acting on those
same nodes and links, the neural net and the logic engine can sort of interoperate with each
other.
And also interpretable by humans? Is that an important thing?
That's tough.
Yeah, in simple cases, it's interpretable by humans, but then honestly, you know, I would
say logic systems give more potential for transparency and comprehensibility
than neural net systems, but you still have to work at it.
Because I mean, if I show you a predicate logic proposition
with like 500 nested universal and existential quantifiers
and 217 variables, that's no more comprehensible
than the weight matrix of a neural network, right?
So I'd say the logic expressions an AI learns from its experience are mostly totally opaque to human beings, and maybe even harder to understand than that. Because, I mean, when you have multiple nested quantifier bindings, it's a very high level of abstraction. There is a difference, though,
in that within logic, it's a little more straightforward to pose the problem of like normalize this
and boil this down to a certain form.
I mean, you can do that in neural nets too.
Like, you can distill a neural net to a simpler form, but that's more often done to make a neural
net that'll run on an embedded device or something.
It's harder to distill a neural net to a comprehensible form than it is to simplify a logic expression to a comprehensible form, but it doesn't come for free. What's in the AI's mind is incomprehensible to a human unless you do some special work to make it comprehensible.
So on the procedural side, there's some different and interesting voodoo there.
If you're familiar, in computer science there's something called the Curry-Howard correspondence, which is a one-to-one mapping between proofs and programs. So every program
can be mapped into a proof. Every proof can be mapped into a program. You can model this using
category theory and a bunch of nice math, but we want to make that practical. So that if you
have an executable program that, like, moves the robot's arm or figures out in what order to say things in a dialogue, that's a procedure represented in OpenCog's hypergraph. But if you want to reason on how to improve that procedure, you need to map that procedure into logic using the Curry-Howard isomorphism, so that the logic engine can reason about how to improve that procedure, and then map that back into the procedural representation that is efficient for execution.
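As an illustrative sketch of that round trip (this is a toy rewrite on a tree, not the actual Curry-Howard machinery or OpenCog's representation):

```python
# An executable procedure is mirrored as a node-and-link structure, a reasoner
# rewrites that structure into a better one, and the result is mapped back to
# something efficient to execute.

program = ("map", "double", ("map", "increment", "input"))   # procedure as a tree

def rewrite(tree):
    # One "reasoning" rule: map(f, map(g, x)) => map(compose(f, g), x)
    if isinstance(tree, tuple) and tree[0] == "map" and \
       isinstance(tree[2], tuple) and tree[2][0] == "map":
        f, (_, g, x) = tree[1], tree[2]
        return ("map", ("compose", f, g), rewrite(x) if isinstance(x, tuple) else x)
    return tree

PRIMITIVES = {"double": lambda v: v * 2, "increment": lambda v: v + 1}

def apply_fn(fn, v):
    if isinstance(fn, tuple) and fn[0] == "compose":
        return apply_fn(fn[1], apply_fn(fn[2], v))
    return PRIMITIVES[fn](v)

def execute(tree, data):
    # Map the (possibly rewritten) structure back into execution.
    if tree == "input":
        return data
    if tree[0] == "map":
        return [apply_fn(tree[1], v) for v in execute(tree[2], data)]
    raise ValueError(tree[0])

print(execute(rewrite(program), [1, 2, 3]))   # [4, 6, 8], now done in a single pass
```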
So again, that comes down to not just can you make your procedure into a bunch of nodes and links,
because I mean, that can be done trivially.
A C++ compiler has nodes and links inside it.
Can you boil down your procedure
into a bunch of nodes and links
in a way that's like hierarchically decomposed
and simple enough?
That it can reason about?
Yeah, yeah, so that, given the resource constraints at hand, you can map it back and forth to your term logic fast enough, and without having
a bloated logic expression, right?
So there's just a lot of,
there's a lot of nitty-gritty particulars there, but by the same token,
if you ask a chip designer,
like how do you make the Intel i7 chip so good?
Right?
There's a long list of technical answers there,
which will take a while to go through, right?
And this has been decades of work.
I mean, the first AI system of this nature,
that I tried to build was called Webmind, in the mid-1990s.
And we had a big graph operating in RAM,
implemented with Java 1.1, which is a terrible,
terrible implementation idea.
And then each node had its own processing. So, like, the core loop looped through all nodes in the network and let each node enact whatever little thing it was doing. And we had logic and neural nets in there, and evolutionary learning.
But we hadn't done enough of the math
to get them to operate together very cleanly.
So it was really, it was quite a horrible mess.
So as well as shifting to an implementation where the graph is its own object and the agents are separately scheduled, we've also done
a lot of work on how do you represent programs, how do you represent procedures, you know,
how do you represent genotypes for evolution in a way that the interoperability between the different types of learning,
associated with these different types of knowledge actually works.
And that's been quite difficult.
It's taken decades and it's totally off to the side of what the commercial mainstream
of the AI field is doing, which isn't thinking about representation at all really,
although you could see like in the DNC,
they had to think a little bit about how do you make the representation of a map
in this memory matrix work together
with a representation needed for say,
visual pattern recognition in the hierarchical neural network.
But I would say we have taken that direction
of taking the types of knowledge you need for different
types of learning, like declarative procedural, attentional.
And how do you make these types of knowledge represent in a way that allows cross-learning
across these different types of memory?
We've been prototyping and experimenting with this within OpenCog.
And before that, in Webmind,
Now, disappointingly to all of us,
this has not yet been cashed out in an AGI system, right?
I mean, we've used this system within our consulting business
so we've built natural language processing
and robot control and financial analysis, we built a bunch of
sort of vertical market specific proprietary AI projects
that use OpenCog on the back end,
but we haven't, that's not the AGI goal, right?
That's interesting, but it's not the AGI goal.
So now, what we're looking at is our rebuild of the system, 2.0.
Yeah, we're also calling it TrueAGI. So we're not quite sure what the name is yet. We made a website for trueagi.io, but we haven't put anything on there yet. We may come up with an even better name.
But it's kind of like the 'real AI' starting point, for example.
But I like 'true' better, because, like, you can be true-hearted, right? You can be true to your girlfriend. So 'true' has a number of meanings, and it also has logic in it, right? Because logic is a key part of the approach.
I like it.
So yeah, with the TrueAGI system,
we're sticking with the same basic architecture,
but we're trying to build on what we've learned.
One thing we've learned is that we need type checking among dependent types to be much
faster and among probabilistic dependent types to be much faster.
So as it is now, you can have complex types on the nodes and links. But if you want types to be first class citizens,
so that the types can be variables,
and then you do type checking among complex higher order
types, you can do that in the system now,
but it's very slow.
This is the kind of stuff that's done in cutting-edge programming languages like Agda or something, these obscure research languages.
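To make the "types as first-class citizens" point concrete, here is a toy Python sketch in which type expressions are ordinary data containing variables, and checking is just unification over them. OpenCog's real probabilistic dependent-type machinery is far richer than this; the issue raised here is making such checks fast.

```python
# Toy illustration only: type expressions as data, with variables written as '?name'.
def unify(t1, t2, bindings=None):
    """Unify two type expressions (strings or tuples); returns a binding dict or None."""
    bindings = dict(bindings or {})
    if isinstance(t1, str) and t1.startswith("?"):
        if t1 in bindings:
            return unify(bindings[t1], t2, bindings)
        bindings[t1] = t2
        return bindings
    if isinstance(t2, str) and t2.startswith("?"):
        return unify(t2, t1, bindings)   # put the variable first and reuse the case above
    if isinstance(t1, str) or isinstance(t2, str):
        return bindings if t1 == t2 else None
    if len(t1) != len(t2):
        return None
    for a, b in zip(t1, t2):
        bindings = unify(a, b, bindings)
        if bindings is None:
            return None
    return bindings

# A node's (invented) type, and a higher-order query type with variables in it:
node_type = ("Arrow", "Image", ("List", "BoundingBox"))
query     = ("Arrow", "?input", ("List", "?item"))
print(unify(query, node_type))   # {'?input': 'Image', '?item': 'BoundingBox'}
```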
On the other hand, we've been doing a lot of work on tying together deep neural nets with symbolic learning.
So we did a project for Cisco, for example,
which was on, this was street scene analysis,
but they had deep neural models for a bunch of cameras
watching street scenes,
but they trained a different model for each camera
because they couldn't get the transfer learning to work
between camera A and camera B.
So we took what came out of all the deep neural models for the different cameras. We
fed it into an OpenCog symbolic representation. Then we did some pattern mining on it and some reasoning on what came out of all the different cameras within the symbolic graph. And that worked well for that application. I mean, Hugo Latapie from Cisco gave a talk touching on that at last year's AGI conference, which was in Shenzhen.
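A stick-figure version of that pipeline shape, with invented function and label names and faked detections standing in for the per-camera deep models, might look like this:

```python
# Per-camera "neural" detections dropped into one shared symbolic store,
# then a trivial cross-camera relational query that no single per-camera model can express.
from collections import defaultdict
from itertools import combinations

def fake_detector(camera_id, frames):
    """Stand-in for a per-camera deep model; returns (label, timestamp) detections."""
    return frames  # in reality: model_for[camera_id](frames)

facts = []  # shared symbolic store: facts of the form (camera, label, time)
streams = {
    "cam_A": [("red_truck", 10), ("pedestrian", 12)],
    "cam_B": [("red_truck", 14), ("cyclist", 15)],
}
for cam, frames in streams.items():
    for detection in fake_detector(cam, frames):
        facts.append((cam, *detection))

# "Pattern mining": same label seen on two different cameras close in time.
by_label = defaultdict(list)
for cam, label, t in facts:
    by_label[label].append((cam, t))

for label, sightings in by_label.items():
    for (c1, t1), (c2, t2) in combinations(sightings, 2):
        if c1 != c2 and abs(t1 - t2) <= 5:
            print(f"{label} seen at {c1} (t={t1}) then {c2} (t={t2}): possibly the same object")
```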
On the other hand, we learned from there,
it was kind of clunky to get the deep neural models
to work well with the symbolic system
because we were using Torch,
and Torch keeps a sort of state computation graph,
but you needed like real-time access
to that computation graph within our hypergraph. And we certainly did it. Alexey Potapov, who leads our St. Petersburg team, wrote a great paper on cognitive modules in OpenCog, explaining how you deal with the Torch compute graph inside OpenCog. But in the end, we realized that just hadn't been one of our design thoughts when we built
OpenCog, right? So between wanting really fast dependent type checking and wanting much more efficient
interoperation between the computation graphs of deep neural net frameworks and OpenCog's
hypergraph and adding on top of that, wanting to more effectively run an OpenCog hypergraph
distributed across RAM in 10,000 machines. We're doing dozens of machines now, but we just didn't architect it with that sort of modern scalability in mind. So these performance requirements are what have driven us to want to re-architect the base,
but the core AGI paradigm doesn't really change.
Like the mathematics is the same.
It's just, we can't scale to the level that we want
in terms of distributed processing or speed
of various kinds of processing with the current infrastructure
that was built in the period 2001 to 2008,
which is hardly shocking.
Yeah, so.
Well, I mean, the three things you mentioned are really interesting.
So, what do you think about, in terms of interpretability, communicating with the computational
graph of neural networks?
What do you think about the representations that neural networks form?
I mean, they're bad, but there's many ways that you could deal with that.
So, I've been wrestling with this a lot in some work
on unsupervised grammar induction. And I have a simple paper on that that I'll give at the next AGI conference,
the online portion of which is next week, actually.
What is grammar induction?
So this isn't AGI either,
but it's sort of on the verge between narrow AI and AGI or something.
Unsupervised grammar induction is the problem.
Throw your AI system a huge body of text
and have it learn the grammar of the language
that produced that text.
So you're not giving it labeled examples.
So you're not giving it like a thousand sentences
where the parses were marked up by graduate students.
So it's just got to infer the grammar from the text. It's like the Rosetta Stone, but worse, right?
Because you only have the one language.
Yeah, and you have to figure out what the grammar is. So that's not really AGI, because, I mean, the way a human learns language is not that, right? I mean, we learn from language that's used in context. So it's a social, embodied thing. We see how a given sentence is grounded in observation.
There's an interactive element, I guess.
Yeah, yeah, yeah.
On the other hand, so I'm more interested in that.
I'm more interested in making an AGI system
learn language from its social and embodied experience.
On the other hand, that's also more of a pain
to do. And that would lead us into Hanson Robotics and their robotics work, which I know we'll talk about in a few minutes. But just as an intellectual exercise, as a learning exercise,
trying to learn grammar from a corpus is very, very interesting, right? And that's been a field
in AI for a long time. No one can do it very well.
So we've been looking at transformer neural networks
and tree transformers, which are amazing.
These came out of Google Brain, actually.
And actually, on that team was Łukasz Kaiser, who used to work for me in the period 2005 through '08 or something. So it's been fun to see my former sort of AGI employees disperse
and do all these amazing things.
Way too many sucked into Google, actually.
Well, anyway, we'll talk about that, too.
Łukasz Kaiser and a bunch of these guys, they created transformer networks, that classic paper, Attention Is All You Need,
and all these things following on from that.
So we're looking at transformer networks, and, like, these are able to, I mean, this is what underlies GPT-2 and GPT-3 and so on, which are very, very cool and have absolutely no cognitive understanding of any of the texts they're looking at.
Like they're very intelligent idiots, right?
So sorry to take a small tangent, I'll bring us back,
but do you think GPT-3 understands?
No, no, it understands nothing.
It's a complete idiot?
So you don't think GPT-20 will understand like that?
No, no, no. That's not going to buy you understanding, any more than a faster car is going to get you to the moon.
Yeah, okay.
It's a completely different kind of thing. I mean,
these networks are very cool, and as an entrepreneur, I can see many valuable uses for them, and as an artist, I love them, right? So, I mean,
we're using our own neural model, which is along those lines, to control the Philip K. Dick robot now, and it's amazing to, like, train a neural model on Philip K. Dick and see it come up with, like, crazed, stoned-philosopher pronouncements, very much like what Philip K. Dick might have said, right? These models are super cool, and I'm working with Hanson Robotics now on using a similar but more sophisticated one for Sophia,
which we haven't launched yet.
But so I think it's cool.
But note these are recognizing a large number
of shallow patterns.
They're not forming an abstract representation.
And that's the point I was coming to
when we were looking at grammar induction: we tried to mine patterns out of the structure of the transformer network.
And you can, but the patterns aren't what you want. They're nasty. So I mean, if you do supervised learning, if you look at sentences where you know the correct parse of a sentence, you can learn a matrix that maps between the internal representation of the transformer and the parse of the sentence. And so then you can actually train something that will output the sentence parse from the transformer network's internal state. And we did this; I think Christopher Manning and some others have now done this also.
But I mean, what you get is that the representation is horribly ugly and is scattered all over the network and doesn't look like the rules of grammar that you know are the right rules of
grammar, right?
It's kind of ugly.
So what we're actually doing is we're using a symbolic grammar learning algorithm, but we're using the transformer neural network as a sentence probability oracle. So, like, if you have a rule of grammar and you aren't sure if it's a correct rule of grammar or not, you can generate a bunch of sentences using that rule of grammar and a bunch of sentences violating that rule of grammar, and you can see whether the transformer model thinks the sentences obeying the rule of grammar are more probable than the sentences disobeying the rule of grammar. So in that way, you can use the neural model as a sentence probability oracle to guide a symbolic grammar learning process.
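As a minimal sketch of that oracle idea (a stub scorer stands in for the transformer's log-probability, and the "grammar rule" is just toy subject-verb agreement; all names are invented):

```python
# Score a candidate grammar rule by asking a language model which sentences it prefers.
import random

def sentence_logprob(sentence: str) -> float:
    """Stand-in scorer; in practice this would be a transformer's log-probability."""
    return 0.0 if sentence in ("the dog barks", "the dogs bark") else -5.0

def generate(rule_ok: bool, n: int = 20):
    """Generate sentences obeying (rule_ok=True) or violating a toy agreement rule."""
    subjects = [("the dog", "barks"), ("the dogs", "bark")]
    out = []
    for _ in range(n):
        subj, verb = random.choice(subjects)
        if not rule_ok:
            verb = "bark" if verb == "barks" else "barks"  # break the agreement
        out.append(f"{subj} {verb}")
    return out

obeying   = [sentence_logprob(s) for s in generate(rule_ok=True)]
violating = [sentence_logprob(s) for s in generate(rule_ok=False)]
score = sum(obeying) / len(obeying) - sum(violating) / len(violating)
print("rule looks plausible" if score > 0 else "rule looks wrong", score)
```

In practice the same loop would average a real language model's scores over many generated sentences, and the symbolic learner would keep or discard candidate rules based on that margin.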
And that seems to work better than trying to milk the grammar out of the neural network, which doesn't have it in there. So I think the thing is, these neural nets are not getting a semantically meaningful representation internally, by and large.
So one line of research is to try to get them to do that, and InfoGAN was trying to do that. So if you look back, like two years ago, there were all these papers on, like, Edward, this probabilistic programming neural net framework that Google had, which came out of InfoGAN. So the idea there was you could train an InfoGAN neural net model, which is a generative adversarial network, to recognize and generate faces, and the model would automatically learn a variable for how long the nose is, and automatically learn a variable for how wide the eyes are, or how big the lips are, or something, right? So it automatically learned these variables, which have a semantic meaning.
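The shape of that InfoGAN trick, reduced to a few lines with stub "networks" (this is not a real GAN or the Edward framework, just the structural idea that an auxiliary head tries to recover the semantic code from the generated output):

```python
# Alongside unstructured noise, the generator receives an explicit latent code c,
# and an auxiliary head Q tries to recover c from the generated image.
# Maximizing that recoverability (a mutual-information surrogate) is what pushes
# each code dimension toward a semantic meaning like "nose length".
import numpy as np

rng = np.random.default_rng(0)
W_gen = rng.normal(size=(8, 64))   # stand-in generator weights: (noise + code) -> "image"
W_q   = rng.normal(size=(64, 3))   # stand-in Q head: "image" -> recovered code

def generator(noise, code):
    return np.tanh(np.concatenate([noise, code]) @ W_gen)

def q_head(image):
    return image @ W_q

noise = rng.normal(size=5)
code  = np.array([0.7, -0.2, 0.1])            # e.g. nose length, eye width, lip size
image = generator(noise, code)
recovered = q_head(image)
info_loss = np.mean((recovered - code) ** 2)  # the extra term InfoGAN adds to the GAN loss
print("mutual-information surrogate loss:", info_loss)
```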
So that was a rare case where a neural net trained with a fairly standard GAN method was able to actually learn a semantic representation. So for many years, many of us tried to take that the next step and get a GAN-type neural network that would have not just a list of semantic latent variables, but would have, say, a Bayes net of semantic latent variables with dependencies between them. The whole probabilistic programming framework Edward was made for that. I mean, no one got it to work, right? And it could be...
I think it's possible. Yeah, do you?
I don't know. It might be that backpropagation just won't work for it, because the gradient surfaces are too screwed up. Maybe you could get it to work using CMA-ES or some, like, floating-point evolutionary algorithm.
Right.
We tried. We didn't get it to work. Eventually, we just paused that rather than gave it up. We paused that and said, well, okay, let's try other ways to learn what representations are implicit in that network, without trying to make them grow inside that network. I described how we're doing that in
language. You can do similar things in vision. So what is an Oracle?
Yeah, yeah, yeah. So that's one way: you use a structure learning algorithm, which is symbolic, and then you use the deep neural net as an oracle to guide the structure learning algorithm. The other way to do it is like InfoGAN was trying to do, and try to tweak the neural network to have the symbolic representation inside it. I tend to think what the brain is doing is more like using the deep neural net type thing as an oracle. Like, I think the visual cortex or the cerebellum are probably learning a non-semantically-meaningful, opaque, tangled representation. And then when they interface with the more cognitive parts of the cortex, the cortex is sort of using those as an oracle and learning the abstract representation.
So if you do sports, say, take, for example, serving in tennis, right?
I mean, my tennis serve is okay, not great, but I learned it by trial and error, right?
And I mean, I learned music by trial and error too.
I just sit down and play.
But then if you're an athlete, which I'm not a good athlete, I mean, then you'll watch videos of yourself serving and your coach will help you think about
what you're doing. And you'll then form a declarative representation, but your cerebellum maybe didn't have a declarative representation. Same way with music, like, I will hear something
in my head, I'll sit down and play the thing like I heard it. And then I will try to study what my fingers did
to see like what did you just play?
Like how did you do that?
Right?
Because if you're composing,
you may want to see how you did it
and then declaratively morph that in some way
that your fingers wouldn't think of, right?
But the physiological movement may come out of some
opaque, like, cerebellar reinforcement-learned thing, right? And so, I think, trying to milk the structure out of a neural net by treating it as an oracle is maybe more like how your declarative mind post-processes what your visual or motor cortex is doing. I mean, in vision, it's the same way.
You can recognize beautiful art much better than you can say why you think that piece of art is beautiful. But if you're trained as an art critic, you do learn to say why.
And some of it's bullshit, but some of it isn't.
Some of it is learning to map sensory knowledge
into declarative and linguistic knowledge, yet without necessarily making the sensory system
itself use a transparent and easily communicable representation.
Yeah, that's fascinating. To think of neural networks as dumb question answerers that you can just milk to build up a knowledge base.
And it could be multiple networks, I suppose, from different to...
Yeah, yeah.
So I think if a group like DeepMind or OpenAI were to build AGI, and I think DeepMind is like a thousand times more likely, from what I can tell, because they've hired a lot of people with broad minds and many different approaches and angles on AGI, whereas OpenAI is also awesome, but I see them as more of, like, a pure deep reinforcement learning...
Yeah, I got you. So, you're right, there's, I mean, there's so much interdisciplinary work at DeepMind, like neuroscience.
And you put that together with Google Brain, which, granted, they're not working that closely together now. But, you know, my oldest son Zarathustra is doing his PhD in machine learning applied to automated theorem proving in Prague under Josef Urban.
So the first paper, DeepMath, which applied deep neural nets to guide theorem proving, was out of Google Brain. I mean, by now, the automated theorem proving community is going way, way, way beyond anything
Google was doing.
But still, yeah, but anyway, if that community was going to make an AGI, probably one way
they would do it was, you know, take 25 different neural modules,
architected in different ways, maybe resembling different parts of the brain, like a basal ganglia
model, cerebellum model, a thalamus module, a few hippocampus models, a number of different
models representing parts of the cortex, right? Take all of these and then wire them together to co-train and, like, learn them together. That would be an approach to creating an AGI. One could implement something like that efficiently on top of our TrueAGI, like, OpenCog 2.0 system, once it exists, although obviously Google has their own highly efficient implementation architecture.
So I think that's a decent way to build AGI.
I was very interested in that in the mid 90s,
but, I mean, the state of knowledge about how the brain works sort of pissed me off. It wasn't there yet. Like, you know, in the hippocampus you have these concept neurons, like the so-called grandmother neuron, which everyone laughed at; it's actually there.
Like, I have some Lex Fridman neurons that fire differentially when I see you and not
when I see any other person, right?
Yeah.
So how do these Lex Fridman neurons, how do they coordinate with the distributed representation of Lex Fridman I have in my cortex, right?
There's some back and forth between cortex and hippocampus that lets these
discrete symbolic representations in hippocampus correlate and cooperate with the distributed
representations in cortex. This probably has to do with how the brain does its version of abstraction
and quantifier logic, right? Like you can have a single neuron in hippocampus that activates a whole
distributed activation pattern in cortex. Well, this may be how the brain does, like, symbolization and abstraction, as in functional programming or something. But we can't measure it. We don't have enough electrodes stuck between the cortex and the hippocampus in any known experiment to measure it. So I got frustrated with that
direction, not because it's impossible.
We just don't understand enough yet. And of course, it's a valid research direction. You can try to understand more and more, and we are measuring more and more about what happens in the brain now than ever before. So it's quite interesting. On
the other hand, I sort of got more of an engineering mindset
about AGI. I'm like, well, okay, we don't know how the brain works that well.
We don't know how birds fly that well yet either.
We have no idea how a hummingbird flies in terms of the aerodynamics of it.
On the other hand, we know basic principles of like flapping and pushing the air down.
And we know the basic principles of how the different parts of the brain work.
So let's take those basic principles and engineer something that embodies those basic principles, but, you know, is
well designed for the hardware that we have on hand right now.
Yes. Do you think we can create AGI before we understand how the brain works?
Yeah. I think that's probably what will happen. And maybe the AGI will help us do better brain imaging
that will then let us build artificial humans,
which is very, very interesting to us
because we are humans, right?
I mean, building artificial humans is super worthwhile.
I just think it's probably not the shortest path to AGI.
So it's a fascinating idea that we will build AGI
to help us understand ourselves.
You know, a lot of people ask me, you know, young people interested in doing artificial intelligence, they look at, sort of, you know, doing graduate-level, even undergrad, but graduate-level research, and they see where the artificial intelligence community stands now; it's not really AGI-type research for the most part.
Yeah.
So the natural question they ask is, what advice would you give?
I mean, maybe I could ask: if people were interested in working on OpenCog, or in some kind of direct or indirect connection to OpenCog or AGI research, what would you recommend?
OpenCog, first of all, is an open-source project. There's a Google Group discussion list, there's a GitHub repository. So if anyone's interested in lending a hand with that aspect of AGI, introduce yourself on the OpenCog email list, and there's a Slack as well. I mean, we're certainly interested to have inputs into our redesign process for a new version of OpenCog, but also we're doing a lot of very interesting research. I mean, we're working on data analysis for COVID clinical trials. We're working with Hanson Robotics. We're doing a lot of cool things with the current version of OpenCog now. So there's certainly opportunity to jump into OpenCog or various other open-source AGI-oriented projects.
So would you say there's, like, master's and PhD theses in there?
Plenty, yeah, plenty, of course.
I mean, the challenge is to find a supervisor
who wants to foster that sort of research,
but that's way easier than it was when I got my PhD.
You're right.
So, okay, great.
We talked about OpenCog, which is kind of one,
the software framework, but also the actual attempt
to build an AGI system.
And then there is this exciting idea of SingularityNET. So maybe can you say first, what is SingularityNET?
Sure, sure. SingularityNET is a platform for realizing a decentralized network of artificial intelligences.
So Marvin Minsky, the AI pioneer who I knew a little bit,
he had the idea of a society of minds,
like you should achieve an AI,
not by writing one algorithm or one program,
but you should put a bunch of different AI's out there,
and the different AI's will interact with each other, each playing their
own role.
And then the totality of the society of A.I.s would be the thing that displayed the human
level intelligence.
And I had, when he was alive, I had many debates with Marvin about this idea.
And he, I think he really thought the mind was more like a society than I do. I think
you could have a mind that was as disorganized as a human society, but I think a human like
mind has a bit more central control than that. I mean, we have the thalamus and the medulla and the limbic system. We have a sort of top-down control system that guides
much of what we do more so than a society does. So I think he's stretched that metaphor a little
too far, but I also think there's something interesting there. And so in the 90s, when I started my
first sort of non-academic AI project, WebMind, which was
an AI startup in New York, in the Silicon Alley area, in the late 90s, what I was aiming
to do there was make a distributed society of AI's, the different parts of which would
live on different computers all around the world, and each one would do its own thinking
about the data local to it.
But they would all share
information with each other and outsource work with each other and cooperate and the intelligence
would be in the whole collective. And I organized a conference together with Francis Heylighen at the Free University of Brussels in 2001, which was the Global Brain 0 conference. And we're planning the next version, the Global Brain 1 conference, at the Free University of Brussels for next year, 2021, so 20 years after. Then maybe we can have the next one 10 years after that, like, exponentially faster until the singularity comes, right?
The timing is right.
Yeah, yeah, yeah. So the idea with the global brain was, you know, maybe the AI won't just be in a program
on one guy's computer, but the AI will be, you know, in the internet as a whole with a cooperation
of different AI modules living in different places. So one of the issues you face when
architecting a system like that is, you know, how is the whole thing controlled? Do you have, like, a centralized control unit that pulls the puppet strings of all the different modules there?
Or do you have a fundamentally decentralized network
where the society of AIs is controlled
in some democratic and self-organized way
by all the AIs in that society, right?
And Francis and I had different views on many things, but we both wanted to make, like, a global society of AI minds with a decentralized organizational mode. Now, the main difference was he wanted the individual AIs to be all incredibly simple and all the intelligence to be on the
collective level. Whereas I thought that was cool, but I thought a more practical way to
do it might be if some of the agents in the society of minds were fairly generally intelligent
on their own. So, like, you could have a bunch of OpenCogs out there and a bunch of simpler learning systems, and then these are all cooperating and coordinating together, sort of like in the brain. Okay, the brain as a whole is the general
intelligence. But some parts of the cortex, you could say have a fair bit of general intelligence
on their own, whereas parts of the cerebellum or limbic system have very little general intelligence
on their own. And they're contributing to general intelligence, you know, by way of their connectivity to other modules. Do you see instantiations of the same kind of, you know,
maybe different versions of OpenCog, but also just the same version of OpenCog and maybe many
instantiations of it as part of the...
That's what David Hanson and I want to do with many Sophias and other robots. Each one has its own individual mind living on a server, but there's also a collective
intelligence infusing them and a part of the mind living on the edge in each robot, right?
Yeah.
So, the thing is, at that time, as well as webmind being implemented in Java 1.1 as like
a massive distributed system, the blockchain wasn't there yet. So how did we do this decentralized control? We sort of knew it; we knew about distributed systems, we knew about encryption.
So I mean, we had the key principles of what underlies blockchain now, but we didn't put it
together in the way that it's been done now. So when Vitalik Buterin and colleagues came out with the Ethereum blockchain, many, many years later, like 2013 or something, then that was like, well, this is interesting. Like, this Solidity scripting language, it's kind of dorky in a way, and I don't see why you need a Turing-complete language for this purpose. But on the other hand, this is like the first time I could sit down and start to, like, script infrastructure for decentralized control of the AIs in a society of minds in a tractable way. Like, you could hack the Bitcoin code base, but it's really annoying, whereas Solidity, the Ethereum scripting language, is just nicer and easier to use.
I'm very annoyed by it at this point but, like Java, I mean, these languages are amazing when they first come out.
So then I came up with the idea that turned into SingularityNET: okay, let's make a decentralized agent system, where a bunch of different AIs, you know, wrapped up in, say, different Docker containers or LXC containers, different AIs can each have their own identity on the blockchain, and the coordination of this community of AIs has no central controller, no dictator, right? And there's no central repository of information. The coordination of the
society of minds is done entirely by the decentralized network in a decentralized way,
by the algorithms, right? Because the model of Bitcoin is in math we trust, right?
And so that's what you need.
You need the society of minds to trust only in math,
not trust only in one centralized server.
So the AI systems themselves are outside of the blockchain,
but then the communication between them.
At the moment, yeah. I would have loved to put the AIs' operations on chain in some sense, but in Ethereum, it's just too slow. You can't do that.
So it's the basic communication between AI systems that's, uh...
Yeah, yeah.
So basically, in SingularityNET, an AI is just some software process living in a container, and there's input and output. There's a proxy that lives in that container along with the AI that handles the interaction with the rest of SingularityNET. And then when one AI wants to communicate with another one in the network, they set up a number of channels, and the setup of those channels uses the Ethereum blockchain.
And the setup of those channels uses the Ethereum blockchain.
But once the channels are set up, then data flows along
those channels without having to be on the blockchain.
All that goes on the blockchain is the fact that some data went along that channel.
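A stick-figure sketch of that flow, with all class and method names invented rather than taken from the actual SingularityNET SDK: identities and channel events go to a fake chain, while the payloads move directly between the agents' proxies.

```python
# Only small facts are written on chain; the actual request/response data flows off chain.
class FakeChain:
    """Stand-in for the blockchain: only small facts get written here."""
    def __init__(self):
        self.records = []
    def write(self, fact):
        self.records.append(fact)

class AgentProxy:
    """The proxy living in the container next to an AI service."""
    def __init__(self, agent_id, service_fn, chain):
        self.agent_id, self.service_fn, self.chain = agent_id, service_fn, chain
        chain.write(("identity", agent_id))
    def open_channel(self, other):
        self.chain.write(("channel_opened", self.agent_id, other.agent_id))
        return Channel(self, other, self.chain)

class Channel:
    def __init__(self, a, b, chain):
        self.a, self.b, self.chain = a, b, chain
    def call(self, payload):
        result = self.b.service_fn(payload)                                  # data stays off chain
        self.chain.write(("data_passed", self.a.agent_id, self.b.agent_id))  # only the fact it happened
        return result

chain = FakeChain()
summarizer = AgentProxy("summarizer", lambda text: text[:20] + "...", chain)
planner    = AgentProxy("planner", lambda goal: ["step1", "step2"], chain)
ch = summarizer.open_channel(planner)
print(ch.call("plan my week"))
print(chain.records)
```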
So you can do...
So there's not a shared knowledge.
Well, the identity of each agent is on the blockchain, on the Ethereum blockchain.
If one agent rates the reputation of another agent,
that goes on the blockchain.
And agents can publish what APIs they will fulfill
on the blockchain.
But the actual data for AI and the results of AI
is not on the blockchain.
Do you think it could be?
Do you think it should be?
In some cases, it should be. In some cases, maybe it shouldn't be. But, I mean, I think that... so I'll give you an example. In Ethereum, you can't do it. But now there are more modern and faster blockchains where you could start to do that in some cases. Two years ago, that was less so.
It's a very rapidly evolving ecosystem.
So like one example, maybe you can comment on something I worked on a lot on is autonomous vehicles.
You can see each individual vehicle as an AI system. And you can see vehicles from Tesla, for example, and then Ford and GM and all these, I mean, they're all running the same kind of system on each set of vehicles. So it's individual AI systems in individual vehicles, but it's all different instantiations of the same AI system within the same company. So, you know, you can envision a situation where all of those AI systems are put on SingularityNET.
Right.
Yeah.
And how do you see that happening and what would be the benefit and could they share data?
I guess the biggest thing is the power there is in the decentralized control, but the
benefit is really nice if they can somehow share the knowledge in an open way if they choose
to.
Yeah, yeah, yeah.
Those are all quite good points.
So I think the benefit from being on the decentralized network,
as we envision it, is that we want the AI's in the network
to be outsourcing work to each other
and making API calls to each other frequently.
I gotcha. So the real benefit would be if that AI wanted to outsource some cognitive processing or data processing or data pre-processing
whatever, to some other AIs in the network which specialize in something different.
And this really requires a different way of thinking about AI software development, right?
So just like object-oriented programming was different than imperative programming.
And now object-oriented programmers all use these frameworks to do things, rather than just libraries even. You know, shifting to agent-based programming, where an AI agent is asking other, like, live, real-time evolving agents for feedback on what they're doing, that's a different way of thinking. I mean, it's not a new one. There were loads of papers on agent-based programming in the 80s and onward.
But if you're willing to shift to an agent-based model of development, then you can put less
and less in your AI and rely more and more on interactive calls to other AI's running in the network.
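As a toy illustration of that agent-based style (registry and service names invented, with a plain dict standing in for the peer-to-peer discovery and payment machinery), an agent keeps only the orchestration logic locally and outsources capabilities it discovers at runtime:

```python
# Instead of bundling every capability, the agent looks up other agents that advertise it.
registry = {
    "translate:en->fr": lambda text: f"[fr] {text}",
    "sentiment":        lambda text: "positive" if "good" in text else "neutral",
}

class MyAgent:
    def handle(self, utterance):
        # Keep only the orchestration logic locally...
        sentiment = self.outsource("sentiment", utterance)
        reply = f"that sounds {sentiment}"
        # ...and outsource the rest to whichever agent advertises the service.
        return self.outsource("translate:en->fr", reply)

    def outsource(self, service, payload):
        provider = registry.get(service)   # in reality: p2p discovery, negotiation, payment
        if provider is None:
            raise LookupError(f"no agent offers {service}")
        return provider(payload)

print(MyAgent().handle("the weather is good"))
```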
Of course, that's not fully manifested yet, because although we've rolled out a nice working version of the SingularityNET platform, there's only 50 to 100 AIs running in there now. There's not tens of thousands of AIs. So we don't have the critical mass for the whole society of mind to be doing what we want.
The magic really happens when there's just a huge number of agents.
Yeah, yeah, yeah.
Exactly. In terms of data, we're partnering closely with another blockchain project called
Ocean Protocol. Ocean Protocol, that's the project of Trent McConaghy, who developed BigchainDB, which is a blockchain-based database. So Ocean Protocol is basically blockchain-based big data, aimed at making it efficient for different AI processes or statistical processes, whatever, to share large data sets, or for one process to send a clone of itself to work on the other guy's data set and send results back, and so forth. You have data lakes, so this is the data ocean, right? So, again, by getting Ocean and SingularityNET to interoperate, we're aiming to take account of the big data aspect also. But it's quite challenging, because to build this whole
decentralized blockchain based infrastructure,
I mean, your competitors are like Google, Microsoft, Alibaba and Amazon, which have so much money
to put behind their centralized infrastructures,
plus they're solving simpler algorithmic problems
because making it centralized in some ways is easier, right?
So there are very
major computer science challenges. And I think what you saw with the whole ICO boom in
the blockchain and cryptocurrency world is a lot of young hackers who were hacking Bitcoin
or Ethereum, and they see, well, why don't we make this decentralized on blockchain? Then
after they raised some money through an ICO, they realized how hard it is.
It's like, actually, we're wrestling with incredibly hard computer science and software engineering and distributed systems problems, which can be solved, but they're just very difficult to solve. And in some cases, the individuals who started those projects were not well equipped to actually solve the problems that they wanted to.
So would you say that's the main bottleneck? If you look at the future of currency, you know, the question is, will currency...
The main bottleneck is politics. Like, it's governments and, you know, the bands of armed thugs that will shoot you if you bypass their currency restrictions.
That's right.
So, like, your sense is that versus the technical challenges,
because you kind of just suggested
that technical challenges are quite high as well.
I mean, for making a distributed money,
you could do that on Algorand right now.
I mean, so that, while Ethereum is too slow,
there's Algorand and there's a few other,
more modern, more scalable
blockchains. It would work fine for a decentralized global currency. So I think there were technical
bottlenecks to that two years ago. And maybe Ethereum 2.0 will be as fast as Algorand.
I don't know. That's not fully written yet. Right. So I think the obstacle to currency being put on the blockchain is that,
I think the currency will be on the blockchain.
It'll just be on the blockchain in a way that enforces centralized control and government hegemony rather than otherwise. Like, the e-RMB will probably be the first global, the first currency on the blockchain. The e-ruble, maybe, next.
There already were.
The e-ruble.
I mean, that's hilarious.
Digital currency, you know, makes total sense,
but they would rather do it in a way where Putin and Xi Jinping have access to the global keys for everything, right?
So, then, the analogy to that in terms of SingularityNET...
I mean, there's echoes, I think you've mentioned before that Linux gives you hope.
And AI is not as heavily regulated as money, right?
Not yet, right?
Not yet.
Oh, that's a lot slipperier than money, too, right?
I mean, money is easier to regulate because it's kind of easier to define.
Whereas AI is, it's almost everywhere inside everything.
What, where's the boundary between AI and software?
I mean, if you're gonna regulate AI,
there's no IQ test for every hardware device
that has a learning algorithm.
You're gonna be putting, like, hegemonic regulation on all software, and I don't rule out that that could happen to all software. And how do you tell if a software is adaptive? Every software is gonna be adaptive, I mean.
Or maybe, you know, maybe we are living in the golden age of open source that will not always be open. Maybe it'll become centralized control of software by governments.
It is entirely possible.
And part of what I think we're doing with things like singularityNet Protocol is creating a tool set that can be used to counteract that
sort of thing.
You could say a similar thing about mesh networking, right? It plays a minor role now, the ability to access the internet directly from phone to phone.
On the other hand, if your government starts trying to control your use of the internet,
suddenly having mesh networking there can be very convenient, right?
And so right now, something like a decentralized blockchain-based AGI framework or narrow
AI framework, it's cool, it's nice to have.
On the other hand, if governments start trying to clamp down on my AI interoperating with someone's
AI in Russia or somewhere, then suddenly having a decentralized protocol that nobody owns or controls
becomes an extremely valuable part of the toolset. And we've put that out there now, it's not perfect, but it operates. And it's pretty blockchain agnostic.
So we're talking to Algorand about making
part of SingularityNET run on Algorand. My good friend Toufi Saliba has a cool blockchain project called TODA, which is a blockchain without a distributed ledger. It's like a whole other architecture. So there's a lot of more advanced things you can do in the blockchain world. SingularityNET could be ported to a whole bunch of, it could be made multi-chain, ported to a whole bunch of different blockchains.
And there's a lot of potential and a lot of importance to putting this kind of tool set out there.
If you compare it to OpenCog, what you could see is OpenCog allows tight integration of a few AI algorithms that share the same knowledge store in real time in RAM; SingularityNET allows loose integration of multiple different AIs. They can share knowledge,
but they're mostly not going to be sharing knowledge in RAM on
the same machine.
And I think what we're going to have is a network of networks of networks, right? Like, I mean, you have the knowledge graph inside the OpenCog system, and then you have a network of machines inside a distributed OpenCog mind. But then that OpenCog will interface with other AIs doing deep neural nets or custom biology data analysis or whatever they're doing in SingularityNET, which is a looser integration of different AIs, some of which may be their own networks, right? And I
think at a very loose analogy, you could see that in the human
body. Like, the brain has regions like cortex or hippocampus, which tightly interconnect, like cortical columns within the cortex, for example. Then there's looser connection between the different lobes of the brain, and then the brain interconnects with the endocrine system and different parts of the body even more loosely. Then your body interacts even more loosely with the other people that you talk to.
So you often have networks within networks within networks with progressively
looser coupling as you get higher up in that hierarchy. I mean, you have that in biology.
You have that in the internet as just a networking medium.
And I think that's what we're going to have in the network of software processes leading
to AGI.
That's a beautiful way to see the world.
Again, the same similar question as with OpenCog.
If somebody wanted to build an AI system and plug into SingularityNET, what would
you recommend?
I can tell you a little about that.
So that's much easier.
I mean, OpenCog is still a research system,
so it takes some expertise to, and we have tutorials,
but it's somewhat cognitively labor intensive
to get up to speed on OpenCog.
And I mean, one of the things we hope to change
with the TrueAGI OpenCog 2.0 version
is just make the learning curve more similar
to TensorFlow or Torch or something.
Because right now, OpenCog is amazingly powerful, but not simple to deal with.
On the other hand, SingularityNET, as an open platform, was developed a little more with usability in mind, although the blockchain is still kind of a pain. So, I mean, if you're a command line guy, there's a command line interface. It's quite easy to take an AI that has an API and lives in a Docker container and put it online anywhere, and then it joins the global SingularityNET. And anyone who puts a request for services out into SingularityNET, that peer-to-peer discovery mechanism will find your AI, and if it does what was asked, it can then start a conversation with your AI about whether it wants to ask your AI to do something for it, how much it would cost, and so on.
That's fairly simple. If you wrote an AI and want it listed on, like, the official SingularityNET marketplace, which is on our website, then we have a vetting process, and there's a KYC process to go through, because then we have some legal liability for what goes on that website. So in a way, that's been an education too. There's sort of two layers.
Like there's the open decentralized protocol.
And there's the market.
Yeah, anyone can use the open decentralized protocol.
So say some developers from Iran, and there's brilliant AI guys at the University of Isfahan and in Tehran, they can put their stuff on the SingularityNET protocol, just like they can put something on the internet, right? I don't control it. But if we're going to list something on the SingularityNET marketplace and put a little picture and a link to it, then if I put some Iranian AI genius's code on there, then Donald Trump can send a bunch of jack-booted thugs to my house to arrest me for doing business with Iran, right?
So, I mean, we already see in some ways
the value of having a decentralized protocol,
because what I hope is that someone in Iran
will put online an Iranian SingularityNET marketplace,
right, which you can pay in the cryptographic token,
which is not owned by any country.
And then if you're in like Congo or somewhere
that doesn't have any problem with Iran,
you can subcontract
AI services that you find on that marketplace, even though US citizens can't, by US law.
Right now, that's kind of a minor point.
As you alluded, if regulations go in the wrong direction, it could become more of a major
point.
But I think it also is the case that
Having these workarounds to regulations in place is a defense mechanism against those regulations
being put into place and you can see that in the music industry, right?
I mean, Napster just happened and BitTorrent just happened, and now most people in my kids' generation are baffled by the idea of paying for music.
Right?
I mean, my dad pays for music.
I mean, yeah, that's true.
Because these decentralized mechanisms happened, and then the regulations followed, right?
And the regulations would be very different if they'd been put into place before there was
Napster and BitTorrent and so forth.
So in the same way, we got to put AI out there
in a decentralized vein and big data out there
in a decentralized vein now,
so that the most advanced AI in the world
is fundamentally decentralized.
And if that's the case,
that's just the reality the regulators have to deal with.
And then, as in the music case, they're gonna come up with regulations that sort of work with the decentralized reality.
Beautiful. You are the chief scientist of Hanson Robotics. You're still involved with Hanson
Robotics, doing a lot of really interesting stuff there. This is for people who don't know
the company that created Sophia the robot. Can you tell me who Sophia is?
I'd rather start by telling you who David Hanson is. David is the brilliant mind behind the Sophia robot, and so far he remains more interesting than his creation, although she may be improving faster than he is.
Yeah, it's a good point.
I met David, maybe 2007 or something at some futurist conference.
We were both speaking at it.
And I could see we had a great, great deal in common.
I mean, we were both kind of crazy, but we also, we both had a passion for AGI and the
singularity, and we were both huge fans of the work of Philip K. Dick, the science fiction
writer, and I wanted to create benevolent AGI that would create massively better life for
all humans and all sentient beings, including animals, plants,
and superhuman beings. And David, he wanted exactly the same thing, but he had a different idea of
how to do it. He wanted to get computational compassion. Like, he wanted to get machines that would love people and empathize with people. And he thought the way to do that was to make a machine
that could look people either eye, face to face,
look at people and make people love the machine
and the machine loves the people back.
So I thought that was a very different way of looking at it
because I'm very math oriented.
And I'm just thinking, like, what is the abstract cognitive algorithm that will let the system, you know, internalize the complex patterns of human values, blah blah blah, whereas he's like, look you in the face, in the eye, and love you, right? So we hit it off quite well, and
We talked to each other off and on, then I moved to Hong Kong in 2011. So I'd been, I mean, I've been living all over the place. I'd been in Australia and New Zealand in my academic career, then in Las Vegas for a while, then in New York in the late 90s starting my entrepreneurial career, then in DC for nine years doing a bunch of US government consulting stuff, then moved to Hong Kong in 2011, mostly because I met a Chinese girl who I fell in love with. We got married. She's actually not from Hong Kong; she's from mainland China, but we converged together in Hong Kong. So we're married now, have a two-year-old baby.
So you went to Hong Kong to see about a girl, I guess.
Yeah, pretty much.
Yeah.
And on the other hand, I started doing some cool research
there with Gino Yu at Hong Kong Polytechnic University. I got involved with a project called Aidyia, using machine learning for stock and futures prediction, which was quite interesting. And I also got to know something about the consumer electronics and hardware manufacturing ecosystem in Shenzhen, across the border, which is like the only place in the world where it makes sense to make complex consumer electronics at large scale and low cost.
It's just astounding, the hardware ecosystem that you have in South China. Like, people here cannot imagine what it's like. So
David was starting to explore that also. I invited him to Hong Kong to give a talk at Hong Kong Polyu and I introduced him in Hong Kong to some investors who were interested in his robots and
he didn't have Sophia then. He had a robot of Philip K. Dick, our favorite science fiction writer.
He had a robot Einstein.
He had some little toy robots that looked like his son Zeno.
So through the investors I connected him to, he managed to get some funding to basically
port Hanson Robotics to Hong Kong.
And when he first moved to Hong Kong, I was working on AGI research and also on this machine
learning trading project.
So I didn't get that tightly involved with Hanson Robotics.
But as I hung out with David more and more, as we were both there in the same place,
I started to think about what you could do to make his robots smarter than they were.
And so we started working together. And for a few years, I was chief scientist
and head of software at Hanson Robotics.
Then when I got deep into the blockchain side of things,
I stepped back from that and co-founded SingularityNet.
David Hanson was also one of the co-founders of SingularityNET. So part of our goal there had been to make the blockchain-based, like, cloud mind platform for Sophia and the other Hanson robots.
So Sophia would be just one of the robots in the SingularityNET.
Yeah, yeah, exactly. Many copies of the Sophia robot would be among the user interfaces to that globally distributed SingularityNET cloud mind. And I mean, David and I talked about that for quite a while before co-founding SingularityNET.
By the way, in his vision, in your vision, was Sophia tightly coupled to a particular AI system or was the idea
that you can just keep plugging in different AI systems within the head of it?
David's view was always that Sophia would be a platform much like the Pepper Robot is a platform
from SoftBank. It should be a platform with a set of nicely designed APIs
that anyone can use to experiment with their different AI algorithms
on that platform.
And SingularityNet, of course, fits right into that, right?
Because SingularityNet, it's an API marketplace.
So anyone can put their AI on there.
OpenCog is a little bit different.
I mean, David likes it, but I'd say it's my thing.
It's not his.
David has a little more passion for biologically-based approaches
to AI than I do, which makes sense.
I mean, he's really into human physiology and embodiment. He's a character sculptor, right? But he also worked a lot with rule-based and logic-based AI systems too. So yeah, he's interested in not just Sophia, but all the Hanson robots, as a powerful social and emotional robotics platform.
And, you know, what I saw in Sophia
was a way to, you know, get AI algorithms out there in
front of a whole lot of different people in an emotionally compelling way.
And part of my thought was really kind of abstract connected to AGI ethics.
And you know, many people are concerned AGI is going to enslave everybody, or turn everybody into
computronium to make extra hard drives for their cognitive engine or whatever. And, you
know, emotionally, I'm not driven to that sort of paranoia. I'm really just an optimist
by nature, but intellectually, I have to assign a non-zero probability to
those sorts of nasty outcomes because if you're making something 10 times as smart as you,
how can you know what it's going to do? There's an irreducible uncertainty there, just as
my dog can't predict what I'm going to do tomorrow. So it seemed to me that based on our current state of knowledge,
the best way to bias the AGIs we create toward benevolence
would be to infuse them with love and compassion,
the way that we do our own children.
So you want to interact with AIs in the context of doing compassionate,
loving, and beneficial things.
And in that way, as your children will learn by doing compassionate, beneficial, loving
things alongside you, in that way, the AI will learn in practice what it means to be compassionate,
beneficial, and loving.
It will get a sort of ingrained intuitive sense of this, which it can then abstract in its own
way as it gets more and more intelligent.
Now David saw this the same way, that's why he came up with the name Sophia, which means
wisdom.
So, it seemed to me making these beautiful loving robots to be rolled out for beneficial
applications would be the perfect way to roll out early stage
AGI systems so they can learn from people and not just learn factual knowledge but learn
human values and ethics from people while being their, you know, their home service robots,
their education assistants, their nursing robots. So that was the grand vision. Now, if you've
ever worked with robots,
the reality is quite different, right? Like the first principle is the robot is always broken.
I mean, I'd worked with robots in the 90s a bunch, when you had to solder them together yourself, and I'd put neural nets doing reinforcement learning on, like, overturned salad-bowl-type robots in the 90s, when I was a professor. Things, of course, have advanced a lot, but the principle still holds: the robot's always broken, still.
Yeah. So faced with the reality of making Sophia do stuff, many of my Robo AGI aspirations
were temporarily cast aside. And I mean, there's just a practical problem of making this robot interact in the meaningful
way because like, you know, you put nice computer vision on there, but there's always glare.
And then you have a dialogue system.
But at the time I was there, like, no speech-to-text algorithm could deal with Hong Kongese people's English accents. So the speech-to-text was always bad.
So the robot always sounded stupid because it wasn't getting the right text, right?
So I started to view that really as what in software engineering you call a walking skeleton,
which is maybe the wrong metaphor to use for Sophia, or maybe the right one.
I mean, what a walking skeleton is in software development is:
if you're building a complex system, how do you get started?
Well, one way is to first build part one well, then build part two well,
then build part three well, and so on.
Another way is you make like a simple version of the whole system
and put something in the place of every part the whole system will need, so that you have a whole system that does something, and then you work on improving each part in the
context of that whole integrated system.
So that's what we did on a software level in Sophia.
We made like a walking skeleton software system where there's something that sees, there's something that hears, there's something that moves, there's something that remembers, there's something that learns.
You put a simple version of each thing in there and you connect them all together so that
the system will do its thing.
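As a tiny illustration of the walking-skeleton idea in that spirit (every module here is a placeholder standing in for the real vision, speech, memory, learning, and action components, not Sophia's actual software):

```python
# A stub for every part the whole system will eventually need, wired into one loop
# that does something end to end, so each piece can later be improved inside a working whole.
class See:      # stand-in for computer vision
    def __call__(self, world):  return f"saw({world})"
class Hear:     # stand-in for speech-to-text
    def __call__(self, world):  return f"heard({world})"
class Remember: # stand-in for memory
    def __init__(self):         self.log = []
    def __call__(self, event):  self.log.append(event)
class Learn:    # stand-in for learning
    def __call__(self, memory): return f"pattern_over({len(memory.log)}_events)"
class Act:      # stand-in for motor control / dialogue output
    def __call__(self, thought): print("acting on:", thought)

see, hear, remember, learn, act = See(), Hear(), Remember(), Learn(), Act()

for world in ["person_enters_room", "person_says_hello"]:
    percept = (see(world), hear(world))
    remember(percept)
    act(learn(remember))
```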
So there's a lot of AI in there.
There's not any AGI in there.
I mean, there's computer vision to recognize people's faces, recognize when someone comes
in the room and leaves, trying to recognize whether two people are together or not. I mean, the dialogue system,
it's a mix of, like, hand-coded rules with deep neural nets that come up with their own responses. And there's some attempt to have a narrative structure and sort of try to pull the conversation into something with a beginning, middle, and end, in a sort of story arc.
So it's, I mean, like, if you look at the Loebner Prize and the systems that beat the Turing test currently, they're heavily rule-based, because, like you said, narrative structure to create compelling conversations, currently, you know, neural networks cannot do that well, even with Google Meena. When you actually look at full-scale conversations, it's just not...
Yeah, this is the thing. So I've actually been running an experiment the last
couple of weeks taking Sophia's chatbot and then Facebook's transformer chatbot, which they open-sourced the model for. We've had them chatting to each other for a number of weeks on the server.
That's funny.
Generating training data of what Sophia says in a wide variety of conversations. But we can see, compared to Sophia's current chatbot, the Facebook deep neural chatbot comes up with a wider variety of fluent-sounding sentences. On the other hand, it rambles like mad. The Sophia chatbot, it's a little more repetitive in the sentence structures it uses.
On the other hand, it's able to keep like a conversation arc over a much longer period,
right? So there, now, you can probably surmount that using Reformer and, like, using various other deep neural architectures to improve the way these transformer models are trained.
But in the end, neither one of them really understands what's going on.
And I mean, that's the challenge I had with Sophia is if I were doing a robotics project
aimed at AGI, I would want to make like a robot toddler
that was just learning about what it was seeing because then the language is grounded in the experience
of the robot. But what Sophia needs to do to be Sophia is talk about sports or the weather or
robotics or the conference she's talking at. She needs to be fluent talking about
any damn thing in the world and she doesn't
have grounding for all those things. So there's, just like, I mean, Google Meena and Facebook's chatbot don't have grounding for what they're talking about either. So in a way, the need to speak fluently about things where there's no non-linguistic grounding pushes what you can do for Sophia in the short term a bit away from...
From AGI?
I mean, a bit away from AGI. I mean, it pushes you towards an IBM Watson situation where you basically have to do heuristic and hard-coded stuff and rule-based stuff.
I have to ask you about this. Okay. So because,
you know, in part, Sophia is like an art creation because it's beautiful. She's beautiful
because she inspires through our human nature of anthropomorphizing things.
We immediately see an intelligent being there.
Because David is a great sculptor.
He is a great sculptor.
So, in fact, if Sophia just had nothing inside her head, said nothing, if she just sat there, we'd already ascribe some intelligence to her.
There's a long selfie line in front of her after every talk.
That's right. So it captivated the imagination of many people. I won't say the world, but, yeah, a lot of people, maybe billions of people, which is amazing. It's amazing.
Right. Now, of course, many people have ascribed much greater, essentially AGI-type capabilities to Sophia when they see her.
And of course, friendly French folk like Yann LeCun
immediately see that of the people from the AI community
and get really frustrated because-
It's understandable.
So what, and then they criticize people like you who sit back and don't say anything about,
like basically allow the imagination of the world, allow the world to continue being captivated.
So what's your sense of that kind of annoyance that the AI community has?
Well, I think there's several parts to my reaction there.
First of all, if I weren't involved with Hanson Robotics and didn't know David Hanson personally, I probably would have been very annoyed initially at Sophia as well.
I mean, I can understand the reaction.
I would have been like, wait, all these stupid people out there think this is an AGI, but
it's not an AGI, and they're tricking people into thinking this very cool robot is an AGI. And now those of us trying to raise funding to build AGI, people will think it's already there and already works.
Right?
On the other hand, I think even if I weren't directly involved with it, once I dug a little deeper into David and the robot and the intentions behind it, I think I would have stopped being pissed off, whereas folks like Yann LeCun have remained pissed off past their initial reaction.
That's his thing.
That in particular struck me as somewhat ironic, because Yann LeCun is working for Facebook, which is using machine learning to program the brains of the people in the world toward vapid consumerism and political extremism. So if your ethics allows you to use machine learning in such a blatantly destructive way, why would your ethics not allow you to use machine learning to make a lovable theatrical robot that draws some foolish people into its theatrical illusion?
Like if the pushback had come from Yoshua Bengio, I would have felt much more humbled by
it because he's not using AI for blatant evil, right?
On the other hand, he also is a super nice guy
and doesn't bother to go out there
trashing other people's work for no good reason, right?
So?
Shots fired, but I get you, I mean, that's, yeah.
I mean, if you're gonna ask, I'm gonna answer.
No, for sure.
I think we'll go back and forth.
I'll talk to Jan again.
I would add on this though.
I mean, David Hanson is an artist, and he often speaks off the cuff. And I have not agreed with everything that David has said or done regarding Sophia. And David also hasn't agreed with everything I have said or done. But that's an important point. I mean, David is an artistic wild man, and that's part of his charm. That's part of his genius. So certainly
there have been conversations within Hanson Robotics, and between me and David, where I was like, let's be more open about how this thing is working. And I did have some influence in nudging Hanson Robotics to be more open about how Sophia was working. And David wasn't especially opposed to this.
And, you know, he was actually quite right about it.
What he said was, you can tell people exactly how it's working.
And they won't care. They want
to be drawn into the illusion. And he was 100% correct. I'll tell you what. This wasn't Sophia, this was Philip K. Dick, but we did some interactions between humans and the Philip K. Dick robot in Austin, Texas a few years back. And in this case, the Philip K. Dick was just teleoperated by another human in the other room.
So during the conversations, we didn't tell people
the robot was teleoperated.
We just said, here, have a conversation with Phil Dick.
We're gonna film you, right?
And they had a great conversation with Philip K. Dick, teleoperated by my friend, Stefan Bugai. After the conversation, we brought the people into the back room to see Stefan, who was controlling the Philip K. Dick robot, but they didn't believe it. These people were like, well, yeah, but I know I was talking to Phil. Like, maybe Stefan was typing, but the spirit of Phil was animating his mind while he was typing. So even though they knew there was a human in the loop, even seeing the guy there, they still believed it was Phil they were talking to.
A small part of me believes
that they were right, actually,
because our understanding...
Well, we don't understand the universe.
That's the thing.
There is a cosmic mind field
that we're all embedded in
that yields many strange synchronicities
in the world, which is a topic
we don't have time to go into too much.
I mean, there's something to this where our imagination about Sophia, and people like Yann LeCun being frustrated about it, is all part of this beautiful dance of creating artificial intelligence that's almost essential. You see it with Boston Dynamics, which I'm a huge fan of as well. I mean, these robots are very far from intelligent, but I played with their latest one, actually, I think the Spot Mini. Yeah, very cool. It reacts in quite a fluid and flexible way. But we immediately ascribe a kind of intelligence to them.
We immediately ascribe AGI to them.
Yeah, yeah.
If you kick it and it falls down and goes, oh, you feel bad, right?
You can't help it.
Yeah.
And I mean, that's going to be part of our journey in creating intelligent systems, more and more. As Sophia starts out with a walking skeleton, as you add more and more intelligence, I mean, we're going to have to deal with this kind of idea.
Absolutely.
And about Sophia, I would say, I mean, first of all, I have nothing against Yann LeCun.
No, no, this is all fun.
He's a nice guy. If he wants to play the media banter game, I'm happy to play it with him.
He's a good researcher and a good human being.
I'd happily work with the guy.
The other thing I was gonna say is, I have been explicit about how Sophia works. I've posted online, in H+ Magazine, an online webzine, a moderately detailed article explaining that there are three software systems we've used inside Sophia. There's a timeline editor, which is like a rule-based authoring system where she's really just being an outlet for what a human scripted. There's a chatbot, which has some rule-based and some neural aspects. And then sometimes we've used OpenCog behind Sophia, where there's more learning and reasoning.
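Just to make that three-subsystem picture concrete, here is a minimal, hypothetical Python sketch of how a dispatcher might route an utterance through a scripted timeline layer, a rule-plus-neural chatbot, and a reasoning backend. The class names, priority order, and stubbed logic are illustrative assumptions, not Hanson Robotics' or OpenCog's actual code.

```python
class TimelineEditor:
    """Plays back human-authored scripts: the robot as an outlet for a writer."""
    def respond(self, utterance):
        script = {"hello": "Hello! It is wonderful to meet you."}
        return script.get(utterance.lower())

class HybridChatbot:
    """Mix of hand-coded rules with a neural fallback (stubbed here)."""
    def respond(self, utterance):
        if "weather" in utterance.lower():
            return "I hear it is lovely outside."
        return None  # a real system would call a neural generator here

class ReasoningEngine:
    """Placeholder for a learning/reasoning backend such as OpenCog."""
    def respond(self, utterance):
        return "Let me think about that... " + utterance

def dispatch(utterance, subsystems):
    # Try each subsystem in priority order; fall through to the next on None.
    for subsystem in subsystems:
        reply = subsystem.respond(utterance)
        if reply is not None:
            return reply
    return "..."

if __name__ == "__main__":
    pipeline = [TimelineEditor(), HybridChatbot(), ReasoningEngine()]
    print(dispatch("hello", pipeline))
    print(dispatch("how is the weather?", pipeline))
```

With a layered setup like this, which subsystem actually produced a given reply is not obvious to the person talking to the robot, which is part of the point being made in the conversation.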
And you know, the funny thing is, I can't always tell which system is operating here, right? I mean, whether she's really learning or thinking or just appears to be: over a half hour I could tell, but over three or four minutes of interaction, I just can't. Even having three systems, that's already sufficiently complex that you can't really tell right away.
Yeah, the thing is, even if you get up on stage
and tell people how Sophia's working,
and then they talk to her,
they still attribute more agency and consciousness to her
than is really there.
So I think there's a couple of levels of ethical issue there.
One issue is should you be transparent about how Sophia is working?
And I think you should.
And I think we have been.
I mean, there's articles online that there's some TV special
that goes through me explaining the three subsystems
behind Sophia.
So the way Sophia works is out there much more clearly
than how, say, Facebook's AI works or something, right?
I mean, we've been fairly explicit about it.
The other is, given that telling people how it works doesn't cause them to not attribute too much intelligence or agency to it anyway, then should you keep fooling them
when they want to be fooled?
And I mean, the whole media industry is based on fooling
people the way they want to be fooled.
And we are fooling people
100% toward a good end. I mean, we are playing on people's sense of empathy and compassion
so that we can give them a good user experience with helpful robots. And so that we can
fill the AI's mind with love and compassion. So I've been talking
a lot with Hanson Robotics lately about collaborations in the area of medical robotics.
And we haven't quite pulled the trigger on a project in that domain yet, but we may
well do so quite soon. So we've been talking a lot about robots that can help with elder care, robots that can help with kids. David's done a lot of things with autism therapy and robots before. In the COVID era, having a robot that can be a nursing assistant in
various senses can be quite valuable. The robots don't spread infection and they can also
deliver more attention than human nurses can give, right?
So if you have a robot that's helping a patient with COVID, if that patient attributes more
understanding and compassion and agency to that robot than it really has, because it looks like a human, I mean, is that really bad? We can tell them it doesn't fully understand you, and they don't care, because they're lying there with a fever and they're sick. But they'll react better to that robot, with its loving, warm facial expression, than they would to a Pepper robot or a metallic-looking robot. So it's really about how you use it, right? If you made a human-looking, door-to-door sales robot that used its human-looking appearance to scam people out of their money, then you're using that connection in a bad way, but you could also use it in a good way.
But then that's the same problem with every technology, right?
Beautifully put. So like you said, we're living in the era of COVID. This is 2020, one of the craziest years in recent history. So, if we zoom out and look at this pandemic, the coronavirus pandemic, maybe let me ask you about this kind of thing, viruses in general. When you look at viruses,
do you see them as a kind of intelligent system?
I think the concept of intelligence is not that natural of a concept, in the end. I mean, I think human minds and bodies are a kind of complex, self-organizing, adaptive system. And viruses certainly are that, right? They're a very complex, self-organizing, adaptive system. If you want to look at intelligence the way Marcus Hutter defines it, as sort of optimizing computable reward functions over computable environments, sure, viruses are doing that, right? And I mean, in doing so,
they're causing some harm to us. And so the human immune system is a very complex, self-organizing
adaptive system, which has a lot of intelligence to it. And viruses are also adapting and dividing
into new mutant strains and so forth.
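For reference, the formalization being gestured at here is, I believe, Legg and Hutter's universal intelligence measure: a weighted sum of an agent's expected performance over all computable reward-bearing environments, with simpler environments counting more. Roughly:

```latex
% Universal intelligence of an agent \pi (Legg & Hutter), sketched from memory:
% E is the set of computable environments, K(\mu) is the Kolmogorov complexity
% of environment \mu, and V_{\mu}^{\pi} is the expected total reward the agent
% \pi achieves in \mu. Simpler environments get exponentially more weight.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Under a definition like that, anything that reliably harvests reward from its environment, a virus included, sits somewhere on the intelligence scale, which is the spirit of the remark above.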
And ultimately, the solution is going to be nanotechnology, right?
I mean, the solution is going to be making little nanobots that fight the viruses.
Well, people will use them to make nastier viruses, but hopefully we can also use them to just detect, combat, and kill the viruses. But for now, we start with the biological mechanisms to combat these viruses.
We've been, AGI is not yet mature enough to use against COVID,
but we've been using machine learning and also some machine reasoning in
OpenCog to help some doctors to do personalized medicine against COVID.
So the problem there is, given a person's genomics and given their clinical medical indicators,
how do you figure out which combination of antivirals is going to be most effective against COVID for that person?
And so that's something where machine learning is interesting, but also
we're finding the abstraction we get in OpenCog with machine reasoning is interesting, because it can help with transfer learning when you have not that many different cases to study, and qualitative differences between different strains of a virus, or between people of different ages who may have COVID. So there's a lot of disparate data to work with, and small data sets, and somehow you have to integrate them. You know, this is one of the shameful things: it's very hard to get that data. So we're working with a couple of groups doing clinical trials, and they're sharing data with us under non-disclosure. But what should be the case is that every COVID clinical trial
should be putting data online somewhere
like suitably encrypted to protect patient privacy
so that anyone with the right AI algorithms
should be able to help analyze it.
And any biologists should be able to analyze it by hand
to understand what they can, right?
Instead, that data is siloed inside whatever hospital is running the clinical trial, which is completely asinine and ridiculous. So why does the world work that way? I mean, we could all analyze why, but it's insane that it does. You look at this hydroxychloroquine thing, right? All these analyses were based on data from Surgisphere, some little company no one had ever heard of. And everyone paid attention to it, so first they were doing more clinical trials based on that, then they stopped doing clinical trials based on that, then they started again. And why isn't that data just out there, so everyone can analyze it and see what's going on, right?
Yeah, I hope that the data will be out there eventually for future pandemics. I mean, do you have hope that our society will move in that direction?
Not in the immediate future,
because the US and China frictions are getting very high.
So it's hard to see US and China
as moving in the direction of openly sharing data
with each other, right?
It's not. There's some sharing of data, but different groups are keeping their data private
until they've milked the best results from it, and then they share it, right? So it's...
So, yeah, we're working with some data that we've managed to get our hands on. It's something we're doing to do good for the world, and it's a very cool playground for putting deep neural nets and OpenCog together. So we have a bio-AtomSpace full of all sorts of knowledge from many different biology experiments about human longevity, and from biology knowledge bases online. And we can do graph-to-vector type embeddings, where we take nodes from the hypergraph and embed them into vectors, which can then feed into neural nets for different types of analysis.
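As a hedged illustration of that graph-to-vector idea, here is a toy Python sketch: nodes of a small knowledge graph are embedded with a plain spectral decomposition of the adjacency matrix, and the resulting vectors are fed to an ordinary classifier. The graph, labels, dimensions, and the choice of SVD are all stand-in assumptions; the actual pipeline described uses a hypergraph AtomSpace and richer embedding methods.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy knowledge graph: nodes might be genes, drugs, or clinical indicators.
G = nx.Graph()
G.add_edges_from([
    ("geneA", "pathway1"), ("geneB", "pathway1"),
    ("geneC", "pathway2"), ("drugX", "pathway1"), ("drugY", "pathway2"),
])

# Spectral embedding: truncated SVD of the adjacency matrix gives each node
# a low-dimensional vector (a simple stand-in for fancier graph embeddings).
nodes = list(G.nodes())
A = nx.to_numpy_array(G, nodelist=nodes)
U, S, _ = np.linalg.svd(A)
dim = 2
embeddings = {n: U[i, :dim] * S[:dim] for i, n in enumerate(nodes)}

# Feed node vectors into a downstream learner (a neural net could go here).
X = np.array([embeddings["geneA"], embeddings["geneB"], embeddings["geneC"]])
y = np.array([1, 1, 0])  # toy labels, e.g. "associated with response" or not
clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```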
And we were doing this in the context of a project called Rejuve, which we spun off from SingularityNET to do longevity analytics, to understand why some people live to 105 years or over and other people don't.
And then we have this other spin-off from SingularityNET, where we're working with some healthcare companies on data analytics. So the bio-AtomSpace we built for these more commercial and longevity data analysis purposes, we're repurposing it and feeding COVID data into the same bio-AtomSpace, and playing around with graph embeddings from that graph into neural nets for bioinformatics. So it's both been a cool testing ground for some of our bio-AI
learning and reasoning, and it seems we're able to discover things that people weren't
seeing otherwise, because the thing in this case is, for each combination of antivirals,
you may have only a few patients who've tried that combination.
And those few patients may have their particular characteristics, like this combination of three was tried only on people aged 80 or over. This other combination of three, which has an overlap with the first combination, was tried more on young people. So how do you combine those different pieces of data?
It's a very dodgy transfer learning problem,
which is the kind of thing that the probabilistic reasoning
algorithms we have inside OpenCog are better at
than deep neural networks.
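To give one hedged, much-simplified flavor of the small-cohort problem being described: each antiviral combination was tried on only a handful of patients, so raw response rates are noisy. A standard probabilistic trick, offered here only as a stand-in for the richer reasoning in a system like OpenCog, is partial pooling, shrinking each cohort's estimate toward a shared prior. The numbers below are invented for illustration.

```python
combos = {
    # combination name: (responders, patients) -- purely hypothetical counts
    "A+B+C (mostly age 80+)": (3, 5),
    "A+B+D (mostly younger)": (6, 8),
    "B+C+E":                  (1, 4),
}

# Shared Beta prior roughly matched to the overall response rate.
total_r = sum(r for r, n in combos.values())
total_n = sum(n for r, n in combos.values())
prior_strength = 4.0
alpha0 = prior_strength * total_r / total_n
beta0 = prior_strength * (1 - total_r / total_n)

for name, (r, n) in combos.items():
    raw = r / n
    pooled = (alpha0 + r) / (alpha0 + beta0 + n)  # Beta posterior mean
    print(f"{name}: raw={raw:.2f}, partially pooled={pooled:.2f}")
```

The partially pooled estimates move the tiny cohorts toward the overall rate, which is the crude version of what a transfer-learning or probabilistic-reasoning approach tries to do more intelligently across qualitatively different groups.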
On the other hand, you have gene expression data, where you have the expression level of each of 25,000 genes in the peripheral blood of each person. That sort of data, either deep neural nets or tools like XGBoost or CatBoost decision forests are better at dealing with than OpenCog, because it's just these huge, messy floating-point vectors that are annoying for a logic engine to deal with, but are perfect for a decision forest or neural net. So it's a great playground for hybrid AI methodology, and we can have SingularityNET with OpenCog in one agent and XGBoost in a different agent, and they talk to each other.
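Here is a minimal sketch of why dense expression data suits gradient-boosted trees: each patient is one long floating-point vector, awkward for a logic engine but natural for a decision-forest learner. The data and labels below are synthetic, the dimensions are scaled down from 25,000 genes, and the XGBoost hyperparameters are ordinary defaults, not a tuned clinical model.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_patients, n_genes = 40, 1000          # stand-in for ~25,000 expression levels
X = rng.normal(size=(n_patients, n_genes))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy outcome signal

model = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

In the hybrid setup described, a model like this would sit in one agent handling the numeric vectors, while a reasoning engine in another agent handles the sparse, qualitative cross-cohort knowledge, and the two exchange conclusions.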
But at the same time, it's highly practical, right?
Because we're working with, for example, some physicians on this project, physicians in a group called Nth Opinion, based out of Vancouver and Seattle, and these guys are working every day in the hospital with patients dying of COVID.
So it's quite cool to see like neural symbolic AI,
like where the rubber hits the road,
trying to save people's lives.
I've been doing bio AI since 2001, but mostly human longevity research and fly longevity
research, trying to understand why some organisms really live a long time.
This is the first time I've been in a race against the clock, trying to use the AI to figure out stuff where, if we take two months longer to solve the AI problem, more people will die, because we don't know what combination of antivirals to give them.
Yeah.
At the societal level, the biological level, at any level, are you hopeful about us as a
human species getting out of this pandemic?
What are your thoughts on it in general?
Well, the pandemic will be gone in a year or two, once there's a vaccine for it. So, I mean, a lot of pain and suffering can happen in that time, but, I mean, that could be reversible.
I think if you spend much time in sub-Saharan Africa, you can see there's a lot of pain and suffering
happening all the time.
Like you walk through the streets of any large city in Sub-Saharan Africa and there are
loads, I mean tens of thousands, probably hundreds of thousands of people lying by the side
of the road, dying mainly of curable diseases without food or water and either ostracized
by their families, or they left their families
because they didn't want to infect their family, right?
I mean, there's tremendous human suffering on the planet
all the time, which most folks in the developed world
pay no attention to.
And COVID is not remotely the worst.
How many people are dying of malaria all the time?
I mean, so COVID is bad.
It is by no means the worst thing happening. And setting aside diseases, I mean, there are many places in the world where you're at risk of having your teenage son kidnapped by armed militias and forced to get killed in someone else's war, fighting tribe against tribe. So humanity has a lot of problems,
which we don't need to have given the state
of advancement of our technology right now.
And I think COVID is one of the easier problems to solve
in the sense that there are many brilliant people
working on vaccines.
We have the technology to create vaccines, and we're going to create new vaccines. We should be more worried that we haven't managed to defeat malaria after so long, and after the Gates Foundation and others have put so much money into it. I mean, I think clearly the whole global medical system, global health system, and the global political and socioeconomic system are incredibly
unethical and unequal and badly designed.
And, I mean, I don't know how to solve that directly.
I think what we can do indirectly to solve it is to make systems that operate in parallel and off to the side of
the governments that are nominally controlling the world with their armies and militias.
And to the extent that you can make compassionate, peer-to-peer, decentralized frameworks for
doing things, these are things that can start out unregulated. And then if they get traction before the regulators come in, then they've influenced the way the world works.
Right. SingularityNET aims to do this with AI. Rejuve, which is a spin-off from SingularityNET (you can see it at rejuve.io, that's R-E-J-U-V-E dot io), aims to do the same thing for medicine.
So it's like peer to peer sharing of medical data.
So you can share medical data into a secure data wallet.
You can get advice about your health and longevity through apps that Rejuve will launch within the next couple of months. And then SingularityNET AI can analyze all this data, but then the benefits from that analysis are spread among all the members of the network. But, I mean, of course I'm going to plug my particular projects.
But I mean, whether or not SingularityNET and Rejuve are the answer, I think it's key to create decentralized
mechanisms for everything.
I mean, for AI, for human health, for politics, for jobs and employment, for sharing social
information, and to the extent decentralized peer-to-peer methods designed with universal
compassion at the core can gain traction, then these will just decrease the role that government has.
And I think that's much more likely to do good
than trying to like explicitly reform the global government
system.
I mean, I'm happy other people are trying
to explicitly reform the global government system.
On the other hand, you look at how much good the internet or Google did, or mobile phones did. If you make something that's decentralized and throw it out everywhere, and it takes hold, then government has to adapt. And I mean, that's
what we need to do with AI and with health. And in that light, I mean, the centralization of healthcare and of AI is certainly not ideal,
right? Like most AI PhDs are being sucked in by, you know, half dozen to a dozen big companies.
Most AI processing power is being bought by a few big companies for their own proprietary
good. And then most medical research is within a few pharmaceutical companies and
clinical trials run by pharmaceutical companies will stay siloed within those pharmaceutical
companies. You know, these large centralized entities, which are intelligences in themselves,
these corporations, but they're mostly malevolent psychopathic and sociopathic intelligences,
not saying the people involved are, but the
corporations as self-organizing entities on their own, which are concerned with maximizing
shareholder value as a sole objective function.
I mean, AI and medicine are being sucked into these pathological corporate organizations
with government cooperation and Google cooperating with
British and US governments on this, as one among many different examples. 23andMe provides you the nice service of sequencing your genome, and then licenses the genome data to GlaxoSmithKline on an exclusive basis. Right now, you can take your own DNA and do whatever you want with it, but the pooled collection of 23andMe-sequenced DNA goes just to GlaxoSmithKline. Someone else could reach out to everyone who had worked with 23andMe to sequence their DNA and say, give us your DNA for our open and decentralized repository that we'll make available to everyone. But nobody's doing that, because it's a pain to get organized, and the customer list is proprietary to 23andMe, right?
So, yeah, I mean, this I think is a greater risk to humanity from AI than rogue AGIs turning the universe into paperclips or computronium.
Because what you have here is mostly good-hearted and nice people who are sucked into a mode of organization of
large corporations, which has evolved just for no individual's fault, just because that's
the way society has evolved.
It's not about choice; it's self-interested and becomes psychopathic, like you said.
The corporation is psychopathic, even if the people are not.
Right, exactly.
That's really the disturbing thing about it, because the corporations can do things that are quite bad
for society, even if nobody has a bad intention.
And then no individual member of that corporation
has a bad intention.
No, some probably do, but it's not necessary
that they do for the corporation.
Like, I mean, Google, I know a lot of people in Google, and with very few exceptions they are very nice people who genuinely want what's good for the world. At Facebook, I know fewer people, but it's probably mostly true, probably mostly friendly geeks who want to build cool technology. I actually tend to believe that even the leaders, even Mark Zuckerberg, one of the most disliked people in tech, also want to do good for the world.
What do you think about Jamie Dimon?
Who's Jamie Dimon?
Oh, the heads of the great banks may have a different psychology.
Oh boy.
Yeah. Well, I tend to be naive about these things and see the best in people. I tend to agree with you that the individuals want to do good by the world, but the mechanism of the company can sometimes be its own intelligence.
I mean, my cousin Mario Goertzel has worked for Microsoft since 1985 or something, and I can see, for him, as well as just working on cool projects,
you're coding stuff that gets used by like billions
and billions of people.
And you think, if I improve this feature,
that's making billions of people's lives easier, right?
So of course, of course, that's cool.
And, you know, the engineers are not in charge
of running the company anyway.
And of course, even if you're Mark Zuckerberg or Larry Page, I mean, you still have a fiduciary responsibility; you're responsible to the shareholders, your employees, who you want to keep paying, and so forth. So, yeah, you're enmeshed in this system. And, you know, when I worked in DC, I worked a bunch with INSCOM, US Army Intelligence, and I was heavily politically opposed to what the US Army was doing in Iraq at that time, like torturing people in Abu Ghraib.
But everyone I knew in US Army INSCOM, when I hung out with them, was a very nice person. They were friendly to me. They were nice to my kids and my dogs, right? And they really believed that the US was fighting the forces of evil. And if you asked them about Abu Ghraib, they're like, well, these Arabs will chop us into pieces, so how can you say we're wrong to waterboard them a bit, right? That's much less than what they would do to us. In their worldview, what they were doing was really genuinely for the good of humanity. None of them woke up in the morning and said, I want to do harm to good people because I'm just a nasty guy, right? So,
Yeah, most people on the planet, setting aside a few genuine psychopaths and sociopaths, I mean, most people on the planet have a heavy dose of benevolence and wanting to do good, and also a heavy capability to convince themselves that whatever they feel like doing, or whatever is best for them, is for the good of humankind. So the more we can decentralize control, the better. Decentralization, democracy, is horrible, but it's like Winston Churchill said: it's the worst possible system of government except for all the others, right? I mean, I think the whole mess of humanity
has many, many very bad aspects to it, but so far the track record of elite groups who know what's
better for all of humanity is much worse than the track record of the whole teeming, democratic, participatory mass
of humanity, right?
I mean, none of them is perfect by any means.
The issue with a small elite group that knows what's best is even if it starts out as truly
benevolent and doing good things in accordance with its initial good intentions, you find
out you need more resources, you need a bigger organization, you pull in more people, internal politics arises, differences of opinions
arise, and bribery happens, like some opponent organization takes your second-in-command aside and makes them the first-in-command of some other organization. I mean, there's a lot of history of what happens with elite groups that know what's best for the human race.
So yeah, if I have to choose, I'm going to reluctantly put my faith in the vast democratic
decentralized mass.
And I think corporations have a track record of being ethically worse than their constituent
human parts.
And, you know, democratic governments have a more mixed track record,
but there are at least-
That's the best we've got.
Yeah, I mean, you can, there's Iceland, very nice country, right?
I mean, very democratic for 800 plus years, very benevolent,
beneficial government.
And I think, yeah, there are track records
of democratic modes of organization.
Linux, for example, some of the people in charge
of Linux are overtly complete assholes, right?
And trying to reform themselves in many cases,
in other cases not.
But the organization as a whole, I think it's done a good job overall.
It's been very welcoming in the third world for example.
And it's allowed advanced technology to roll out on all sorts of different embedded
devices and platforms in places where people couldn't afford to pay for proprietary software.
So I'd say the internet, Linux, and many democratic nations are examples of how a certain open, decentralized, democratic methodology can be ethically better than the sum of the parts, rather than worse. With corporations, that has happened only for a brief period, and then it goes sour, right? I mean, I'd say a similar thing about universities. Like, a university is a horrible way
to organize research and get things done,
yet it's better than anything else we've come up with,
right?
A company can be much better, but only for a brief period of time, and then it stops being so good, right?
So then I think if you believe that AGI
is gonna emerge sort of incrementally out
of AIs doing practical stuff in the world, like controlling humanoid robots or driving
cars or diagnosing diseases or operating killer drones or spying on people and reporting
under the government, then what kind of organization creates more and more
advance narrow AI, verging toward the AGI,
may be quite important because it will guide
like what's in the mind of the early stage AGI
as it first gains the ability to rewrite its own code base
and project itself toward super intelligence.
And if you believe that AI may move toward AGI out
of this sort of synergetic activity
of many agents cooperating together,
rather than just to have one person's project,
then who owns and controls that platform for AI cooperation
becomes also very, very important, right?
And is that platform AWS, is it Google Cloud?
Is it Alibaba, or is it something more like the internet
or singularity net, which is open and decentralized?
So if all of my weird machinations come to pass,
I mean, we have the Hanson robots being a beautiful user interface, gathering information on human values and being loving and compassionate to people in medical, home service, and office robot applications. You have SingularityNET in the back end, networking together many different AIs toward cooperative intelligence, fueling the robots, among many other things. You have OpenCog 2.0 and TrueAGI as one of the sources of AI inside this decentralized network, powering the robots and medical AIs, helping us live a long time and cure diseases, among other things.
And this whole thing is operating in a democratic
and decentralized way, right?
I think if anyone can pull something like this off,
whether using the specific technologies
I've mentioned or something else,
I mean, then I think we have a higher odds
of moving toward a beneficial technological singularity
rather than one in which the first super AGI
is indifferent to humans and just considers us an inefficient use of molecules.
That was a beautifully articulated vision for the world. So thank you for that. Well, let's talk a little bit about life and death
I'm pro-life and anti-death.
Well, for most people; there are a few exceptions that I won't mention here.
I'm glad that, just like your dad, you're taking a stand against death. You have, by the way, a bunch of awesome music where you play piano online. One of the songs that I believe you've written, the lyrics go, tell me why do you think it is a good thing that we all get old and die. By the way, I like the way it sounds; people should listen to it, it's awesome. I considered it, I probably will cover it, it's a good song. I love the way it sounds. But let me ask you about death first.
Do you think there's an element to death that's essential
to give our life meaning, like the fact that this thing ends?
Well, let me say I'm pleased and a little embarrassed you've been listening to that music I put online.
That's awesome.
One of my regrets in life recently is that I would love to get time to really produce music well. I haven't touched my sequencer software in like five years. I would love to rehearse and produce and edit, but with a two-year-old baby and trying to create the Singularity, there's no time. So I just made the decision that when I'm playing random shit in an off moment, I just record it. Put it out there, like, whatever. Maybe if I'm unfortunate enough to die, maybe that can be input to the AGI when it tries to make an accurate mind upload of me, right? Death is bad. I mean, that's very simple. It's baffling we should even have to say that. I mean, of course,
People can make meaning out of
death. And if someone is tortured, maybe they can make beautiful meaning out of that torture,
and write a beautiful poem about what it was like to be tortured, right? I mean, we're very creative.
We can milk beauty and positivity out of even the most horrible and shitty things. But just because, if I was tortured, I could write a good song about what it was like to be tortured, doesn't make torture good. And just because people are able to derive meaning and value from death, doesn't mean they wouldn't derive even better meaning and value from ongoing life without death, which I very definitely think they would.
Yeah, yeah.
So if you could live forever, would you live forever?
Forever. My goal with longevity research is to
abolish the plague of involuntary death. I don't think people should die unless they choose to die.
If I had to choose forced immortality versus dying, I would choose forced immortality. On the other hand, if I had the choice of immortality with the option of suicide whenever I felt like it, of course I would take that instead, and that's the more realistic choice. I mean, there's no reason you should have forced immortality. You should be able to live until you get sick of living, right? And that will seem insanely obvious to everyone 50 years from now. I mean, the people who thought death gives meaning to life, so we should all die, they will be looked at 50 years from now the way we now look at the Anabaptists in the year 1000, who gave away all their possessions and went on top of the mountain for Jesus to come and bring them to the ascension. I mean, it's ridiculous that people think death is good because you gain more wisdom as you approach dying. Of course, it's true. I mean, I'm 53, and
you know, the fact that I might have only a few more decades left.
It does make me reflect on things differently.
It does give me a deeper understanding of many things.
But I mean, so what?
You could get a deep understanding in a lot of different ways.
Pain is the same way.
Like we're going to abolish pain.
And that's even more amazing than abolishing death, right?
I mean, once we get a little better neuroscience, we'll be able to go in and
adjust the brain so the pain doesn't hurt anymore, right? And people will say that's bad, because there's so much beauty in overcoming pain and suffering. Oh, sure. And there's beauty in overcoming torture too. And some people like to cut themselves, but not many, right?
I mean, that's interesting. But to push back again, this is the Russian side of me, I do romanticize suffering.
It's not obvious.
I mean, the way you put it,
it seems very logical.
It's almost absurd to romanticize suffering
or pain or death.
But to me, a world without suffering, without pain, without death, it's
non-obvious what I want to say.
Well, then you can stay in the people zoo, with the people who are slaying each other.
Right.
No, but what I'm saying is, I guess what I'm trying to say is, I don't know, if I was presented with that choice, what I would choose.
No, this is a subtler matter.
It's a subtler matter.
And I've posed it in this conversation
in an unnecessarily extreme way.
So I think the way you should think about it
is what if there's a little dial on the side of your head
and you could turn how much pain hurts. Turn it down to zero, turn it up to 11, like in Spinal Tap, if you want, maybe through an actual spinal tap, right? So, I mean, would you opt to have that dial there or not? That's the question. The question isn't whether you would turn the pain down to zero all the time. Would you opt to have the dial or not? My guess is that in some dark moment of your life, you would choose to have the dial implanted, and then it would be there.
Just to confess a small thing, don't ask me why, but I'm doing this physical
challenge currently, where I'm doing 680 pushups and pull-ups a day. And my shoulder is currently, as we sit here, in a lot of pain. And I don't know, I would certainly, right now, if you gave me that dial, I would turn that sucker to zero as quickly as possible.
Good.
But I don't... I think the whole point of this journey is... I don't know.
Well, you're a twisted human being.
Am I twisted? So the question is, am I somehow twisted because I have created some kind of narrative for myself so that I can deal with the injustice and the suffering in the world, or is this actually going to be a source of happiness?
Well, to an extent, that's a research question that you might even undertake, right? So, I mean, human beings do have a particular biological makeup, which sort of implies a certain probability distribution over motivational systems, right? And that is there. Now, the question is, how flexibly can that morph as society and technology change? So if we're given that dial, and we're given a society in which, say, we don't have to work for a living, and in which there's an ambient, decentralized, benevolent AI network that will warn us when we're about to hurt ourselves; if we're in a different context, can we, consistently with being genuinely and fully human, get into a state of consciousness where we just want to keep the pain dial turned all the way down, and yet we're leading very rewarding and fulfilling lives? Right now I suspect the answer is yes, we can do that, but I don't know that for certain.
Yeah, no, I'm more confident that we could create a non-human AGI system, which just didn't need
an analog of feeling pain.
And I think that AGI system will be fundamentally healthier and more benevolent than human beings.
So I think it might or might not be true that humans need a certain element of suffering
to be satisfied humans, consistent with the
human physiology. If it is true, that's one of the things that makes us fucked and disqualified
to be the super-age-i, right? I mean, this is the nature of the human motivational system system is that we seem to gravitate towards situations where the best thing in the large
scale is not the best thing in the small scale, according to our subjective value system.
So we gravitate towards subjective value judgments where to gratify ourselves in the large, we
have to ungratify ourselves in the small.
And you see that in music. There's a theory of music which says the key to musical aesthetics is the surprising fulfillment of expectations. Like, you want something that will fulfill the expectations elicited in the prior part of the music, but in a way with a bit of a twist that surprises you. And I mean, that's true not only in out-there music like my own, or that of Zappa or Steve Vai or Buckethead or Krzysztof Penderecki or something. It's even there in Mozart or something. It's not there in elevator music too much, but that's why it's boring, right? But wrapped up in there is, you know, we want to hurt a little bit so that we can feel the pain go away. Like, we want to be a little confused by what's coming next, so then when the thing that comes next actually makes sense, it's so satisfying, right?
And it's the surprising fulfillment of expectations, as we said.
Yeah, so beautifully put. We've been scurrying around it a little bit, but if I were to ask you the most ridiculous big question, what is the meaning of life, what would your answer be?
Three values: joy, growth, and choice. I think you need joy. I mean, that's the basis of everything, if you want the number one value. On the other hand, I'm unsatisfied with a static joy that doesn't progress, perhaps because of some element of human perversity, but the idea of something that grows and becomes better and better in some sense appeals to me.
But I also sort of like the idea of individuality, that as a distinct system I have some agency. So there's some nexus of causality within this system, rather than the causality being wholly evenly distributed over the joyous, growing mass.
So you said joy, growth, and choice, three basic values. That's so important. And those three things could continue indefinitely; that's something that can last forever.
Is there some aspect of, something you called, which I like, super longevity, that you find exciting? Research-wise, are there ideas in that space?
Yeah, in terms of the meaning of life, this really ties into that, because for us as humans, probably the way to get the most joy, growth, and choice is transhumanism, and to go beyond the human form that we have right now.
I mean, I think the human body is great, and by no means do any of us maximize the potential for joy, growth, and choice immanent in our human bodies. On the other
hand, it's clear that other configurations of matter could manifest even
greater amounts of joy, growth, and choice than humans do,
maybe even finding ways to go beyond the realm of matter as we understand it right now.
So I think in a practical sense, much of the meaning I see in human life
is to create something better than humans and go beyond human life.
But certainly that's not all of it for me
in a practical sense, right?
Like I have four kids and a granddaughter
and many friends and parents and family
and just enjoying everyday human social existence.
Well, we can do even better.
Yeah, yeah.
And I mean, I love, I've always,
when I could live near nature,
I spend a bunch of time out in nature and the
forest and on the water every day and so forth.
So, I mean, enjoying the pleasant moment is part of it, but the growth and choice aspects are severely limited by our human biology. In particular, dying seems to inhibit your potential for personal growth considerably, as far as we know. I mean, there's some element of life after death, perhaps, but even if there is, why not also continue going in this biological realm, right? And so, on super longevity, I mean, you know, we haven't yet cured aging. We haven't yet cured death. Certainly, there's very interesting progress all around.
I mean, CRISPR and gene editing can be an incredible tool.
And I mean, right now, stem cells could potentially prolong life a lot.
Like, if you got stem cell injections, just stem cells for every tissue of your body, injected into every tissue, and you could just have replacement of your old cells with new cells produced by those stem cells, that could be highly impactful at prolonging life. Now we just need slightly better technology for having them grow, right? So you're using machine learning to guide procedures for stem cell differentiation and transdifferentiation. It's kind of nitty-gritty, but I mean, that's
quite interesting. So I think there's a lot of different things being done to help with
prolongation of human life, but we could do a lot better.
So for example, the extracellular matrix, which is the bunch of proteins in between the cells in your body, gets stiffer and stiffer as you get older. And the extracellular matrix transmits information electrically, mechanically, and to some extent biophotonically. So there's all this transmission through the parts of the body, but the stiffer the extracellular matrix gets, the less the transmission happens, which makes your body get worse coordinated between the different organs as you get older. So my friend Christian Schafmeister, at my alma mater, the great Temple University, has a potential solution to this, where he has these novel molecules called spiroligomers, which are like polymers that are not organic. They're specially designed polymers, so that you can algorithmically predict exactly how they'll fold, very simply. So he designed molecular scissors made of spiroligomers that you could eat, and they would cut through all the glucosepane and other cross-linked proteins in your extracellular matrix, right? But to make that technology really work and be mature is several years of work; as far as I know, no one's funding it at the moment.
But there are so many different ways that technology could be used to prolong longevity.
What we really need is an integrated database of all biological knowledge about human beings and model organisms, like basically a massive, distributed OpenCog bio-AtomSpace, but it could exist in other forms too.
We need that data to be opened up
in a suitably privacy-protecting way.
We need massive funding into machine learning,
AGI, proto-AGI statistical research, aimed
at solving biology, both molecular biology and human biology, based on this massive, massive
data set, right? And then we need regulators not to stop people from trying radical therapies
on themselves, if they so wish to, as well as better cloud-based platforms for automated
experimentation on microorganisms, flies, and mice, and so forth. And we could do all this.
Look, after the last financial crisis, Obama, who I generally like pretty well, gave $4 trillion to large banks and insurance companies. Now in this COVID crisis, trillions are being spent to help everyday people and small businesses; in the end, we'll probably find many more trillions being given to large banks and insurance companies. Anyway, could the world put $10 trillion into making a massive, holistic bio-AI and bio-simulation and experimental biology infrastructure? We could put $10 trillion into that without even screwing ourselves up too badly, just as, in the end, COVID and the last financial crisis won't screw up the world economy so badly. But we're not putting $10 trillion into that. Instead, all of this is siloed inside a few big companies and government agencies.
And most of the data that comes from our
individual bodies, personally, that could feed this AI to solve aging and death, most
of that data is sitting in some hospital database doing nothing, right?
I got two more quick questions for you.
One, I know a lot of people are going to ask me, you're on the Joe Rogan podcast wearing
that same amazing hat.
Do you have an origin story for the hat?
Does the hat have its own story that you're able to share?
The hat story has not been told yet.
So we're going to have to come back, and you can interview the hat. That's it. We'll leave that for when someone interviews the hat.
All right, it's too much to pack in. Is there a book? Is the hat going to write a book? Okay.
Well, it may transmit the information through direct neural transmission.
Okay, so there might actually be some Neuralink competition there. Beautiful. We'll leave it as a mystery. Maybe one last question. If you
build an AGI system, you're successful at building the AGI system that could lead us to
the singularity and you get to talk to her and ask her one question, what would that question
be? We're not allowed to ask what is the question I should be asking.
Yeah, that would be cheating, but I guess that's a good question.
I'm thinking of a, I wrote a story with Stefan Bugai once where these AI developers,
they created a super smart AI aimed at answering all the philosophical questions that have been
worrying them. Like, what is the meaning of life? Is there free will? What is consciousness
and so forth? So they got the super AGI built and it turned to while it said, those are really
stupid questions. And then it puts off on a spaceship and
and left the earth. So you'd be afraid of scaring it off. That's it. I mean, honestly, there's
there is no one question that rises among all the others, really?
I mean, what interests me more is upgrading my own intelligence
so that I can absorb the whole world view of the Super-AGI.
But I mean, of course, if the answer could be,
like, what is the chemical formula for the immortality pill, then that's the one I would want the answer to.
So if your own mind was expanded to become super intelligent, like you're describing, I mean,
there's kind of a notion that intelligence is a burden, that it's possible that with greater and greater intelligence, that other metric of joy that you mentioned becomes more and more difficult.
What's your story?
Pretty stupid idea.
So you think if you're super intelligent, you can also be super joyful.
I think getting root access to your own brain will enable new forms of joy that we don't have now. And I think, as I've said before, what I aim at is really to make multiple versions of myself. So I would like to keep one version, which is basically human like I am now, but with the dial to turn pain up and down, and get rid of death, right? And make another version which fuses its mind with superhuman AGI, and will then become massively transhuman; and whether it will send some messages back to the human me or not, it will be interesting to find out. The thing is, once you're a super AGI, like one subjective second to a human might
be like a million subjective years to that super AGI, right? So it would be on a whole different
basis. I mean, at the very least those two copies
will be good to have, but it could be interesting
to put your mind into a dolphin or a space amoeba
or all sorts of other things.
Or you can imagine one version
that doubled its intelligence every year.
And another version that just became a super AGI
as fast as possible, right?
So I mean, now we're sort of constrained to think one mind, one self, one body, right?
But I think we actually, we don't need to be that constrained in thinking about future
intelligence after we've mastered AGI and nanotechnology and longevity biology. I mean, then each of our minds is a certain pattern of organization, right? And I know we haven't talked about consciousness, but I'm sort of a panpsychist. I sort of view the universe as conscious. And so, you know, a light bulb or a quark or an ant or a worm or a monkey have their own manifestations of consciousness.
And the human manifestation of consciousness, it's partly tied to the particular meat
that we're manifested by, but it's largely tied to the pattern of organization in the brain,
right?
So if you upload yourself into a computer or a robot or whatever else it is,
some element of your human consciousness may not be there because it's just
tied to the biological embodiment, but I think most of it will be there and
these will be incarnations of your consciousness in a slightly different
flavor and you know creating these different versions will be amazing. And each of them will discover meanings of life
that have some overlap, but probably not total overlap
with the human Ben's meaning of life.
The thing is to get to that future,
where we can explore different varieties of joy,
different variations of human experience and values
and transhuman experiences and values
to get to that future.
We need to navigate through a whole lot of human bullshit
of companies and governments and killer drones
and making and losing money and so forth.
And that's a challenge we're facing now.
If we do things right, we can get to a benevolent singularity, which is levels of joy,
growth, and choice that are literally unimaginable to human beings.
If we do things wrong, we can either annihilate all life on the planet, or we could lead to
a scenario where say, all humans are annihilated
and there's some super-AGI that goes on and does its own thing unrelated to us except
via our role in originating it.
And we may well be at a bifurcation point now, right?
Where what we do now has significant causal impact on what comes about.
And yet most people on
the planet aren't thinking that way whatsoever; they're thinking only about their own narrow aims and goals, right? Now, of course, I'm thinking about my own narrow aims and goals to some extent also, but I'm trying to use as much of my energy and mind as I can to push toward this more benevolent alternative, which will be better for me, but also for everybody else.
And that's a, it's weird that so few people understand what's going on.
I know you interviewed Elon Musk and he understands a lot of what's going on, but he's much more paranoid than I am, right? Because Elon gets that AGI is going
to be way, way smarter than people. And he gets that an AGI does not necessarily have
to give a shit about people because we're a very elementary mode of organization of matter
compared to many AGIs. But I don't think he has a clear vision of how infusing early stage AGI's
with compassion and human warmth can lead to an AGI that loves and helps people, rather
than viewing us as a historical artifact and a waste of mass energy.
But on the other hand, while I have some disagreements with him, like he understands way, way more
of the story than almost anyone else in such a large-scale corporate leadership position,
right?
It's terrible how little understanding of these fundamental issues exists out there now. That may be different five or ten years from now, though, because I can see understanding of AGI and longevity and other such issues is certainly much stronger and more prevalent now than 10 or 15 years ago. So, I mean, humanity as a whole can be slow learners relative to what I would like, but in a historical sense, on the other hand, you could say the progress is astoundingly fast.
But Elon also said, I think on the Joe Rogan podcast, that love is the answer.
So maybe in that way, you and him are both in the same page of how we should proceed with
AGI.
I think there's no better place to end it. I hope we get to talk again about the hat
and about consciousness and about a million topics
that we didn't cover.
Ben is a huge honor to talk to you.
Thank you for making it out.
Thank you for talking to me.
I really love it.
Thanks for having me.
This was really, really good fun.
And we dug deep into some very important things.
So thanks for doing this. Thanks very much. Awesome
Thanks for listening to this conversation with Ben Goertzel, and thank you to our sponsors, the Jordan Harbinger Show and Masterclass. Please consider supporting the podcast by going to jordanharbinger.com/lex and signing up at masterclass.com/lex. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or connect with me on Twitter, @lexfridman, spelled without the E, just F-R-I-D-M-A-N. I'm sure eventually you will figure it out.
And now, let me leave you with some words from Ben Goertzel.
Our language for describing emotions is very crude.
That's what music is for.
Thank you for listening and hope to see you next time.