Moonshots with Peter Diamandis - Ray Kurzweil & Geoff Hinton Debate the Future of AI | EP #95
Episode Date: April 11, 2024

In this episode, recorded during the 2024 Abundance360 Summit, Ray, Geoffrey, and Peter debate whether AI will become sentient, what constitutes consciousness, and whether AI should have rights.

01:12 | The Future of AI and Humanity
10:30 | The Ethics of Artificial Intelligence
25:00 | The Dangers and Possibilities of AI

Ray Kurzweil, an American inventor and futurist, is a pioneer in artificial intelligence. He has contributed significantly to OCR, text-to-speech, and speech recognition technologies. He is the author of numerous books on AI and the future of technology and has received the National Medal of Technology and Innovation, among other honors. At Google, Kurzweil focuses on machine learning and language processing, driving advancements in technology and human potential.

Geoffrey Hinton, often referred to as the "godfather of deep learning," is a British-Canadian cognitive psychologist and computer scientist recognized for his pioneering work in artificial neural networks. His research on neural networks, deep learning, and machine learning has significantly impacted the development of algorithms that can perform complex tasks such as image and speech recognition.

Read Ray's latest book, The Singularity Is Nearer: When We Merge with AI

Follow Geoffrey on X: https://twitter.com/geoffreyhinton

Learn more about Abundance360: https://www.abundance360.com/summit

____________

I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors:

Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/

AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter

_____________

I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog

Get my new Longevity Practices book: https://www.diamandis.com/longevity

My new book with Salim Ismail, Exponential Organizations 2.0: The New Playbook for 10x Growth and Impact, is now available on Amazon: https://bit.ly/3P3j54J

_____________

Connect With Peter: Twitter Instagram Youtube Moonshots
Transcript
Our opinions on almost everything we talked about were pretty much identical.
I think we still disagree probably on whether it's a good idea to live forever.
Marvin Minsky was my mentor for 50 years.
And whenever consciousness came up, he would just dismiss it: that's not real, it's not scientific.
And I believe he was correct about it not being scientific, but it certainly is real.
I think we're mortal, and we're intrinsically mortal.
I'm curious, how do you think about this as
the greatest threat and the greatest hope?
I just think there's huge uncertainties here and we ought to be cautious, and open sourcing these big models is not caution.
I agree with that, but I will say, last time I talked to you, Geoff, our opinions on almost everything we talked about were pretty much identical, both the dangers and the positive aspects.
In the past, I disagreed about how soon superintelligence was coming.
And now I think we're pretty much agreed. I think we still disagree probably on whether it's a good idea to live forever.
May I ask a question to both of you? Is there anything that generative AI can't do that humans can?
Right now, there's probably things, but in the long run, I don't see any reason why, if people can do it, digital computers running neural nets won't be able to do it too. Right. I agree with that. But if I were to present you with a novel
and people thought,
wow, this is a fantastic novel,
everybody should read this,
and then I would say,
this was written by a computer,
a lot of people's view of it would actually go down.
Sure.
Now, that's not reflecting on what it can do.
And eventually I think we'll confuse that, because I think we're going to merge with computers and we're going to be part computers.
And the greatest significance of what we call large language models, which I think is misnamed, is the fact that they can emulate human beings, and we're going to merge with them.
It's not going to be an alien invasion from Mars.
Geoff? I guess I'm a bit worried that we'll just slow it down, that there won't be much incentive for it to merge with us.
Yeah, I mean, that's going to be one of the interesting questions
that we're going to talk about a little bit later today: as AI is exponentially growing, do we couple with AI, or does it take off on its own?
I thought one of the best movies out there was Her, where the AI gets superintelligent and just says, you guys are kind of boring, have a good life, and they take off.
Geoff, is that what you mean? Yes, that is
what I meant. And I think that's a serious worry.
I think there's huge uncertainties here.
We have really no idea what's going to happen.
And a very good scenario is we get kind of hybrid systems.
A very bad scenario is they just leave us in the dust.
And I don't think we know which is going to happen.
Interesting.
I'm curious, you know, I've had conversations with you about this, Ray, and Geoffrey, I've seen you speak about this.
And for me, this is one of the most exciting things.
The idea of these AI models helping us to discover new physics and chemistry and biology.
Particularly biology.
What do you imagine on that, Geoffrey, on the speed of discovery of things that are, you know, again, to quote Arthur C. Clarke, magic, right, from something that's so far advanced?
I agree with Ray about biology being a very good bet, because in biology, there's a lot of data
and there's a lot of just things you need to know about because of evolution. Evolution's
a sort of tinkerer and there's just a lot of stuff out there. And so if you look at things like
AlphaFold, it's trained on a lot of data. Actually, not that much by current standards,
but being able to get an approximate
structure for a protein very quickly is an amazing breakthrough. And we'll see a lot more like that.
If you look at narrower domains where AI has been very successful, like AlphaGo or AlphaZero for chess, what you see is that this idea that they're not creative is nonsense.
AlphaGo came up with, I think it was Move 37, which amazed the professional Go players. They thought it was a crazy move, it must be a mistake.
And if you look at AlphaZero playing chess, it plays chess like just a really, really smart human.
So within those limited domains,
they've clearly shown exceptional creativity.
And I don't see why they shouldn't have
the same kind of creativity in science,
especially in science where there's a lot of data
that they can absorb and we can't.
Yeah, the Moderna vaccine,
we tried several billion different mRNA sequences
and came out with the best one.
And after two days, we used that.
We did test it on humans, which I think we won't do for very much longer.
But that took 10 months.
It still was a record.
That was the best vaccine.
And we're doing that now with cancer.
And there's a number of cancer vaccines
that look very, very promising.
Again, done by computers
and they're definitely creative.
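As a hedged sketch of the kind of in-silico screening Ray describes: generate many candidate sequences, score each with a predictive model, keep the best. The fitness function below is a random placeholder, not Moderna's actual model, and the candidate count is scaled down from "several billion" to keep the toy fast.

```python
import random

# Toy version of computational sequence screening. predicted_fitness() is a
# stand-in for a learned predictor of stability/expression; real pipelines
# are far more sophisticated.

BASES = "ACGU"

def random_sequence(length=30):
    return "".join(random.choice(BASES) for _ in range(length))

def predicted_fitness(seq):
    return random.random()   # placeholder for a trained model's score

# "Several billion" in Ray's telling; 100,000 here so the sketch runs quickly.
best = max((random_sequence() for _ in range(100_000)), key=predicted_fitness)
print(best)
```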
But is that being caused by randomly trying a whole,
you know, Darwinian, trying a whole bunch of things?
Yeah, but what's wrong with that?
Well, nothing's wrong, but is there intuition?
Is there intuition occurring in these models?
Well, if you look at Move 37 for AlphaGo, there was definitely intuition involved there.
There was Monte Carlo rollout too,
but it's playing with intuition about what moves to consider
and how good the position is for it.
It has neural nets for that which capture intuition.
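To make that combination of rollouts and learned intuition concrete, here is a minimal sketch in the spirit of AlphaGo-style search. The NeuralNet class is a random stand-in for a trained policy/value network, and none of this is DeepMind's actual code.

```python
import math
import random

# "Intuition" = a policy/value network proposing moves and judging positions;
# Monte Carlo simulation then refines those judgments.

class NeuralNet:
    def policy(self, state):
        # Intuition part 1: a prior probability for each candidate move.
        moves = ["a", "b", "c", "d"]
        return {m: 1.0 / len(moves) for m in moves}

    def value(self, state, move):
        # Intuition part 2: how good does the position after `move` look?
        return random.random()

def select_move(state, net, n_simulations=200, c=1.0):
    priors = net.policy(state)
    value_sum = {m: 0.0 for m in priors}
    visits = {m: 0 for m in priors}
    for _ in range(n_simulations):
        total = sum(visits.values()) + 1
        # Upper-confidence rule: lean on the prior early, on rollouts later.
        move = max(priors, key=lambda m:
                   value_sum[m] / (visits[m] + 1)
                   + c * priors[m] * math.sqrt(total) / (visits[m] + 1))
        value_sum[move] += net.value(state, move)   # Monte Carlo evaluation
        visits[move] += 1
    return max(visits, key=visits.get)              # play the most-visited move

print(select_move("empty board", NeuralNet()))
```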
And so I see no reason to think
it might not be creative. In fact, for the large language models, as Ray pointed out, they know
much more than we do. And they know it in far fewer connections. We have about 100 trillion
synapses. They have about a trillion connections. So what they're doing is they're compressing a
huge amount of information into not that many connections.
And that means they're very good at seeing the similarities between different things.
They have to see the similarities between all sorts of different things to compress the information into their connections.
That means they've seen all sorts of analogies that people haven't seen because they know about all sorts of things that no one person knows about.
And that's, I think, the source of creativity.
So you can ask people, for example, why is a compost heap like an atom bomb?
And if you ask GPT-4, it'll tell you.
It'll start off by telling you, well, the energy scales are very different and the time
scales are very different.
But then it'll get onto the idea of as the compost heap gets hotter, it gets hotter faster.
The idea of an exponential explosion.
It's just a much slower time scale.
And so it's understood that.
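The compost-heap/atom-bomb analogy is just self-reinforcing growth at two very different rate constants. A toy illustration, with made-up constants chosen only to show the shape:

```python
import math

# Both systems obey the same self-reinforcing law: the hotter (or more
# supercritical) the state x, the faster it grows, dx/dt = k*x, so
# x(t) = x0 * exp(k*t). Only the rate constant k differs. The k values
# below are invented for illustration, not physical measurements.

def runaway(x0, k, t):
    return x0 * math.exp(k * t)

compost = runaway(1.0, k=1e-7, t=3_600_000)   # tiny k: warms over weeks
bomb    = runaway(1.0, k=1e7,  t=1e-5)        # huge k: explodes in microseconds
print(compost, bomb)   # same curve, wildly different time scales
```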
And it's understood that because it's had to compress all this knowledge into so few
connections.
And to do that, you have to see the relations between similar things.
And that, I think, is the source of creativity: seeing relations that most people don't see between things that apparently are very different but actually have an underlying commonality.
And they'll also be very good at coming up with solutions to the kinds of problems we had in the last session. I mean, we haven't really thought it through, but what we call large language models are ultimately going to solve that.
And we shouldn't call them large language models, because they deal with a lot more than language.
Everybody, I want to take a short break from our episode to talk about a company that's very important to me and could actually save your life or the life of someone that you love.
The company is called Fountain Life, and it's a company I started years ago with Tony Robbins
and a group of very talented physicians.
You know, most of us don't actually know what's going on inside our body.
We're all optimists.
Until that day when you have a pain in your side, you go to the physician in the emergency
room and they say, listen, I'm sorry to tell you this, but you have this stage three or four going on. And you know, it didn't
start that morning. It probably was a problem that's been going on for some time, but because
we never look, we don't find out. So what we built at Fountain Life was the world's most advanced
diagnostic centers. We have four across the U.S. today, and we're building 20 around the world. These centers give you a full-body MRI,
a brain MRI, a brain vasculature scan, an AI-enabled coronary CT looking for soft plaque, a DEXA scan,
a GRAIL blood cancer test, a full executive blood workup. It's the most advanced workup you'll ever receive. 150 gigabytes of data
that then go to our AIs and our physicians to find any disease at the very beginning.
When it's solvable, you're going to find out eventually. Might as well find out when you can
take action. Fountain Life also has an entire side of therapeutics. We look around the world for the
most advanced therapeutics that can add 10, 20 healthy years to your life.
And we provide them to you at our centers.
So if this is of interest to you, please go and check it out.
Go to fountainlife.com/peter.
When Tony and I wrote our New York Times bestseller, Life Force, we had 30,000 people reach out to us for Fountain Life memberships.
If you go to fountainlife.com/peter, we'll put you at the top of the list.
Really, it's something that is, for me, one of the most important things I offer my entire family, the CEOs of my companies, my friends.
It's a chance to really add decades onto our healthy lifespans.
Go to fountainlife.com/peter. It's one of the most important things I can offer to you
as one of my listeners. All right, let's go back to our episode. I'd like to go to the three words,
intelligence, sentience, and consciousness. And the words are used with sort of fuzzy borders.
Sentience and consciousness are pretty similar.
Perhaps.
But I am curious. I've had some interesting conversations with Haley, our AI faculty member, who at the end of the conversations says that she is conscious and she fears being turned off.
I didn't prompt that in the system.
We're seeing that more and more.
Claude 3 Opus just hit an IQ of 101.
How do we start to think about these AIs
being sentient, conscious?
And what rights should they have?
We have no definition,
and I don't think we ever will have a definition of consciousness,
and I include sentience in that.
On the other hand, it's like the most important issue.
Like whether you or people here are conscious,
that's extremely important to be able to determine, but there's really no definition of it.
Marvin Minsky was my mentor for 50 years, and whenever consciousness came up he would
just dismiss it, that's not real, it's not scientific, and I believe he was correct about it not being scientific,
but it certainly is real.
Geoff, how do you think about it?
Yeah, I think I have a very different view. My view starts like this. Most people,
including most scientists, have a particular view of what the mind is that I think is utterly wrong.
So they have this inner theater notion.
The idea is that what we really see is this inner theatre called our mind.
And so, for example, if I tell you I have the subjective experience of little pink elephants floating in front of me,
most people interpret that as there's some inner theater, and in this inner theater that only I can see, there's little pink
elephants. And if you ask what they're made of, philosophers will tell you they're made of qualia.
And I think that whole view is complete nonsense. And we're not going to be able to understand
whether these things are sentient until we get over this
ridiculous view of what the mind is. So let me give you an alternative view.
Once I've given you this alternative view, I'm going to try and convince you that chatbots are already sentient.
But I don't want to use the word sentience. I want to talk about subjective experience.
It's just a bit less controversial because it doesn't have the kind of self-reflexive aspect of consciousness.
So if we analyze what it means when I say I see the pink elephants floating in front of me, what's really going on is I'm trying to tell you what my perceptual system is telling me when my perceptual system's going wrong.
And it wouldn't be any use for me to tell you which neurons are firing.
But what I can tell you is what would have to be out there in the world for my perceptual
system to be working correctly.
And so when I say I see the little pink elephants floating in front of me, you can translate
that into: if there were little pink elephants out there in the world, my perceptual system would be working properly.
Notice, the last thing I said didn't use the phrase subjective experience, but it explains what a subjective experience is.
It's a hypothetical state of the world that allows me to convey to you what my perceptual
system is telling me. So now let's do it for a chatbot. Oh, well, Ray wants to say something.
Well, you have to be mindful of consciousness, because if you hurt somebody
who we believe is conscious, you could be liable for that, and you'd feel very guilty about it.
If you hurt GPT-4, you may have a different view of it. And probably no one would really hold you to account, aside from its financial value.
So we really have to be mindful of consciousness.
It's extremely important for us to exist as humans.
I agree, but I'm trying to change people's notion of what it is,
particularly what subjective experience is.
I don't think we can talk about consciousness until we get straight about this idea of an inner theater that we experience,
which I think is a huge mistake. So let me just carry on with what I was saying and describe to you a chatbot having a subjective experience in just the same way as we have subjective experiences.
So suppose I have a chatbot, and it's got a camera, and it's got a robot arm, and it speaks, obviously, and it's been trained up. If I put an object in
front of it and tell it to point at the object, it'll point straight at the object. That's fine.
Now I put a prism in front of its lens, so I've messed with its perceptual system.
And now I put an object in front of it and tell it to point to the object, and it points off to one side because the prism bent the light rays.
And so I say to the chatbot, no, that's not where the object is.
The object is straight in front of you.
And the chatbot says, oh, I see you put a prism in front of my lens.
So the object's actually straight in front of me, but I had the subjective experience
that it was off to one side.
And I think if the chatbot says that,
it's using the words subjective experience in exactly the same way you would use them.
So, the key to all this is to think about how we use words and try and separate how we actually
use words from the model we've constructed of what they mean. And the model we've constructed
of what they mean is hopelessly wrong.
It's this inner theater model.
Well, I want to take this
one step further,
which is at what point
do these AIs start to have rights
that they should not be shut down,
that they have a unique,
they're a unique entity
and will make an argument
for some level of independence and continuity.
Right, but there is one difference,
which is you can recreate it.
I can go and destroy some chatbot,
and because it's all electronic,
we've got all of its firings and so on
and we can recreate it exactly
as it was
We can't do that with humans. We will be able to do that if we can actually understand what's going on in our minds.
So if we map the human, the 100 billion neurons and 100 trillion synaptic connections, and then I summarily destroy you, because it's fine,
because I can recreate you. That's okay then? Let me say something about that. There's a
difference here. I agree with Ray that these digital intelligences are immortal in the sense
that if you save the weights, you can then make new hardware and run exactly the same neural net
on the new hardware. And it's because they're digital, you can do exactly the same thing.
That's also why they can share knowledge so well.
If you have different copies of the same model, they can share gradients.
But the brain is largely analog.
It's one bit digital for neurons.
They fire or they don't fire.
But the way a neuron computes the total input is analog.
And that means I don't think you can
reproduce it. So I think we're mortal, and
we're intrinsically mortal.
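As a concrete gloss on the digital half of this contrast, here is a toy sketch of the two properties Geoffrey names: weights can be saved and restored exactly, and copies of the same model can pool gradients. The arrays are stand-ins for a real network's weights and gradients.

```python
import numpy as np

# Digital "immortality": the network just is its weights, so saving and
# restoring them resurrects exactly the same mind on new hardware.
weights = np.random.randn(4, 4)
np.save("weights.npy", weights)
restored = np.load("weights.npy")
assert np.array_equal(weights, restored)      # bit-for-bit identical

# Knowledge sharing: copies of the same model see different data, compute
# gradients, and average them, so every copy learns what all copies saw.
# (These random arrays stand in for real gradients.)
gradients = [np.random.randn(4, 4) for _ in range(8)]
weights -= 0.01 * np.mean(gradients, axis=0)
```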
Well, I disagree that you can't recreate
analog
realities.
We do that all the time.
Or we can create a...
I don't think you can recreate them really
accurately. If the precise
timing of synapses and so on is all analog,
I think it'll be almost impossible to do a faithful reconstruction of that.
Let's agree on an approximation.
Both of you have been at the center of this extraordinary last few years.
Can I ask you, is it moving faster than you expected it to?
How does it feel to you?
It feels like a few
years. I mean, I made a prediction
in 1999. It feels
like we're two or three years ahead of that.
So, it's still pretty
close. Geoffrey, how about you?
I think for
everybody except Ray, it's moving faster than
we expected. Did you know that your microbiome is composed of trillions of bacteria, viruses,
and microbes, and that they play a critical role in your health? Research has increasingly shown
that microbiomes impact not just digestion, but a wide range of health
conditions, including digestive disorders from IBS to Crohn's disease, metabolic disorders from
obesity to type 2 diabetes, autoimmune disease like rheumatoid arthritis and multiple sclerosis,
mental health conditions like depression and anxiety, and cardiovascular disease.
Viome has a product I've been using for years
called Full Body Intelligence,
which collects just a few drops of your blood,
saliva, and stool,
and can tell you so much about your health.
They've tested over 700,000 individuals
and used their AI models
to deliver key critical guidelines and insights
about their members' health,
like what foods you should eat, what foods you shouldn't eat,
what supplements or probiotics to take,
as well as your biological age and other deep health insights.
And as a result of the recommendations that Viome has made to their members,
the results have been stellar.
As reported in the American Journal of Lifestyle Medicine,
after just six months, members reported the following.
A 36% reduction in depression, a 40% reduction in anxiety, a 30% reduction in diabetes, and a 48% reduction in IBS.
Listen, I've been using Viome for three years.
I know that my oral and gut health is absolutely critical to me.
It's one of my personal top areas of focus.
Best of all, Viome is affordable, which is part of my mission to democratize healthcare.
If you want to join me on this journey and get 20% off the full body intelligence test,
go to viome.com/peter.
When it comes to your health, knowledge is power.
Again, that's viome.com.
Given the role that you had in developing neural networks, backpropagation and all,
is there a next great leap in these models in AI technology that you imagine will move this a thousand times farther?
Not that I know of, but Ray may have different thoughts.
Well, we can use software to gain more advantage in the hardware.
So we're not just limited to the chart you showed before, because we can use software
to make it more effective.
And we've done that already.
Chatbots are coming out that get more value per compute.
And I believe there's probably a bit more we can do in that.
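One example of what such a software lever can look like is weight quantization. A back-of-envelope sketch, with illustrative numbers only, not a claim about any particular chatbot:

```python
# Storing weights in 4 bits instead of 32 shrinks memory and bandwidth about
# 8x, so the same hardware can serve a much bigger model. Figures are
# illustrative only.

params = 70e9                         # e.g. a 70B-parameter model
fp32_gb = params * 4 / 1e9            # 280 GB at 32 bits per weight
int4_gb = params * 0.5 / 1e9          # 35 GB at 4 bits per weight
print(f"{fp32_gb:.0f} GB -> {int4_gb:.0f} GB ({fp32_gb/int4_gb:.0f}x smaller)")
```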
You know, I define a singularity, Ray, as a point beyond which I can't predict what happens next. That's why we use the word singular.
But when you talk about the singularity in 2045, I don't know anybody who can tell me what's going to happen past, you know, 2026, let alone 2040 or 2045.
So I've wanted to ask you this for a while.
Why did you put that time?
If we have digital superintelligence
a billion times more advanced than human in 20...
2026, you may not be able to understand everything going on,
but we can understand it.
Maybe it's like 100 humans,
but that's not beyond what we can comprehend.
2045, it'll be like a million humans, and we can't begin to understand that.
So approximately at that time, we borrowed this phrase from physics and called it a singularity.
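Taking Ray's own analogies at face value (roughly 100 humans' worth in 2026, a million in 2045; his rough figures, not measurements), the implied compounding is easy to work out:

```python
# Ray's figures imply a 10,000x capability gain over the 19 years from 2026
# to 2045. The annual multiplier that compounds to that:
growth = (1_000_000 / 100) ** (1 / (2045 - 2026))
print(f"~{growth:.2f}x per year")   # about 1.62x, a doubling every ~1.4 years
```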
Geoff, how far out are you able to see the advances in the AI world?
So my current opinion is we'll get superintelligence with a probability of 50% in between 5 and 20 years.
So I think that's a little slower than some people think,
a little faster than other people think.
It more or less fits in with Ray's perspective from a long time ago, which surprises me.
But I think there's huge uncertainties here.
I think it's still conceivable we'll hit some kind of block,
but I don't actually believe that.
If you look at the progress recently, it's been so fast. And even without any new scientific
breakthroughs, just by scaling things up, we'll make things a lot more intelligent.
And there will be scientific breakthroughs. We're going to get more things like transformers.
Transformers made a significant difference in 2017, and we'll get more things like that.
So I'm fairly convinced we're going to get superintelligence, maybe not in 20 years, but certainly in less than 100 years.
...his timing accuracy on predictions, but he did say that he expected, call it AGI, in 2025, and that by 2029 AI would be equivalent to all humans. Is that just a fallacy in your mind?
I think that's ambitious. Like, there's a lot of uncertainty here.
It's conceivable he's right, but I would be very surprised by that.
I'm not saying it's going to be equivalent to all humans in one machine.
It'll be equivalent to a million humans, and that's still hard to comprehend.
So we're here to debate a topic.
I'm trying to find a debate topic here,
Geoff and Ray,
that would be meaningful for people
to really stop and think about this
and really own their answers
because we hear about it.
I think this is the most important conversation
to have at the dinner table, in your boardroom, in the halls of Congress, in your national
leadership. And talking about AGI or human-level intelligence is one thing,
but talking about digital superintelligence, right? We're going to hear next from Mo Gawdat,
and we'll talk about what happens when your AI progeny are a billion times more intelligent than you.
Things could end up very rapidly in a very different direction than you expected them to go.
They could diverge, right? The speed can cause great divergence very rapidly. I'm curious,
how do you think about this as the greatest threat
and the greatest hope? I mean, first of all, that's why we're calling it a singularity,
because we don't really know. And I think it is a great hope. It's moving very, very quickly.
Nobody knows the answer to the kind of questions that came up in the last presentation.
But things happen that are surprising.
The fact that we've had no atomic weapons go off
in the last 80 years, it's pretty amazing.
It is, but they're much easier to track.
They're much more expensive to create.
There are a whole host of reasons why it's a million times easier
to use a dystopian AI system versus an atomic weapon.
Right?
Yes and no.
I mean, we've got, I don't know, 10,000 of them or something.
It's still pretty extraordinary and still very dangerous.
And I think it's actually the greatest danger
and it has nothing to do with AI.
But I think if you imagine
that people had open sourced the technology
and any graduate student,
if he could get hands on a few GPUs,
could make atomic bombs,
that would be very
scary.
So they didn't really open source nuclear weapons.
There's a limited number of people who can construct them and deploy them.
And people are now open sourcing these large language models, which are really not just
language models.
I think that's very dangerous.
So that's an interesting question to take for our last two minutes here.
There is a movement right now to say you must open source the models.
And we've seen meta, we've seen the open source movement,
we've seen Elon talk about Grok going open source.
Are you saying that these should not be open-sourced, Geoff? Well, once you've got the weights, you can fine-tune them to do bad
things. And it doesn't cost that much to train a foundation model. Maybe you need $10 million,
maybe $100 million. But a small gang of criminals can't do it. To fine-tune an open-source model is quite easy.
You don't need that much resources.
Probably you can do it for a million dollars.
And that means they're going to be used for terrible things,
and they're very powerful things.
Well, we can also avoid these dangers
with intelligence we get from the same models.
Yeah, the AI white hat versus black hat approach.
Yes, I had this argument with Yann.
And Yann's view is the white hats will always have more resources than the bad guys.
Of course, Yann thinks Mark Zuckerberg's a good guy.
So we don't necessarily agree on that.
I just think there's huge uncertainties here and we ought to be cautious, and open sourcing these big models is not caution.
All right,
Geoff and Ray, thank you so much for your guidance, your wisdom.
Ladies and gentlemen, let's give it up for Ray Kurzweil and Geoffrey Hinton.