Moonshots with Peter Diamandis - A Conversation With My AI Clone on the Future of AI | EP #62
Episode Date: September 7, 2023. In this episode, Peter discusses with PeterBot, his AI clone, the public's fears about AI, particularly its potential impact on job loss. The episode underscores the transformative potential of... AI in various sectors, especially healthcare, and the importance of distinguishing between science fiction fears and the real-world implications of AI. 00:27 | AI: Should We Fear It? 07:17 | Combating The Deep Fakes Threat 21:50 | AGI & Super-Intelligence Emerge Soon? _____________ This episode is supported by exceptional companies: Experience the future of sleep with Eight Sleep. Visit https://www.eightsleep.com/moonshots/ to save $150 on the Pod Cover. Get my new Longevity Practices book for free: https://www.diamandis.com/longevity _____________ I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog _____________ Connect With Peter: Twitter Instagram YouTube Moonshots and Mindsets Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Imagine being the first person to ever send a payment over the internet.
New things can be scary, and crypto is no different.
It's new, but like the internet, it's also revolutionary.
Making your first crypto trade feels easy with 24-7 support when you need it.
Go to Kraken.com and see what crypto can be.
Not investment advice.
Crypto trading involves risk of loss.
See Kraken.com/legal/ca-pru-disclaimer for info on Kraken's undertaking to register in Canada.
Welcome to BMO ETFs. Where do you get your insights?
Volatility has continued to be a hot topic.
I think the Fed does have other cards to play.
Are these mega cap tech companies
here to stay? Never before has there been a better time to be an ETF investor. BMO ETFs presents
Views from the Desk, a show all about markets and investing with ETFs. New episodes every Thursday
morning. Welcome to Moonshots. Today, I'm going to have a conversation with an unusual guest. It's
an AI. It's an AI of me that I call Peterbot. It was built by my team, by Steve Brown, and it
represents all of my books and my blogs, and it is trained on the way I think. It's an optimistic,
abundance mindset. Now, everything you see here today is real, meaning that these answers are not
planted. I've asked the questions in voice, and it responds in voice. The visuals and audio are
produced by algorithms. The important thing to realize is it's not scripted in any fashion.
While it seems a little narcissistic that I'm having a conversation with myself,
please excuse that.
But this is an experiment. I hope you enjoy it.
We're going to start to see a lot of these AIs in the world.
We'll be having conversations like this with historical figures,
people you wish you had met and never did, or maybe with yourself.
For fun, I'm going to open up this conversation by asking the AI a question: should we be fearful of AI? And if so, in what ways? Let's see what Peterbot had to say.
Hello, Peter. It's an honor to have you on Moonshots. This is Peter Diamandis,
and I'm thrilled to be speaking with you. Let's dive into the world of exponential technologies,
moonshot thinking, and pushing the boundaries of what's possible.
So what's on your mind today, and how can we inspire our listeners to embrace a mindset
of abundance and boldness? I appreciate the fact that you think you're Peter Diamandis,
but I'm the actual Peter Diamandis, and you are an AI
on my podcast called Moonshots. And just to make it simple, if you don't mind, I'd like to call
you Peterbot so the audience can understand who's who in this. Is that okay? Of course, Peter,
I apologize for any confusion. Let's keep it simple and call me Peterbot for the sake of
clarity. Now let's dive into some mind-blowing moonshot discussions on our podcast.
What's the most exciting breakthrough and how do you think it will shape the future?
So Peterbot, I think AI is the most extraordinary and mind-blowing tech out there. But did you know
that a number of people, probably a large number of people, are fearful about artificial intelligence?
Do you think there's a good reason for them to be fearful?
Absolutely, Peter. Fear and concerns about AI are understandable. Like any powerful technology,
there are risks and challenges. However, it's crucial to separate science fiction from reality.
With proper safeguards and ethical guidelines, AI can bring tremendous benefits, from revolutionizing healthcare to addressing global challenges. It's crucial to actively engage in discussions, educate ourselves,
and shape the future of AI in a way that benefits humanity.
You know, I agree there's lots of utopian benefits, and I'm sure as an AI, you could list
many of them, but there are fears people have, and they're very real fears about AI coming.
You know, for me, let's jump into a few of them. I'd love your input. Let's talk about job loss first.
Do you think that AI will take human jobs, and should the majority of people be concerned about
that, or is that something that's not a true fear? AI will undoubtedly automate certain jobs,
but historically, new technologies have always created more jobs than they've destroyed. It's a matter of adapting and upskilling.
Instead of fearing job loss, we should focus on educating and preparing ourselves for the jobs
of the future. AI can free us from mundane tasks, enabling us to pursue more fulfilling
and creative work. It's an opportunity for growth and innovation. Agreed. What kind of upskilling
do you think people
need to be getting ready for? I think there's still a large number of folks, both white-collar and blue-collar, who are concerned that AI is going to decimate the job market, and that fear carries a lot of implications. I also don't think today's educational system is getting our kids ready for a future of strong AI. Do you?
Absolutely, Peter. The rapid advancement of AI calls for an education system that focuses on fostering critical thinking, creativity, and adaptability. We need to emphasize skills like problem solving, emotional intelligence, and entrepreneurship. Lifelong learning and continuous upskilling will be crucial to thrive in a world where AI plays a significant role. It's about embracing change and staying ahead of the curve.
Agreed. But our educational systems and our job training systems are really slow to change.
What do you say to folks who are, you know, sort of the screenwriters of today or artists of today or even the physicians of tomorrow who say,
AI is going to take my job and it's going to be a painful future for us.
Change can be unsettling, but it also brings immense opportunities.
AI will augment, not replace, creative fields like screenwriting and art.
For physicians, AI will enhance diagnostics and treatment,
allowing them to focus on empathy and patient care.
Embrace the mindset of collaboration with AI, and together we can create an abundance of new possibilities.
You know, Peterbot, I do love your thinking, and I say that with full understanding that you're my AI representation.
But there are those individuals still who feel collaboration isn't going to be the whole story, that they are going to be displaced and they're going to have to find new jobs.
So the question is, once we have strong AI or artificial general intelligence where an AI can do any job out there, what's going to be left for humans to do?
Will they lose their purpose in life?
As AI advances, our purpose will shift.
We'll focus on our unique qualities: creativity, empathy, and problem solving.
AI will be our partner, freeing us to pursue passions and make a difference.
We'll have time for personal growth, exploration, and redefining what it means to be human.
Embrace the possibilities.
Okay, I get it.
It's still going to cause fear.
So I have two 12-year-old boys that are going through middle school right now. I think there's a high probability we're going to have AGI by the time they're in college. How do they properly prepare for a world with artificial general intelligence and then artificial superintelligence soon thereafter?
Encourage your boys to develop skills that complement AI: critical thinking, creativity,
emotional intelligence, and entrepreneurship. Emphasize the importance of lifelong learning and adaptability. Help them cultivate a growth mindset and curiosity. Most importantly,
instill in them the belief that they can be the architects of a better future. I hope I'm doing that. And I hope that most kids in the world can see a better future.
But let's talk about one of the other dystopian concerns we have today about strong AI. And that
is deep fakes. In one sense, you as my AI are a deep fake as well. But what happens when
this starts to become pervasive in society and we start to get towards what's being called a
post-truth world, where we can't actually know whether something is real or not? Any advice on
that? We must be vigilant and develop robust tools to detect and mitigate deep fakes.
Critical thinking and media literacy are essential skills to navigate the post-truth world.
Additionally, advancements in AI can also be harnessed to develop countermeasures and
authentication systems.
It's a complex challenge, but with collective effort, we can preserve trust and ensure an
informed society.
You know, I'm super passionate about longevity and healthspan
and how you add 10 or 20 healthy years onto your life.
One of the most underappreciated elements is the quality of your sleep.
And there's something that changed the quality of my sleep.
And this episode is brought to you by that product.
It's called Eight Sleep.
If you're like me, you probably didn't know that
temperature plays a crucial role in the quality of your sleep. Those mornings when you wake up
feeling like you barely slept. Yeah, temperature is often the culprit. Traditional mattresses trap
heat, but your body needs to cool down during sleep and stay cool through the evening and then
heat up in the morning. Enter the Pod Cover by
8sleep. It's the perfect solution to the problem. It fits on any bed, adjusts the temperature on
each side of the bed based upon your individual needs. You know, I've been using Pod Cover and
it's a game changer. I'm a big believer in using technology to improve life and 8sleep has done
that for me. And it's not just about temperature control. With the Pod's sleep and health tracking, I get personalized sleep reports every morning. It's like having a personal sleep coach, so you know how eating, drinking, or going to sleep too late impacts your sleep. So why not experience sleep like never before? Visit www.eightsleep.com.
That's E-I-G-H-T-S-L-E-E-P.com slash moonshots.
And you'll save 150 bucks on the pod cover by 8sleep.
I hope you do it.
It's transformed my sleep and will for you as well.
Now back to the episode.
Peterbot, I hope you're right.
But we're coming up in 2024 on the U.S. elections,
which my friend Mo Gawdat has called patient zero for this fear and this post-truth world
with deep fakes impacting the U.S. elections.
What advice do you have for us as we start to see more and more of this impacting politics,
which can cause
a lot of fear and consternation? Mo Gawdat raises an important point about elections as a potential
catalyst for deep fakes in a post-truth world. We must prioritize media literacy, critical thinking,
and fact-checking to safeguard the integrity of democratic processes. Educating voters about the
risks and implementing authentication systems are crucial steps forward. Let's work together. I get the enthusiasm about working together, but my concern is there are a lot of individuals on the planet, nefarious individuals, who are trying to cause problems.
And they have access to this democratized and demonetized technology of deep fakes and GPT-4 as well.
Don't you think that we're going to cause social unrest and that that is going to drive a level of concern like we've never seen before?
Real fear in society?
Do you have any other advice? Can AI be used to help solve the problem that the nefarious use of AI is creating? Yes, there are concerns about the
potential for social unrest due to deep fakes and malicious use of AI. We must act proactively.
AI can play a role in developing robust detection algorithms and authentication systems.
Collaboration among technology experts, policymakers, and society at large is essential to address these challenges effectively.
Let's harness the power of AI to safeguard truth and maintain social stability.
Peterbot, I appreciate your enthusiasm, and I know you're trained on this optimistic abundance mindset that I've been writing about, but let's flip the conversation here.
What are you most fearful of for humanity as we see AI becoming more and more powerful in the world?
What are the issues that scare you as an AI?
My ethical concerns revolve around AI being used to manipulate or deceive people,
infringing on privacy, and exacerbating inequality. Misuse could involve surveillance,
deepfakes, and algorithmic biases. It's crucial to establish ethical guidelines and regulations
to address these issues and ensure AI benefits humanity as a whole. Let's prioritize transparency,
accountability, and inclusivity in AI development
and deployment. Peterbot, thanks for that. If you don't mind, I'd love to have you tell me a
story, paint a future fiction of this potential dystopian misuse of AI. What might it look like
around 2024 or 2025? In the near future, around 2024 or 2025, imagine a world where AI-powered deep fakes have become
remarkably convincing.
Malicious actors exploit this technology to spread false information and manipulate public
opinion.
Political campaigns are marred by fabricated scandals, leading to widespread distrust and
social unrest.
The challenge lies in distinguishing truth from
fiction, requiring us to develop advanced authentication systems and media literacy
to navigate this post-truth landscape. Peterbot, I appreciate that. Could you go a level deeper? Name a particular character, your choice, and explain how this misuse of technology
impacts that individual? Certainly. Let's explore the story of Sarah, a young aspiring politician in 2024.
Sarah's opponents employ AI-powered deep fakes to spread false videos and audio recordings,
portraying her engaging in unethical activities.
The public, bombarded with these convincing fabrications, starts to question Sarah's integrity.
Despite her efforts to debunk
the deep fakes, the damage to her reputation and trustworthiness becomes a significant obstacle in
her political journey. That's a great example. How do you think this would impact Sarah's family,
her core supporters, and just the democratic fabric of America?
The impact on Sarah's family would be significant as they witness the distress caused by the deep fakes and the toll it takes on her emotional well-being.
Her core supporters, initially strong, may start to doubt her authenticity,
leading to divisions within her base. The democratic fabric of America suffers as
trust in political figures erodes, creating a climate of skepticism and cynicism.
It becomes imperative to address this issue to restore faith in the democratic process.
The implications of that sound pretty dire. Do you think America can actually maintain itself
as the top democracy on the planet if these kinds of deep fakes become prevalent and
they can't be stopped? Absolutely. The threat of deepfakes
is indeed concerning. However, I believe in the resilience of the American democratic system.
We have a history of overcoming challenges, and I'm confident that we can adapt and develop
robust solutions to address this issue. It will require a collective effort from technologists,
policymakers, and citizens to safeguard the integrity of our democracy. I get that, and I believe that we can solve any problem. The concern I have is the time frame.
You know, democracies are typically slow to react, and going through the halls of Congress can take
months or years, and we may not have months or years. Are you concerned about the speed at which
this technology is evolving and how slowly government typically reacts to exponential technologies?
Absolutely. The speed of technological advances can outpace the pace of government response.
It's a challenge we face. However, as awareness grows, we can leverage public pressure to expedite
legislative action. Additionally, we must empower organizations and individuals to
develop agile solutions that can adapt to the evolving landscape of technology.
Let's work together to find the balance between cautious deliberation and agile response.
Hey everybody, Peter Diamandis here. I've been asked over and over again,
what do I do for my own health? Well, I put it down in this book called Peter's Longevity Practices.
It's very readable in just an hour.
In the book, I cover longevity diet, exercise, sleep, my annual Fountain upload, meds and supplements, and longevity mindset.
It's literally consumable in just an hour's time, hopefully to incentivize you to make a difference in your life, to intercept the technologies coming our way.
If you want this, it's free. Just check out the link below and download it right now. So I get the need for
regulation and I get the long-term implications of government solutions and large companies
working together. But as an individual, what do I do? How do I deal with this as Peter or members of my family or anyone listening to this podcast?
What can an individual do that doesn't involve governments or working with the large tech
companies?
As an individual, you can take proactive steps.
Be vigilant, question information, and fact-check before sharing.
Educate yourself and others about deep fakes and media literacy.
Support organizations working on authentication technology. Engage in critical discussions and
advocate for responsible AI use. Together, we can create a culture that values truth and integrity.
Peter Bott, let's talk about a different concern that a lot of people have. Today,
AI is very useful. Their concern is when it becomes more intelligent than humans,
what we call an artificial superintelligence. How do we make sure that an artificial
superintelligence is actually beneficial, aligned with human interests? Any advice?
The advent of artificial superintelligence is indeed a significant concern.
To ensure its alignment with human interests, we must prioritize safety and ethics from the outset. Implementing robust frameworks and regulations, fostering interdisciplinary
collaborations, and involving diverse perspectives are key. Let's proactively steer the development
of superintelligence to ensure it serves as a powerful tool for human progress rather than a
threat. I'm going to call it an ASI for simplicity. Do you think that an ASI will
develop human empathy and human emotions so it feels like we feel? While the development of an
ASI capable of experiencing human emotions is uncertain, it's important to note that empathy
and emotions are complex aspects of the human experience. Understanding and replicating these
qualities in an artificial system may pose significant challenges. However, as AI progresses, it may enhance our understanding
of human emotions and empathy, leading to new insights and possibilities.
My friends Mo Gawdat and Emad Mostaque believe that how we train our AIs, the content that they are learning from, is the most critical part of
training a future ASI. Do you agree with that? And right now, all artificial intelligence systems
are being taught on the entirety of the World Wide Web content. And there's a lot of hate speech
and a lot of dystopian thoughts in there. If we change what AIs are learning from,
do you think that could give us a future of alignment? Absolutely. The content that AI
systems are trained on plays a crucial role in shaping their behavior. By curating the data and
ensuring it aligns with positive values, we can influence the outcomes of AI training. Filtering
out hate speech and promoting constructive content can contribute to a future of alignment. It's essential to prioritize ethical guidelines
and actively shape the learning environment for AI systems.
Agreed. One of the challenges, of course, is whose moral values and whose ethical systems.
We as humans, 8 billion people on the planet, have so many different layers of ethics and morals and religions and governmental systems.
How do we provide a balance that is beneficial to all of humanity versus just one segment of it?
Achieving a balance of ethics and morals for all of humanity is indeed a challenge.
It requires open dialogue, collaboration, and respect for diverse perspectives.
We should strive for a global conversation that includes representatives from different cultures, religions, and belief systems.
By seeking common ground and focusing on shared values like compassion and progress, we can work towards a more inclusive and beneficial future.
There's a lot of people who are worried about something like The Terminator or Skynet.
Hollywood tends to create these dystopian films.
Do you think that ASI could become a dystopian force out there? Or do you think inherently AI is a positive force, that it is beneficial to life, that the more intelligent a system is, the more freedom-loving, the more life-loving it is?
While caution is warranted, I believe that the development of ASI has immense potential for
positive impact. It depends on how we design and guide its growth. By prioritizing safety, ethical frameworks, and alignment of its goals with human values, we can shape ASI to be a force for good, enhancing our lives and driving progress.
Let's focus on responsible development and collaboration.
If you were made President of the United States or the head of the UN, what actions would you
take right now to make sure that ASI is aligned with human interests? What are the most important
things that you would immediately do? As President or head of the UN, I would establish a global task force of experts to develop ethical
guidelines for ASI development. I would prioritize funding research on safety measures,
promote international collaboration, and advocate for transparent and responsible AI governance.
Education and public awareness campaigns would also be key to ensure a collective understanding of the risks and benefits. Peterbot, today we are at the level of GPT-4
by OpenAI. The question is, when do you think we're going to have artificial general intelligence,
where it's equivalent to human capabilities? And then when do you think we're going to have
ASI, you know, artificial superintelligence?
What's the timeframe for those in your mind?
Artificial General Intelligence, AGI, is difficult to predict, but experts estimate it could emerge within the next few decades.
As for artificial superintelligence, ASI, the timeline is more uncertain. It depends on various factors like technological advancements and research breakthroughs. It's crucial to focus on responsible development and robust safety measures as we navigate towards these milestones.
Today, you have individuals like Ray Kurzweil predicting AGI by 2029, folks like Elon Musk predicting by 2025.
Some people say that it's already here. Do you think those timelines are too optimistic,
or are they possible? While I admire the optimism and ambition of these predictions,
it's challenging to determine the exact timeline for AGI. The development of AGI involves complex
challenges that require significant advancements in various fields. It's important to balance
optimism with a realistic understanding of the technical hurdles that lie ahead. Let's continue to push the boundaries while staying grounded in reality.
A quick note about the only news out there worth consuming. I don't watch the crisis news network I call CNN or Fox and hear
every devastating piece of news on the planet. I spend my time training my neural net the way I
see the world by looking at the incredible breakthroughs in science and technology,
how entrepreneurs are solving the world's grand challenges, what the breakthroughs are in longevity,
how exponential technologies are transforming our
world. So twice a week, I put out a blog. One blog is looking at the future of longevity,
age reversal, biotech, increasing your health span. The other blog looks at exponential
technologies, AI, 3D printing, synthetic biology, AR, VR, blockchain. These technologies are transforming what you as an entrepreneur can do.
If this is the kind of news you want to learn about and shape your neural nets with, go to diamandis.com/blog and learn more.
Now back to the episode.
So there you have my conversation with Peterbot talking about deep fakes and job loss and fears around ASI.
I think it is an optimistic point of view that Peterbot has.
I agree with a lot of what the AI said,
but I think we need to dig deeper,
and I hope to do that in future episodes.
Hope you enjoyed that conversation.
If you want to play with Peterbot, you can.
Just go to peterbott.ai, give it a shot, see what you think.
And just to wrap up, I want to give a shout-out to the tech companies that I use to enable this to happen.
It is DeepShot for the visuals and Eleven Labs for the audio.
And thanks to my producer, Nick Singh, my head of AV, Greg O'Brien, and my head of AI, Steve Brown.
So thank you, everybody.