Moonshots with Peter Diamandis - EP #39 Should We Be Fearful of Artificial Intelligence? The AI Panel w/ Emad Mostaque, Alexandr Wang, and Andrew Ng
Episode Date: April 20, 2023. In this Ask Me Anything session during this year's Abundance360 summit, Andrew, Emad, Alexandr, and Peter discuss how the world will change post-AI explosion, including how to reinvent your business, your skills, and more. You will learn about: 03:39 | How Do We Educate On New Technologies In Our Changing World? 15:22 | Is There Any Industry That AI Will Never Disrupt? 28:27 | Will The First Trillionaire Be Born From The Power Of AI? Emad Mostaque is the Founder and CEO of Stability AI, the company behind Stable Diffusion. Alexandr Wang is the world's youngest self-made billionaire at 24 and is the Founder and CEO of Scale AI. Andrew Ng is the Founder of DeepLearning.AI and the Founder & CEO of Landing AI. > Try out Stable Diffusion > Visit Scale AI > Learn AI with DeepLearning.AI _____________ I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Use my code MOONSHOTS for 25% off your first month's supply of Seed's DS-01® Daily Synbiotic: seed.com/moonshots Levels: Real-time feedback on how diet impacts your health. levels.link/peter _____________ I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog Join me on a 5-Star Platinum Longevity Trip at Abundance Platinum _____________ Connect With Peter: Twitter Instagram YouTube Moonshots and Mindsets Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
We're all going to have some version of Jarvis, right?
An AI that is in your ear, on your body, and so forth.
Frankly, even as someone in AI, I feel like, man, it's tiring. There's so much going on.
Physical things in the real world are much further from being disrupted by AI.
And it's going to create wealth faster than anything we've ever seen.
The beauty of AI is that it can analyze information faster than any person. We're just literally starting to explore these.
All of us are engaged with governments.
We're trying to help them.
It's a bit difficult when they're just catching up to the internet right now.
Now it's got to the point where this technology could potentially be dangerous.
So not everything needs to be open, but we do need to be transparent and careful when
stuff gets out of our control.
I have to admit, I'm not letting ChatGPT tell me where to invest at this moment in time.
In school, a lot of kids have been cheating and they've been using ChatGPT to write essays and book reviews and do their math.
Cheating implies it's a contest. School should not be a contest. You will never be without this AI as you grow up.
My question is about content creation, especially in music.
You will have perfect music models by the end of the year.
We believe this technology could be existential,
and we will treat it as such.
One of the greatest drivers for peace on the planet
is making sure every mother can raise her children
with the best health and the best education.
A more peaceful world for us is a world
in which we uplift everybody, which is what this technology has the ability to do.
Everybody, this year at Abundance360, given the meteoric rise of AI, I decided to put together
an entire day on AI with three of the most extraordinary thinkers. You've met one of them, Emad Mostaque, the chairman,
founder, and CEO of Stability AI. Also brought to the conversation, Alexandr Wang, who's the founder
and CEO of Scale AI. It's a big data machine learning platform that accelerates the development
of AI. He founded it when he was at MIT at age 19 as a student, dropped out after the first
year to become the youngest self-made billionaire ever.
I also brought Andrew Ng, the managing partner at AI Fund.
He's co-founder and chairman of Coursera, the founder of Google Brain.
I love the name of that.
And one of Time's 100 most influential people.
Between Emad, Alexandr, and Andrew,
you can get an overview of this field.
How fast is it moving?
How disruptive is it?
Is it something you should be excited about or fearful of?
My massive transformative purpose
is to inspire and guide entrepreneurs
to create a hopeful, compelling,
and abundant future for humanity.
And that's what I'm doing with this podcast, to open up every relationship, every conversation
I have with you, to inspire you, to support you in going big, in helping uplift humanity.
If that's of interest to you, please subscribe to this podcast.
Allow me to share with you the wisdom that I'm learning from the most incredible moonshot
entrepreneurs on the planet. Let's jump into this episode. It's a conversation that everyone needs
to be having in your company, in your family, and definitely at the heads of every nation. Enjoy.
Thanks, Peter. And thanks to our amazing guests here today. We've been talking about this actually for years
when we look at the rapid acceleration of AI
and this disruption that's kind of happening
very, very quickly from this point forward.
And when we look at industries and workers,
and whether they're getting the understanding,
the knowledge, and the reskilling that need to happen to approach this current world: in what ways do you believe we can do that faster, and work with governments and larger corporations and education systems to help enable this?
Because we're all talking about positivity and abundance, and I think we're very privileged
in this room.
And the question is, for the rest of the world, what are we doing?
And in what ways are you involved as leaders?
By the way, we've got 40 minutes here, so lots of great questions to ask.
Who wants to take that on?
You know, so it's a very important point.
I think the world is changing so fast,
it's difficult for a lot of people
to just keep up with what on earth is going on.
Frankly, even as someone in AI, I feel like, man, it's tiring.
There's so much going on.
So I think maybe one of the things I'm doing,
years ago I started, co-founded Coursera,
which to this day offers a lot of courses,
often works with governments to train up different people.
I think that one nice thing is with online education, with digital education, we can
speed up the rate at which we help people learn the new technologies and figure out
what to do with it as well.
And I think media also has a huge important role to play.
And eventually, I would love if we can build tools.
Like if you look at Google,
the UI is really simple.
The background is insanely complicated.
You just type what you want and you get answers.
So I hope that AI tools can evolve
to the point where, you know,
you could just use it without nearly
as deep of education needed.
And I think a lot of the generative AI tools
are headed in that direction.
Yeah, I think it's super natural.
I mean, that's why: it's natural.
Like, ChatGPT was so successful
because you just type and it just came.
Stable Diffusion, you just type
and then have images, right?
And so I think making these things
more and more usable has led to the explosion,
because it kind of comes from
the natural understanding of humanity itself.
It's trained on these large corpora
of an entire media set.
I think on the policy side,
I think all of us are engaged with governments.
We're trying to help them.
It's a bit difficult when they're just catching up to the internet
right now.
By the way, the most linear
organizations on the planet,
if not sublinear.
They are. And again, the typical approach
to this is regret minimax: stop it, ban it.
But it's inevitable now. And again, it's kind of a global phenomenon.
One of the problems as well is that you've had an acceleration, but you haven't had a maturation.
So there's lots of excitement. How much implementation is there now?
There's going to be a bit of a pause in the current wave, even as research keeps going, as you go into engineering and application. And so I think we're not sure exactly where things are.
I think Andrew talked a bit about where the value lands on the stack.
And again, we are all kind of at different areas of this stack.
So we ourselves haven't figured this out.
And we're doing our best with the community to try and do that,
to communicate to policymakers and make education courses and others accessible.
Michael, good to see you. You're next, pal.
So this is probably a question more to Ahmad, but anybody's perspective is welcome.
I have done lots of programming.
I usually like to think about abstractions or concepts.
You have a concept, let's say drawable, that's fit for some purpose.
And so it can do that thing.
Now with AI, we are using English language, we're using
prompts, so definition of what you're asking for is a lot more vague. So when it's vague, it may have
edge problems, right? You have to clarify things a lot more. And so my question is, do you still see
a value in having well-defined, structurally
uniquely named things which you can compose for well-understood result?
Or do you think we'll diverge more toward everything being a bit fuzzy,
where everybody just provides a lot more text and the system somehow figures it out for you?
Well, I think, that's an excellent question.
I think a lot of the applications right now have
been very surface level without the contextual
awareness of like, you know, Game of Thrones in South Korea
or something like that.
Amazon had this thing where they had the six page memos,
right, rather than presentations.
But another thing they used to do was write the press release
beforehand.
And so if you write the actual overview of the program
in the comments now for kind of Copilot
and things like that or GPT-4,
it will write the program based on the comments.
So again, I think there's been a failure on our part
to understand prompting and some of these other things.
It's something that Andrew and I have been talking about.
Again, it's gone so fast.
We've got the surface level.
We have to explore.
Another way to think about this is like game consoles.
When the Wii first came out, games were a bit crap.
By the end, it was fantastic.
We're just literally starting to explore these.
So it's simple words now, and then we'll compose and build structures around it
to get the most out of these weird thingies.
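The "press release first" pattern Emad describes can be combined with the questioner's preference for well-defined, named abstractions: you compose a structured spec and let the model fill in the implementation. A minimal sketch of that composition step is below. Everything here is illustrative, the function and field names are invented, and the assembled prompt would be handed to whatever model you actually use (Copilot, GPT-4, and so on), which this sketch deliberately does not call.

```python
# Sketch: composing a code-generation prompt from named, well-defined parts,
# in the spirit of Amazon's "write the press release first" approach.
# All names are illustrative; feed the result to the model of your choice.

def build_prompt(overview: str, constraints: list[str], examples: list[str]) -> str:
    """Assemble a structured prompt: spec first, then constraints, then examples."""
    parts = ["# Overview (the 'press release'):", overview.strip(), ""]
    if constraints:
        parts.append("# Constraints:")
        parts += [f"# - {c}" for c in constraints]
        parts.append("")
    if examples:
        parts.append("# Usage examples:")
        parts += [f"# {e}" for e in examples]
        parts.append("")
    parts.append("# Implementation:")
    return "\n".join(parts)

prompt = build_prompt(
    overview="A function slugify(title) that turns a post title into a URL slug.",
    constraints=["lowercase output", "spaces become hyphens"],
    examples=['slugify("Hello World") -> "hello-world"'],
)
print(prompt)
```

The point of the sketch is the composition: each named part (overview, constraints, examples) stays well defined and reusable, while the fuzzy natural-language content lives inside them.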
Everybody, I want to take a quick break from our episode
and tell you about a health product that I love and that I use every day.
In fact, I use it twice a day. It's called
Seed Health. Now your microbiome and gut health are one of the most important and modifiable parts
of your health plan. Your gut microbiome is connected to your brain health, your cardiac
health, your metabolic health. So the question is, what are you doing to optimize your gut?
Let me take a second to tell you what I'm doing. Every day I take two capsules of Seed's DS-01 Daily Synbiotic. It's a
two-in-one probiotic and prebiotic formulation that supports digestive health, gut health, skin health,
heart health, and more. It contains 24 clinically studied and scientifically backed probiotic
strains that are delivered in a patented capsule that actually protects it from stomach acid and ensures that all of it reaches your colon alive, with one hundred percent
survivability.
Now,
if you want to try seeds daily symbiotic for yourself,
you can get a 25% off your first month supply by using the code moonshots at
checkout.
Just go to seed.com/moonshots and enter the code MOONSHOTS at checkout. That's seed.com/moonshots,
and use the code MOONSHOTS to get 25% off your first month of Seed's Daily
Synbiotic. Trust me, your gut will thank you. All right, let's get back to our episode.
We're going to Zoom next. Ned Alsikafi. Ned, where are you on the planet, and what's your question?
Hi, guys. Can you hear me okay?
Yes, we can hear you great.
Perfect. I'm from Chicago.
I am in health care, and I love the idea of health care and longevity.
I may be your only practicing physician here.
I had the idea of joining this group because, as practicing urologists,
we are getting pummeled. And as AI and healthspan advances continue, as Peter says, most urologists are going to see people who
are older. I think all you guys are going to be a patient of either me or my colleagues here soon.
The point I'm making here is that there is an overflow of patients, and we as clinicians are getting bombarded.
And the question is, in light of workflow shortages with our staff and the incoming number of patients,
how to use AI most efficiently in terms of point of care for patients getting into our offices,
as well as having AI be part of the evaluation process, because as
many people may know, you know, 90% of the time we see the same 10 conditions. So the question to
the panelists is, number one, how would you advise a physician or healthcare provider to set this up? And do you think that this is something that is
easily attainable? I liked Alex's thought about starting small. And then the second question is,
if you were to do it, is it the sort of thing that would prevent, I mean, is there anything
in there to prevent others from doing the exact same thing so that if you are spending a lot of
time and money into this, nobody else just kind of scoops it from you. Thank you for taking my question.
Yeah, I'll just say real quick: if you have a concrete idea for what you'd like to do in urology, let me know. I think teams
could help, maybe my team could help execute that. One example of something AI Fund did: we worked with
a fantastic founder, Alison Darcy, and now a different CEO, Michael Evers, to support building up Woebot, which is a digital mental health care chatbot.
You can actually install it on your phone, and from the data published at Stanford, it seems to be able to take effect relatively quickly in treating symptoms of anxiety and depression for users.
So I think that there are actually lots of opportunities
to apply AI in interesting ways to healthcare,
but figure out those concrete use cases
that allows us or others to execute on it.
Yeah, I'll say two more things,
which is that I think the first is that,
you know, getting to market first
is actually an advantage.
You know, certainly other people can copy the idea,
but I think, you know,
this is one of these things that's well studied in business: there's going to be a moat for whoever's able to establish the use case first and then use that pole position to get more and more data into the system. If you're the first to launch a service identifying some sort of disease or condition using AI,
then just by launching that service
you're going to get far more data than anyone else
and race ahead of where anyone else could possibly be.
So the sort of currency of the realm of AI really is data.
And if you have a niche data set or a unique data set,
that's going to give you an initial advantage.
And if you're able to build a strategy,
continue amassing more and more data,
that'll keep you ahead.
We're going to go next to John here at Mike2.
Let me remind you,
please look at the questions in Slido,
upvote them.
If you're here in the audience or on Zoom,
we'll be pulling them.
John, please, what's your question?
And keep the question short if you could.
Sure.
So I'm a medium to small business owner.
I'm a physician. I own a diagnostic laboratory. So I'm a medium to small business owner. I'm a physician.
I own a diagnostic laboratory.
So I hear a lot of buzzwords.
You need to implement AI.
You need to create, you know, you need to put your data on a balance sheet.
That sounds awesome.
But my initial question is, how do you get started going down that road of, you know,
we have a ton of data, but how do you put it in a package
that you can actually monetize it and put on a balance sheet? So what are the process of just
going about doing something like that? And then number two, implementing AI into something
very concrete: a case in point for our company is digitizing pathology and allowing an AI engine to help pre-diagnose or triage
cases or whatnot. How do you go about implementing something like that in a company that's my size,
by either pairing with someone else or do you try to bring a team in to develop it? So
like, what kind of playbook just to get off the ground and move your company forward? Alex, I think that's you.
Yeah. So we have tools. You go to scale.com. We have dataset management tools where you can upload all of your data and
basically build that into a dataset. That's step one: get all of it into one tool,
build up an entire dataset, and then you can do a bunch of things from there. You can label and
annotate the data, turn it into a dataset that you can train a new model on top of. You can
bring in a host of companies, we could be one of them, or there's a bunch of other vendors out there
that can take that data and turn it into a customized model for you,
and then launch it and figure out how much impact you can possibly have.
And you find out who that data is valuable to, ultimately.
We're going to go to Slido.
There's a great question here I'm going to send to you, Imad.
It says here, is there any commercial activity which AI will never be able to disrupt?
What will be the factors to understand that?
It's an interesting question, right?
It's like the question asked of Bezos.
What will change?
There's nothing that will never be able to be disrupted, because you'll have autonomous
robot agents, basically Blade Runner,
right? Indistinguishable from humans. So I can't see any commercial activity AI cannot disrupt.
Do you guys agree? Alex, let's go with you. You know, I think at least in the short term,
and who knows, this stuff could all change. But in the short term, physical things in the real world
are much further from being disrupted by AI.
So, you know, anything that we have to sort of like interact with or do something physical in the real world,
robotics is relatively behind a lot of the sort of pure digital, pure online kind of AI systems.
So, you know, I don't know if the word is never, but at least for a long time,
you know, we won't have an AI robot that can do construction or that can do, you know,
mining or some of these things that are very, very...
Interesting. I'd go with Emad's category here, but that's my opinion. You guys are excellent. Andrew?
You know, my friends and I, some of my AI tech friends and I, we used to challenge
each other, name an industry
that AI will not be able to disrupt.
We challenged each other to name that, and I had
a hard time coming up with one, until
one day I thought, all right, maybe the hairdressing
industry. Which industry?
Hairdressing. I think that's easy.
And I used to say this on stage, until one day one of my friends,
who's a robotics professor, was in the audience when I said it.
Afterward, she stood up, pointed at my head, and said, Andrew, for most people's hairstyles,
I couldn't build a robot to cut hair like that. But your hairstyle, that I could do.
Oh, that's great.
All right, we're going to go to Kieran on mic four.
Do you think with the large language models like GPT,
will they ever be able to explain why they generated what they generated or create a new breakthrough?
So right now you can ask the models to explain why it said what it did.
It's not clear if those answers are actually correlated at all
with the actual reason; they probably aren't correlated with why the model actually said that. And so there's sort
of, you know, humans have this bug, which is that we will do things. And then if somebody asks us
to explain why we did the thing, we'll come up with some post hoc rationalization that oftentimes
is not literally the reason why we did a certain thing. Usually it's, you know, there's the analogy of the elephant and the elephant rider, which
is that our emotional brain is an elephant and our rational brain is this rider of the
elephant that just sort of is trying to explain what the elephant's doing.
So I think the large-language models are similar, which are that they're going to go through
some process by which they make a decision or they say something, and then you can ask
it to explain, and it'll come up with something on the spot to try to explain it. This is something that Emad and I were
actually just talking about: how do you get greater levels of interpretability? How
do we do research in the direction of actually, truly understanding where it's coming from? But that's all active research.
Nice. Giselle, you're the next contestant. Good to see you.
So, given what has happened to social media, I'd rate the way I feel about AI as
scared shitless.
Because you guys, you know, you are the leading edge of this; this is your space.
Is anybody doing anything to build parameters around this technology? Like what you were saying, Emad, about
someone calling you in your mother's voice and selling you stock.
We already live in a world where we don't know what truth is. How are we
going to fight this?
Well, I think you'll see information curation going through social networks.
I mean, this is the Twitter verified, it's Apple identity, it's kind of a bunch of other
things.
We back contentauthenticity.org, which is an Adobe thing around visual media.
But we do need to have centralized repositories of trust because the cost of information becomes
zero and creation becomes zero.
And unfortunately, there is nothing
that can really regulate this output.
Like there are laws in various countries
that you must identify AI output,
like with stable diffusion,
there's an invisible watermark and things,
but it's become so accessible
that we need to have standards quicker
than the system is adapting.
So this is why I think information distribution systems
need to standardize around this right now, which is something we're pushing a lot. Yeah, I mean,
there's a big question. Can we even govern against any of this, right? All of this is bits. We live
in a world of porous borders, data flows. So Giselle, we're going to have a lot of these
conversations around ethics and around implications. Guys, this is happening
right now, right? This is this year, next year, not 10 years from now, five years from now.
What gets played out, this is why we need to be paying such attention to it. Giselle, thank you.
I'm sure we'll hear from you a bunch more times. Let's go to mic number three in the back there, and then we'll go
to Zoom and we'll go to our Slido.
My name is Hajime, running an AI startup in Japan, Series D, now getting much closer to
the Japanese government. And they're concerned about disaster resilience, because there's a huge risk of an earthquake
as well as a Mount Fuji eruption.
So my question is: what could be an interesting
but practical way to apply AI
for that kind of disaster recovery?
So we actually worked on something in this direction in the Ukraine conflict,
which has obviously been a horrible thing to happen to a region and a country.
One of the things that is really unfortunate in any sort of disaster situation is that
seconds really matter. Every second that you aren't deploying resources
to help the people who are in a damaged structure,
or to tackle a certain area,
is going to result in either lives lost
or further damage to the infrastructure.
So one of the things that you can do
with a combination of satellite imagery
plus drone imagery in major cities
or major areas that are being impacted
is use AI to automatically identify damage
on a building-by-building level
and a change in that damage in every area
and use that to coordinate humanitarian response.
So this is applicable in any sort of natural disaster,
any sort of scenario where fast action really matters.
The beauty of AI is that it can analyze information
faster than any person.
Yeah, we're doing a variant of stable diffusion
for satellite imagery and time series data as well.
And so that I'm sure will be used in that toolkit.
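The building-by-building damage assessment Alexandr describes rests on a simple core idea: compare before-and-after imagery and flag the areas whose change exceeds a threshold. The toy sketch below illustrates only that compare-and-flag core on plain intensity grids; real pipelines use co-registered satellite or drone rasters and trained models, not a raw pixel difference, and all names here are made up for illustration.

```python
# Toy change detection: compare "before" and "after" intensity grids and
# flag cells whose change exceeds a threshold. Real damage-assessment
# pipelines use co-registered imagery and learned models; this only
# shows the compare-and-flag idea at its simplest.

def changed_cells(before, after, threshold):
    """Return (row, col) cells where |after - before| > threshold."""
    flagged = []
    for r, (row_b, row_a) in enumerate(zip(before, after)):
        for c, (b, a) in enumerate(zip(row_b, row_a)):
            if abs(a - b) > threshold:
                flagged.append((r, c))
    return flagged

before = [[10, 10, 10],
          [10, 10, 10]]
after  = [[10, 80, 10],   # one heavily changed cell: possible damage
          [12, 10, 10]]   # small change: below threshold, ignored

print(changed_cells(before, after, threshold=20))  # → [(0, 1)]
```

In a real humanitarian-response setting, the flagged cells would correspond to buildings or blocks, and the output would feed a prioritization queue rather than a print statement.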
Yeah, amazing.
All right, to our Zoom audience,
we're going to you next, Nico.
Nico Dranik, where are you on the planet?
And what is your question?
Hi, Peter.
I'm calling in from Austria, from Europe.
Beautiful.
And I have a question regarding the profile of a chief AI officer.
So what profile should this AI officer have to best fulfill the role?
Should it be technical?
Should it be another profession?
Should it be a mix?
Because it seems to me that combines a lot of traits
that are found in entrepreneurs and CEOs anyway.
So what else is needed?
And I ask this specifically coming from a non-tech traditional background
as well in my case.
Thank you.
Nico, great question.
Let me preface it for a second.
My advice as a first step for folks:
I like the idea of getting your team to experiment and so forth.
But what's been working for me is finding somebody
who knows the field out there, understands who to partner with,
and advises the CEO strategically
about the platforms and such, as a chief AI officer in that regard.
Do you guys agree with that idea?
And what would be your, you know, what do you think that person should have as background?
So actually, as far as I know, I think I might have been the one that coined the term chief
AI officer.
I stole it from you then.
No, but so actually, years ago I wrote an article in Harvard Business Review.
I remember Googling it; chief AI officer didn't really exist on the internet yet.
So I actually wrote a piece in Harvard Business Review with my specific recommendations to the chief AI officer
profile. I think a person needs to be technical enough to understand the tech, and then also
business-oriented enough to work cross-functionally to figure out what are the valuable business use
cases for your specific application. But I have a multi-page article on HBR.
I would imagine the CEO or the head of whatever division says,
this is what I think is possible.
This is what I need.
The chief AI officer is sort of an interface to the world out there,
to bring in a Scale, to bring in part of what,
Emad, you're building.
Does that sound like a reasonable role in a company?
Yeah, I think it would be technical enough to make good judgments
and then also to make the, you know,
buy versus build decisions.
And often for many companies,
you should buy a lot and build a little bit.
But to make those decisions
requires both deep technical judgment
as well as the ability to figure out,
understand the business well enough
to figure out the use cases.
Emad, I can't tell if you're agreeing or disagreeing.
No, I think so.
I think that there's an implementation
side of it, like an applied engineer is a
good thing. I think passion is kind of
key, because you have to throw yourself, otherwise
there's no way you can keep up.
Actually, my suggestion would be that
given this suggestion that you have,
there should be an internal A360
repository around the ecosystem,
the latest trends and others, that can go
to all the chief
AI officers here.
So they can keep in touch through that.
All right, Steve, you heard it.
You're in charge.
Wonderful.
Thank you.
I hope, Nico, that answered your question.
Let me go to Slido next.
And thank you.
Some great questions.
Katie, I apologize, I didn't note your name earlier.
What is the first step companies can take to evaluate the
data they have? How can we get started extracting it from the systems we have in place today?
Yeah, so I think the first thing to know right now is that the new large models are quite capable
of just sort of ingesting existing data you have, in basically whatever format you have.
In some cases it can be literally as simple as uploading a bunch of PDFs into the
models and extracting all the information from there. By the way, it's something you just said
which is very important, right?
It's like you can almost upload anything.
If it's text.
Right? Your Outlook files, everything.
Well, soon images as well.
Soon images, yeah.
Text and images, yeah.
One point to cover, it turns out a lot of enterprises
have what's called structured data,
which means basically giant Excel spreadsheets.
So structured data, you know, models like ChatGPT and so on are less directly well suited for,
but text and images is getting really good.
That's true.
Yes.
Right now, general models are still worse at Excel than many of us.
But yeah, so these are the data formats.
You know, I would basically start the initial process,
just catalog all this data
that you would want to upload
into an AI system
and then reach out to one of us
to help you put it into an AI system.
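The cataloging step Alexandr suggests can start very small: walk your storage and tally what you have by format, so you know what is text-like (easy for current large models to ingest) versus structured spreadsheet data (which, as Andrew notes, needs different handling). A minimal sketch, where the extension groupings and the directory path are placeholders you would adapt:

```python
# Minimal data-catalog sketch: group files under a directory by extension,
# splitting them into text-like data (easy for current LLMs to ingest),
# structured data (spreadsheets, which need different handling), and other.
from collections import Counter
from pathlib import Path

TEXT_LIKE = {".txt", ".md", ".pdf", ".docx", ".html"}
STRUCTURED = {".csv", ".xlsx", ".parquet"}

def catalog(root: str) -> dict:
    """Count files under root by category based on file extension."""
    counts = Counter(p.suffix.lower() for p in Path(root).rglob("*") if p.is_file())
    return {
        "text_like": sum(n for ext, n in counts.items() if ext in TEXT_LIKE),
        "structured": sum(n for ext, n in counts.items() if ext in STRUCTURED),
        "other": sum(n for ext, n in counts.items()
                     if ext not in TEXT_LIKE | STRUCTURED),
    }
```

You would call something like `catalog("/path/to/company/data")` (a placeholder path) to get a first inventory before deciding what to upload into an AI system or hand to a vendor.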
I want to go to Christy's question
on Slido here.
I'm heading this towards you, Emad.
If you had to put all of your money,
your family's money, your friend's
money, into one of the technologies
here or industries, which one
would you put it into?
Cha-ching!
I'm all about risk diversification. No, I mean,
it's an incredibly hard thing to say.
My thing is, what I advise all the young people,
actually, is: don't go to university anymore, don't do PhDs.
Yeah, and don't get an MBA.
Go work.
The thing can pass the GRE already, right?
So I think, you know, put it into Andrew's fund.
I think it'll do well.
But there are very few kind of options here,
apart from you have to use it to basically upskill yourself.
This is the thing: upskill yourself, or upskill your community. Invest in yourself, invest in your company. This is literally the case, because this
is such a disruptive, massive game changer, and there are no easy ways to do that other than
actually spending money reasonably quickly to get up to speed on this technology, because you
will have an edge over everyone else. As I said, it's first to market, even in the stock market itself.
There are no real investable things here at the moment.
Are we going to see the first trillionaires in this area?
Is this most likely where the greatest wealth creation is going to happen, or do you see that in Bitcoin first?
Yeah, I think we'll see this. I think, again, Bitcoin had the GPU era and then the ASIC era.
We're at the GPU era of this technology,
but it's real, and we're going to get to ASIC,
so it's going to be everywhere,
and it's going to create wealth faster
than anything we've ever seen.
I call it the dot-AI bubble, actually,
and I'm not joking.
Like, you know, if you have funds that are focused on this,
it's almost a rising tide where it lifts all boats.
You know, even if you have the alpha side,
that will come through.
Like, a trillion dollars will go into this. Like, $20 billion went into delivery startups last year. I think probably only $6 billion has gone into this sector to date. A hundred billion is going into self-driving cars; a trillion went into 5G. Yep.
Just amazing. Is there anything here that's investable?
You know, since I'm next, I want to add one thing to what Emad said about jumping in. I think sometimes people think, am I too late? And the answer is: you're not. You're actually still very early.
AI, you know, I don't know that it's exponential, but it's in some very rapid growth. But if you jump in now, I think you'll look back a few years from now, and people will say, wow, you know, my buddy over there was really early jumping in, because it's still growing so rapidly.
It's Bitcoin. It's literally $6 billion.
It's going to go to 600 billion in the next few years.
100 times increase, right?
This episode is brought to you by Levels.
One of the most important things that I do to try and maintain my peak vitality and longevity
is to monitor my blood glucose.
More importantly, the foods that I eat and how they spike the glucose levels in my blood.
Now, glucose is the fuel that
powers your brain. It's really important. High prolonged levels of glucose, what's called
hyperglycemia, leads to everything from heart disease to Alzheimer's to sexual dysfunction to
diabetes and it's not good. The challenge is all of us are different. All of us respond to different
foods in different ways. Like for me, if I eat bananas, it spikes my blood glucose. If I eat grapes, it doesn't. If I eat bread by itself,
I get this prolonged spike in my blood glucose levels. But if I dip that bread in olive oil,
it blunts it. And these are things that I've learned from wearing a continuous glucose monitor and using the Levels app. So Levels is a company that helps you analyze what's going on in your body. It's continuous monitoring, 24/7. I wear it all the time. It really helps me to stay on top of the food I eat, remain conscious of the food that I eat, and to understand which foods affect me based upon my physiology and my genetics.
You know, on this podcast, I only recommend products and services that I use,
that I use not only for myself, but my friends and my family,
that I think are high quality and safe and really impact a person's life.
So check it out, levels.link/peter.
They'll give you two additional months of membership, and it's something that I
think everyone should be doing. Eventually this stuff is going to be in your body, on your body,
part of our future of medicine today. It's a product that I think I'm going to be using for
the years ahead, and I hope you'll consider it as well.
I've been sitting here really blown away by all the great technology, and I'm in the business of real estate, which is, I'm not playing in that arena, but I'm here to find out what I need to do. But what actually brought me up to the...
How many folks are in real estate here? Raise your hand, please. Just to get a sense. So you're not alone. Okay, thank you.
But my main question really comes back to the sort of, what do we do with all these great things,
and how do we hack world peace around the world?
And how do we understand, with the data, the needs of Russia, China, Iran, North Korea:
what is their problem, what are their challenges,
what is the stuff that makes them want to blow everything up,
and what is our problem,
and how do we look at ourselves as citizens of planet Earth,
and how do we work with the geniuses we have in front of us
to really be able to bring this about?
Because there are a lot of not-cool things here around the world.
All right, we got your question. How do we save humanity?
That's an easy one, right? Thank you. Seriously. I mean, Emad, please dive in, because this is something you care deeply about.
These are universal translators. Go to ChatGPT, paste something,
and say, write it from the perspective
of a Tea Party conservative.
It'll do that.
And then you'll say, rewrite it
from the perspective of a libertarian,
and it'll do that.
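In API terms, the trick Emad describes here is just a prompt template. A minimal sketch, assuming an OpenAI-style chat message format; the exact prompt wording is my own, not something from the panel:

```python
def perspective_prompt(text: str, perspective: str) -> list:
    """Build a chat-style message list asking a model to rewrite `text`
    from a requested point of view (e.g. 'a Tea Party conservative',
    'a libertarian'). The result could be sent to any chat-completion API."""
    return [
        {"role": "system",
         "content": "You faithfully rewrite text from a requested point of view."},
        {"role": "user",
         "content": f"Rewrite the following from the perspective of {perspective}:\n\n{text}"},
    ]

# Swapping only the `perspective` argument re-renders the same text for a
# different audience, which is the 'universal translator' idea.
msgs = perspective_prompt("Taxes should fund public transit.", "a libertarian")
```

Running the same text through several perspectives, as described above, is what lets an assistant flag where your own framing is one-sided.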
So you're going to have the ability very soon.
We're all going to have some version of Jarvis, right?
An AI that is in your ear, on your body, and so forth.
And that AI will be able to tell you,
are you being biased in this conversation?
Because it can look at the conversation from other points of view, right? I think one of the
greatest drivers for peace on the planet is making sure every mother has their children with the best
health and the best education. If you are educated, if you're healthy, if you have opportunity,
you're not going to want to throw your life away. A more peaceful world for us is a world in which you uplift everybody,
which is what this technology has the ability to do.
Thank you.
I think there's one more thing as well.
It comes to trusted third party.
When you have a disagreement with someone and you have someone trusted between you,
then a lot of this stuff that can speak both your languages, as it were,
understands both your contexts.
So again, this is the hope now that we have this universal translator, not for language, but for stories, context, and other things. We just got to build it right and make sure it's distributed, as opposed to centralized and controlled with one specific worldview.
I'm going to ask a question and jump in here. OpenAI's approach right now... you know, Elon tweeted the other day that, I guess, OpenAI dismissed, or Microsoft dismissed, their ethics committee or something like that.
Is there a general sense in the community of concern around that, or is that not a valid
concern?
Thoughts?
Not tweeting this out?
Well, I think one thing, I'll just speak for everyone.
I think we all have to admire what OpenAI has done.
You know, the work in large models
has really been driven by OpenAI.
And kind of as I mentioned before,
I shouldn't say that all of AI was on the wrong track,
but certainly we were not as focused on large models and the incredible capabilities that we get from these
large models if not for open ai and you know i think so far they've been very responsible in
their deployment of the technology i think they've been very thoughtful they've they've
invested a lot into research to to make sure that these models are deployed safely.
And I think it's a hard thing for any organization to do to build such incredible capability that has never existed before and try to deploy it safely.
I feel like the easy answer would be to say, oh, we need to be super concerned about ethics and it's all going poorly.
I think we do need to be really concerned about ethics and responsible AI. And to be really pragmatic, I think when Emad released his model, which is fantastic, love what he did, Emad got a lot of flak: how could you release this thing like that? But net-net, I'm sure some of the models were used for negative use cases, but I think Emad created massively more value than harmful use cases. So I think that we should be very clear-eyed about the problems and the harm
and do our best to mitigate that. And then also be clear-eyed about the huge benefits,
you know, that releasing these models and all the wealth and abundance that this also creates. And
that trade-off is a very tricky one. And I do see many highly ethical, well-meaning AI teams
agonizing over that.
Yeah, I think I agree with kind of both. And, you know, as Alexandr said, it's amazing technology, and it's been really a breakthrough that allows, you know, humans to scale, and they've been at the core of it. Like, I have disagreements with OpenAI. My core one is just a case of: it's now got to the point where this technology could potentially be dangerous. As we scale more, we don't know exactly how it works. And so I'd love for them to be responsible, to be more open: transparent governance, transparent kind of procedural stuff. And, again, really adhere to a very strong charter. They don't have to release their models. You can have proprietary models. Like, I funded the beta of Midjourney, and if you look at Midjourney version 5, it's amazing. I said you never have to open-source it, because it's great for humanity to be creative. So not everything needs to be open, but we do need to be transparent and careful when stuff gets out of our control. In their own AGI, artificial general intelligence, document, they say: we believe this technology could be existential, and we will treat it as such. I was surprised when I read that as well. So in that case, you should be transparent, and you should be really transparent about your governance, more than anyone, similar to companies that can affect the environment.
Hey everybody, this is Peter.
A quick break from the episode.
I'm a firm believer
that science and technology
and how entrepreneurs
can change the world
is the only
real news out there worth consuming. I don't watch the crisis news network I call CNN or Fox and hear
every devastating piece of news on the planet. I spend my time training my neural net the way I
see the world by looking at the incredible breakthroughs in science and technology,
how entrepreneurs are solving the world's grand challenges,
what the breakthroughs are in longevity,
how exponential technologies are transforming our world.
So twice a week, I put out a blog.
One blog is looking at the future of longevity, age reversal, biotech,
increasing your health span.
The other blog looks at exponential technologies: AI, 3D printing, synthetic biology, AR, VR, blockchain. These technologies are transforming what you as an entrepreneur can do. If this is the kind of news you want to learn about and shape your neural nets with, go to diamandis.com/blog and learn more. Now back to the episode. All right, we're going to go to Zoom next.
Charles, where are you on the planet?
And what is your question?
So I'm in Palm Beach, Florida.
I welcome all of you to come down and visit.
Most of us from New York have already left the city
and we're reestablishing ourselves here.
This is going to be very specific
because Andrew, you asked us to be specific.
So this is actually addressed to you and perhaps to Peter as well if he wants to chime in.
But how do you use AI or ChatGPT to recommend an investment for your venture fund?
You have a lot of inputs, a lot of variables.
In my business of investing, there are too many variables to do it.
So if this is a thumbs up, thumbs down, are we going to invest in this company?
How do you do it using AI?
Thank you.
Great question.
So I think ChatGPT is amazing.
I play with it, building multiple businesses using it.
I have to admit, I'm not letting ChatGPT tell me where to invest at this moment in time.
Maybe it gets much smarter. Well, I won't mind. Maybe it'll replace me and I'll retire, but it's not there yet.
But maybe the reality is that making an investment decision or a decision to build a business is so complex and so multifaceted.
I don't know if we have the mechanisms to even digitize all of that data right now.
And even if we had the ability to digitize all that data, I think that, while very exciting progress is being made to improve the reasoning capabilities of large language models, it'll still take a while to get to the very complex multi-step reasoning that I think investors are using to make decisions.
By the way, I remember one of the founders of Google Ventures, who is a friend, he's not there anymore, told me that Google Ventures actually had built algorithms for making decisions on whom they invest in, based upon where the company was based, what year it was established, and a whole bunch of other parameters.
I don't think they're doing that anymore,
but it's an interesting idea.
But with that, let me go to Madhu,
who's going to have a brilliant question for us.
Madhu.
All right.
Speaking of positive use cases,
I'm going to play out a scenario,
and I want to hear from all three of you if that's possible.
I'm a large health system.
I've done the sick care for all of our existence.
And I'm trying to build a chatbot that supports our nursing staff or our triaging staff. And I have all this proprietary data.
The specific question is, what foundation models should we start with?
If my team's working on this project, what foundation models should we start with?
You mind?
You want to start?
Yeah, you'll be able to use StableLM and Stable Chat soon.
Right now, probably it's going to be GPT-NeoX on that side of things.
And those would be the open models that you can use inside your cloud or on-prem.
For the proprietary ones, I'm not sure. Is Claude available yet?
Privately available.
Privately available, yeah. So basically the next generation of language models aren't coming via APIs just yet.
So right now, the people that we know in the healthcare industry, they use
GPT-NeoX and Flan-T5, primarily.
But there are no real services around that. Again, we're still in the research
to engineering phase, but in the next couple of months, you'll have services that enable you to do this
far more seamlessly than before,
including the Microsoft GPT-4 chat service as well
that you can fine-tune on.
Yeah, LLaMA as well.
I'd mention LLaMA, although...
It's non-commercial, that's the problem.
Right.
But internal...
Well, regardless, I think one thing that's critical
that Emad is sort of implicitly mentioning is that
on health data, you're very likely going to need a model that you can control and own,
because there's a lot of PHI and PII that's going to go into these models.
So because of that, I think these sort of offerings where you have to give data to
another service are probably not workable.
So I think you probably need to take one of these open source models
of which Emad named a bunch, LLaMA's a new one from Meta,
and a host of others, and train them up.
While choosing the right language model is important,
I feel like one of the things that AI Fund
would help recommend to different companies
is what language model to start building on top of.
So there are trade-offs between model size and cost,
and do you want to fine-tune, or do you do prompting? You consider all these things.
But I feel like, while the language model is important,
my mind also goes to a lot of the decisions you need to make
after choosing a language model, and the processes to execute those well.
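The panel's advice here, keep PHI and PII on infrastructure you control, and reach for the open models they name when data can't leave your environment, can be condensed into a toy decision helper. This is purely illustrative; the function and its rules are my own sketch of the panel's reasoning, not anything the panelists shipped:

```python
def candidate_models(contains_phi: bool, commercial_use: bool) -> list:
    """Toy sketch of the panel's reasoning about picking a foundation model
    for a health-care chatbot. Not legal or compliance guidance."""
    if contains_phi:
        # PHI/PII should stay in your own cloud or on-prem, so prefer
        # open models you can control and own.
        models = ["GPT-NeoX", "Flan-T5"]
        if not commercial_use:
            # LLaMA's license was non-commercial at the time of this panel.
            models.append("LLaMA")
        return models
    # Without PHI in the loop, hosted fine-tunable APIs become an option too.
    return ["GPT-NeoX", "Flan-T5", "hosted chat API"]
```

So a hospital building a commercial triage chatbot on patient data would, by this sketch, land on the open models it can run privately, which is exactly where the discussion above ends up.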
All right, we're going to go to my deputy son over there.
Hi, Bear.
Hey, Peter.
What's your question, buddy?
So I'm a kid, so I still go to school.
And in school,
a lot of kids have been cheating, or "cheating," and they've
or cheating, and they've
been using ChatGPT to write
essays and book reviews
and do their math.
And I guess my
question is,
one, how do you think schools should evolve to deal with that? Or do you think that it's really cheating if it's a tool that we're
going to be able to use later on? Brilliant question. Let's give it up for Bear here.
You know, to me, that's a big question of transparency. It's a great question, thank you for that.
I feel like if you use chat GPT and you're willing to openly tell the teacher, I'm using
it and the teacher says you can use it, that's fully transparent.
That doesn't feel like cheating.
To me, the thing that is concerning is if students do it, but the rules require, or lead, them to hide it. That kind of "I'm going to do it, but I can't tell you about it" is what doesn't feel very good. And these schools are struggling. The debate is: people will be able to use these tools in the future, so do we train them to use them, or is it a crutch that will limit a person's growth?
I have a controversial opinion. I have a four-year-old daughter, and I'm trying to teach her math. Because she's four years old, I could stop her from using a calculator; I just don't buy her a calculator, so she's learning to add with her own brain. One of the challenges of ChatGPT is, you know, most students using it are at an age where parents and teachers don't really have the ability to easily stop its use. And so it does concern me that it could be a crutch that stops people from learning certain things. Because I use calculators, but I feel like maybe learning to add without a calculator is useful.
So figuring out what we need to do with ChatGPT in the educational system is something I see a lot of schools struggling with.
I wish I had an answer.
Cheating implies it's a contest.
Schools should not be a contest.
Got it.
So I think most schools are basically childcare systems mixed with status games.
I think schools should embrace this technology
and they should really think, how can we
impart knowledge into individuals?
How can we impart critical thinking? Because, again,
you will never be without this AI as you grow up.
And you have to prepare for that.
Awesome.
We have three minutes left.
I'm going to do the following.
I'm going to ask you to mention your questions very briefly, and we'll do a speed round.
Karthik, what's your question in 20 seconds or less?
Thanks, Peter.
And gentlemen, thank you so much for your brilliant insights.
Quick, short.
What is your question?
My question is about content creation, especially in music and arts.
Okay. Ellen, what's your question? Very short.
Privacy and AI in healthcare information.
Okay. Ben, what's your question?
Should my 15-year-old continue taking Python, and should he go to college?
Okay.
So we've got privacy.
We've got, say again, Karthik?
Creation of music and arts.
Creation of music and arts.
Okay.
Take your question, answer it what you want.
Yes to Python, yes to college.
Yes to Python, yes to college.
Okay.
Creation of music and art.
Music has had its Stable Diffusion moment,
AudioLM and others.
You will have perfect music models by the end of the year.
Ellen's question again.
Privacy in terms of healthcare.
All right.
I think this is a really big topic.
Privacy of data when it comes to AI systems.
I think this is probably going to be the biggest regulatory battle when it comes to AI: what data is allowed to be used to train these models, and what consent do you need to get over people's data to be able to train models on top of it?
I think it's a really big societal topic that we're all going to wrestle with over the next
few years.
And I encourage everyone to get involved if you have an opinion.
I think also AI for healthcare has to be auditable and interpretable as a medical device, typically.
So it's going to have to be based on open foundations for that particular use case.
Nice.
All right, Harry and Babs, what are your last two questions here?
Real quick.
ChatGPT asks, what measures can be taken to counter the misuse of AI by countries like China?
Okay, we're going to talk about China in our next session.
So hold your question for that.
Babs, close us out.
I was going to ask about the evil empire using our AI against us. We're talking about ethics and all that kind of stuff.
They're not into ethics.
Okay, we're going to be talking about ethics and empathy and U.S. versus China in our very next session.
I'm going to try and keep us on time.
Let's give it up for these
incredible individuals.