Ten Percent Happier with Dan Harris - The Dharma of Artificial Intelligence (AI) | Jasmine Wang & Iain S. Thomas
Episode Date: August 2, 2023

Our guests today trained an AI on the world's most beloved texts, from the Bible to the Koran to the words of Marcus Aurelius, Maya Angelou, and Leonard Cohen. Then, they asked the AI life's hardest questions. The AI's answers ranged from strange to surprising to transcendent. Jasmine Wang, a technologist, and Iain S. Thomas, a poet, join us to talk about not only the answers they received from the robot, but also why they are deeply concerned about where AI might be headed.

In this episode we talk about:
- The origins of the book
- The definitions of some basic AI terminology
- The biggest takeaways of their conversation with AI: some of the answers they got back were fascinating and beautiful
- The perils and promise of AI (we spend a lot of time here)
- The ways in which AI may force us to rethink fundamental aspects of our own nature
- And what we all can do to increase the odds that our AI future is more positive than not

For tickets to TPH's live event in Boston on September 7: https://thewilbur.com/armory/artist/dan-harris/

Full Shownotes: https://www.tenpercent.com/tph/podcast-episode/jasmine-wang-and-iain-s-thomas

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Transcript
This is the 10% Happier Podcast.
I'm Dan Harris.
Hello, everybody.
Like many of you, I have been following the discussions around artificial intelligence with great interest and no small amount of anxiety.
What is this technology about to do to our lives?
Is everything really about to change? And if so, how? Are we vectoring towards utopia or Armageddon?
My team and I have been wrestling with how and when and whether to address this stuff on the show.
We were trying to figure out what, if anything, we might have to add, and then our senior producer DJ Cashmere came up with a great starting point. He found a book in which a poet and a technologist teamed up to feed some of humanity's most cherished works, like the Bible, the Tao Te Ching, the Koran, the poetry of Maya Angelou, and much, much more.
They fed all of it into the world's most advanced AI, GPT-3, and then they asked GPT-3 some
big questions about the meaning of life and how to do life better, and they described
the resulting book, which is called What Makes Us Human?: An Artificial Intelligence Answers Life's Biggest Questions.
They described this book as a conversation with a chorus of our ancestors,
a chat with a compendium of human wisdom.
And I should say they did all of this work before ChatGPT was released a few months ago
and then blew everybody's mind and got everybody really talking about AI.
The authors in question here are Jasmine Wang, a technologist and philosopher
who has worked with the Partnership on AI and also OpenAI, proprietors of the aforementioned
ChatGPT, and Iain S. Thomas, one of the world's most popular poets. In this conversation, we talked
about the origins of the book, the definitions of some basic AI terminology for beginners,
the biggest takeaways of their conversation with the AI.
Some of the answers they got back about, you know, how to do life better, what it all means.
Some of those answers were fascinating and beautiful. The perils and promise of AI, we spend a lot
of time on that. The ways in which AI may force us to rethink fundamental aspects of our own nature
and our own job on the planet. And what we can all do to increase the odds that our AI future is more positive than not.
One last thing, there are some stray background noises at moments in this recording.
That's the nature of remote recording.
Have you been considering starting or restarting your meditation practice?
Well, in the words of highway billboards across America,
if you're looking for a sign, this is it.
To help you get started, we're offering subscriptions at a 40% discount
until September 3rd. Of course, nothing is permanent.
So get this deal before it ends by going to tenpercent.com slash 40. That's ten percent, one word, all spelled out, dot com slash 40, for 40 percent off your subscription.
Jasmine Wang and Iain S. Thomas, welcome to the show.
Thank you so much for having us.
Thank you for having us.
Really excited to be here.
I'm excited to have you here.
Let's just start with some background information.
How and why did this project come about?
Sure.
I had been working with Jasmine for a little while at a startup that she had called
Copysmith.ai, which was effectively one of the first AI platforms to automate the
act of copywriting. I had a background in copywriting; I grew up in the creative
industry, I was a copywriter. And I was just completely fascinated by this technology. Around the same time, during
the pandemic.
My mother passed away and I had a similar experience that I think a lot of people had during
the pandemic where I couldn't be with her.
You know, I couldn't be by her side.
She was in a different city to me.
We couldn't travel.
And it was quite a traumatic experience for me.
She was the last grandparent my kids had.
And I was left kind of afterwards with just this sense of
wondering and loss and this inability to explain to my kids what had happened.
One night I was playing around with GPT-3, this technology, which, you know,
Jasmine had introduced me to.
And I realized that if I could train the AI on headlines for ads or TV scripts or whatever,
maybe I could train it on other things.
I took a whole bunch of different spiritual texts that would have appealed to someone like my mom,
who was very spiritual.
So text from the Bible, the Tao Te Ching, the poetry of Rumi, some
Leonard Cohen lyrics, and I said to it, you know, how do I explain death to my children? And it came
back with this incredibly poetic response. And then I asked another question and another question,
and then I went to Jasmine the next day and I said like, there's something really interesting here.
We spoke about it some more, the project evolved. I got my
literary agents involved. In November of 2022, the book came out: What Makes Us Human.
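Ian describes the workflow only loosely here: gather passages from spiritual texts, give them to GPT-3, and ask it a question. As a purely illustrative sketch (not the authors' actual pipeline, which isn't detailed in the episode), here is roughly what that looks like with a GPT-3-era OpenAI completions call in Python; the excerpts, model name, and prompt wording are assumptions, and Ian's "train it on other things" could equally mean fine-tuning rather than the simple prompting shown here.

```python
# Illustrative sketch only: prepend excerpts from spiritual texts to a question
# and send it to a GPT-3-era model via the legacy OpenAI completions API
# (openai Python library < 1.0).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical excerpts; the real project drew on the Bible, the Tao Te Ching,
# Rumi, Leonard Cohen, and many other sources.
excerpts = [
    "Do not be afraid, for I am with you.",
    "The Tao that can be told is not the eternal Tao.",
]

question = "How do I explain death to my children?"

prompt = (
    "The following are passages from humanity's spiritual and poetic texts:\n"
    + "\n".join("- " + e for e in excerpts)
    + "\n\nDrawing on their wisdom, answer this question:\n"
    + question
    + "\nAnswer:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-era completion model
    prompt=prompt,
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```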
What's your version of the story, Jasmine? First of all, Ian did the conception of the project; he conceived it himself and invited me. I was very honored to be invited. I've been thinking
a long time about sort of the cultural and sociological context of AI in the valley. I think
there is this orientation that is definitely apocalyptic, but in some circles it's also
a little bit religious, and I don't use that term lightly. I mean, very literally if you
search for AI religion, there are people who have tried to make that a thing.
I was very worried when Ian initially proposed this project to me.
I think religion is a very laden topic that I have so much respect for
and don't want to treat lightly.
I feel like a lot of people in San Francisco, where a lot of the technology that drives this book, that makes it possible, was conceived and built and funded, are quite new age, and not to rag on them at all, but there's just a lot of reinvention happening all around us
that I didn't want to necessarily endorse or center,
especially in light of the fact that we were pulling from very old
texts that are very, very important to millions of people.
So my questions around this came less from the viewpoint of someone who's trying to interrogate religion, and more from someone who is very interested in the interrogation of artificial intelligence.
What is the political economy that this technology introduces?
What are the possibilities for material redistribution, economic empowerment, a new intellectual era,
which I'm now working on with my company, etc., etc.?
And how could we, as even a thought experiment,
orient towards a technology that typically induces
a lot of feelings of helplessness and fear
for very good reasons?
But how can we orient towards this technology
with awe and respect?
And with the understanding that it has something to teach us,
not only about how functional it is or how performant it is,
but in fact that it tells us something about ourselves.
I think throughout history, humans have grasped for metaphors that have helped them self-conceptualize in a more accurate way.
And you can see in literature that we've always compared ourselves
against the most complex of objects that we have created ourselves. At some point
it was a pot, at some point it was a clock, at some point it was a car or a robot
or a machine. At some point it was a calculator, and now it's a computer.
Or rather a decade ago, I think it was a computer, and now AI has emerged as the metaphor.
And every time we introduce a metaphor... I'm personally interested, in my poetic practice, less in where a metaphor makes a leap and more in the dissimilarities that a metaphor presents. I still use the metaphor, but frequently I'm more interested in where the two entities are disanalogous.
So I'm really interested in the title of this book, What Makes Us Human. Do you think the book got you closer to an answer
there? I think so, but one thing that we've talked about a lot is sort of the
centaur or the cyborg complex. And what is a centaur or cyborg? It's when a human integrates so
deeply with the technology that it becomes an extension of themselves. Heidegger, who was a famous
philosopher who also philosophized a bit about technology in his famous essay, The Question Concerning Technology, thought that the technology at hand, the technology someone uses,
changes one's phenomenology of the world.
Yeah, this idea of the hammer man,
which is different from a man with a hammer.
It's literally like, the hammer man
is a new kind of entity who we've not encountered before,
who literally encounters a world in a different
way.
And actually, modern neuroscience backs this up.
Folks have done studies on monkeys who have wooden pillars strapped onto their arms, basically extending them, and they're reaching through a cage looking for some food.
And they found that the same neurons fire when the tip of that pillar touches the food as when their hand grasps
the food.
Heidegger also thinks that technology reveals itself through usage.
I want to extend that metaphor and say that technology also reveals more of the person who is using it. And I think very much Ian and I experienced this while writing this book: it showed us who we were, not just the questions we were asking, but the questions we were expecting, when we were surprised, moved, touched, inspired by this other-than-human entity. And I hope that readers feel that,
this genuine quest, not for something necessarily epistemically
sound in and of itself.
GPT is not truth-seeking.
That's actually one of its big limitations, technically, is how it hallucinates a lot
of facts.
But rather that there are unexpected pockets of resonance and even effervescence that
I think if the reader approaches this text with a degree of openness
to be transformed and moved by what is contained within, that there is a real possibility there.
And I think for me at least that is the highest standard of any work.
So, Ian, was the point to move the reader? You have said this isn't a self-help book.
So if it's not a self-help book, what is the point?
I think the point has kind of changed as we've moved along. Me and Jasmine started working
on this book about two years ago, somewhere around there. And I think initially it was
to provoke a conversation around AI, because we were doing this work around GPT-3, and we were doing the spiritual
work in respect of the book, and the world wasn't having the global conversation around AI that it's having now. We're kind of in this post-ChatGPT moment where the dominant conversation around technology
and culture is around AI.
I kind of wanted to create something that was a flag
and go, there's this thing, look at what we've done. We've kind of, you know, taken all these
different spiritual texts and found a way to talk to all of them and isn't that amazing?
And what comes after that? What comes next? Because I think that like, the moment that we're in
is this incredible moment where I think we're about
to see the birth of a thousand different new media types.
I think the way that we watch films
is gonna change fundamentally.
I think the way that we read books,
I think the way that we listen to music
is gonna change fundamentally.
A short time ago, Paul McCartney said that he was gonna release the last Beatles song, you know, re-created by AI.
And I think that we're on the cusp of this incredible creative revolution that has profound cultural
implications and I think that was part of the reason for writing the book and I think that like right now
You can kind of divide the world into two kinds of camps. There is one camp that is very obsessed with how can
we replace humans with this technology, which to me, I think is the most boring use of
one of the most exciting technologies ever created. And another camp, which I count myself
a member of, which is this technology is going to allow us to do things that we've never been able to do before, to create completely
new industries, new ideas, new kinds of art, new kinds of engaging with everything I've mentioned
and spirituality itself. And so I think that's what I was trying to do. I don't know if I have
the credibility to write a self-help book. I was just trying to create something that kind of
pointed towards something else, I think. So the point isn't to replace the wisdom we can get
from reading ancient texts. The point of this project is to give us a glimpse of what is possible
with this extraordinary technological development. Yes, I think so. I think it's a way to go, look, right now you can
use this technology, you can read Shakespeare and then have a conversation with Shakespeare if you
want to. I think in probably a year or two, you'll be able to listen to an audio book and then
have a conversation with the narrator. I think that there's all these different things that are about
to happen. And, you know, particularly around spirituality,
it's really interesting. I was on Twitch the other night watching these kids who had put together an AI Jesus, asking it questions and engaging with it, trying to get it to say crazy things, you know, which it did. The other day, on an artificial intelligence subreddit, a young man took the texts from his mother and put them into ChatGPT and used them to recreate her personality, because she had passed away, and then tried to say goodbye to her over text.
That had profound spiritual implications.
And so I think it's about investigating all of these things and asking these questions,
what does the world look like from a cultural, spiritual, creative point of view as we move forward?
You say the book isn't a self-help book, but it's very interesting. There's a lot to be learned, I think, from the answers you got after training this AI on these
ancient texts. As you've described it, it was like kind of interviewing a chorus of wise people.
So I want to dive into that, but let's just do something on the definitional tip first here with
Jasmine, who's the person in this conversation who knows the most about technology.
Nowadays everybody's talking about AI, but that's a pretty nonspecific term to the extent
that I understand it.
So maybe you could define what's AI, artificial intelligence, what's artificial general intelligence,
what's a large language model, what's GPT-3, what's ChatGPT?
Can you just give us a quick glossary here?
Okay, I might need to write these down. That was a lot of definitions. So, happy to do that.
Artificial intelligence has been united as a field not by methodology but by a goal. And that's
very interesting, because, for example, people don't usually think of economics as having the goal of creating a perfect economic system. Same thing with sociology, same thing with history. You're usually studying something that already exists, or you're united around a methodology, whereas AI is similar to some engineering disciplines, like, let's say, civil engineering, where
you are thinking about how to build a bridge. AI is a field that is built with an aim.
And the aim of AI is to simulate human intelligence
for the purpose of doing certain tasks.
Artificial general intelligence is a term that I think
has only come into public consciousness recently.
And to use OpenAI's definition, which is a very functional one,
their definition is, AGI is any AI that outperforms or matches human capabilities across all
economically relevant tasks. And when you hear that, I think you should probably feel a
sinking feeling in your chest. Obviously, we do things outside of economic activities that are
really beautiful. We take care of our children, we water plants, like we do all this kind of stuff.
But it should be worrying for the average person that these companies are working on AI that will replace people economically. That's their entire goal, that they have in fact
marshalled billions of dollars towards this goal. And the most important nation states and
the largest big tech companies who have much more money
at their disposal, in the trillions when you look at nation states, are also interested.
This should be very, very concerning.
And I think it is very important that we think about the material redistribution of those
benefits.
But I digress, I'm happy to talk about political economy.
That's the definition of AGI.
Now, let's talk about large language models.
So large language models are one part of many companies' road maps towards AGI at present. They have been hugely successful; over the last year,
they're the models that have taken over the news.
The only real difference between GPT-2, GPT-3, and GPT-4 (and GPT-3 is the one that got all the headlines), materially, in my opinion, is that each is trained on orders of magnitude more data than the last. I actually think this should be a point of pride for humans, because our data actually means something, which is pretty cool. It's not just technical improvements that improved the model.
Let me spit it back to you in complete layperson terms. AI, artificial intelligence,
I think, is the name for a sort of broad field of endeavor.
Within that, you've got this potential future state of artificial general intelligence,
which is, you know, the sci-fi version of it where it's a technological innovation that
would pass the Turing test that seems like something sentient or close to it that is smarter
than we are.
Large language model is what we have now and that everybody is excited about with chat
GPT where you've got this artificial intelligence that's trained on basically the whole internet,
everything, you know, the sum or pretty close to it of human learning and discourse.
And then the chat part of it is that we can talk to it and ask
it questions.
How's that for a very rough dummy summary?
I think that's accurate.
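For readers who want to see what "talking to it" means in practice, here is a minimal sketch of a single chat turn sent to a GPT model through OpenAI's chat completions interface (legacy openai Python library). The model name, system framing, and question below are illustrative assumptions, not anything from the episode.

```python
# Minimal sketch of one chat turn with a GPT model via the legacy
# OpenAI chat completions API (openai Python library < 1.0).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model family behind the original ChatGPT
    messages=[
        {"role": "system", "content": "Answer briefly and kindly."},
        {"role": "user", "content": "What is the most important thing in life?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```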
Okay.
So now let's get back to your project.
You trained an AI on these ancient texts and then asked it a bunch of questions.
And Ian, I've heard you say that the bottom line that you got back from this chorus of
our forebears is that the most important thing is love.
I would say that there were three things.
In the intro, I remember, I kind of sign it off by saying that ultimately it all comes down to love.
If you look for the common denominator between all these different texts, if you look at
Buddhism or Christianity or
anything like a song that moves you, there is an element of love in that that's really powerful.
The other thing that was always there was this idea of coming back to the present moment, which is, again, a kind of prevailing idea that you would find in a lot of different spiritual thinking. And then the last one that came through a lot,
which was really interesting, was this idea of connection,
this idea that we're connected to the universe around us,
that we're connected to nature,
and that we're connected to each other,
which I find really profound and really interesting
considering what AI represents.
You know, you were saying earlier on that it's this way
to interact with the sum total of knowledge on the internet.
It's a lot of knowledge. If someone somewhere digitized some cuneiform that was pressed into a clay tablet with a bamboo pen, it's in there in some way.
And so what I always find really fascinating about
this experience is that we're kind of having
this really interesting, beautiful conversation with ourselves when we interact with AI. And that's
what that chorus is. That chorus is made up of all our ancestors and all our thinking.
I'm leaving space for you, Jasmine, in case you want to jump in on that, but you don't have to.
I've been thinking about the term collective intelligence a lot,
situating the phrase collective intelligence as one that is in opposition to the values
of artificial intelligence.
To quote the philosopher Donna Haraway: she entered into philosophy when there was a big
schism that had emerged.
One camp was like, there is an objective truth. Take, for example, GDP; there are these grand arbiters of value that are objective and fair. And then there's the camp of relativists, right? Nothing matters, nothing's real, things like truth don't hold. And Haraway was like, this won't do, this doesn't make sense.
And Haraway goes forth and proposes a third way: situated knowledge.
Everyone acknowledging that no one has a God's-eye view, and that you have an entire perspective, but it is situated in your own body. As a five-foot-two Asian woman, I see the world in a particular way.
But what's beautiful about this is that I can go to Ian and say,
how do you see the world?
And we kind of like knit those perspectives together
into this beautiful mosaic.
I think the appeal of AI, the fact that it speaks as a crowd, is appealing, but it's an illusion. It's an illusion at least at a political level. OpenAI controls this data. They control this model. It's a monolithic thing in that sense, even though it is true that the data set is diverse. Its owners, and it does have owners, are not. And that is really important to consider
as you're using the model.
Something I've been thinking a lot about
is the maintenance of the digital commons.
If we think about commons-based peer production: Wikipedia, which I'm sure so many of us have benefited from throughout the years, is one of the original and still-maintained bastions of the internet's promise.
It's collaboratively edited by many, many people.
It's true that something like 80% of those editors are white and male,
but it's still really amazing that so many people
are contributing volunteer labor to this shared artifact that really embodies our collective knowledge about the world.
I would like to see more developments in that direction.
Keep in mind whenever you're using these closed AI models, even the data that you're giving it about what questions are interesting to you, your turns of phrase, your metaphors, your ways of seeing, they are not being put into contact directly with other people.
Think about forums like the now-defunct Yahoo Answers, think about Tumblr, think about Twitter: we see people's questions, real questions from real people. ChatGPT allows us only to interact with a simulation, literally, of others. And I think we should be very suspicious, cautious, and protective, because our collectivity and our collective epistemics are at stake, and they are important for democracy and deliberation.
I'm deeply, deeply distressed about the prospect of a world
where we are locked into some panopticon of surveillance that people are implementing, because they're afraid that someone will unilaterally do bad AI stuff, which is a fair concern, but I'm also concerned about value lock-in and rigidity, and losing all too easily, all too quickly, the potential
for reinvention that is so, so core to being human.
I think that's one of my new answers to this book's question, one that has only really emerged after
meditating on the impacts of AI over the last year.
That wasn't really present to me while I was writing,
but I realized only a year afterwards.
I was like, this idea of collectivity, that I oriented to the AI with awe, think of a choir singing in full voice, like, that's so beautiful. But unfortunately, it's an illusion. We must demand more of our technologies and of the structures that govern that technology.
So if the choir is an illusion, Ian, should we be taking the answers you got from
the AI with a massive grain of salt? Absolutely. I think that one of the greatest
challenges that we'll have with artificial intelligence is, for example, the inherent bias
within the data sets that we interact with,
within the knowledge. You know, AI is biased because humanity is biased, and it's based on humanity's, you know, kind of data. So I purposely included, you know, some work that seemed a little bit off. There are, I think, to my mind at least, as a very successfully published poet, large aspects of the book that are really beautiful and really profound. But then, every now and again, I would try and include something that kind of spurred
the idea that there is a collage happening here. There is something not quite perfect,
so that people kept in mind that what they're interacting with is this technology
that's interesting, that's different, but it's not perfect yet. So I think that there are interesting answers and there's interesting questions and I think that the conversation was
incredibly interesting, but I also don't want to position myself as an AI idealist where I say everything that
you get from AI is perfect because I don't think that that's true.
I think that the technology has incredible potential.
Jasmine has been speaking about the challenges around it quite a lot, which is good, because I think over the two years since this book was conceptualized and released, and with the rise of ChatGPT,
we've kind of gone from one extreme where no one was talking about AI and I was going on podcasts
and different things and talking about it and going, we need to talk about this to this other side
where there is a lot of hype, which we need to be aware of and we need to be careful of as well.
Let me just stay with you, Ian, for a second, because I'm feeling a little twitchy that we haven't really said enough yet about the actual answers you got. I feel like we've gotten ahead of ourselves a couple times here, and so I want to just dive into the answers. You mentioned before that you lost your mother, and I'm really sorry to hear that.
You then turn around and ask the AI, what do I tell my kids about this?
Could you read that section to us please?
Sure.
Yes. I've told you the story of how I got to this point where I asked this question, after trying to explain to my child, my at-that-point-in-time four-year-old daughter, what had happened, and failing. I said, how do I explain death to my children? And it responded,
encourage them to celebrate the lives of other people.
Tell them that the dead are not dead,
not in the way we think of it.
Tell them they live on as they are remembered.
Tell them every goodbye is really a hello
to a different way of being.
Tell them they are loved and will always be loved.
Tell them they never have to feel alone, never.
Tell them the world is magical and mysterious and strange.
Tell them they are part of the mystery and the magic
and the beauty of it.
Okay, so both you and Jasmine have expressed skepticism about the answers you're getting from AI,
and yet that seems like a really good answer.
It does. I think the thing is, like, we're on the other side of this post-ChatGPT moment,
and I think we're worried about the world in a very big sense.
And so we're worried about contextualizing AI
as this perfect thing that solves all problems
that always has the answers.
I see a lot of kids going onto ChatGPT
and using it as a therapist, which is interesting
and people are getting some really interesting results from that.
But Sam Altman, you know, the guy who founded OpenAI has said that you shouldn't use this
for anything incredibly important, and to me, something like therapy is incredibly important.
And so we've gotten these beautiful answers.
We've had these beautiful conversations.
I don't want to do a disservice to them, but I also want to go, we can't get swept away in the hype.
There is something fundamentally human about us that is beautiful and original and novel,
and this book is not the result of AI. This book is the result of a collaboration with a new technology.
You know, where me and Jasmine were going through whatever personal things we were doing,
asking different questions and getting them back.
And I think that that's what we want to do now
in this kind of post-ChatGPT moment
is remembering the human
and making sure that they're important as well.
Let me see if I can restate that too.
So you're not saying these answers are bullshit.
I think you believe that some of these answers you got from the AI were quite meaningful,
especially when you distill it down to these three primary headlines based on all the questions
you asked.
Yeah.
Those three headlines being love, present-moment awareness, and interconnection, which basically, I might argue, are all love. So you're not saying,
hey, no, no, this is all garbage. What you are saying is we got some really interesting answers
back. And we're worried about what the future of this technology is. So we don't want to tell you
that AI is perfect. Yeah, I mean, I think the thing for me is that right now there's two visions of the future.
The one is pure hype and idealism, and the other one is cynicism and apathy and denial.
The hype version is dangerous.
The cynicism and denial version is dangerous.
And I think that like what we're trying to be conscious of
is going, there is a middle way.
You know, one of the things that we did 20 years ago
when the internet came about is we kind of said,
there's a bunch of people in San Francisco
that we can just trust, they're very smart.
Like we can just leave the internet up to them,
it'll be great, it'll, you know, it'll all work out.
Whether we did that consciously or unconsciously,
I don't know, but that's kind of what we did.
And the internet did not turn out to be
as idealistic perhaps as all of us would have wanted it to be. You know, social media has created
a massive mental health crisis across the world. There's numerous other flaws and challenges
around that technology. That can't happen with AI. AI is too important for that to happen to it.
There needs to be nuance
and there needs to be conversation and everyone needs to be involved in that conversation
from a legislative point of view, from a cultural point of view, from an economic point of view.
And so I think that's why we're being quite measured about it. We have this incredible moment
in time and we have to seize it in the right way. Does that make sense?
Yeah. So some of the answers are amazing. Yeah. And we should be careful. Yes. Exactly.
Coming up, Jasmine Wang and Iain S. Thomas talk about the ways in which our AI future may force us to rethink fundamental aspects of our own identity, Ian's contention that both the upsides and the downsides of AI are essentially infinite, and what you can do to be part of ensuring a more positive outcome.
Now, back to some more excerpts. You asked, where do we begin? Could you read that to us?
Sure. So I asked, where do we begin? In our infancy, we begin by discovering that we are separate from our mother's body. We learn that we can move our hands and feet and that this motion is not a part of our mother's body. But in truth, there is
no true separation between yourself and others. You are already connected to everything.
Separation only exists in your mind. The mind does not know how to function without subject and object. It does not know how to function without some sort of division.
At this point in your life, you still have the tendency to see yourself as a subject
on your own, separate from the world around you.
If you do not see yourself as a subject, then the world becomes everything, and there
is no division or separation.
Everything that was me has now become everything and everyone.
This is very confusing to most people,
because this is not our natural state.
We are so used to seeing ourselves as subjects on our own,
that when we see ourselves as part of everything,
it is very difficult for us to accept this concept.
But there is no separation or division
between the self and other things.
It's all one thing.
I mean, it's incredibly profound.
And I get that you're both trying to,
and I think that's partly why we got ahead of ourselves
in some ways, because you're both so eager to make the point
that we have to be careful about this technology,
while also showing us how cool
it is.
Yeah.
But I mean, just to stay on the gee-whiz side of the equation for a second.
In some ways, what you've done is showing us that for millennia, we've been going to
war with each other about doctrinal differences.
And yet if you put all of the wisdom traditions in a blender, and I believe that's actually
a phrase you've used, putting them all in a blender, and ask it a bunch of questions, it lands on some key and massive areas of agreement.
That's a really optimistic thing.
I think it is.
Spiritually, I think I'm in quite a strange place. I believe I'm quite a spiritual person. I'm not a very religious person. But there is a part of me that believes that if there is a God,
I believe that we're all a part of him,
that he manifests in each of us.
And artificial intelligence may be a way for us to connect
with each other across religion, across politics,
in profound ways.
There's a lot of danger in a statement like that.
I understand that.
There's obviously all the different challenges
that we're talking about.
But if we manage the technology in the right way,
it has the potential to connect us
in just the most profound ways that we can imagine, I think.
Do you agree with that, Jasmine?
I think AI can definitely connect us.
I think there are many tools to facilitate recognition of our common humanity.
And similar to looking at Wikipedia, I think we should all be immensely proud.
To draw on an example I like a little bit less: the Manhattan Project also united us. The fact that America pulled together so many resources, so many brilliant people,
recruited so many amazing scientists
and achieved a technical feat.
I think it's amazing.
Not to be too pessimistic, but it was also an awesome project.
So I think it is a very apt analogy,
although unfortunate.
I hope that it will bring us together in more ways, by creating things like UBI and making sure that everyone has the same standard of living as a baseline, feels safe.
I hate walking around San Francisco and seeing unhoused people. They are my brothers and sisters in this life
I also work a lot on mental health initiatives, and I'm neurodiverse myself, and seeing a lot of people for whom the direct, or proximate, cause of their homelessness is their inability to work deeply pains me. I see my work on AI, political economy, and mental health as all deeply related. Why I'm working on AI is not because it's technically beautiful; it's because I saw its promise, from the beginning of my university career and my education and research career, that if we play our cards right, this can effect a wealth shift and wealth creation event that
is unprecedented in human history.
That's what I want to see.
I want to see everyone housed, everyone
clothed, everyone fed. I'm building Trellis now, which is AI for education. I
want to see every child and every village in the world have access to a
completely personalized, customized tutor that is adapted to your neurodivergences, that is never biased against you, doesn't question you, supports you fully. That's what I want to see. I want to see AI create
community through creating material, genuine abundance for everybody. And I think for some people,
this sounds like a pipe dream. Economists will tell you,
there's no such thing as post-scarcity.
I don't know how to reconcile this
with the fact that AI is literally unprecedented.
Like, I think just a bunch of our models will break,
but I think that's insufficiently rigorous
for me intellectually.
I certainly hope that it will have a significant paradigm
shifting impact, even if it doesn't put us into an entirely sci-fi world.
I think something that Jasmine said before, I'm paraphrasing you here, Jasmine, so correct
me.
But we currently see ourselves almost purely as economic beings.
You run into somebody at a party and within the first few minutes you ultimately ask
them, what do you do?
What do you do for a living?
Because so much of our status and who we are is derived from, what do we do for work?
How do we create economic value within culture?
And I think that's crazy, and I think it's crazy to think as well that at some point,
we might need to move beyond that and go, who are we beyond economic beings?
I also often say that, like, there is something scary in the fact that one of the scariest things that we can think of right now is the idea that we might have to work less. What does that mean, you know, about us as a culture and as a society?
There's this amazing book that I love called Meanings of Life, by Roy Baumeister, who basically does a compilation of a bunch of other meta-analyses of what gives meaning to one's life. One insight is that the larger
the narrative that we feel ourselves to be part of, the more meaning we feel.
Like if I see myself as part of a lineage of Asian queer people who are
neurodiverse, I'll feel more impact in working on my company.
However, the opposite also happens.
People who are immoral, or doing immoral acts, frequently want to dissociate themselves from those acts, versus integrating them. The movement is towards dissociation. When a thief robs a house, and Baumeister illustrates this with an example, they do not think, I am robbing an upstanding member of the community of goods that they need to support their family. What they're thinking about is the immediate next step.
It's a very small story.
It's a very tactical move.
How do I unlock this door knob?
How do I transport this out of the home?
It is very interesting to think about the American story
that we all think of ourselves as being
part of.
This nation-building story. And it's funny that I'm talking about this, because neither of us are American: Ian's from South Africa, I'm from Canada. But we both elected to be in America because of the story that America tells you about
how you can participate.
That everyone can become great in America if you work hard enough.
A huge value base, almost a primary value base, for many Americans, to the criticism of many other societies, most pointedly European societies, is work. And that comes from Puritanism; it's deeply related to religious history, et cetera.
And I wonder what that means for us, and I echo Ian here, how destabilizing it will be to not be valued for that. And I don't think anyone has good
answers here yet. Some people are like, oh, we'll just do art. And we'll enjoy that. But
really, what I think, at least for me, what I get from laboring is not my functioning as a laborer, but in fact this opportunity to spend the 10,000 hours that's required for mastery, doing something I'm excellent at. I'm world class at piano, I'm world class at writing,
I'm world class at founding companies. I love doing all of those things for a reason.
Can we find opportunities to achieve mastery
without the coercive structure of capitalism?
That's my question to the audience.
In other words, AI may start doing
some pretty significant percentage of the jobs
that many, if not most of us do now.
So how are we going to derive meaning from our lives? And are we going to pursue mastery if there's no economic gain to be had? Or will we just start doing things for the joy alone?
There's this wonderful Onion headline. It's something like, man who purchased guitar from Guitar Center fails to play Wembley Stadium. You know, which basically points to the ridiculousness of the idea that if you buy a guitar, eventually, at some point, you have to have a number one hit song and chart on the Billboard Top 100
and like have a successful career. But ultimately, you know, I think it's something that we all struggle with on some kind of micro level already,
which is if you choose to do a hobby, there's always this nagging thing at the back of your
mind.
Like, should I turn this into a business?
And that question as small as it might seem in that context is probably going to get
a lot bigger over the next few years.
And it's something that we're going to have to have a really big debate about.
Because we've been operating for quite a while on this kind of productivity,
capitalistic mindset, many, if not most of the things we do,
are for some sort of outcome, whether it's economic or not.
Yeah. We're doing something to produce a result.
It's not common to spend a significant amount of time doing things purely because you
enjoy it.
And so now we may be entering in the world where for better or worse, you're going to have
a bunch of extra time.
And potentially.
And so what are you going to do with it?
I think one of the really important lessons that COVID taught us was that incredible change
is possible in a really short period of time. I don't think society has had to change like that, fundamentally, in a very, very, very long time.
And I think that that's a lesson that we should take from that experience and keep.
If you go back in your family tree to your grandparents, my grandparents worked on a farm.
My great grandparents worked on farms,
like on both sides of my family tree,
pretty much everyone worked on a farm.
That's what life was like.
And then we moved to the cities.
That was a big change.
Saturday did not exist. The fact that you got an extra day besides Sunday did not exist. So I think that
incredible change is possible. This represents an incredible change, and we have to go into it with
purpose and intent. I will say I think I have a little bit less optimism here, but also maybe a
little less fear that our work will be replaced. I mean, tons of people do stuff
that is done better by other people.
You know, like classical pianists? Basically only the top five classical pianists really matter, or really earn any money, frankly.
But I play piano.
So many other people play piano.
A lot of people work as piano teachers.
There's a whole economy around that.
So I personally think even though AI is going to do a lot more work, humans are still going to be not only valuable,
but still going to keep doing a bunch of work even uncompensated. And stuff that we still
classically view as work. Some people just love doing Excel spreadsheet models, you know. I want to cite John Maynard Keynes, who came up with a lot of the fundamental economic principles
that we still use to think about the world today. He predicted in 1930 that by 2030 we would only be working 15 hours a week, and he was so excited about this future for our grandchildren. Unfortunately, Keynes was a market fundamentalist. He was so blinded by the promise of the market that he thought rational things would naturally happen, because the market would take care of it.
And I have a message for the audience. The market will not create better labor conditions. The market will not decrease your labor hours. We have seen this time and time again. The only reason we have Saturdays, as Ian mentioned, is because our forefathers, our foremothers, our non-binary forebears said, hey, we need weekends. That was their chant, that was basically their mantra. We need weekends: 40 hours here, 40 hours there, 40 hours for play, you know, to be human.
If we want that 15 hour week,
we're gonna actually have to fight for it.
So I actually think that things are less dire here than they appear. I don't think we'll be working as much; we don't need to work as much. I hope we have UBI and a floor to guarantee that. However, we probably still will be doing the things we want to do.
There's a lot of redundant labor in the world today. I hope AI in particular replaces
work that is dangerous, monotonous, and not high leverage. I think the discourse has worried way too much
about like nihilism, lack of meaning, et cetera.
Humans have been automated away from so many jobs
that we still continue to do, people ride horses,
people draw with pencil and paper
when they could take photographs.
Humans just do so much useless stuff,
and we will continue to do useless stuff
for as long as we are alive.
And AI doesn't really change that, or the meaningfulness of it.
I think it will make it more demanding to become the best.
I forget who was the top Go player in the world when AI defeated them at Go, but he quit, I think, at least for a couple of years, because he was like, this changes my conception of what it means to play Go.
I'm no longer the best.
What does that mean?
But it doesn't mean that the game entirely loses meaning,
and it definitely did not decrease the number of Go players. It actually made it go up.
So I think we need to update some of our models
around this stuff, and I don't think people
are thinking about this in a way that's quite reasonable.
I think they're drawing very broad strokes of,
oh, if I didn't get to play piano anymore, how would I feel?
But that's not the exact question to be asking.
Coming up Jasmine and Ian talk about more of the perils of AI, including killer robots,
Ian's contention that both the upsides and downsides of AI are infinite and what you can do about
all of this.
What about the other perils of AI? Very much a lay person here,
but as I understand it,
they fall into at least three categories.
One, in the short term,
a flood of misinformation and deepfakes hitting social media. In the short to medium term, putting a lot of people out of work, which we've kind of covered. And in the longer term, especially if we get anywhere close to artificial general intelligence,
evil robots deciding to kill their creators or taking over warfare for their own purposes or whatever it is.
So how worried should we be about this whole cocktail?
I think those are all worries. The thing I'm worried about is that I want people
to be more demanding of the companies
that influence our daily lives and give us no jurisdiction
over their governance, over how they distribute their
profits, over how their negative externalities and dark UX patterns impact us.
I'm sorry to sound like such a rallying cry, but I think my work has turned a lot more
activist over the last couple of months, as the work around this has become even more urgent.
It's really interesting, probably, for listeners to read the tone of this book, which I think
is quite calming and reassuring, and I'm just like, I'm worried.
I'm very, very worried, and I think we need to take action. I hope that people feel awe, and then they wake up the next morning, and then they do some
stuff, and like sign some petitions, figure out how to donate to maybe the Collective Intelligence
Project or other organizations doing good work, setting up third party auditing boards for
these organizations. What we need and what I want to see is a world where AI is
democratically accountable because it's going to affect us all. I think transformative technology
is going to touch everybody and everyone who is affected by something deserves to have recourse
and ideally also jurisdiction over that technology. Just to sum that up, all the concerns are real, including killer robots at the
far end of the spectrum. And it's an amazing technology that's also true. And what
needs to happen now is that all of us need to demand that those with power
proceed responsibly. Is that a fair summary? Yeah, I mean, I think that the upside of this technology is infinite and the downside
is infinite as well.
Jasmine early on mentioned post-scarcity economists saying that that's not really a possible thing,
but we could approach something like that potentially because of this technology. The things that are busy happening in medicine and in society in general, just with the initial inklings
of this technology are profound. But as Jasmine has said, I don't think the book and what
she's saying are so divorced from each other. I think the book is very much a way to go,
hey, isn't this crazy? Isn't this amazing what
this can do? We should be talking about this. We should all be involved in this. And I
think it comes from a place of fear, because one of the big things is just how far legislation lags behind technology and society. I know that there's been a bunch of hearings and Sam
Altman has gone in front of Congress and there's a bunch of things that have happened,
but this technology is moving so fast. Whenever I have to give a talk or a presentation or whatever
about this kind of stuff, I always say, if I had to give this talk 12 hours later in the day, it would
be a different talk because of what's happened in the last 12 hours. And that's how fast it's
moving, you know. And so how does the society and culture keep up with that conversation?
How do we keep involved? And I think that's the thing that we're really kind of driving at.
You ask a question in the very title of the book,
what makes us human?
And we talked about this a little bit early on,
but just curious if you've landed on something of an answer.
I think the answer changed.
And I think Jasmine spoke to that at one point, when she said that one of the gifts of being human is reinvention,
is becoming a new person again and again. I think one of the things that's become really
fascinating to me is for a very long time we have separated ourselves from animals by saying that
we're human because we're intelligent, because we can do things like math, or we can play chess, or we can make a computer.
And as this technology has advanced more and more and more,
we've started saying we're human because we feel things,
because we have emotion, because we can sense things.
And so I think the answer will continue to change.
And I think that we're part of something that's big and moving and fascinating.
And it's a conversation which just has to carry on.
I think what makes this human very much includes our capacity for reinvention, reinventing ourselves
and our societies and our political structures, but also our capacity to invent new tools.
I think there's a quote: we shape our tools, and thereafter our tools shape us.
But I think also what makes us human
to touch on some of the themes of the talk
is our collectivity,
is our capacity for mutuality and empathy
with those who have less resources than us,
our capacity for solidarity and kinship
with those who need us and need our bodies
to show up beside theirs in protests, at the polls, in Zoom calls.
And for real attention to the luminous beauty
of the other, to alterity, to queerness, to the strange
that will soon become familiar and a friend, even family.
I believe AI can only be a simulation, it is a simulation and a mirror that teaches
us a lot.
We gaze into ourselves more clearly than we ever have before.
A calculator is a poor mirror.
A pot is an even poorer mirror. Looking at AI, we see
some version of ourselves. We see it in all our flaws. The biases, the negativities that show up
when we talk to said technology. But we also see the beauty. We see this wisdom that has been
lovingly collected by Ian and myself in this book that I am so so proud of and
honored to be a part of.
But it is still not us. It is a mirror. It is an image. It is a simulation. It is a simulacrum. And I hope that we continuously, and painfully, strive towards the real and the authentic. Strive to really show up when it matters and really make sacrifices when it
matters, both in terms of choosing to use certain types of technologies,
maybe open source models, for example, donating to open source developers,
supporting projects like the Internet archive that keep collective wisdom up
on the internet, donating to Wikipedia,
although honestly I'm not sure how they use their nonprofit funds.
There is a lot of money over here.
Supporting the digital commons that is so crucial to upholding democracy, which, although I increasingly feel more disappointed by it, is simultaneously, I think, the only hope for us surviving the calamities that await us: climate change, existential risk from nuclear war, bio warfare, as well as AI.
Yeah, in some ways, you're giving the same answer that AI gave you, which is what's most
important.
Love, love broadly understood, not just romantic love, but just showing up for other humans, giving a shit about other people.
And yourself.
That's what's going to be required, perhaps more now than ever, as we head into this uncertain world.
Absolutely.
Before I let you go, Jasmine and Ian,
can you just shamelessly plug the book again and anything else that you've put out into the world that you'd like everybody to know about?
Well, the book is called What Makes Us Human.
It was published by Sounds True.
It's available, I think everywhere.
So please go out and get it.
I think it's a really interesting conversation
with a really interesting mirror, as Jasmine put it. In terms of other things that I can plug or talk about, you can follow me on social media at real Iain S Thomas; if you search for me, I'm sure I'll pop up.
I'm starting a, you know, kind of creative studio and play space where we're experimenting
with a bunch of different things, like this book, and trying to discover, you know, what's
next in terms of media, in terms of creativity, in terms of how do we interact with this new
world.
So follow along.
Yeah, I really do hope everyone picks up a copy of the book.
The audio book is wonderful as well.
I personally love hearing poetry spoken out to me.
My socials are at J underscore asmine Wang, but now I've changed my Twitter to, like, Just Wondering, so I use those initials now.
I'm a co-founder at Trellis, which is an AI for education
company.
You can check it out at retrellis.com.
If you email me at jasmine at retrellis.com,
I'd be happy to talk about working together
or happy to onboard you personally onto the product.
It's, imagine AI integrated into a world-class book reader, which I think a lot of audience members would love, that you can ask questions about.
You can talk to Socrates.
Oh, no, Socrates never wrote a book.
You can talk to David Deutsch
about The Beginning of Infinity.
Talk to various spiritual figures about their texts as well.
I also run a creative collective called verses.xyz.
Check out our artifacts online.
We talk a lot about similar themes
as what I've mentioned here,
mutuality, interdependence.
We actually rewrote the Declaration of the Independence of Cyberspace into the Declaration of the Interdependence of Cyberspace with
Ian's help. I also run Kernel Mag, which is a magazine about the politics of technology, and it's written by and for technologists. Check that out at kernelmag.io.
We'll put links in the show notes. Thank you both. This was inspiring and terrifying
at the same time. There's a lot of that in what we do.
Thank you so much for having us, Dan. We really appreciate it.
Total pleasure.
Thanks so much, Dan. So good to meet you. Thank you for all your great questions.
Excited for this to come out.
Thanks again to Jasmine and Ian. Thank you as well to you for listening.
I genuinely appreciate that.
If you want to do us a solid go give us a rating or a review. It actually really helps us with
the algorithms. I guess that's a form of AI right there. Thank you most of all to everybody
who worked so hard on this show, my awesome team. Ten Percent Happier is produced by Gabrielle Zuckerman, Justine Davie, Lauren Smith, and Tara Anderson. DJ Cashmere is our senior producer, Marissa Schneiderman is our senior editor,
and Kimmy Regler is our executive producer,
scoring and mixing by Peter Bonaventure of Ultraviolet Audio.
And we get our theme music from Nick Thorburn
of the great band Islands.
They've got a new record coming out I see.
We'll see you all on Friday for a bonus episode.
Hey, hey, Prime members. You can listen to 10% happier early and ad free on Amazon Music.
Download the Amazon Music app today.
Or you can listen early and ad free with Wondery Plus in Apple podcasts.
Before you go, do us a solid and tell us all about yourself by completing a short survey
at Wondery.com slash Survey.