Armchair Expert with Dax Shepard - Max Bennett (on the history of intelligence)
Episode Date: October 2, 2024
Max Bennett (A Brief History of Intelligence) is a researcher and author. Max joins the Armchair Expert to discuss his lack of an academic background, how he became interested in the way the human brain works, and why neurons evolved. Max and Dax talk about reinforcement learning, how emotions developed in the human brain, and how artificial intelligence is being designed to have curiosity. Max explains the paper clip conundrum, how language allows us to transfer thoughts, and chain of thought prompting. Follow Armchair Expert on the Wondery App or wherever you get your podcasts. Watch new content on YouTube or listen to Armchair Expert early and ad-free by joining Wondery+ in the Wondery App, Apple Podcasts, or Spotify. Start your free trial by visiting wondery.com/links/armchair-expert-with-dax-shepard/ now. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Transcript
Discussion (0)
Wondery Plus subscribers can listen to Armchair Expert early
and ad free right now.
Join Wondery Plus in the Wondery app or on Apple podcasts,
or you can listen for free wherever you get your podcasts.
Welcome, welcome, welcome to Armchair Expert.
I'm Dax Shepard and I'm joined by Lily Padman.
Hi.
Let's start at the beginning.
Angela Duckworth.
Yes.
Gave us a few book recommendations.
Yes.
And one of them was A Brief History of Intelligence:
Evolution, AI, and the Five Breakthroughs
That Made Our Brains.
And I read the book and I absolutely loved it.
And I've actually now read it two and a half times
and been talking about it forever.
And this is the author, Max Bennett is here.
Max is an entrepreneur, a researcher.
He started a company, Alby, an AI company,
and he's very tall and very, very charming.
Very.
And shockingly and impossibly smart
because this isn't really his field
and yet he's really done a master class
on the evolution of intelligence.
He has.
It's so fascinating and it's also broken down
in a very linear way, which is nice to follow.
The five breakthroughs.
So fun.
I had so much anxiety walking into this one
because it's a very complex topic
and it's a very dense book.
And then he was masterful and wonderful
at just walking us through very lay person style.
It was great.
Yeah, I love him.
Please enjoy Max Bennett.
He's an armchair expert.
He's an armchair expert.
He's an armchair expert.
Here's the reason that halves are kind of worth it
is inevitably for me, I'm 6'2 1⁄2.
Okay.
I'll say I'm 6'2,
and then I'm with another person that's 6'2,
and I'm taller and they go, no, no, I'm 6'2,
you're right.
Or I say 6'3, and someone 6'3, I'm a little shorter.
Yeah, yeah, so it's the humbler one to go with.
It's just like, you kind of gotta own it
because inevitably someone will disagree with you.
Yeah, that's right.
How are you, my friend?
I'm good. We already got called out on having a fake book.
I'm sorry.
What book?
Well I was so excited because Stranger in a Strange Land
is an incredible book.
It's one of his favorites and we just put it there.
It's just color coordinated.
It's not faking that there's no text inside, right?
I haven't looked.
It's just faking that we didn't read it.
Yeah.
Stranger in a Strange Time is your favorite?
Yes, Strange Land.
Where is it?
The left, the left, the left.
The pretty red one.
What is that, sci-fi?
It's sci-fi.
Okay, I would believe that for you.
Why, what does that mean?
Well.
You profiled me correctly.
Let's start there.
By the way, this is terrible.
Oh no, do you have eight hours?
I know, I know, I know.
I'm gonna own my total anxiety and terror right now,
which is, I've read your book twice.
It's so comprehensive and dense that I was like,
yeah, I could go for another.
And then in anticipation of this, I'm like,
well, I don't really have to research much.
I've read this book twice.
And then I was like, no, this book's impossible to remember.
I gotta go back through it.
I have to be honest, because this is what we do here.
You are not what I was expecting.
What were you expecting?
Yeah, there's even more than what you're about to say.
Ace trains for you?
But continue.
He's AI.
Wow, that makes more sense.
No, I knew we were interviewing you,
I knew your name, I didn't look you up,
and I knew what you wrote.
The subject matter.
Exactly, and I did know it was really comprehensive.
So I did expect an older man, a very professorial type.
Sapolsky type.
But not as playful.
I was like, oh, I hope this is animated.
And then I walked in, I was like,
oh my God, this is gonna be fun.
He's a six foot four babe.
Exactly.
Yeah, unexpected.
I love Robert Sapolsky, so I think he's got a great look.
Well, your book is often described as a mix
between Sapiens and Behave, which is fair.
Although Behave is such its own masterpiece.
100%, incredible.
And one thing I think is so inspiring about his work there
is he integrated so many different fields.
He is absolutely an inspiration for me.
He didn't only come from the space of neuroscience,
he integrated a ton of evolutionary psychology
into one comprehensive story,
which I thought was really beautiful.
Yeah, just the notion that he himself
is an armchair primatologist
that is the foremost expert on baboon behavior.
He's preposterous.
That's not even his field.
He lived with baboons for a long time.
Like he really put his money where his mouth was.
Yeah, okay, so similarly your book does that,
but even knowing a little bit more about this,
I still expected to see that you would have had a PhD
in neuroscience or perhaps computer science.
Not the case.
No.
Let's start at the beginning.
Ooh, beginning.
And let's go through your pretty unimpressive
academic background, if I can be honest with you.
Oh, I love this.
Versus how profound your book is.
Yeah, I like this.
Makes me like you a lot.
Where do we wanna start?
Where did you grow up?
I grew up Upper West Side, New York with my mother.
Single mom.
Single mom.
Great.
A lot of time alone reading books by myself.
Self-learning was a very standard part of my upbringing.
Siblings?
I have two half-brothers who I adore.
And they're younger?
Younger.
So mom remarried.
Mom never remarried.
I don't know if I should say this on the podcast. I think she's mostly sworn off men.
Sure, sure.
But very happy.
She goes on world trips with her cadre
of 70 year old single women and they live the best life.
My dad remarried, so I have a stepmom.
There we go.
I feel like I should introduce my mom Laura
to your mom, because she's in that phase of her life too.
Great, but she's dating a lot.
My mom loves dudes.
Yeah, she is not sworn off.
Yeah.
She'll probably be on a date in hospice at some point.
Oh wow.
Yeah, yeah.
Like your dad.
Like your dad, you know.
Yeah, yeah.
They're a match made in heaven.
Okay, so upper west side, any step dads in the mix?
No step dads.
Okay, you dodged a big hole.
No, no drama there.
Now you go to Washington University.
It's probably an incredible school.
I hadn't known that there was a Washington University
in St. Louis, Missouri.
So how do you find that school?
What is that school known for?
How do you end up there?
There was a bunch of schools that really interested me,
but what I loved about WashU is how interdisciplinary it is.
Most schools, when you go into an undergraduate program,
they force you into a single program,
but WashU lets you take classes across the board.
So I started studying physics,
and then I did some finance, economics, math.
So it really lets you try everything, which I loved.
But my first job out of school was actually in finance.
Because your degree was economics and mathematics.
I grew up in New York where you get this bug
where you're supposed to be like in finance,
be a lawyer, a doctor.
I worked in finance for a year and did not like it.
Yeah, Goldman Sachs?
Goldman Sachs, don't hate me.
I left.
I can share some funny stories.
I'll need to check if I can share this publicly,
but I think it's fine.
Yeah.
We didn't kill anyone there.
I didn't kill anyone.
No, there's no deaths.
But I should have known that I wasn't a good fit
when after the 12 week training program,
so they just put you through class for 12 weeks,
the first day Friday was Casual Friday.
So I was like, great, Casual Friday.
And so I showed up in my shorts and a t-shirt
and I was like, I'm so excited to have casual Friday.
Casual Friday meant no tie.
Right, you went all the way.
You thought it was beach day Friday.
I thought it was beach day,
I thought it was gonna be casual,
I'm gonna meet my new friends.
And when I showed up on the desk,
they had security just to mess with me, escort me out,
put me in an Uber and drop me off at Brooks Brothers
and say, come back when you have a seat.
That was day one.
Yeah, yeah, yeah.
I love that they took you to Brooks Brothers.
Yeah.
Did they give you a credit card?
No, no, no.
Okay, so you're one year there,
what were you actually doing?
So I was a trader and I traded
Latin American interest rate derivatives.
Okay, great.
Latin American interest rate derivatives.
Explain that further if you're interested,
but I'm gonna guess no.
I kinda just wanna guess, which is,
it's not even based on playing the interest rate game.
You're not buying and selling their money.
There's a product based on when that moves.
Exactly.
Other complicated product.
My man.
Yes.
The only thing I really know about derivatives
is I got weirdly interested in credit default swaps
post 2008.
There you go.
And I was quite shocked to learn
and I think most people would be that
you can basically get an insurance policy
on securities that you don't own.
That is the premise of a credit default swap.
That's preposterous.
And the notion that you'd be heavily incentivized
to make a company go out of business
is also really crazy to me.
Okay.
That's a problem.
And we saw the manifestation of those problems.
So the people that they brought in to work on derivatives,
were a lot of you mathematicians?
A lot of them have math backgrounds, yeah.
Okay, so you did one year of that,
and was it hard to quit
because you've kind of gone to where ideally you'd want?
It wasn't for me. There's nothing wrong with people who love that at all. I don't want
to disparage that, but I learned a lot about myself, which is I am much more collaborative.
I like working with the person sitting next to me. I don't want to compete with the person
sitting next to me. I'm much more interested in long-term projects where we're working
together towards a common goal. And in retrospect, it's silly
that I thought I would like being a trader,
which is the most cutthroat competitive thing ever.
So it just wasn't who I was.
And so it was actually quite easy to leave.
How about the bro scene?
Were you getting invited out to like rock?
Yeah, yeah, gross, gross.
Yeah, yeah, yeah.
And that's also so not me.
I grew up with a single mom.
I'm trained to be very averse to that.
Cocaine and strip clubs wasn't a natural fit.
Wasn't a natural fit for me, no.
So when you leave, do you know immediately
what you're gonna do or do you have a period
of trying to figure out what you're going to do?
I didn't know this was gonna be like a whole life story.
This is great.
I think we care about them both.
Well, I can tell you why.
Okay.
And I think Sapolsky is a great person.
My other favorite one is Sam Harris,
which is I think a lot of people know a ton
about a lot of things, and they're not really curious
why that was even appealing to them,
or they were driven in that direction.
And I find that element to be really important,
because I think there's certain concepts
that make us feel safe,
given our own background and childhoods,
and for whatever reason, those are very comforting to us.
And then we just pursue them.
And so, I don't know, I just feel like
that should always be a part of the recipe
of why you would lay a man.
Makes sense.
Okay, so I got a job as an unpaid intern
at this company called Techstars,
which is a startup accelerator.
And so I was like the free business help
for all these startups going through the program,
which was hilarious, because I had no business experience,
but for whatever reason, they thought I did.
And I met these two very close friends of mine now
who added me as sort of the third co-founder
of this business idea.
That's when I got into AI
and that's where I spent nine years of my career.
This is Bluecore?
This is Bluecore.
Bluecore, as I understand it,
it uses AI, but in the column of marketing.
Yep.
Walk us through how that actually functions.
Sort of my whole career in business
has been about empowering companies to compete with Amazon.
Uh-oh.
That's our new home.
That is our boss.
Okay.
But let it rip.
You know what?
Let's see if we can get into a bus.
You're honest here.
Yeah, daddy trouble.
Okay.
Ooh.
Thank you.
Our corporate daddy.
So.
Mm-hmm. Amazon has obviously an incredible business,
and they have an incredible amount of technology
to help both with their marketing,
so they do an incredible job personalizing marketing.
The emails they send are based on all of the things
that you've done on Amazon.
Their recommendations are incredibly intelligent,
and so most other brands and retailers
did not have that technology.
And is that a two-prong issue?
One, you don't have the actual tech.
Two, you don't have the database of all their info.
Both of those are problems,
which makes the AI problem harder
because you need to make an AI system work with less data.
And so what Bluecore was all about
was helping brands like Nike, Reebok,
compete with an Amazon.
Isn't it funny to think of Nike as like the underdog,
the David in the Goliath story?
Yeah.
And so that was where my original interest
in the brain came,
because when working with machine learning systems,
you run into this thing called Moravec's paradox.
So Hans Moravec was a computer scientist
who made the following observation.
Why is it the case that humans are really good
at certain things like playing basketball
or playing guitar or doing the dishes
that is so hard for machines to do?
And yet things that are really hard for humans
like counting numbers really fast,
doing arithmetic are so easy for computers.
And this is classically called Moravec's paradox.
Like, why is that the case?
And so that was the beginnings of my interest in the brain.
The book starts with the most intriguing premise for me,
which is what we're trying to do now
in this phase of our technology is to get these computers
to have intelligence that either matches ours
or exceeds ours or is similar to ours.
But what's really funny is we don't know how our intelligence works.
So we're trying to replicate a system that actually we don't really truly know
why and how we're intelligent.
And I think that's really fascinating.
How are you going to recreate something that you don't actually understand in
the first place?
So it's just a very intriguing jumping off point.
And then you break this story, these are co-evolving narratives.
One is the birth of AI and where we're at today.
And then one is the birth of the first brain and
the first intelligence and the first neurons.
And you break it up into five really helpful sections of
evolution where these big leaps happened.
Was that an intuitive way to frame this?
So my original intent was actually never to write a book.
I started just by trying to understand how the brain works.
The whole book wouldn't exist
without a huge dose of being naive.
Because I thought that I was gonna buy
a neuroscience textbook, take a summer and read it,
and I'd understand how the brain worked.
Yeah, yeah, I would have thought that too.
Yeah, yeah, yeah.
And after reading a textbook, I just realized,
wow, we have no idea how the brain works.
So then I bought another textbook.
And this process continued for about a year and a half
until I had unintentionally sort of
taught myself neuroscience.
And yet I still felt like I had not satisfied
the itch of understanding how the brain worked.
So that led me to start reaching out
to various neuroscientists
because I wanted to collaborate with them
and no one would respond to my emails.
To your point, lack of impressive academic background.
No, I think you should apologize for that.
No, not at all.
It's true.
No, I was expecting Harvard and Stanford and PhDs.
It's a very good school.
I thought you were gonna say like Arizona State.
Hey, you're on a real run about bagging on Arizona State.
Sun Devil Stadium, Flaky Jakes, go-karts.
No, listen, they have a floaty thing
around the whole campus.
Lazy River.
I think it's okay for us to say that.
Anyway, sorry.
No, no.
Sorry to drag you through the mud on her.
So no one would respond to me.
So what I decided to do,
because I started coming up with these ideas
which were not based on evolution.
Actually my original ideas were trying to reverse engineer
this part of the brain called the neocortex,
which I can get into,
cause a lot of fascinating things with that.
And so I had an idea for a theory of how it might work.
So I decided the only way I was gonna get people
to respond to me was I'm gonna submit it
to a scientific journal,
because they'll reject it obviously, who am I?
But they'll have to at least read it
to tell me what was wrong with it.
And this is where this whole unlikely journey began
because to my surprise, it actually got accepted.
He wrote a theory and it was peer reviewed
and it was published.
And I won what's called the reviewer lottery.
Just by luck, one of the reviewers
is a very famous neuroscientist named Carl Friston.
He became sort of an informal mentor of mine.
And then sort of this started cascading
into this self-directed academic pursuit.
How many years ago was that?
That was four years ago.
Oh my God, this is so accelerated.
Yeah.
How old are you?
I'm 34.
Oh no.
Was I supposed to be younger or older?
Oh, way older.
Stanford, old.
Younger than me is upsetting.
You're a loser. You're a loser now.
You're a big boy.
I know.
Oh, that's rough, 34.
Wow, so you were 30.
But you've had a few articles published now.
What was your novel proprietary theory?
So with the neocortex, and in large part,
I don't think this approach
is actually gonna work that well,
which is why I pivoted to the evolutionary one, but the broad theory is a little nuanced.
But the neocortex is so fascinating because if you look at a human brain,
all the folds that you see, if you remember an image of a brain, that whole thing is neocortex.
And what it actually is is this sort of film.
It's like a sheet that's bunched together.
What we've always thought is different regions of the neocortex do different things
So the back of your brain is your visual neocortex.
If that gets damaged, you become blind.
There's a region of neocortex
that if it gets damaged, you can't speak.
There's a part of neocortex that if it gets damaged,
you can't move.
And so one would think that this isn't really one structure.
It's actually a lot of different structures.
Each have a unique structure
that would facilitate the seeing or the hearing.
Exactly.
And what is mind blowing is if you look under the microscope,
the neocortex looks identical everywhere.
What?
And somehow this one structure
does all of these different things.
And this was when I started becoming really fascinated
with this topic.
And can be relocated and moved, right?
You can co-opt areas of the neocortex.
Well, if one area is destroyed,
you can relocate into another section of neocortex
to perform that task.
You clearly remember the book.
I'm impressed.
Well, we'll get into the columns at some point.
So the best study that demonstrates this,
it's a very famous study where they took ferrets
and they rerouted their visual input
from the visual cortex into their auditory cortex.
So if it is the case that these regions are different,
then they shouldn't be able to see appropriately.
But what they found is they could see pretty much just fine.
And those areas of auditory cortex
were just repurposed to process vision.
And this is also why after strokes,
you can regain abilities.
The region of neocortex that's damaged
doesn't actually grow back.
And there's no repair, it's just relocation.
Correct.
Take a second.
What a fucking thing, the neocortex.
What is it?
How come we're just learning this now?
Me and you, I mean.
Max has known for four years.
I know I'm gonna say how long I've known,
but we'll just say, yeah, I'm just learning now,
because I was so humble.
Okay, so that really sparked your curiosity.
And what paper do you write based on that?
So I wrote a paper that was trying to understand
the neocortical column. So what we now think
the neocortical sheet actually is
is a bunch of repeating columns, each called a microcircuit.
And there's a big mystery to figure out
what does this microcircuit do
that makes it so repurposable?
And there's a few theories that we can go into.
But my question in particular was
how does it remember sequences?
So how is it the case that you can hear a sequence
of music one time and you can hum it back to someone?
There's a sort of mathematical framework
where I theorized how it might do that.
And that part is still a great mystery, right?
We're getting into memory too.
Where is that at?
We can't observe it.
We can't see anything etched into a neuron.
Presumably there's new connections
and a new pattern of connections.
That pattern of connections somehow represents
that sound we heard.
There's pretty strong consensus
in the neuroscience community
that the physical instantiation of memory
is in changing the synaptic weights.
So the connections between neurons,
either delete or more get formed or the strength changes.
But the big question is for a specific memory,
like a memory of your childhood
or the memory of how to do something,
where that lives is more of an open question
because it doesn't seem to live in any one place.
It seems to be distributed
across many different regions of the brain.
And there's components to a memory, right?
There's an emotional component.
There might be a smell, there might be a color,
and those are all drawing from all areas of your brain
to come together for this one memory.
And one way we feel, there's a lot of confidence
that the brain is different than computers.
In computers, there's something that could be called
register addressable memory.
The way you get a memory is you need to know the code
for its location on a computer chip.
You lose the code, you lose the memory.
In the brain, there's something called
content addressable memory,
which is the way you remember something
is you give it a tiny piece of the overall memory,
then the rest is filled in.
And this is why when you smell something,
a memory pops back into your head
or you forget how to play a song,
but you start playing the first chord on a guitar
and then the rest starts flowing to you.
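One way to make that pattern-completion idea concrete is a tiny Hopfield-style network, the classic toy model of content-addressable memory. To be clear, this is my own illustration, not something from the episode or the book; the eight-unit pattern and network size are arbitrary assumptions:

```python
# Content-addressable memory as a toy Hopfield network (illustrative
# sketch; the pattern and network size are made-up assumptions).
# The memory is stored in the connection weights between units, and
# recall works by handing the network a partial or corrupted cue and
# letting it settle into the full stored pattern.

def store(pattern):
    """Hebbian rule: units that fire together get stronger connections."""
    n = len(pattern)
    return [[0.0 if i == j else pattern[i] * pattern[j] / n
             for j in range(n)] for i in range(n)]

def recall(weights, cue, steps=10):
    """Repeatedly update every unit from its neighbors until it settles."""
    state = list(cue)
    for _ in range(steps):
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in weights]
    return state

memory = [1, -1, 1, 1, -1, -1, 1, -1]  # one stored pattern of +1/-1 units
weights = store(memory)

cue = list(memory)
cue[0], cue[3] = -1, -1                # hand the network a damaged fragment
recovered = recall(weights, cue)       # ...and the rest gets filled in
```

The contrast with register-addressable memory is the point: there is no address to look up here. The cue itself is the lookup, which is why a whiff of a smell or a first chord can pull back the whole memory.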
Okay, so, and now I'm gonna have to consult my notes,
which I try not to do, but here we are.
Let's first talk about the complexity of the brain,
which is 86 billion neurons
and over 100 trillion connections.
I mean, that's hard to really comprehend.
Within a cubic millimeter,
so if you look at a penny, like one of the letters,
there is over one billion connections in the brain.
No.
It's boggling, right?
Okay, so the way you decide to lay out the book is
instead of trying to reverse engineer how the brain works
so that we can apply it to how the AI
should mechanistically work,
let's start at the very beginning
and then just take the ride up the ladder
because everything that's already happened
in evolutionary terms to previous iterations,
animals, amoebas, it's all still here, right?
I think that'd be the first concept
that might shock people is virtually everything
that happened 600 million years ago,
we still have that and then we've added things
on top of that.
Evolution is very constrained.
So every iteration of an evolutionary step,
in other words, another generation,
it can't redesign things from scratch.
It has to tinker with the available building blocks.
And that's why you see that our brains
aren't that different than fish,
even though our common ancestor with fish
is 500 million years ago.
Because once a lot of these structures are in place,
it's very hard for evolution to find solutions
that reverse engineer them.
So usually it just tinkers with,
even bird wings are a repurposing of arms.
It didn't just reinvent wings,
it repurposed an arm over time.
Do you even go back into say scales become the feet
and the feet become the wings?
It's all one steady line we can follow back.
So what is the first version
of what we might think of as a brain?
What's the first animal that has neurons?
Okay, so interestingly,
there were neurons before there was a brain.
So if you look at a sea anemone, which side note,
because most of my learning was self-taught,
one of the most funny parts of me
entering the sort of neuroscience community
is I pronounce words wrong all the time.
Oh, sure. Oh, good, good.
I do too.
Yes, because you've only seen it in print.
I've only seen it in print.
But this one's actually, I don't have a good excuse for it.
I used to say animone until my friend was like,
did you ever watch Finding Nemo?
It's not animone.
Anemone, a sea anemone, jellyfish.
These are creatures that have neurons, but no brain.
And they have what's called a nerve net.
Their skin has a web of neurons
that are implementing a reflex,
but there's no central location where decisions are made.
Each arm or tentacle is independent largely
from the other tentacles.
The first question is why did neurons evolve?
And there's a really interesting diversion
between fungi and animals that I find fascinating.
Yeah, I love this.
Because we are actually not that different than fungi,
even though they look very different.
Because at the core, the way we get energy
is through respiration.
In other words, we eat sugar from plants or other animals
and we breathe oxygen and release carbon dioxide.
We combine sugar and oxygen to make fuel.
Exactly.
So we can't make our own fuel.
Plants can make their own fuel.
They just need sunlight and water and they're fine.
We can't survive without plants creating sugar for us.
So fungi and animals very early in evolutionary time
took diverging strategies to get sugar.
Fungi took the strategy of waiting for things to die.
I just wanna say one thing on this
because it's really fascinating.
It's the birth of predation.
So prior to this, and the earth is
four and a half billion years old,
this is only hundreds of millions of years ago.
Organisms that live, they just lived
and they got energy from the sun
and they didn't eat anything.
Fungi is like the first thing that it's going to need
other organisms to exist.
It's gonna consume other organisms for its source.
That's really crazy that that even happened.
Totally.
And so even though there might've been
some minor forms of predation,
before oxygen was around,
what's called anaerobic respiration
is much less energy efficient.
So trying to do respiration without oxygen.
But when oxygen came around,
you can get way more energy
by consuming the sugar of others.
And that's really when you got the birth of predation.
So fungi, they survive by having fungal spores
all over the place.
They're around us all the time, and when something dies,
it then flourishes and grows. And this is why when you leave bread out,
there's fungal spores everywhere and they'll just start eating it; the gross filaments emerge.
Animals took a very different route: their route was to actively kill other things.
And so this is where neurons are really important because even the very first animals, and we don't know what the first animals actually looked like.
There's theories, but the best model organism is a sea anemone.
They probably sat in place, they had tentacles,
and they waited for things to fly by,
and then they would just capture them
and bring them into their stomach.
They just waited.
They can't see, they can't hear, they can't smell, nothing.
There's no data coming in.
They're just existing,
and hopefully things will bump into them and they'll eat.
They have data which is touch.
So like if something touches their tentacle,
then they pull it in.
So that's one of the main reasons we have neurons.
And with that, there's a bunch of commonalities
to your point about how evolution is constrained.
Our neurons are not that different than jellyfish neurons,
which is kind of mind blowing,
which is the unit of intelligence between us
and a jellyfish is not really much different.
What's different is just how it's wired together.
Some of the foundational aspects of neurons,
which are they come in excitatory and inhibitory versions.
So some neurons inhibit other neurons from getting excited
and other neurons excite other neurons.
You see that in a sea anemone.
Why does that exist in a sea anemone?
Because even in a basic reflex,
you need to have an if this, not that sort of rule.
If I'm going to open my mouth to grab food into it,
I need to relax the closed mouth muscles.
If I'm gonna close my mouth,
I need to relax the open mouth muscles.
So you need this sort of logic of one set of neurons
inhibiting another set of neurons.
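That if-this-not-that wiring can be sketched in a few lines. This is my own toy model, not the anemone's actual circuit; the neuron behavior is reduced to simple subtraction:

```python
# Mutual inhibition as an if-this-not-that reflex (illustrative toy,
# not a real anemone circuit): each muscle group is driven by an
# excitatory input and inhibited by the opposing group's drive, so
# the two can never contract at the same time.

def mouth_reflex(food_touching):
    """Return (open_muscle, close_muscle) activations between 0 and 1."""
    open_drive = 1.0 if food_touching else 0.0  # excitatory sensory input
    close_drive = 1.0 - open_drive              # tonic drive to stay closed
    # Each drive inhibits (subtracts from) the opposing muscle group.
    open_muscle = max(0.0, open_drive - close_drive)
    close_muscle = max(0.0, close_drive - open_drive)
    return open_muscle, close_muscle

opening = mouth_reflex(True)    # food touches a tentacle: mouth opens
closing = mouth_reflex(False)   # nothing there: mouth stays shut
```

The inhibitory connection is what keeps the logic exclusive: without it, both muscle groups could fire at once and the mouth would fight itself.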
It's very binary, right?
It's like on or off, exciting or calming.
And that's a fun parallel with just how computers
up until whatever quantum happens, I guess.
But that is also the basis of computing, right?
It's ones and zeros, it's on and off, it's binary.
There's a really rich history in computational theory
and computational neuroscience about the degree
with which neurons and computers encode things similarly.
There is a big debate around that.
Some neurons we know encode things in their rates.
So it's not really ones or zero, it's how fast it's firing.
So pressure is like this.
So the reason you can tell the difference in pressure
is because your brain is picking up the rate
at which a specific sensory neuron in your finger is firing.
But we also know other parts of the brain
are clearly encoding things in a different way
than just rates.
Other parts of the brain seem to be encoding things
more like a computer.
And so it's a big mystery sort of how it all comes together,
but the brain's probably doing both.
Okay, fascinating.
So now as organisms are going to consume other organisms
and there are versions just waiting to get lucky
and have food bounce into them and then eat it,
somehow some new organisms develop where they decide,
no, we're gonna go in search of the food.
And this is the first brain.
The first brain.
And so the first brain is all about locomotion and steering.
Bingo.
Yeah, break that down for us.
One thing that's fascinating about this is this is an
algorithm that the academic community would call
taxis navigation.
I think steering is a simpler word for it.
And this algorithm exists in single-celled organisms.
So how does a bacteria find food?
Bacteria has no eyes or ears.
Bacteria doesn't have complex sensory organs.
All it does is it has a very simple
sort of protein machine on its skin.
And if it's going in the direction of something
that it likes, it keeps going forward.
And if it detects a decrease in something that it likes,
it turns randomly.
And this takes advantage of something in the physical world,
which is if I place a little food in a Petri dish,
the chemicals from the food make a plume around it.
And the concentration of those smells
are higher closer to the food.
So if I have any sort of machine
that detects an increasing concentration of this food,
if I just keep going in that direction,
eventually I find it.
Right.
And if you turn and it decreases very quickly,
you'll know wrong direction.
You turn again, it decreases again, I turn again.
Really quickly, you're going to run through
the trial and error of it.
Exactly, it's like playing a hot cold game.
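As a toy illustration, that "hot cold" steering rule (keep going while the smell gets stronger, tumble randomly when it gets weaker) can be sketched in a few lines of Python. The plume shape, step size, and starting point here are all invented for illustration, not taken from any real chemotaxis model:

```python
import math
import random

def concentration(x, y):
    """Toy smell plume: strongest at the food (the origin), fading with distance."""
    return 1.0 / (1.0 + math.hypot(x, y))

def run_and_tumble(steps=2000, seed=0):
    """Go straight while the smell is increasing; tumble to a random
    heading whenever it decreases. Returns final distance to the food."""
    rng = random.Random(seed)
    x, y = 30.0, 40.0                       # start 50 units from the food
    heading = rng.uniform(0.0, 2.0 * math.pi)
    last = concentration(x, y)
    for _ in range(steps):
        x += math.cos(heading)
        y += math.sin(heading)
        now = concentration(x, y)
        if now < last:                      # smell decreased: turn randomly
            heading = rng.uniform(0.0, 2.0 * math.pi)
        last = now
    return math.hypot(x, y)

print(f"distance to food after steering: {run_and_tumble():.1f} (started at 50.0)")
```

Despite having no map and no memory beyond "did the smell just get stronger," the agent drifts toward the food, which is the whole trick.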
So that works on a single cellular level,
but on a large, not large to us,
but from a single cell, very large scale
of even a small nematode, which is very akin
to the very first animals
with brains 600 million years ago,
you can't use the same machinery that a single cell uses
because they have these tiny little protein propellers
that can't move an organism with a million cells.
So this algorithm was recapitulated or recreated
in a web of neurons.
So the same thing happens,
but it doesn't happen in protein cascades within a cell.
It happens with neurons around the head of a nematode or worm-like creature that detects
when the concentration of a food smell increases and drives it to go forward, and another set
of neurons that detect a decreasing and turn randomly.
And what's so cool is the point to the very first brain is you have to integrate all this
input to make a single choice.
And they've tested nematodes, which are good organisms to experiment with what
early brains were like because they're so simple.
And you can create a little Petri dish and you put a bunch of nematodes on one side,
you put a copper line in the middle (nematodes hate copper for an esoteric
reason, it messes with their skin), and you put some food on the opposite side.
Does it decide to cross the copper barrier?
And so the very first brain had to make these choices.
And it depends on two things,
which would make a lot of sense.
One is the relative concentration of each.
So the more food, the more of them are willing to cross.
And the other is how hungry they are.
The hungrier they are, the more willing to cross.
Yeah, if the other option's death at that point,
if you're full, I don't need to cross.
Exactly.
If I'm about to die, there's nothing at risk.
Are they gonna die anyway from the copper?
No, the copper is just uncomfortable.
Oh, I see.
Yeah, yeah. Oh, wow.
Makes them cranky.
Yeah.
Is the medical and scientific term.
Stay tuned for more Armchair Expert, if you dare.
Okay, so that sounds so simple. I want to talk about Roomba because that too is very interesting.
I think it's in this realm, but just right out of the gates, I would say there's a much
bigger implication to what you just laid out, which is I'm detecting more of that food.
I move towards it, I'm
detecting less, I turn.
Inadvertently, that's created good and bad.
The very foundation of good and bad, and the way that I'm constantly lamenting that we're
so drawn to binary, it's so appealing, and we follow people who seem to know which of
the two binary options is the correct one.
It's like our Achilles' heel, our binary wiring. It's there from the very jump, good and bad.
There's less food or there's more food.
That's how simple the world is.
And we're inheriting all of those neurons
in that evolution.
And yeah, that's a hard thing to transcend.
Yep, we'll see as we go forward in evolutionary time,
there are aspects of humanity
that are not binary in that way.
But evolution does seem to have started
in a binary way like this.
And one interesting thing about the brain of a nematode,
so a nematode, the most famous one is called C. elegans,
and people study this nematode a lot.
I have a poster of him on my wall.
Okay, yeah, he's a great guy.
He's very famous.
Oh.
No.
Oh, that's it.
Don't trick me.
My brain is already working really hard right now.
He was setting it up though,
like that he was the Brad Pitt of the nematodes.
You got him.
Oh my God. Okay, so C. elegans has neurons.
So what's interesting is when we see things in the world,
our neurons in our eyes encode information in the world.
It goes into our brain and then elsewhere in the brain,
we decide if it's quote unquote good or bad.
Human brain is much more complicated,
but in a nematode brain,
whether or not something is good or bad
is directly signaled by the sensory neurons themselves.
So the neuron that detects smell
directly connects to the motor neuron for going forward
and is directly sensitive to how hungry the nematode is.
So in these early brains, there was only good or bad.
There was no measuring the world in the absence
of putting it through the lens of whether it's good or bad.
Right. So how does emotion originate in this phase?
So this is one of the most fun parts of doing this research.
I found this to be one of the most elegant things that I had not seen discussed
in the sort of comparative psychology world of studying other animals.
In humans, there's two neuromodulators we're probably all familiar with, dopamine and serotonin.
And we hear a lot about these two neuromodulators.
They do very complicated things in human brains.
But by and large, we know that dopamine tends to be the seeking, pursuit, want more chemical.
And serotonin is more of the satiation, relaxation, things are okay chemical.
So let's go back to a nematode and see if we can learn anything about the origin of these neuromodulators
by seeing those two chemicals in their brain.
And what you see is something completely beautiful.
Their dopamine neurons directly come out of their head
and detect the presence of food outside of the worm.
So it detects food outside, floods the brain with dopamine
and drives a behavioral state of turning quickly
to find food nearby.
Serotonin lives in its throat
and it detects the consumption of food
and it drives satiation and stopping and sleep.
Digestion.
Digestion.
So this dichotomy between these two neuromodulators,
one for there's something good nearby, quickly get it,
and another for everything is okay, you can relax,
you see in the very first brains.
Whoa, that is crazy.
Wild, right?
Good time to talk about Roomba.
Let's talk about Roomba.
Because Roomba is so impressive.
The first time you see a Roomba,
it's vacuuming someone's home, who's in charge,
how does it know?
And there were really complicated attempts, right,
to create this self-vacuuming device that didn't work.
Give me the history of Roomba a little bit.
So Roomba.
Tell me about Roomba.
So Roomba was founded by this guy, Rodney Brooks,
who actually is a computer science MIT roboticist,
did a lot of writing about the challenges
with trying to implement AI systems
by reverse engineering the brain
for exactly the problems we were talking about,
which is the brain is so complicated.
He gives this great allegory,
I'm gonna botch it a little bit,
but the general idea is suppose a bunch
of aerospace engineers from the year 1900
went through a little portal and woke up in the modern world
and were allowed to sit in an airplane for five minutes,
and then they were sent back.
Would they correctly identify the features
of an airplane that enables flight?
And he argues no, because you would look at the plastics, you would look at the material
on the edge of the plane, and you'd be confused by the actual features that make flight possible.
And so he thinks that's the same problem with peering into the brain as it is now.
We're at risk of thinking certain things are important when they're in fact not.
Right, right, right, right, right.
So he started, interestingly,
I don't think he thought about it this way,
but the serendipity is interesting.
His robot, the Roomba, works not that differently
than the very first brain,
because he decided to simplify the problem dramatically.
And he realized, you don't really need
to do many complicated things
to create a great vacuum cleaner.
All it has to do is turn around randomly,
and when it hits a wall, it turns randomly,
it'll keep going, and it'll just keep doing that.
And then eventually it'll get around the full room.
I mean, there's a slightly more complicated algorithm
it uses, but by and large it's the same thing.
And most interestingly, the more modern Roombas
have something called dirt detect,
where if it detects dirt, it actually changes its behavior
for a minute or so, and it turns around randomly.
Why?
Because the world is clumpy.
If you detect dirt in one area,
it's likely there's dirt nearby.
This is exactly what nematodes do.
They flood the brain with dopamine
and they change their behavior to turn quickly
and search the local area.
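A minimal sketch of that bounce-and-turn strategy (leaving out dirt detect and whatever refinements the real Roomba adds; the room size and step counts are arbitrary):

```python
import math
import random

def roomba_coverage(size=20, steps=4000, seed=1):
    """Toy Roomba in a square room: drive straight, and on hitting a
    wall pick a new random heading. Returns the fraction of floor
    cells vacuumed at least once: no map, no planned route."""
    rng = random.Random(seed)
    x = y = size / 2.0
    heading = rng.uniform(0.0, 2.0 * math.pi)
    visited = set()
    for _ in range(steps):
        nx = x + 0.5 * math.cos(heading)
        ny = y + 0.5 * math.sin(heading)
        if 0.0 <= nx < size and 0.0 <= ny < size:
            x, y = nx, ny
            visited.add((int(x), int(y)))
        else:
            heading = rng.uniform(0.0, 2.0 * math.pi)  # bounced off a wall
    return len(visited) / (size * size)

print(f"covered about {roomba_coverage():.0%} of the floor with no map at all")
```

It goes over plenty of cells more than once, but it eventually gets nearly everything. Dirt detect would just add a temporary burst of random turning on top of this, the same move the nematode makes on a dopamine hit.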
Now I need to watch more docs on Roombas,
but I would imagine the first people trying to crack this
were thinking the thing has to map the room first
and then it has to come up with a very efficient protocol
for going back and forth and not overlapping.
Maybe the goal also was too steep.
The Roomba is clearly going over stuff
it's vacuumed a bunch.
Is that the case?
Yes.
But who gives a fuck?
It'll eventually get it all.
Exactly.
You could also start with maybe too high of a goal.
Yeah, it needs to be so efficient.
Yeah, like the way this must work
is it must know every inch
and then it'll design the most efficient route through here
and that's how it'll work.
But that's not how the organisms were working.
Okay, now breakthrough number two,
reinforcement learning in the first vertebrates.
And you were gonna tell me when those came along.
And maybe tell us about the Cambrian explosion.
Yeah, so in the aftermath of the first brains emerging,
the descendants of this nematode like creature
went down many, many different evolutionary trajectories
and eventually they get caught
in this predator prey feedback loop.
And so this is where evolution really accelerates
because for every new adaptation that a predator gets
to get a little bit better at catching prey
creates more pressure for the prey to get another adaptation to be better at getting away from the predator,
which puts more pressure on the predator, and you get this back and forth feedback loop.
The Cambrian explosion is around 500 to 550 million years ago. We see an explosion of
animals in the ocean. Before that, the Ediacaran period, very sparse fossils from that period.
Afterwards, you see animals everywhere. That era was mostly run by arthropods.
So if we went back in time
and we were in a little submarine around the Cambrian,
our ancestors would be hard to spot.
It was mostly these huge insect and crustacean-like creatures.
And our ancestors were small,
probably four-inch-long, fish-looking creatures,
which were the first vertebrates.
And there's a bunch of brain structures that emerged there.
And maybe we set up Marvin Minsky at this moment too,
because as we were saying,
the book so elegantly parallels the different evolutions.
So really 1951 is the very first time
we hear artificial intelligence.
So what's Marvin Minsky up to?
Marvin Minsky is one of the sort of founding fathers of AI.
In the 50s, there was a lot of excitement around using
computers to do intelligent tasks.
And of course, there was a winter after this, what's
called an AI winter, because a lot of the promises from the
50s didn't come to fruition.
But in relation to vertebrates, one of the interesting things
that Marvin Minsky tried was training something through
reinforcement.
And so this had been discussed in psychology
since the 1900s.
We know that you can train a chicken
to do lots of really complicated things
by giving it a treat when it does what you want.
Like Pavlov's.
Harnessing the reward system.
The reward system.
So Pavlov was about sort of learned reflexes.
Thorndike was the one who saw that you could teach animals
to do really interesting things, not just have a reflex.
Oh, I see.
Okay.
So Thorndike's famous experiments were these puzzle boxes.
You put like a cat in a box and you put a yummy food treat, like some salmon, outside
of it.
And the only way it can get out of the box, it's fully see-through, is it has to do some
behavior.
Let's say it has to pull a lever, it has to poke its nose in something.
Thorndike theorized that these animals would learn through imitation learning.
So once one animal figured it out,
he let the other animal watch it.
And this did not happen.
They never learned through imitation.
That's a primate thing.
Right, but he found something else.
He found oddly that over time,
the speed to get out of the puzzle box
just slowly went down.
And what he realized what was happening
is it was just trial and error.
When they got out, they just became slightly more likely
to do the behavior that happened before that got them out.
They didn't really understand how they were getting out,
but through trial and error,
they just slowly were getting more likely to do these things.
So that had been an idea from the early 1900s.
And we would call that reinforcement learning.
Reinforcement learning.
He never used that word,
but we would call that reinforcement learning now.
And Marvin Minsky had this idea of what happens
if we try to train a computer to learn through reinforcement.
So he had a sort of toy example of this
where he tried to train an AI system to navigate out
of a maze through reinforcement.
And although it did OK, he quickly
ran into problems with this.
And he identified this very astutely
as the problem of temporal credit assignment.
And so although those are a bunch of annoying words,
it's actually a very simple concept.
If we played a game of chess together
and at the end of the game you won,
how do you know which moves to give credit
for winning the game?
You make a bunch of bad ones, you make a bunch of good ones,
and then the net result is you win,
but how the fuck would a computer know which of those?
Was the key.
Exactly. Yes.
It's so complicated and so long and so multifaceted,
you can't point to anything.
Exactly.
Well, yeah, because it wouldn't just be one, right?
It wouldn't just be one.
And now is where is a fun time, I think,
to introduce just how many possibilities
there are in checkers.
So in the game of checkers,
there are 500 quintillion possible games of checkers.
That's not even a word.
That's not a real word.
Which is nothing compared to how many possible games
of chess there are that's 10 to the power of 120,
which is more than there are atoms in the universe.
So the computer would have to have a data set much bigger
than the total number of atoms in the universe
to have played every single scenario and know.
Yeah, so it's not possible to what you would be describing as like brute force search every
possible game to decide the best move. And it doesn't work to simply reinforce the recent
behaviors right before you win, because the move that won you the game might have been very early.
And so this was a problem in the field of AI
for a very long time until this guy, Richard Sutton,
I've met him now, he's an amazing guy,
invented a solution to this in AI,
which turns out is also how vertebrate brains
solve the problem.
And that's what's so cool.
This is the actor and the critic.
Yes.
This one to me is one of the concepts
I struggled with a little bit more.
So lay out the actor and the critic. Richard Sutton's idea comes from a little bit of intuition. So let's imagine
we're playing a game of chess. You could imagine partway through the game, I make a move that
makes you realize all of a sudden, holy crap, I'm in a way worse position right now. The
points didn't change. Yeah. But all of a sudden you go, holy crap, I made a mistake. And he
asked what just happened there. In that moment when you have an intuition
that the state of the board just got worse for you,
maybe that's something that the brain
is using to teach itself.
So his idea of an actor and critic is the following.
Maybe the way reinforcement learning happens in the brain
is there's actually two systems.
There's a critic in your brain
when you're playing a game of chess
that's constantly, at every state,
predicting your likelihood of winning.
And then there's an actor that given a state of the world
predicts the next best move.
And the way that learning happens
is not when you win the game,
it's when the critic thinks your position just got better.
So when the critic says,
oh, I think your probability of winning
just went up from 65% to 80%,
that's what reinforces the actor.
Interesting.
And what's so weird about this is the logic
is kind of circular because if you start training
a system from scratch, a critic's probability
of winning depends on what the actor is gonna do.
And so the critic should be wrong a lot.
And yet there's this magical bootstrapping that happens
when you train a system like this that the critic
gets better over time, the actor gets better over time.
They make each other better.
Exactly. Exactly.
Wow.
It's complicated though, right?
Or you got that really easy.
That is so easy.
Yeah, that was easy peasy for you.
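For the curious, here is a tiny tabular sketch of the actor-critic idea, on a made-up five-state corridor standing in for a game (the corridor, the learning rates, everything here is a toy stand-in, not Sutton's actual setup): the critic learns a value for each state, and the actor is reinforced whenever the critic thinks the position just improved, not only when the game is finally won.

```python
import math
import random

def train_actor_critic(n_states=5, episodes=500, seed=0,
                       alpha=0.1, beta=0.5, gamma=0.9):
    """Corridor game: start at state 0, reward 1 for reaching the end."""
    rng = random.Random(seed)
    V = [0.0] * n_states                       # critic: estimated value of each state
    h = [[0.0, 0.0] for _ in range(n_states)]  # actor: preferences for (left, right)

    def policy(s):
        exps = [math.exp(p) for p in h[s]]
        total = sum(exps)
        return [e / total for e in exps]       # softmax over the two moves

    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            a = 0 if rng.random() < policy(s)[0] else 1
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            v_next = 0.0 if s_next == n_states - 1 else V[s_next]
            delta = r + gamma * v_next - V[s]  # "did my position just get better?"
            V[s] += alpha * delta              # critic learns from its own surprise
            h[s][a] += beta * delta            # actor reinforced by the same signal
            s = s_next
    return V, policy

V, policy = train_actor_critic()
print("critic's value of the start state:", round(V[0], 2))
print("actor's P(step toward the goal):", round(policy(0)[1], 2))
```

Both tables start out wrong, and exactly as described, they bootstrap each other: the critic's guesses get better as the actor improves, and the actor improves because the critic's guesses become worth trusting.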
But you're saying that that's equivalent
to the predator-prey evolution?
Well, we're gonna find out now how vertebrates
basically created this system.
What happened in their brains?
So there was a whole history in the 90s
where there was a big mystery about
what dopamine does in the brain.
Because we used to think that dopamine
was the pleasure signal.
And there was a lot of really great evidence
by this guy, Kent Berridge, who discovered
that dopamine actually is not the pleasure chemical.
And there's two reasons why we know this is the case.
So he came up with this experimental paradigm
to measure pleasure in a rat.
And you can do this in babies too.
A rat, when it's satisfied, will like smack its lips,
and when it's not satisfied, it like gapes its mouth.
So it's like a facial expression.
He asked the question, okay,
when you give a rat's brain a bunch of dopamine,
it eats a lot.
We can see it consume dramatically more food,
but is it enjoying it more?
And what he found is actually, if anything,
it makes less pleasurable lip smacks. It's just consuming because it can't resist. And so dopamine doesn't
actually cause pleasure. It just creates cravings.
This is where Buddhists enter the chat.
Exactly. So dopamine is clearly not the pleasure signal. When we record dopamine
neurons, we also see that it doesn't get activated when you actually get a
reward. It gets activated when some cue occurs that tells you you're soon going
to get a reward.
And so what's interesting is in a game of checkers,
that's exactly the same thing as the signal
when you make a move and all of a sudden you realize,
holy crap, my position just improved.
That's when you get a dopamine burst
because your likelihood of winning goes up.
And that's the learning signal that drives you
to be more likely to make that move in the future.
So there's a part of the brain called the basal ganglia.
You love this.
This is your favorite.
I love the basal ganglia.
You're horny for the basal ganglia.
It's a great brain structure.
And the basal ganglia in a fish is pretty much identical
to the basal ganglia in a human, which is kind of crazy.
I mean, there's some minor differences,
but the broad macrostructure is the same.
And there's a lot of good evidence
that the basal ganglia implements
an actor critic like system.
And so if you go into the brain of fish,
you see these same signals where when this cue comes up
that makes it look like the world's about to get better,
the regions around dopamine neurons get excited
the same way that happens in mammals.
And so there's good evidence that vertebrates,
in order to learn through trial and error,
which we know fish can do,
implemented something akin
to Richard Sutton's actor critic system.
And this enabled it to learn arbitrary behaviors.
This is why you can train fish to jump through hoops
with treats.
This is why fish will remember how to escape
from a maze a year later.
They can learn through trial and error.
I didn't know fish were doing that.
People don't appreciate fish enough, man.
Not at all.
You don't see them at SeaWorld doing any cool tricks.
Yeah, yeah.
That's why.
Good for them.
Not a sponsor.
They dodged a bullet.
Fish are the sponsor.
If you go to YouTube and you look up fun fish tricks,
you can find, I'm sure you'll do that on your spare time.
Oh yeah.
I'm pretty busy watching crow tricks.
I see.
I love crow tricks.
Crow tricks, yeah.
They can do like an eight step problem solving.
Yeah, they're incredible.
Okay, so within this new development in the brain,
we also get relief, disappointment,
timing, and pattern recognition.
This all happens in this section of evolution,
the reinforced learning section.
So how does relief and disappointment enter the equation?
So when you have a running expectation
of some future good things,
so if you're playing a game of chess,
and you have now a system
that's predicting your likelihood of winning,
then when you lose,
when you have an expectation that doesn't come true,
this is the emergence of disappointment.
Relief is just the inverse.
When you expect something really bad to happen
and it doesn't happen, that's relieving.
So when you have an expectation of a future reward,
that's what begets the emotions
of relief and disappointment.
We discovered quickly that AI systems
can't really be taught sequentially.
Like what they first started doing
when they were just teaching the computer to add,
they started with ones,
and they taught it how to add by ones.
And then once it mastered that
and they taught it how to add by twos,
it would have forgotten how it added by ones.
Which is weird to me, I'm not fully sure why it would just ditch what it just learned,
but that was a big issue in teaching these computers.
And still is. So what's called the continual learning problem is still a big area of outstanding
research in AI. The way we train these neural networks is we have like a web of neurons,
we give it a bunch of input, it has a bunch of output. We show it a data sample. So let's
say we have a picture of a cat.
And then at the end, we say, this is a cat.
We look at how accurate the output was,
and we nudge the weights to go in the direction
of the output we want.
The problem is when we're updating the weights,
we haven't figured out how the brain knows
not to overwrite the weights of previous memories.
So the way we do this in AI systems
is we give it a bunch of data at once.
There's a bunch of techniques for doing this,
but then when we send it out into the world,
we don't let it continuously update weights because it degrades over time.
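The overwriting problem is easy to demonstrate with a toy model: a single-weight "network" and two made-up tasks (real continual-learning research is of course far subtler than this sketch):

```python
def sgd_fit(w, task, lr=0.1, epochs=100):
    """Fit y = w * x by gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in task:
            w -= lr * (w * x - y) * x   # each nudge overwrites the same weight
    return w

def task_error(w, task):
    return sum((w * x - y) ** 2 for x, y in task) / len(task)

task_a = [(1.0, 2.0), (2.0, 4.0)]     # task A: y = 2x
task_b = [(1.0, -2.0), (2.0, -4.0)]   # task B: y = -2x

w = sgd_fit(0.0, task_a)
err_a_before = task_error(w, task_a)  # essentially zero: task A is learned
w = sgd_fit(w, task_b)                # now train only on task B...
err_a_after = task_error(w, task_a)   # ...and task A has been wiped out
print(f"task A error: {err_a_before:.2e} before, {err_a_after:.1f} after learning task B")
```

The same weight has to serve both tasks, so the gradient updates for task B steamroll whatever task A stored. That's the reason these networks get trained on all the data at once instead of sequentially.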
So this is a fascinating part of AI and I learned it from this book and I think it's
really interesting if you're familiar with GPT-1, 2, and 3.
When those things are released, they're at peak efficiency.
They're never going to change.
They were educated with all the data at once so it couldn't overwrite things and get rid of things.
And then it can't take on anything new.
So it itself can't evolve.
It has to stay put or it'll start ditching
everything it learned.
I don't wanna overstate the claim.
The AI research community has lots of techniques
to try and overcome continual learning.
But by and large, they're not nearly as good
as the human brain does it.
And the best proof of principle of this
is we don't let ChatGPT learn from consumers talking to it.
So when you talk to it, it's not updating its weights.
It only updates its weights when researchers at OpenAI
retrain it and meticulously make sure that it didn't degrade
on the key features they want it to perform well at,
and then they send a new version out into the world.
But the AI systems we want are like humans,
that we're constantly learning as we're interacting with each other.
We're constantly updating our weights in our brain
and we don't have AI systems that do that yet.
Well, the relief and disappointment is kind of interesting
because it was reminding me of when I won state.
Monica's the state champion cheerleader.
Yeah.
And not like go bangles like flying in the air.
Competition, damn.
So the second, we went twice.
Sorry, two times.
Oh wow, wow.
Yeah, but the second time we won by one point
and it was very close.
And the expectation was that we were to win
cause we had won last time.
And the feeling of winning was not happiness.
It was just pure relief.
And I always thought that was weird for all of us.
We were just so relieved.
Where does happiness fit into any of this?
It doesn't really.
Well, happiness is a big mystery.
I think it's also a problem of language.
Cause I think when we use the word happiness,
we are referencing many different concepts,
but there's a loose connection to reward.
And so we definitely know that if you have a high expectation
of a reward and you still are given a reward,
but it's lower than your expectation,
you get a decrease in dopamine.
And so we know that there's a running expectation
of rewards that affects how much dopamine you get
when you receive something.
How did curiosity evolve?
That was the last one.
Curiosity is so interesting
because when we were building AI systems to play games
and DeepMind, which is now a subsidiary of Google,
did some of the best research.
We interviewed Mustafa.
Oh, great.
There's certain games that they still,
even with all of the smart actor critic stuff,
couldn't figure out how to play.
And most famously, Montezuma's Revenge,
which was like an old game.
Well, really quick, it had conquered all the Atari games.
Yes.
DeepMind.
It could win every game except for this one game.
They couldn't figure it out.
The problem with this game is there is no reward
for a long time.
The first reward is like five rooms over
when you escape from this first level.
And so the system just never could find it.
And they realized it was because
it never had a desire to explore.
So there's classically this thing called
the exploration-exploitation dilemma in AI.
Exists in animals too.
Once you know something works, you don't know for sure
that it's the best way to solve the problem.
So when do you actually explore new approaches
to figure something out?
Even though you've got a reward,
there might be a better way.
There might be more reward.
So imagine that you enter a maze,
you go and you turn right and you get a cool whatever.
When you go in the maze next time,
are you gonna do that same move
or are you gonna try somewhere else?
Original AI systems solved this problem by just saying 5% of the time you're gonna do something random.
But that's an issue, because if the solution is far away,
doing one move locally in this area isn't gonna get you to the full new room.
So what they came up with was effectively imbuing curiosity into the system. What they do is they have another
perception module that's trying to predict what the next state of the screen's gonna be.
And whenever something surprising happens,
that is a reward signal.
So the system actually tries to surprise itself.
Once it's explored a room, it gets bored,
and it likes finding the way out of the room,
and it gets excited to see a new room.
Armed with curiosity, it beat level one.
And so we see curiosity also emerge in vertebrates.
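Here's a sketch of why a novelty bonus beats occasional random moves. A simple count-based "go where you've been least" rule stands in for the prediction-error reward described above, and the corridor is a made-up stand-in for those reward-less rooms:

```python
import random

CORRIDOR = 40   # the first reward would sit at the far end, many "rooms" away

def curious_agent(steps=300):
    """Treats novelty as its own reward: always steps toward the
    less-visited neighbor, so it sweeps the whole corridor."""
    visits = [0] * CORRIDOR
    s = 0
    visits[s] += 1
    seen = {s}
    for _ in range(steps):
        left = max(0, s - 1)
        right = min(CORRIDOR - 1, s + 1)
        s = right if visits[right] <= visits[left] else left
        visits[s] += 1
        seen.add(s)
    return len(seen)

def random_agent(steps=300, seed=0):
    """Baseline: with no reward signal anywhere nearby, occasional
    random moves just produce a local random walk."""
    rng = random.Random(seed)
    s = 0
    seen = {s}
    for _ in range(steps):
        s = max(0, min(CORRIDOR - 1, s + rng.choice((-1, 1))))
        seen.add(s)
    return len(seen)

print("curiosity explored", curious_agent(), "of", CORRIDOR, "states")
print("random walk explored", random_agent(), "of", CORRIDOR, "states")
```

The random explorer mills around near the start, which is essentially why epsilon-style randomness never found Montezuma's first reward, while the novelty-seeker marches straight into unexplored territory.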
What a task to diagnose, oh, it's lacking curiosity.
And then trying to think of what is the solution
in code for curiosity.
Incredible.
Yeah, it's really mind boggling.
Okay, breakthrough number three is
simulating in the first mammals.
Now we're getting into the zone
that endlessly fascinates me.
So what happens 150 million years ago?
Oh my God, this whole thing has been so insane.
Well, it's going to peak at primates, as you would imagine.
Chimp crazy.
Yeah, theory of mind.
That's so crazy that we can imagine what someone else's motivation is
and what they're trying to do, but we're a couple steps away from that.
But simulating is the first step en route to that.
So what is it that mammals do uniquely?
When people study the neocortex,
people usually think about the neocortex
as doing the things that it seems to do in human brains.
So movements, perception,
without a neocortex, humans can't see, hear, et cetera.
But what's odd about the argument
that the neocortex does those things
is if you go into a fish brain and a lizard brain,
they don't have a neocortex.
And yet they are pretty much just as good
at perception as we are.
I mean, you can train an archer fish,
they're called archer fish because they spit water
out of the water to catch insects.
And you can train an archer fish to see two pictures
of human faces and spit on one of the faces to get a reward.
No.
What?
It will recognize the human face
and you can rotate the face in 3D and it will still recognize
the face.
No.
Oh my Lord.
So fish are just as good at perception.
So then why did the neocortex emerge in these early mammals?
What was the point?
So this is one of the big things that motivated me to go really deep on what's called
the comparative psychology literature, to see what really are the differences between mammals
and other vertebrates.
I need to make one caveat here.
Evolution independently converges all the time.
So just because fish can't do something that mammals do,
it doesn't mean all vertebrates can't do it.
A great example is your love of crows.
Birds seem to have independently evolved
a lot of the things that mammals did.
That's just an important caveat.
There's at least three things
that seem to have uniquely emerged in mammals.
And they seem different at first,
but then through this sort of framework, it became
clear to me that they're actually different applications
of the same thing.
One is something called vicarious trial and error.
So this guy, Edward Tolman, in the 30s and 40s, noticed that rats, when put in mazes,
would stop at choice points.
So a point where there was a fork in the road, and it would look back and forth and then
move in one direction.
And he theorized, I wonder if what the rat is doing is imagining possible paths before deciding.
And he was largely ridiculed for this idea because there's no evidence that the rat's
actually doing that. And it wasn't until this guy, David Redish, and his lab, I think it was early
2000s, went into the brain of a rat and confirmed that they are doing exactly this. And the
experiment's very cool. So in a part of the brain called the hippocampus,
there are something called place cells.
So you can see that as a rat is moving around a maze,
there are specific cells that activate
whenever they are at that location.
So neuron A activates only when they're at location A,
neuron B, location B.
By recording them, you can see a map of the space.
Usually these place cells are only active
when the animal's actually in the location that they are.
But when they reach these choice points,
the place cells no longer represent the current location.
You can see them playing out possible paths.
Oh, wow.
You'll see the neuron fire for location C or D.
Not only the location C or D, the path to location C or D.
Whoa.
So you can watch a rat imagine possible futures.
Whoa, whoa, whoa.
Oh my God.
And we like to think,
this is a weird paradox where we're so high on ourselves.
We're really reluctant to acknowledge
that other animals are doing this.
Except for rats.
The modeling, which is what it is.
You really think about the ability to model a scenario
in your head and predict the future
and create the whole thing and see the whole thing.
What a step forward.
Mind-blowing.
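In computational terms, place cells act like a lookup table from active neuron to location, which is how researchers can decode both the rat's real position and these imagined sweeps. Here is a minimal, purely illustrative sketch (the cell names and the four-spot maze are invented, not taken from the actual recordings):

```python
# Illustrative sketch of place-cell decoding (invented cells and maze,
# not the real recordings): each place cell fires for one location,
# so reading off which cells were active recovers a path through space.

PLACE_CELLS = {"A": "cell_1", "B": "cell_2", "C": "cell_3", "D": "cell_4"}
CELL_TO_PLACE = {cell: loc for loc, cell in PLACE_CELLS.items()}

def decode_path(active_cells):
    """Turn a recorded sequence of active place cells into locations."""
    return [CELL_TO_PLACE[cell] for cell in active_cells]

# While the rat moves, activity tracks its actual position...
print(decode_path(["cell_1", "cell_2"]))  # ['A', 'B']

# ...but at a choice point the same cells sweep ahead, playing out
# a possible path the rat has not physically taken yet.
print(decode_path(["cell_2", "cell_3", "cell_4"]))  # ['B', 'C', 'D']
```

The decoding is identical in both cases; what differs is whether the decoded sequence matches where the animal actually is, which is how the experimenters could tell imagination from movement.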
And you can imagine, I mean, the ecosystem
that this ancestor of ours grew up in
was one ruled by dinosaurs.
Our ancestors were very small,
four inch long squirrel-like creatures
that hid underground and in trees
in a world that was ruled by huge predatory dinosaurs.
And so it's speculative, but it's likely
they use this ability
to come out of the burrow at night and try to figure out,
how do I get to this food and back without getting eaten?
Because they have the gift of this sort of first move.
I can see where the predators are and decide and plan,
will I make it back and survive?
And that's sort of the gift of mammals.
And we can see birds probably have this too,
that evolved independently, by the way.
You even pointed out, we would intuitively know that there's something different
because if you watch reptiles move around,
they're kind of clumsy,
they're still kind of moving in the like,
oh, turn this way, turn this way.
Whereas the mammals are moving much quicker,
they can predict where they should land on a branch,
they can predict which part of their body should be used.
So fine motor skills emerge.
There's two other abilities that I think are good to go through.
So David Redish did a follow-up study.
He wanted to ask,
okay, if I can imagine possible futures,
might I also be able to imagine possible pasts?
And so he came up with this experiment
that he called Restaurant Row.
So you put a little rat in this sort of circle,
and there's four different choice points.
And every time it reaches one of these doors,
a sound goes on.
And the sound signals whether if it goes to the right,
it will get food in one second,
or if it has to wait 45 seconds.
Ooh, that's a big delta.
That's a big, big choice.
For a rat it is.
That's life.
And it learns which treat is at each location.
So one is a cherry, one's a banana, et cetera.
And once it passes the threshold, it can't go back.
So it's a series of irreversible choices.
So here's the experiment.
We know that rats, he verified this,
rats prefer certain treats over others.
So what happens when it's next to a cherry
and it gets the sound for one second,
and the next one is a banana,
which it likes way more than cherry,
it has to make a gamble.
I can go to the next one and hope for the banana,
but if it gets 45 seconds, I'm gonna regret it.
I should have just had the cherry.
Bird in the hand.
Exactly.
So what happens when the rat makes the choice
and 45 seconds goes on,
he goes into the brain of the rat
and he sees two fascinating things.
One, you can go into the part of the brain
called the orbital frontal cortex,
and you can see the rat imagining itself eating the cherry.
Oh my Lord.
It regrets the choice and is imagining the alternative one,
and you can see it look back.
Oh my God.
And the next time around,
it becomes much more likely to say,
screw it, I'm just going for the cherry.
So you can see rats imagining alternative past choices,
which is really useful
for what's called counterfactual learning,
which is imagining if I had done something different,
the outcome would have been better.
We do a lot of that.
That's an ability that evidence suggests
emerged in mammals.
Counterfactual learning.
Yeah, give some more human examples of counterfactual,
how we do this all the time.
We play a game and at the end we lose and I go,
man, we would have won had I done this one single thing.
That's actually incredible feat
because reinforcement learning alone
doesn't solve that problem.
All you know is you lost at the end,
but we're capable of imagining
what move would have won us the game.
And that's a really powerful ability.
You're right.
You're doing both things at the same time.
You're playing through the whole event from your history and then you're modeling future
or different options and then seeing how that would have played out.
You're juggling a bunch of different timelines.
Right.
One really cool study of this too is you can play rock paper scissors with a chimpanzee.
You can train them to play rock paper scissors.
And they've found that, let's say a chimpanzee loses
when it plays scissors and you play rock.
What is it more likely to do next?
Eat you.
Probably eat your face.
Maybe.
Play rock.
If you played rock and it played scissors,
the next move is likely to be paper.
Because it's imagining what would have won the prior move.
If it was just reinforcement learning
where it wasn't being able to model these things,
it would maybe be less likely to play scissors,
but it wouldn't be more likely to play paper.
But because it's imagining what would have won it does.
Well, I'd even argue if it was strictly reinforcement,
it would just play rock because rock won, rock's a winner.
Sorry, I was imagining the chimpanzee lost.
It played scissors and you played rock.
Right, so then the chimp next time,
if it was strictly reinforcement learning,
would probably play rock, no?
If it's capable of imagining itself in your shoes.
But all it did was it took the action of scissors and lost.
So a chimpanzee probably could do that, you're right,
but standard reinforcement learning is I take an action,
I get an outcome and then that's it.
And I just don't play scissors again.
Right, right.
That's all I can really think.
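The distinction being drawn here can be made concrete in a few lines. This is a hedged sketch, not anyone's published model: a model-free learner only re-values the move it actually played, while a counterfactual learner also credits the move that would have won.

```python
# Rock-paper-scissors: what beats what.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def model_free_update(values, my_move, reward, lr=0.5):
    # Standard model-free RL: adjust only the action actually taken.
    values[my_move] += lr * (reward - values[my_move])

def counterfactual_update(values, my_move, opp_move, reward, lr=0.5):
    # Do the model-free update, then also imagine the alternative past:
    # which move would have beaten what the opponent played?
    model_free_update(values, my_move, reward, lr)
    would_have_won = next(m for m, loser in BEATS.items() if loser == opp_move)
    values[would_have_won] += lr * (1.0 - values[would_have_won])

values = {"rock": 0.0, "paper": 0.0, "scissors": 0.0}
# The chimp plays scissors, the experimenter plays rock, the chimp loses.
counterfactual_update(values, my_move="scissors", opp_move="rock", reward=-1.0)
print(values)  # scissors devalued, paper (which beats rock) boosted
```

A purely model-free agent would end with paper and rock still tied at zero; only the counterfactual step makes paper the most attractive next move, which is the pattern the chimpanzees showed.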
Now, Henry Molaison is an interesting character.
He's kind of like Phineas Gage.
He's the Phineas Gage of the hippocampus.
Phineas Gage, of course, lost his frontal lobe
and we learned a lot from that.
Yeah, he had a pole.
So what happened to Henry?
Famously, patient HM.
He had to get all of his hippocampus removed
because he had epilepsy.
Ding ding ding, epilepsy.
Yeah, yeah, yeah, he had very bad epilepsy.
Monica has very mild epilepsy.
Excuse me.
Well, you don't have to have your hippocampus removed.
I don't, actually you're right, you're right.
So I guess it's very mild.
Call me when you've had your hippocampus removed.
Yeah, yeah.
So patient HM lost the ability to create new memories
when he woke up from the surgery.
Everything else is the same, Monica.
His personality hasn't altered, he's competent,
but he cannot make a memory going forward from that moment.
Which again would really lead you to think
that memory's in the hippocampus,
but that would be incomplete.
Well, the fascinating thing about,
this is one reason why if you take
like an intro psychology course,
they try to divide memory into different buckets.
This is one of the canonical studies
that differentiated what's called procedural memory
from episodic or semantic memory,
because he could learn new things,
but it wouldn't be an episodic memory of an event.
He could learn to play piano,
and he would be like, I don't know how to play piano,
but then you put him in front of piano and he's playing.
And he doesn't know how he knows it.
Memory broadly lives in more places,
but a certain aspect of memory was lost.
We see this with Alzheimer's a lot.
Yes.
Also, something that comes out of this new ability in mammals to model is goals versus habits.
This one really hit me this morning because we can all relate to this so much.
Let's talk about the mice and what they learned in goals versus habits.
The famous study around this is what's called
a devaluation study.
So the way it works is you train a mouse or a rat usually
to push a lever a few times and then get a treat.
Now what happens if separately from this whole experiment,
you give it the same treat,
but you lace it with something that makes it sick.
You verify that it doesn't like the treat anymore
because the next time you give it the treat,
it eats way less.
When you bring it back and show it the lever,
does it push the lever?
And what they found is,
if it's only experienced it a few times,
the rat will remember what the lever leads to
is the treat, I don't want it.
It simulates the goal and says,
I don't want to push the lever.
But if it had pushed the lever 500 times before,
it will push the lever, it'll ignore the food,
but it can't stop pushing the lever,
because it's built this habit
that I see lever and I push it.
Whoa.
Yeah, a hundred times or less,
it'll stop pushing the lever,
but if it's gotten to 500, it'll just keep doing it.
And I was just thinking of humans and like smoking,
you know, it's just something you've done
a hundred thousand times by the time
you're five years into it.
And really nothing's gonna combat that.
There's good evidence that when behaviors become automated,
they actually change the location in your brain.
So they shift to different parts of the brain
that are more automated.
And this is obviously useful for us.
It's the reason why we can walk down the street
and not think about walking.
Or drive a car.
Or drive a car, it's all automated.
Daniel Kahneman famously talked about system one,
system two, habits are also the cause of lots of human
bad decision-making because we are devoid of a goal when we're acting habitually.
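The devaluation result can be boiled down to a single branch. This is a toy sketch for illustration (the 500-press figure comes from the discussion above; the function and its names are invented):

```python
# Toy sketch of the devaluation studies: a lightly trained animal
# simulates the outcome of the lever (goal-directed), while a heavily
# overtrained one runs a cached stimulus-response habit.

def presses_lever(times_trained, treat_is_liked, habit_threshold=500):
    if times_trained >= habit_threshold:
        # Habitual: see lever -> press. The outcome's value is never consulted.
        return True
    # Goal-directed: simulate "lever -> treat" and ask if that's still wanted.
    return treat_is_liked

# After the treat has been paired with sickness (devalued):
print(presses_lever(times_trained=100, treat_is_liked=False))  # False
print(presses_lever(times_trained=500, treat_is_liked=False))  # True
```

The point of the branch is that once behavior crosses into habit, the simulated goal drops out of the decision entirely, which is why the overtrained rat keeps pressing for food it no longer wants.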
Stay tuned for more Armchair Expert, if you dare.
Okay, so somehow AlphaZero and Go are involved in this in some way.
And I'll just add, we laid out the number of possible moves in a chess match being 10
to the 120.
Go is even more complicated.
So this game Go, so AlphaZero was the first AI, and that was DeepMind?
DeepMind as well, yep.
They beat Go, but it can't do this in the real world.
It can do it in the framework
of there's limited moves to be made,
but once it's in a world with a mushy background
and lots of variables, like it can't handle that.
How are we handling that and they aren't?
There's classically two types of reinforcement learning.
There's what's called model free,
which is what we talked about in early vertebrates.
It means I get a stimulus and I just choose an action
in the response to it.
Model based is what we've been saying comes in mammals.
That's when you pause to imagine possible futures
before making a choice.
And model based has always been a harder thing in AI
because you need to build a model of the world
and you need to teach a system to imagine possible futures.
And as we said, you literally can't imagine
every possible future in a game of even checkers,
let alone chess or go.
So the brilliance of what they did in AlphaZero
is it did imagine possible futures, but not all of them.
What they did is in simple terms,
they repurposed the actor-critic system.
And instead of just predicting the next best move,
they said, you know what?
Why don't we play out a full game,
assuming you made this next move,
and then let's see what happens.
Then let's go back and let's choose your second favorite
move and play a full game.
And let's choose your third.
And let's just do a few of those and do that maybe
a few thousand times in a few seconds.
And so that was a clever balancing act
because it wasn't choosing a random move.
It was choosing a move that the actor was already predicting,
but it was checking to make sure
that it was in fact a good move.
But to your original question,
still the game of Go has certain features
that are way simpler than the real world.
On any given board,
there's only a finite number of next moves you can make.
Even though the total spectrum of games is astronomical,
from a given position, there's only really a handful,
maybe a hundred or so, I forget the exact number.
In the real world right now, the variables of where I put
my hand and how I speak is infinite.
These are continuous variables.
And so that's a much harder problem of how do you train
an AI system to make a choice when there's literally
an infinite number of possible next things to do.
Yeah.
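The rollout idea described above can be sketched in miniature. To be clear, this is a drastic simplification of AlphaZero's actual search (which uses Monte Carlo Tree Search with a learned value network), and the move names, priors, and win probabilities are invented:

```python
import random

# Toy sketch of policy-guided rollouts: instead of searching every
# possible future, roll out full games only for the moves the "actor"
# already favors, then pick the one that wins most often.

def policy_top_moves(state, k=3):
    # Stand-in for the actor network: its k favorite legal moves.
    return sorted(state["legal_moves"], key=lambda m: -state["prior"][m])[:k]

def rollout(state, move, rng):
    # Stand-in for playing one full game after `move`: win=1, loss=0.
    return 1 if rng.random() < state["true_win_prob"][move] else 0

def choose_move(state, n_rollouts=2000, seed=0):
    rng = random.Random(seed)
    scores = {}
    for move in policy_top_moves(state):
        results = [rollout(state, move, rng) for _ in range(n_rollouts)]
        scores[move] = sum(results) / n_rollouts
    return max(scores, key=scores.get)  # best-performing favorite move

# Toy position: the actor likes a, b, c, but b actually wins most often.
state = {
    "legal_moves": ["a", "b", "c", "d"],
    "prior": {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05},
    "true_win_prob": {"a": 0.4, "b": 0.7, "c": 0.5, "d": 0.9},
}
print(choose_move(state))  # 'b': the rollouts promote it past the higher-prior 'a'
```

Note the tradeoff the rollouts buy: move "d" actually wins most often, but the search never checks it because the actor didn't nominate it. That's the cost of not imagining every possible future.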
Okay, now we're getting to primates, my favorite.
And this is mentalizing.
So what happens in the brain and what is the outcome?
One thing that I found fascinating going into the brain
of non-human primates is how incredibly similar they are
to other mammals.
I mean, there's not much different in the brain
of a chimpanzee from a rat, other than just being bigger.
But there are two regions
that are pretty substantially different.
One is this part of the front of the brain called the granular prefrontal cortex.
And there's another part in the back of the brain with complicated names.
And so there's a big mystery of what these brain regions did.
Phineas Gage is an example of this odd thing where we've known from the 40s
that people with damage to the older mammalian parts of the brain have huge impact.
I mean, if you damage the old part,
the original mammalian part of the frontal cortex,
you become mute.
The older mammalian part of visual cortex, you don't see.
But if you damage these primate areas,
people seem oddly fine.
In fact, some people's IQs don't even degrade that much.
But the more you inquire,
you realize there's all these oddities in their behavior.
They make social faux pas.
If you start testing them on their ability
to understand other people's intents,
you realize all of a sudden they struggle
to understand what's going on in other people's minds.
Sometimes they struggle to identify themselves in a mirror.
One of my favorite studies to differentiate
what were these new primate areas doing
is they looked at three groups of people,
people with no brain issues,
people with hippocampal damage,
and people with damage to the primate area
of prefrontal cortex.
And they asked them to do something very simple.
They just asked them to sort of write out a scene
in the future.
And what they found is people with hippocampal damage
would describe a scene with themselves.
So they could articulate themselves in that scene,
but the scene was devoid of details
about the rest of the world.
The people with prefrontal damage was the exact opposite.
They could describe the features of the external world,
but they couldn't project themselves
into their own imaginations.
They were absent from it.
Yeah, like Phineas couldn't imagine what would happen
when he left whatever room he was in.
But he could imagine the room.
You could say what's a kitchen look like,
or picture the kitchen,
but he couldn't see himself traveling forward in time,
as I remember.
When we now look at primates, we say, cool.
So what would that suggest?
That would suggest some sort of theory of mind,
sense of self emerging in primates.
And there are some really cool studies
that you can show that chimpanzees and monkeys
are very good at inferring the intent in other people,
much more so than it seems like most other mammals.
There are, again, exceptions, dolphins, dogs.
There are some animals that seem
to have independently gotten this.
So one experiment I like about this
is you teach a chimpanzee when it's shown two boxes,
the one with the little marker on it
is the one that has food.
So then you have an experimenter come in
and they bend over and they mark one,
they stand up and they accidentally drop the marker
on the other one.
So the cue is identical.
A monkey always knows to go to the one
that they intentionally marked
because it can tell what you intended to do.
Oh, because they saw the accidental mark.
They saw one was a mistake.
Right.
And one was intentional. Wow.
And then they also have this wild thing
within that same experiment, right?
Where there's two people administering
and one experimenter is incapable of giving them the treat
because he simply can't see it or she can't see it.
Their ineptitude is preventing them
from giving the chimp what they want.
And then the next experimenter clearly knows
where the thing is that they want
and is choosing to not give it to them.
So the results are saying they don't get the thing.
But when given the option of who they wanna be
in the room with, they'll always pick the person
that was just oblivious to where it was,
because there's some chance they will discover it.
How fucking complicated is that?
That's when people go like, not to get political,
but it's like, intentions don't matter, outcomes matter.
Well, no, they really matter.
They're really what's driving almost all of our good faith
and goodwill towards each other.
It's like, we are acutely aware of intention.
To think we're gonna ignore intention is bonkers.
Yep.
Also let's talk about the one I really liked from it.
And we can talk about how this evolved
and what probably incentivized it
is this political arms race
where you're living in a multi-member group.
The hierarchy is kind of flexible.
There's ways to outwit other people.
Knowing their intentions is gonna be hugely rewarded.
And so Bella and Rock.
I loved this story too when I stumbled upon it.
So this researcher, I think his name was Emil Menzel.
Just so you know, on this podcast,
I've told this story a few times.
When I was reading the book,
I was so excited about this example and a fact check.
I was like, Monica, this fucking chimp figured out that,
you know. Sorry, I'm still kind of stuck on the other thing.
Do you think then people are more likely to choose
a partner or a friend who they think is stupid over mean?
Well, sure, if you've divined their intention
to be harmful to you,
yeah, you would definitely pick a dumb dumb
that might accidentally hurt you.
I guess so, but there's this whole thing on TikTok
Liz taught me about.
I love you always have to qualify why you're on TikTok.
Because I don't know it well enough.
Cause she just relays it.
But there's a grid of partnership and it's like
stupid and cute, mean and cute.
Like you pick and I mean, I picked.
Hot and mean.
And I picked smart and hot first, obviously.
Sure done.
But I did pick hot and mean over-
You don't like Hufflepuffs.
This is all in keeping.
So to be fair with the monkey experiment,
it wasn't that one of them was just so dumb
they couldn't help out.
It was that it was hidden from their view, for example.
So they just were unaware of the treat
or it was in a box they couldn't get into.
So it wasn't like the monkey just liked the dumb human.
That they were inept.
It was that you maybe would have the intention to help me,
you just weren't capable of it,
but the other one could have and I knew they didn't want to.
That's the difference.
Interesting. Not that it, you know.
Not that it changes my grid, but yeah.
Also that cost benefit analysis is quite simple.
One could result in a treat and one will definitely not.
Exactly.
So you're way better off rolling the dice. Even if they had little confidence
that the dumb dumb was going to figure it out, it was possible.
Okay, so Emil Menzel.
Yes. He was doing studies in the vein of Edward Tolman I was telling you about,
put rats in mazes, and he wanted to experiment with the degree to which
chimpanzees could remember locations in space. That was all he was trying to study. And
so he had this little one acre patch of forest
that he had a few chimpanzees living in.
And all he did is he would hide a little treat
under a tree or in a specific bush
and show it to one of the chimpanzees.
Bella.
She very quickly learned where to find the treat.
And so she was also, as many chimpanzees are, good sharers.
So she would share the treat with Rock,
who was a high ranking male,
and very quickly he became a jerk
and he would take the treat from her.
Wouldn't share, he'd eat it all himself.
Of course.
Yes.
Well.
Typical.
Yeah.
And what happened from there, he wasn't expecting,
which is what got really interesting.
So then Belle decided, okay,
I'm gonna wait for Rock not to be looking.
And then I'm gonna go get the treat.
Okay, that's kind of smart, but maybe not that crazy.
And she would sit on the treat.
I think that's a part of it.
They'd show Bella, she'd see it, she'd sit on it,
and then she'd just wait for Rock to not be around
so she could gobble it up herself,
because she too is a selfish little piggy.
No, she's hungry and this mean guy keeps eating on our shit.
I agree, I'm on Bella's side.
Yeah, we all are.
And so then he started pushing her off.
He would know what she was doing.
Then he started pretending not to pay attention
until she would go to the tree.
Then she decided to start leading him
in the wrong direction.
She pretended the tree was over here,
then he would go and run around.
She'd sit on something for a while,
he'd shove her off,
he'd start rooting around,
then she'd go grab it.
This is how these romantic games started.
Right, right.
And then Belle and Rock fell in love.
Yeah, exactly.
So this cycle of deception,
counter deception demonstrates
an incredible theory of mind
because they're both trying to reason about
what's in the mind of the other person
and how do I change their behavior
by manipulating their knowledge.
Yes.
In these multi-member chimp groups, there is a hierarchy,
but some are clever enough that they will call out Leopard
to the whole group, Predator,
and Alpha of course has to respond to that
and rally the troops so he's,
oh, I'm gonna fucking deal with this.
And he goes over there and then the subordinate gets humping
on the female and passes on his genes.
So right there, the intelligence
and the deception was rewarded.
And you can see quite easily how that would ratchet up
over time where cleverness would be rewarded.
And that's why we're just innately deceptive.
Interesting.
That's the brown lining of this.
We can overcome it.
We can transcend it.
Okay, so this obviously has not been completed.
We're not there yet,
but maybe we introduced the paperclip conundrum
as this theory of mind would be a solution
to one of the problems AI deals with, right?
So Nick Bostrom, who's a famous philosopher
who wrote the book, Superintelligence,
has this famous allegory.
And in the allegory, he imagines,
we have this superintelligent AI system
that's fully benign.
So it's not an evil, it's not trying to hurt anyone.
And we give it a simple instruction.
We just say, hey, can you make as many paperclips as possible
because you are running the paperclip factory.
And it goes, okay, cool.
And it quickly starts enslaving humans
in the nearby neighborhoods to make paperclips.
And then it converts huge chunks of earth into paperclips.
And then it decides I'm gonna turn all of earth
into paperclips and it starts taking over the solar system.
And his point, as silly as that sounds,
is it demonstrates that you don't need
an intentionally nefarious AI.
It just needs to misunderstand our requests.
And I will point out, I do think this is where most people are fearful of AI,
are actually on the wrong path.
I think Steven Pinker points this out.
We would have to program it to have that
strongest-will-survive, I-must-dominate drive.
But a confusion, a well-intentioned accident
is actually what you should be afraid of.
Exactly, and we don't realize this,
and we'll get into this to break through five with language,
but when we're speaking to each other,
we're always using mentalizing.
We're inferring what the other person means by what they say,
because our language is a huge information compression
of what we mean.
If you asked a human,
can you maximize paperclip production?
They would very easily be able to infer a set of outcomes
that you clearly don't want,
like converting all of earth into paperclips.
Or making more than needed.
Exactly. But an AI system might not.
Well, the great little exchange
that could detail this for you is the exchange,
Bob saying, I'm leaving you, Alice says, who is she?
We would all know immediately what that conversation means.
Bob has not been faithful.
Yeah, she wants answers.
But an AI couldn't understand that.
Well, so it's a really interesting,
the way all of this has unfolded is
if you ask ChatGPT these theory of mind word problems,
it does pretty well.
But the question emerges, how is it solving the problem?
And is it solving the problem in the same way
that the human mind does?
Because the way it solves the problem is we've written
millions and millions of theory of mind word problems
and just trained it on those word problems.
And so it's a big open research question,
the degree with which it's actually reasoning
about our mind.
And that's important when we think about
embedding them into our lives.
If we want an AI system that's gonna help the elderly
or help with mental health applications,
we really wanna know how is it thinking
about what's going on in people's heads.
Oh, great point because the data set will be based on
quote, sane people,
and now you're dealing with some level of insanity.
We're pretty good at intuiting what is really going on here
with the person who's saying gibberish.
But if there's no comp for this gibberish,
how on earth are they to?
So what you're describing is the problem of generalization,
which is how well does a system generalize
outside of the data it was trained on?
And humans are very good at that.
And it's an open question,
the degree to which ChatGPT is actually good at it.
What we really do is we just give it more data
to try and solve these problems.
Right.
Okay, so then that brings us to the last breakthrough,
which is speaking.
And I think it would be interesting
to just initially set up how the way we communicate
does differ greatly from other animals
who clearly have communication.
Animals have calls, there's some dictionary
of over a hundred words that chimps use,
or maybe words isn't the right word, but they have calls.
Yep.
Yeah, is this where we break off from those other primates?
Yes, the argument in the book, which is not novel,
lots of people have identified this,
is the key thing that really makes us different is language.
Now there is a small subset of the community
that it's important for me to nod to,
that thinks there's many more things
that make humans unique.
For example, there are still some comparative psychologists
that think only humans have imagination.
I find that evidence not very compelling,
I don't agree with it,
but there are people that make that argument.
To me, most of the evidence suggests
that really the dividing line between us and other animals
is just that we have language.
And the more you learn about language,
you realize how powerful this seemingly simple tool is
and why it empowered us to take over the world.
So right, the bird calls
and the different calls we've observed in nature,
they do a very specific thing versus what we do.
We do declarative labels,
so we can assign abstract symbols to things.
Whereas, and I thought this was fascinating,
if you look at different chimp troops,
even though they've never had contact with one another,
their calls are pretty much the same.
You could travel all of Africa
and you'll find that they're kind of the same,
which is interesting. It almost suggests that they're implicit or they're in their brain
already.
In fact, different species of apes have similar gestures. And so when we go into the brain,
it's likely that their gestures, their communication is more akin to our emotional expressions,
our smiles, our laughs, our cries, not the words we speak.
Okay, so our ability to have these abstract symbols
and have this enormous lexicon now opens up really
like a rocket ship to progress as a species
because we can transfer thoughts now.
Break that down for us.
One really cool framing of this whole story
that I find satisfying is you can kind of look
at these breakthroughs as expanding the scope
of what a brain can learn from.
So the very first vertebrates could learn pretty much only
from their own actual actions.
When mammals come around, they can learn from that,
but they can also learn from their own imagined actions.
I can imagine going through three paths,
realize only one of them leads to food,
and I don't have to go through all three. My imagination
taught me that. When you go to primates, I can also now learn from other people's
actual actions. Primates are really good imitation learning because with theory
of mind, I can put myself in your shoes and train myself to do what I see you
doing. But never before language was it possible for me to learn from your
imagination. Never before was it possible for you to render a simulation
of something in your head and for you to translate
that simulation to me so we all share in the same thing.
And that's the power of language.
That is fucking mind blowing.
I have a section in the book where I call that
we're the hive mind apes because humans,
we're sharing ideas all the time
and we're mirroring each other's ideas.
Although our minds are not actually physically
tethered together, in principle we are tethered together
because we're constantly sharing these concepts.
And a fascinating graph to look at side by side:
the physiology of our brain in the last 300,000 years
as Homo sapiens has not changed dramatically.
But if you chart our understanding of the world,
because it compounds and compounds and compounds,
anything learned is not lost, it's built upon,
it becomes this crazy trajectory
by being able to pass all this on.
Right, ideas become almost their own living thing,
because even though humans die,
ideas can keep passing through humans and evolve.
You know, we like to think we do a lot of invention,
but really what happens is we're given
all these building blocks,
and then we kind of just re-merge them together.
That's our contribution as a generation.
And if you took Einstein,
you brought him back 10,000 years ago,
he's not gonna figure out special relativity.
And this is how we always sort of move the puck forward
a little bit as a generation.
All we're doing is receiving ideas
that are handed down from thousands of years
of other past humans,
and we're tinkering with them a little bit
and then passing them on.
0.01% better.
Ideally better.
We hope.
Not always.
When does that hit critical mass?
Like when is there so much prerequisite info
already obtained that just acquiring it takes a lifetime?
When does that curve start slowing down?
I even think about what I had to learn 20 years ago
in college versus what you would have to learn now.
And it just keeps building and accelerating.
I guess in some way that's why AI is almost essential
at some point where it's like somebody has to keep
all the building blocks.
I'm not an expert in this,
but there are cases that I've read about
where in anthropological history,
we've seen that when groups of people get separated, there's huge knowledge loss.
Because before writing, which is a key invention
that changes the dynamic you're describing,
all the information needs to be passed brain by brain.
And so because it can't all fit in one brain,
there's specialization in who passes what information.
And if you get a separation in this group, knowledge is lost.
So there's several cases where we've seen groups
get separated and technology goes backward in time.
But writing changes a lot of that
because writing allows us to externalize information
in a way that was never before possible.
It's a hard drive.
Exactly.
Anthropological hard drive
where I can pass information across generations.
Yeah, do you ever play with this experiment in your head?
I imagine myself going back to the 1880s,
like I've time traveled there. And I have to take what I know
and somehow try to implement that in real time.
I have thought about this.
Yeah, right?
It's like, I think I understand a helicopter.
When it got down to it, all this stuff I inherited,
how much of it could I actually deploy?
Like, could you make an iPhone?
Like, no way. Yeah.
Right, right, right.
There's tons of stuff I couldn't,
and then stuff I really could.
What do you think you could make?
The steam engine.
You could on your own.
Yeah, yeah.
Well, I fully understand the internal combustion engine
and the steam engine and yeah,
if I could work with a guy that could fabricate metal,
I could do it.
I couldn't do much.
I think most of us couldn't do shit.
Like we would just be telling people like,
no, you wouldn't believe it.
There's this box and on it,
you see other people doing other things.
Like trying to describe it.
Yeah, it's like, I can't, I can't do it.
I kind of understand how the diode worked,
but not really, you know, like, yep.
You're like, what did you do in that life?
Well, I had a podcast.
I shopped a lot, there's a lot of really great.
Where are the shops on this island?
I think I can shop there.
Yeah, yeah.
What if your first order of business, Monica,
in the 1800s was to create The Row, modern-day The Row.
But in some ways then I guess I would be creating money.
So that's huge.
Sure.
That's a big deal.
So how does then AI kind of fold into this crazy ability
to transfer thoughts and to build on thoughts?
Where is it at in that process?
So the thing that's been so interesting
about the recent developments in AI
is how they've taken almost the exact inverse process
that evolution took.
So all of the big developments over the last few years
have been in what's called language models like ChatGPT.
And these are systems that by and large,
even though language was the fifth thing,
the final thing for us, start from language.
These things don't interact with the world.
They don't see the world, except for newer ones.
They don't hear the world.
All that's done is they're given the entire corpus of text
that humans have made, pretty much.
And they're just trained to reproduce that.
And not to disparage that, an incredible amount
of intelligence emerges just by training to do language.
It's remarkable what ChatGPT can do
having been trained solely on language.
By the way, I'll add, we were with Bill Gates
for a week in India and numerous times we heard him say,
do we know how it works?
And he's like, we really don't know how it works.
That was a good impression.
And I just feel like, wow, if he doesn't know how it works.
We don't.
Well, the interesting thing is we know how it was trained
because what we've done with these systems
is we build an architecture,
we define a learning algorithm, and we just pump it data.
And then it figures out how to match the training data.
And so we don't know how it solves these problems.
In some ways that's great, but in some ways that's scary.
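[Editor's note: the "build an architecture, define a learning algorithm, pump it data" recipe described above can be sketched in miniature. This is a toy for illustration only, nothing like how ChatGPT is actually built: a one-weight model fit by gradient descent, where the final parameter value is discovered by the algorithm rather than written by the programmer.]

```python
# A toy of "we build an architecture, we define a learning algorithm,
# and we just pump it data": fit y = w * x by gradient descent.
# Real language models have billions of parameters, but the division of
# labor is the same -- we specify the recipe; the data determines what
# is learned, which is why the result can be hard to explain.

def train(pairs, lr=0.01, steps=2000):
    w = 0.0  # the "architecture": a single weight
    for _ in range(steps):
        for x, y in pairs:
            pred = w * x
            grad = 2 * (pred - y) * x  # gradient of squared error
            w -= lr * grad             # the "learning algorithm"
    return w

# "Pump it data" generated by a hidden rule, y = 3x.
data = [(x, 3 * x) for x in range(1, 6)]
w = train(data)
print(round(w, 3))  # ~3.0: the rule was discovered, not programmed
```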
Now, to be fair, do humans always know
why they do what they do?
No.
But we're a little bit better at explaining our behaviors
than, it seems, ChatGPT is.
And so that's sort of an interesting challenge
for us to overcome.
Is there a world in which you could ask it
how it did that and it wouldn't know?
You can do that.
The new model that was released,
if we go back to the system one, system two dynamic,
one reason why we're smart at some word problems
is we pause to think about them first.
And so a lot of these things
that used to trick language models don't anymore
because they actually did something very clever.
So there's something called the cognitive reflection test.
It sort of pits our system one habitual thinking
against our system two deliberate thinking.
So imagine the following word problem.
10 machines take 10 minutes to make 10 widgets.
How long does it take 500 machines to make 500 widgets?
This is hard.
Hold on, hold on, hold on.
No, I got it, I got it, I got it.
Hold on, hold on.
We have 10 machines, 10 minutes to make 10 widgets.
How long does it take for 500 machines
to make 500 widgets?
10 minutes.
Okay, smart man.
Because the way I describe it-
Yeah, you'd be inclined to think it takes one minute
to make a widget.
Exactly.
So we want to just sort of fulfill the pattern recognition.
But if you pause to think about it, you realize,
actually you just increased the number of machines
that should just take the same amount of time.
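[Editor's note: the widget arithmetic, spelled out — the elapsed time depends only on how long one machine takes per widget, not on how many machines run in parallel. A quick sketch; the function name is ours, purely illustrative:]

```python
# 10 machines take 10 minutes to make 10 widgets, so each machine
# makes 1 widget in 10 minutes. Machines work in parallel, so scaling
# machines and widgets together leaves the elapsed time unchanged.

def elapsed_minutes(machines, widgets, minutes_per_widget=10):
    return (widgets / machines) * minutes_per_widget

print(elapsed_minutes(10, 10))    # 10.0 -- the stated scenario
print(elapsed_minutes(500, 500))  # 10.0 -- the correct answer
# The tempting "system one" shortcut treats the fleet as making
# 1 widget per minute overall and wrongly predicts 500 minutes.
```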
So if you asked questions like that of ChatGPT in the past,
it got those wrong.
Oh.
It doesn't anymore.
And one of the main things they do now
is something called chain of thought prompting.
And what you do is you say,
think about your reasoning step by step before answering.
That's one reason why ChatGPT's answers are so long.
It's belabored.
It writes its thinking out a lot before answering
because they've tuned it to do that
because its performance goes way up when it thinks.
What's so interesting about this form of thinking
is it's transparent.
You can look at its thinking right in front of you.
And the new version of ChatGPT they just released
does even more of this.
It does a lot of thinking beforehand.
But all it's doing is talking out loud.
In some ways that is recapitulating the notion of thinking
because it's saying things in a step-by-step process
before answering.
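[Editor's note: chain-of-thought prompting as described here is mostly prompt construction — prepend an instruction that makes the model write its reasoning before the answer. A minimal sketch; the function and the exact instruction wording are illustrative choices, and in a real system the resulting string would be sent to an actual model API:]

```python
# Sketch of chain-of-thought prompting: the question is wrapped with an
# instruction to reason step by step, so the model's "thinking" appears
# as visible text before its final answer.

COT_PREFIX = "Think about your reasoning step by step before answering.\n\n"

def chain_of_thought_prompt(question: str) -> str:
    return f"{COT_PREFIX}Question: {question}\nReasoning:"

prompt = chain_of_thought_prompt(
    "10 machines take 10 minutes to make 10 widgets. "
    "How long does it take 500 machines to make 500 widgets?"
)
print(prompt)  # what would be sent to the language model
```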
Yeah, it's like reflecting before.
Exactly.
Okay, I just wanna flag this one thing
because this is in the book actually,
there's a neat case of a guy who has face paralysis.
He can't move his face.
And so if you ask him to smile, he cannot smile.
But if you tell him a joke, he will smile and laugh.
Because you have motor control in your amygdala
as well as motor control elsewhere, which is kind of nuts.
I would have thought motor control.
And his face is then moving?
Yes, because the amygdala's firing the sequence
of electricity to engage the muscles,
whereas the frontal cortex or whatever area of the brain,
you would decide to make a smile.
That's damaged, but the amygdala's not.
Whoa.
And, okay, this is my own pet peeve now.
You can shoot this down or whatever.
But as I hear the war march towards AI and everyone's freak out about AI, I keep checking
in with where we're at robotics-wise, and we're fucking nowhere.
And we get really, I think, distracted with how special our brain is at modeling and thinking
and communicating.
And we really undervalue our brain's ability to move these quadrillion muscles in an infinite
number of ways and take in information.
I mean, what we do physically, to me, is as impressive as what we're doing mentally.
And we're like nowhere near that in AI.
Like when you hear all these people going like,
oh, we're gonna lose eight billion jobs,
like you think there's a robot
that's gonna be able to fix my car.
It can go out in the garage and open the hood and diagnose.
That's fucking hundreds of years of work.
Do you watch South Park?
Do you see the South Park? No.
There's a hilarious South Park episode
where all the lawyers are out of work,
but the mechanics are flying to space.
Yes. Oh, that's great.
There's that whole thing that doctors could be replaced,
but nurses not really,
because they have to like put the band-aid on
and they have to check.
Draw the blood. Exactly.
It's very hard to make predictions on these things.
I think a lot of people thought that visual artists
would be the last thing that AI could do.
And now we have these models
that produce incredible visual art.
And yet, yeah, doing the dishes is still so
hard to figure out how to get a robot to do that. And there's a big schism in the AI community.
I tend to agree with you that I think we're going to need new breakthroughs to figure out how to get
robots to have the motor dexterity of a human. There are other people that are trying to scale
up existing systems to solve these problems. I'm skeptical that approach is going to work.
I just wonder if they'll have to go through the same processes where they identify
when did this motor control start,
how do we build it in a much simpler way
and keep adding as opposed to trying to crack
how our brains do all this.
Totally.
I just think it's miraculous the way we can move our bodies
through time and space and no one's really talking
about that.
Well listen, I was really, really intimidated by all this.
It's a very, very dense in a great way book.
You cover geological evolution, animal evolution,
the AI history, and it's all cohesive
and multidisciplinary as you talked about.
It's such an impressive book.
You made it quite easy.
So thank you so much for helping me through this.
Fucking guys, put in your apps to Washington University in St. Louis.
Has that ever been explained?
They have an incredible PT program.
I do know that, because I know a lot of PTs.
Monica's kink is PTs.
This happened to a lot of friends who are PTs.
And it's a very good program.
Yeah, so you're in this space.
What's the big next thing, I guess,
that we need to kind of solve
before this becomes the science fiction
everyone thinks it is already?
I think some of the doom discussions are distracting
from the more immediate challenges that we need to solve for.
These language models are going to be embedded
all over our sort of lives.
You even saw with the recent Apple announcement,
we're gonna have these language models
like ChatGPT directly on our phones.
And I think there's a few things that we should think about
as we roll these systems out.
I think the chance that these things wake up
and take over the world is not the concern
we should be primarily focused on.
Things like misinformation,
things like when you ask ChatGPT
a controversial moral question
where humans don't agree on the right answer,
who gets to decide what ChatGPT tells our children?
And these are tough questions.
And I think these are the types of things
we should think about.
Also a concept called regulatory capture.
When Congress doesn't understand a new technology,
who are the people that are guiding them
in how to regulate it?
If it's the people who get the financial benefit
from deciding how it's regulated, that's a problem.
And so making sure there's more diverse voices
in those discussions,
deciding how do we wanna regulate these systems
to make sure that we capture the upside for humans.
I mean, there's amazing things,
even just the AI systems we have today can do.
For example, imagine someone who doesn't have access
to the high-quality medical care of developed countries
is able to ask ChatGPT a question
and get amazing medical answers.
Play that forward five years to when it's actually way better
at answering medical questions.
That's incredible that for free,
you can effectively have access to an incredible doctor.
Think about education.
I was grateful that I really loved my educational program.
My wife didn't really feel like she jived
with the way she was taught in high school
and middle school and so on.
And the reason is because we have to just train
to the average, right?
Or to one group.
We can't give everyone the same pedagogy
because people are all different
and we only have one teacher.
But with these AI systems, you can imagine
that everyone has their own personalized tutor
that helps them learn the way they wanna learn.
Like these are incredible things we can do.
So we don't wanna regulate AI to the point
where we can't capture these amazing benefits,
but we need to protect against the possible downsides.
And we want to make sure the right voices
are part of that process.
Seems really obvious that academia
is going to have to play a huge role in that, right?
Cause who else is going to know about all this stuff
and stay abreast of it in the way
that an entrepreneur is going to?
I mean, I don't know if academics can even be on par.
The reason I'm hopeful is because social media was kind of like training wheels for this
AI thing.
The way we approach the social media problem is we always post-hoc regulate.
We always say, let it play out.
Let's see what goes wrong and then we'll try and band-aid the problems.
And I think most people agree that approach wasn't a great one with social media.
It didn't really work out.
Right, right.
Although I'll just add to that,
so many of these outcomes are absolutely unimaginable.
I think in good conscience,
those people at YouTube designed their algorithm
to give you more of what you wanted,
not anticipating you'd get more radicalized in the process.
But that's what he's saying.
It's after the fact that it's like,
oh yeah, that's bad as opposed to anticipating.
That's why I'm only flagging that is like,
that's what people are calling for
is more anticipatory regulation.
But my argument against that is you actually can't anticipate
what things emerge from these.
I mean, to some degree you can,
but it's like, we just saw it time and time again,
when it's like the best intention thing.
Well, wow, this fucking result was inconceivable.
I'm not advocating for any specific regulation.
I'm not even saying we need lots of regulation.
All I'm saying is we definitely want our eyes on the ball.
The answer might be, you know,
the risks aren't worth doing anything and we wait and see,
but making sure we have our eyes on the ball
is I think important.
And the conversation is not fully complete
unless you also acknowledge the geopolitical pressure.
Of course.
We should regulate this and slow it down.
Okay, but no one else in the world's going to,
well, that's not an option.
So you just kind of go, well, okay.
So really we're just on this fucking missile
because everyone's on the missile
and we're gonna do our best.
But I just think it's a little naive
to think we can just stop it. In ways it's unfortunate,
but also I think it's the reality.
Well, I don't think anyone's asking for stopping,
but I think regulation can also help actually with progress.
If we have regulations and other countries don't,
we're like, oh no, that actually might implode for them.
Yeah, maybe there'll be one where we go,
we're gonna sit this one out,
they might get the advantage or they might collapse.
Yeah.
It's a little dark.
Yeah.
Anyways, Max, this was awesome.
I love your book.
It's so good. Thank you. I recommend it to this was awesome. I love your book.
It's so good.
I recommend it to so many people.
I'm gonna listen to it probably a third and fourth time.
I hope everyone checks it out.
Thanks so much for coming.
A brief history of intelligence, evolution AI,
and the five breakthroughs that made our brains.
And just all love to Angela Duckworth.
She's the one who turned me on to this book.
Do you know her personally?
Someone sent me the podcast she was on.
It was like, Angela likes your book,
so I reached out to her.
She's so amazing.
Oh, she's so awesome.
She's the best.
So we've become friendly.
We've chatted a few times.
She's amazing.
Oh, I love her.
She is a content mega-bum.
She's so great.
All right, well, good luck with everything.
Everybody read A Brief History of Intelligence.
Until your next book.
Thanks for having me.
Stay tuned for more Armchair Expert, if you dare.
We hope you enjoyed this episode.
Unfortunately, they made some mistakes.
New jumper?
Yes.
Yes, it is.
You know, your question kind of stuck with me last night
and I was ruminating on like,
oh, why don't you ever say anything about JD Vance anymore?
And I had like an answer while we were driving,
but I really thought about it more.
And I think my conclusion is six and a half years ago,
everyone loved him.
Everyone was sucking his dick and loved his book.
And I was like, this guy's full of shit.
Warning, this guy's full of shit.
That was what I wanted to shine a light on.
This guy's full of shit.
So once everyone agrees that he's full of shit,
I'm not inclined.
And I was thinking, yeah, that's my worldview.
So you and I fought in November where I was going,
Biden is not competent.
He's not gonna win an election.
And everyone's mad.
Come July, everyone sees the debate and they're like,
oh yeah, he can't do it.
I'm not, at that time of the debate,
I'm not going, he can't do it.
Like I already, I was sounding the alarm.
You normally do say this though.
You do normally say like,
yeah, I've been saying this for a long time.
Like I've heard you say that many times. Your, I think, worldview, what you scan for
is like who's being oppressed, which is great,
and how do I advocate for people.
Mine is my triggers group think.
When I think everyone's like under the spell of somebody
or there's some weird group think
that I think has gone awry.
That's my calling.
Like that's the thing that'll animate me
and motivate me to be loud.
You also do want people to think,
like when you said he's full of shit,
you wanted people to think that,
that's also group think.
That you're hoping that turns into group think.
No, I'm hoping it shatters group think.
Into a new group think, like it's all group think.
Well, into reality.
Reality was he was full of shit,
like that's been demonstrated.
Yeah, definitely.
And I was screaming at.
Yeah, exactly.
So in essence, like that's been accomplished.
The group think, the spell has shattered on him.
So I don't have a role in that anymore.
Yeah, I was just surprised I never heard you say like,
yeah, I've been calling that for a long time.
Like an I told you so or something?
Yeah, because I do think you can do that.
I have heard you do that in a good way.
When he came onto the political scene
and it was so shockingly-
Abrupt.
Crazy, yeah, and very different from-
The guy in the book.
Exactly.
I said to you, I was like, oh my God, you were right.
You were right.
And I've also told many people,
Dax called this for a really long time.
Thank you for that.
I just was, I realized why,
it was a very legitimate question.
Why did you hate him so much six and a half years ago?
Now the guy's actually gonna maybe be vice president,
you have nothing to say about him.
Well, I feel like you're done saying bad stuff,
which I find kind of, there's a part of me that's like,
well, now that he is associated with this other side
that I do think sometimes you try to protect that or-
Yeah, I think that's what you thought
it was motivated out of,
which is a very legitimate guess at what's going on.
And so I just, I really thought about what's going on.
Cause that's not what it is.
I mean, other than I'm not trying to piss off half the country ever, but at that time,
he didn't represent a political faction.
He just, in my opinion, was a guy everyone bought into that I thought was full of shit.
Yes.
So, and I think the Biden thing's the same thing.
Like I had a lot to say when it seemed to me
like I wanted to go guys, guys, guys, we have to pivot.
Like we have to figure this out.
And then once people were ready to pivot, I was done,
you know, I was past that.
Like the thing I had hoped would,
people would finally get on board with had happened.
And so yeah, my focus is, it's just my, I think my nature.
It's like now I'm looking around, I'm like,
what's another thing that we're all kind of
a little delusional about,
and I think is gonna explode or implode.
And this is my, I think that's my nature of,
that's the thing I'm always looking for in sounding alarms.
Yeah, that makes sense.
If I look at my history of who people I have agendas against,
it's never the popular bad guy.
Right.
I'm not, and then that made me my outsider punk rockie.
I think it has a lot to do with single mom
in a really idyllic neighborhood where everyone was married.
Mm-hmm.
I think that's where the chip on my shoulder comes.
Like, that was the group think.
Right, sure.
And I was like, you guys are wrong.
Like, our household is actually very truthful
and has a lot of integrity and honesty, you know.
So yeah, I think that's my kink
and I think that's why I used to talk about them a lot
and now I'm not interested in talking about them.
And I think if you look at who I'm currently,
like my little things I'm stuck on right now,
if and when those ever turn out, I'm right about those.
I will, I'll have no interest in them anymore.
I'm interesting.
That make any sense?
Yeah, it does.
It was fun to think about.
Well, good.
Yeah.
What did you think about?
Well, I have-
We won an award last night.
We did.
Yes.
Yeah, we won an award.
This was a post award chat on the Car Ride Home.
It was.
Yeah, yeah.
Yeah, we won a really nice award from Variety.
Yeah, greatest podcast of all time award.
I didn't read what the official award was,
but I think it was-
That's what they were indicating.
Greatest podcast in the history of podcasting.
Yeah, and it was lovely.
And we went to Foonke, I think that's how you pronounce it.
Is that how you say it?
I think so. F-U-N-K-E.
Yeah, which is very hard to get in restaurant
and they like did this whole thing for us.
We were on a rooftop.
Yeah, it was very nice.
Rob, will you look up when that official
first podcast ever was?
Ooh, that's a good question.
I just wanna know how embarrassingly short
the history of this medium is.
2003. 2003.
Okay. 21 years.
2003, what was it?
So this is technically radio open source
by Christopher Linden was the first podcast
launched in 2003.
Thank you, Christopher Linden.
I feel like we should have a portrait of Christopher Linden.
I'm assuming he's not a Nazi or something.
I haven't done enough research.
Let's do some deep dives.
Let's just figure out if he's worthy of having.
I feel like he needs to be honored.
I do too.
He gave us our lives.
That's a ding ding ding because I just,
I was editing a one sheet for an upcoming project
that we're all involved with.
That has to do with an early podcast experience,
an early podcast that I found that I loved
and was obsessed with.
But that did have me thinking about podcasting
before I was in podcasting.
Yeah.
Which is-
21 years old, Clive Baranthal.
What was his name?
Last.
It's 10, 10 windows ago.
Yeah.
Shit.
We're supposed to honor him and I already forgot his name.
Oh, fuck.
Fuck, I'm very bad at honoring everyone.
Christopher Linden.
Christopher Linden. Christopher Liden.
Christopher Liden.
Christopher Liden.
Liden?
Yep, L-Y-D-O-N.
Oh, Liden.
Well, that makes me think of Johnny Liden.
I don't know who that is.
Lead singer of the Sex Pistols and then P.I.L.
Cool. Great.
We're not here to honor him.
I am, I loved PIL.
He just moved on so quickly from Clive.
Well, it's gonna help that I can remember his last.
Is that his name? Christopher.
Oh, Christopher.
Leiden.
Leiden.
Let's put a picture of Johnny Leiden up.
No, I'm against him.
That's group think.
What was his nickname in the Sex Pistols?
It wasn't Johnny Lydon.
It was like, it was Sid Vicious
and I think he had like a gross nickname for himself.
Johnny Rotten.
Johnny Rotten.
It's like when you talk about cars.
Speaking of, we're recording with the guests today
that I know, that I love, I'm so excited,
but I know there's gonna be a chunk of time
that's gonna go to some car talk,
prepping mentally.
I don't think we're gonna have time today.
I started the book last night.
It's so good.
It's so good.
It's so fucking good.
I don't normally do that.
I don't like to do that.
Yeah, yeah.
But I couldn't help myself.
How could you resist such a good author?
Couldn't help myself.
Yeah, it's so tasty.
The way this author crafts their stories,
which is very proprietary to them,
which is so interesting,
is laying out like 30 different stories at a time
that are ultimately gonna get all woven together
to this overarching hypothesis.
Hypothesis, it's a singular hypothesis.
He's so playful.
So last night, you guys wouldn't have noticed,
but I was so distracted from the moment we got on Fountain
because we were driving behind a 1988 Mustang GT convertible.
Okay, what color?
Black.
Oh wow.
Of course in high school,
and I'll give him specific credit, Johnny O'Neill.
Johnny Lydon?
Pretty similar, this kid was,
he's the most gorgeous kid.
Colleen's older brother.
Okay.
Colleen who I was in love with.
Salna.
Kawa?
Oh, who is Salna, girl?
There's so many people in your history.
Oh, that's Randy Hamina at junior high.
This is now high school.
Okay, got it.
I thought you were with Carrie the whole time
in high school.
No, Randy Hemenna was off and on
from seventh grade till ninth grade.
She broke up with me when I moved to another town.
Okay.
Very early into ninth grade.
What a day.
I was already such a loser at my new school
and I had a really bad haircut
and I didn't have cool enough clothes
and I had a lot of acne and my nose got big
and I lost weight.
It was rough and then she dumped me.
By the way, she should have dumped me.
It was nothing on her.
We didn't live in the same town anymore.
We were fucking in ninth grade.
It was more of a logistics issue.
That wasn't around.
Long distance.
You know, out of sight, out of mind.
That's ninth grade.
And then I'm pretty much girlfriend-less
as I recall until probably 11th grade.
Okay.
And I had had that hot streak, sixth through eighth grade.
So it was a real adjustment.
It was completely lonely.
Then I don't remember the order.
I think Stephanie in 11th grade for a while.
And then before that fall in love with Colleen,
but Colleen's not having it.
Colleen wasn't having it.
God bless her.
So she was in the 10th grade realm?
10th till forever.
But her older brother, we've gotten totally,
so he had a 1988 Mustang GT.
Whoa.
White, white, titanium white.
Wow.
Five speed stick, convertible, very 5.0 with the rag top down
so my hair can't blow in the circle of vanilla ice.
That's cool.
And he was gorgeous.
His whole wardrobe was Z. Cavariccis and Girbauds.
So they were rich this family?
They were loaded.
The dad did commercial plumbing for fucking assembly plants.
Yeah.
Good for Colleen.
Yeah, yeah, yeah, yeah.
I have since heard updates. I don't want to spread any disinformation,
but Johnny has taken over that company and I think it's fucking enormous.
And what I love about it is the guy was a total fuck up.
He was just gorgeous and a good dancer, you know, really cool car.
I thought he was the greatest. He's probably two or three years older than me.
Anyway, so that car for me, I was like, man, if I had that white Mustang,
I'd be just like Johnny O'Neil.
Oh, wow.
I'm surprised you didn't get one.
That's my whole point.
So I have been longing for that car.
By the way, they're not that expensive.
It's not like I want a Mercedes or anything.
It's an 80s Mustang.
It's the Fox body.
Yes.
Or the Black Bear, I guess.
But I have been longing for it,
wanting it last night, driving for 20 minutes,
completely distracted by the car in front of me.
You know, is that a stick?
Is it this?
Did he keep the factory wheels on it?
Does he have the fan wheels on it?
All this stuff was happening.
And then me going like, why don't I have that car?
I can afford that car.
I should have that car.
We should all be driving in that convertible car right now,
home from this dinner.
Wow.
But I'm like, it's almost more fun to want the car.
I know when I have the car,
it'll be another car whose battery is dying
and the air pressure is always wrong
because I don't drive it enough and all this stuff.
But my fantasy of it is really fun
and cruising around like Johnny O'Neil
and being a great dancer
and running a huge HVAC company.
I mean, what was the first one?
Gorgeous.
Is that what I said first?
Maybe, I don't remember.
But I mean, you have achieved all those things.
I do own a HVAC company.
Most people don't know that in the audience either.
You know what I mean.
Yes, yes, yes.
So you don't need to be him, or you became him.
Well, that is the great joke of life,
is that none of the things that you think
are gonna heal the wounds that you have do.
And so yes, I can intellectually acknowledge,
I became Johnny O'Neil in so many ways.
But that car will have the magic it had for my whole life,
which is awesome.
The GT bicycle will have the magic, the Haro will.
Like all the things I wanted that I couldn't have
will always have their little magic.
It's kinda neat.
It is neat.
Well then I agree you shouldn't get it.
And I don't like convertibles.
Noted.
Do you understand?
Don't ever buy me a convertible unless it's
Okay, good to know.
An 87 to 90 Mustang.
Oh my God.
So many words.
Do you know how long I listened about mugs
with a smile on my face?
I don't care about mugs.
I care about mugs as much as you care about cars.
Listen, do you think that I talk about mugs
as much as you talk about cars for real?
I think you talk about fashion and clothes and bags
and the row and mugs.
Collectively much more than I talk about cars.
Okay.
That might be true, I don't know.
We'd have to do a...
Rob, go through everything.
I'll have that calculation in a couple minutes.
Ask AI to tell us.
I guess what's esoteric is subjective.
Oh, for sure, because I don't know anything
about these bags.
Like for you, the reason one bag's better than the other,
or we were driving on the way to the event last night,
and from like 200 yards, you're like,
I have that same purse the woman has.
That purse to me, you know,
I wouldn't be able to delineate it between any other purse.
Sure, I get it.
I could describe it as medium, small or large.
Yeah, I wouldn't have expected you to.
Oh right, I'm just saying it's esoteric for sure
from my point of view.
I think they're the same.
Yeah, I get that.
They're probably the same.
So this is for Max Bennett.
He was great.
He looked so much like my friend Max to me.
Cali's Max.
Yes, Cali's husband Max.
To me, they looked very similar
and they both really like sci-fi and they're both tall.
And they're both named Max.
And I thought that was some sort of rip in this-
You did.
The space-time continuum, it was a sim glitch.
Yes, and in your and Callie's Max's defense,
because I think he'd be really pissed
if I didn't point this out.
Okay.
They did diverge on their thighs.
Because Max, your Max, Callie's Max
has world-class tree trunk thighs.
That's nice of you to say.
And Max Bennett's were great.
Yeah.
There was nothing wrong with them.
No.
But they were in no way the once-in-a-lifetime thighs
of your Max and Callie's Max.
Well, that's very kind.
I believe Max's parents listened to this, so good job.
Really great station.
Really great.
I've been getting some legitimate complaints
that they're not hearing,
hey y'all really great station enough.
And that's a thank you for calling us out on that
because that is an oversight
and it should be said once every third episode.
It needs to make its way back.
Yes. In the rotation.
Hey y'all really great station.
Do you think that should slide down this way
so your feet can fit under it and it's not right on?
Um, yeah, probably.
Yeah, let me see how that looks and feels.
Let's see how that goes, that would be good.
What does it affect your feet though?
No, I can chill with this.
Okay, yeah, that's better I think.
Yeah, yeah, yeah, that's great.
I'm gonna go a little bit this way if that's okay.
There it is.
Oh great.
For anyone that's viewing this, this is a new purchase.
I went out on a limb.
I'm not generally in this trifecta of creatives
allowed to make stylistic choices.
I've been kind of famously with the Lazy Boys,
a lot of criticism.
So I just, despite all that, I'm like,
I'm still gonna take a big swing on a coffee table.
Yeah, you wanted to get a new coffee table.
Yes, I didn't like, that one was too wide,
although this one isn't much narrower.
Yeah, it looks similar.
But I also wanted something dark in the middle,
dork in the middle.
I know, that's our difference.
Yeah.
This rug is too white for me.
So when you looked at, in my opinion, the wide one,
and you had this very light rug
with that really white coffee table,
it was a washout for me.
I wanted some dimension.
Got it.
Yeah, does that make sense?
It does make sense.
I'm, I-
You like a lighter coffee table.
I don't like it when it's all dark
and the space is on the darker side in a good way.
We have dark green and it's very library-esque.
So yeah, I like a light mix in to that,
but I think it's great.
And I like the length.
The length is good.
It was a little short, that coffee table.
That's a fun gender thing.
I wish that maybe Kat would have brought
some of that into it because most of the wives I know
want a white everything.
They want a white kitchen and they want a white couch
and they want white, white, white, white.
I don't know, it's interesting.
And guys are like, I want a den and I want dark
and I want dark wood.
Why would men be more drawn to darker colors
and women more drawn to lighter colors?
That's fascinating.
It is interesting.
There must be something like cave-like.
Evolutionary.
Maybe.
Yeah, I don't know.
To be clear though, I don't want white, all white.
That's not for me.
I like color.
My new kitchen is not white.
In fact, it's quite dark.
It is. Yes.
But it, but I need pops of light
and I need like a lot of light coming in.
Yep.
When the guy has his dream space,
it's like a dungeon.
Yeah.
With a dark bar and a dark everything
and dark leather everything.
It's so interesting.
It is.
I mean, I guess the stereotype is like,
it needs to be dark to play your video games.
But I don't know how evolutionary.
I don't play video games though.
I know, I know.
But yeah, my favorite room in the house
is definitely the theater room.
Yeah, you like watching stuff.
I like being in a dark cave,
minimal stimulation.
Yeah. Yeah.
Yeah.
That's soothing.
Yeah, you're also mixed messages.
You like your cave and you like minimal stimulation,
but you also, as you said last week,
you thrive on-
Chaos.
Extraversion on people and stimuli
and those types of things.
So it kinda-
So great.
Yeah. We might be onto something.
Okay, let's figure it out.
So I think because my external presenting real life
is generally I seek out,
stimuli and chaos,
when I'm doing my chill thing,
I want none of that.
And then I think conversely for like,
Kristen who doesn't wanna go out
and be super extroverted and social
and get overstimulated.
Yeah.
In her little world where she's in the house a lot,
she does then need it there.
Sure, that makes sense.
Yeah.
You didn't see when you walked in,
but Alex Reed from Bill Gates's team
sent us a beautiful photograph.
Oh.
Of our time in India.
Oh, wonderful.
Yeah, which is nice.
So we'll have to figure out where to put it.
We got some blank space over there.
Yeah, we'll find a spot.
Yeah, and we also gotta get Chris Leiden up.
That's true.
Gonna make room for, I got it though this time.
Yeah, that was good.
Only because of Johnny Leiden.
Pfft.
Pfft.
Pfft.
Sorry, Max Bennett.
Okay, Max, so how many atoms in the universe?
According to scientific estimates,
there are approximately...
10 to the power of 10?
10 to the power of 78 to 10 to the power of 82.
So that translates to a number
between 10 quadrillion,
vigintillion, wait, 10 quadrillion, ventillion,
and 100,000 quadrillion, ventillion atoms.
When do you think we'll have our first ventillion there?
Yeah.
Ventillion, I don't think I've ever heard that word.
I might not be saying it right.
It's V-I-G-I-N-T-I-L-L-I-O-N.
Vignetillion?
Yeah, the G might be pronounced, I'm not sure,
but I would guess it's not.
And what was chess a 10 to the power of 120?
Say it again, sorry?
I think chess was 10 to the power of 120.
As opposed to the 10 to the power of 78 or 10 to the power of 82.
I think it's 10 to the power of 120.
Chess, chess, outcome?
Permutations.
There is, this says.
It's in my notes from the episode.
This says, yeah, the number of possible variations
in chess is so large that it is estimated to be between,
this just says.
Too many?
I say 10 to the power of 120.
Yeah, okay.
But what is this?
Are you in the National Museum's Liverpool?
What is this?
That makes no sense.
Right, exactly.
But it says which is more than the number of atoms in the observable universe.
Number is known as Shannon, the Shannon number.
Oh, how did Shannon get her name on that number?
That's cool for her.
Because I don't recall like there's not a famous grand master named Shannon.
A mathematician, Claude Shannon.
Claude Shannon.
Look, I'm just gonna say the AI overview.
Is a little lacking there.
That was not right.
Yeah, it didn't understand something there.
Yeah, ding ding ding AI.
Hallucination, ding ding ding.
Human intelligence, a brief history.
Okay, now chimps.
So you said they had like 100 word vocabulary kind of,
like dictionary sort of.
Uh-huh, calls.
Mm-hmm.
So it says, they howl, they squeak, they bark,
sometimes they scream, then grunt, then bark,
and then scream all in a row.
And they captured 900 hours of primate sounds.
This is, as it turns out,
chimps are particularly fond of a few combinations.
Hoot, pant, grunt, hoot, pant, hoot,
and pant, hoot, pant, scream.
Sometimes they combined two separate units
into much longer phrases.
Two thirds of the primates were heard belting out
five part cries.
By combining the letters, the chimps had roughly
400 calls in their vocabulary.
When we did the gorilla trek, you're with some,
what we would call here, like DNR people, like state rangers,
Rwandan state rangers, and they speak gorilla.
So as they're coming closer, like they're kind of,
they're telling them it's cool,
they're telling them to back up,
and they do the pant and the hoot and the grunt,
and they fucking speak gorilla, and it's so impressive.
That's so cool.
Yeah, yeah, and they're talking a lot to them.
Right.
Yeah, they're really communicating with them.
God, though, is this like Chimp Crazy
where it's like everyone feels
that they know each other so well
and it's a trusting environment
and then the gorilla just will attack.
Well, gorillas aren't like chimps.
They don't do that kind of thing.
Okay.
Because of their structure.
Right. Oh, their, their testicles.
The way they're arranged, they're a harem group.
So there's one silverback,
he has access to all the females.
The other males that live there won't get to be a silverback?
No, weirdly in the Susa group we were looking at,
there were two other silverbacks,
but that's because they were the brothers of the main silverback
and they had already gotten their silver in another troop and then rejoined.
But regardless, they're not having to fight non-stop.
They do it kind of once, they have their tenure, then they get overthrown.
Whereas these chimps are fucking, A, they're hunting other monkeys, they're fighting leopards,
they're fighting other troops.
The gorilla troops rarely interact and fight each other.
Like, just the lifestyle of the chimp,
they have to be so much more aggressive and wild
and scary as fuck.
I mean, it's the difference between having a cow
as a pet and a tiger.
Like they have different instincts. They do different things in the wild, and they make better or worse pets.
Interesting.
Yeah, but I still wouldn't recommend getting a gorilla as a pet.
No, 'cause think how much damage the chimp at 200 pounds did, Travis.
Yeah, and a male silverback is 450 pounds.
Jesus Christ.
450.
As you recall, when the one came at me
and shoulder checked me,
I felt like I was in a movie with giants, you know.
Yeah, scary.
Oh, it was so, and it's not,
when we see a 450 pound man,
it's mostly collected all around their abdomen.
Right.
The 450 pounds is in their shoulders,
chest and their bicep and their lats.
So when they're coming at you,
you're seeing like hundreds of pounds of muscle.
They've never attacked?
They do often.
So 50% of the time that they interact with a group,
the silverback will grab one of the humans.
But it grabs one of the humans
and it drags them out of the circle and drops them.
They don't ever bite them,
they don't ever tear at them.
It's never happened?
According to the Rangers.
Really?
I don't think they could bring groups of people up in there.
Like we also went on a chimp trek in Rwanda
and you can't go interact with them.
You can't get close to them.
You're hoping to hear them.
Maybe you're hoping to spot them.
By the way, on that trek, I was a mess.
I didn't like it.
Cause I know what chimps can do.
But yeah, the silverbacks won't.
Now, if you were to run in and grab one of the babies,
it would kill you.
But if you're just sitting there observing
and it'll just come demonstrate, I'm in charge here,
they drag you out, they drop you, and then that's it.
And they said, it's gonna happen
to half the groups we bring up.
So if it's you, here's what you do.
You just go limp, you submit,
and he'll just drag you, let you go.
Ew, no, no.
And everyone's wearing a backpack, right?
Cause you have your water and shit.
And so they grab you by the backpack.
They generally don't even grab you
by like an arm or anything.
Oof, that's so scary.
Yeah.
It's so scary.
You love it.
But I'm much more afraid of a 160 pound chimp
than I am a 450 pound gorilla.
Crazy.
We've seen this footage where babies fall
into the gorilla enclosure and like the silverback go
and protect the human child.
Yeah, but we're not babies.
No, but they have a much sweeter side.
Yeah, I guess I could, maybe if they grab me
I would say, goo goo gaga.
That's a really good strategy.
Yeah.
Goo goo gaga.
What if I was doing that?
Ha ha ha.
Like what's going on with this human?
We should kill it.
It's acting very weird.
What if he grabbed me and my training took over?
You never know when your training's gonna take over.
No, no, no, no.
If I try to wrestle like a girl.
No, no, no.
Oh my God.
Pfft.
Ha ha ha ha.
You know I do have weird desires of testing myself
against other animals.
Yeah, I know you do.
It really all started when working at Chosen Shoots.
Maybe I've told you this in the past or not.
But I worked with a guy whose dad had wild animals
up north in Michigan and they had a cougar.
Oof.
And he was regularly talking about how scary this cougar was.
I said, the cougar is only 110 pounds.
I would, for the right amount of money,
if everyone here at Chosen Shoots
was willing to pay like 20 bucks,
I will get in a full leather motorcycle leathers
and a helmet and something to protect my neck.
And I'll get in the cage and wrestle the thing.
Because as long as I'm protecting myself from like punctures,
I can overpower 105 pounds,
I could lift it up and I could do things.
And I was really inching towards maybe that happening.
It just didn't, but certainly on a drunk night,
that could have happened.
Jesus.
But I think I draw the line at the cougar, 110 pounds.
Okay.
Well no, you've been wanting to fight a mountain lion.
Well, that's a cougar, yeah.
Oh, they're the same thing.
Yeah, puma, cougar, mountain lion are all the same thing.
Huh, I thought pumas were black.
There are black pumas.
In fact, I just read, was it in our guy's book?
I think it is in our guy's book.
Okay, which one, Max's?
No, our upcoming guest.
They discovered this insane thing about cheetahs.
Okay.
So the cheetah population was dwindling dramatically.
And at some point in the 80s or whatever,
I don't know what year it was,
they decided, well, there was a general movement
in zoology to go like, let's stop taking animals
from the wild and start breeding programs at zoos
so we don't have to take them from the wild.
So great, I stand by that.
And they were having an impossible time breeding cheetahs.
And they couldn't figure out why they couldn't get
any of these cheetahs to procreate.
And then they discovered at some point, they did a bunch of work on them and they discovered
that cheetahs experienced this really weird collapse in their population.
And they're pretty sure the entire population came from a single mom with like eight pups.
So it's the most inbred species that they know about.
They talk about skin grafts.
So skin grafts are nearly impossible
to get to take human to human.
Your body will reject it.
If I take some of your skin,
it'll reject it immediately, get necrotic.
The only people that can really do skin grafts
are identical twins.
Because they have the same DNA and the body doesn't
recognize that it's foreign.
They can skin graft any cheetah to any cheetah.
And they tested it, just, oh, well, maybe cheetahs
accept skin grafts.
No, then they do like a house cat, which rejects anything,
but you can skin graft any cheetah to any cheetah.
There's no diversity.
So two things, when they would procreate,
it would reject nonstop.
The way our body does. In our body, 50%
of all pregnancies self-terminate, half.
Because half the time it observes some genetic anomaly
or there's nondisjunction, all these things happen.
Well, what was happening with the cheetahs
is they're aborting all of them basically
because they're all detecting these genetic abnormalities
because of the inbreeding.
And then what really sucked about it is that
in this population they finally got some ground
with breeding them, they caught a coronavirus
and they all died.
Because they have the same genetics.
So if it's gonna be lethal to one of them,
it's gonna be lethal to all of them.
Yeah, interesting.
So they're so vulnerable
because of the lack of diversity.
Ding, ding, ding, ding, ding.
We love diversity.
Diversity.
Yeah, interesting.
This episode was brought, oh, no, I don't say welcome,
I say we are supported by diversity.
Oh gosh, yeah, wow.
Yeah.
Oh, Pumas.
Yeah.
So the Florida panther, which is a puma, which is black.
So panthers, pumas.
Yep.
Panthers, pumas, mountain lions.
No, not coyotes, mountain lions.
Cougars.
And cougars are all the same species.
Species, but they're different, right?
Well, they can be black, they can have spots.
In the same way, you can be brown and I can be white,
we're the same species.
I'd say there's more variation in human species
than there are between these pumas.
Huh, okay.
So the Florida panther was getting incredibly rare
and also getting very, very inbred.
And so they wanted to protect the Florida panther,
but in order to protect the Florida panther,
the solution that they had to use
is they had to capture a dozen Texas Pumas
and let them loose in the Everglades
so that they could give some variety in the genes.
And there was a bit of an, I guess,
intellectual or moral quandary.
It's like, well, if we're trying to protect
this specific animal, but the way we protect it
is make it a different animal.
Is that, what are we doing?
You know, they did it though, and it helped.
That's good, that's also diversity.
Yes, yes.
And Texas meets Florida.
And interracial marriage.
Yeah, yeah.
We're pro it.
Yeah, absolutely.
We almost prefer it.
Mixed people are way better looking
than non-mixed people.
Beautiful, beautiful people.
Look at Calvin and Vinnie, my god.
Oh my god, Jesus, peace.
So did they call that a different kind of panther?
No, they call it the Florida panther.
Oh, they still are, okay.
But we all know, wink, wink, it's a bit of Texas puma.
Oh, oops.
Okay.
Maybe white nationalists now reject the Panther,
but you know, cause it's not pure anymore.
Yeah, probably.
Yeah.
That's the funny thing about this desire for purity.
It's like really a desire for vulnerability.
Sure.
Right?
It is, it's making yourself very vulnerable.
Counterintuitive and dumb.
Yeah, as much of the white nationalist agenda is.
Well, true, true.
They're consistently thinking this.
That is very, very true.
Do you think we have any white nationalist listeners?
Oh my gosh, we do an armchair anonymous.
I gotta applaud them if they can get through this show
with such a radically different mindset than ours.
I almost have to applaud the open-mindedness.
You're hesitant to applaud.
I don't think I can allow us to ever.
Just one aspect of their, like here's a great example.
But they could hate-watch it, like, and I don't want that.
Well, if they're hate watching it,
then I don't applaud it.
But if they're like, I totally disagree with this,
but I'm open to hearing what they have to say,
I am gonna applaud just that sliver of their personality.
Kind of like you gotta, this happened,
this is Brene Brown's great example
of during the hurricane in Houston.
Her great example is that when the people in boats
showed up to the people drowning,
no one said, what are your politics?
Nor did they ask before they rescued somebody,
what are your politics?
And so that's a very beautiful part.
And we can just celebrate the heroes who rescued people.
We don't need to, we have room to do that
and then also hate maybe other aspects of them.
Sure.
But I don't, that's not the same.
The white nationalists aren't heroes
because they listen to us.
No, but they're demonstrating an open-mindedness
that is impressive.
Sure.
Yeah, I mean, I hear you, but it's,
their whole being is not open-mindedness.
Right.
Yeah, so if they're taking this radical step against-
Well, maybe they like you.
You're a tall, white, Aryan guy.
I'm a good mass tech.
Yeah, I don't know that we're, well, I am.
I guess it's like if they listen to the fact check,
then I guess it's interesting,
but also they probably wanna kill me.
How about this?
But they definitely want me to be eradicated.
Let's go broader,
because I think this is a pretty agreed upon parenting technique.
It's like you can spend all your energy focusing
on the things you don't like that they're doing
and have a very negative approach.
Or you can just positively reinforce
and celebrate the things they're doing that you like.
I think it's in that doc that we loved,
of that kind man who sat and talked to people.
I think that's what he did.
Yeah, I did that my whole life.
I had to do that my whole life.
There's a lot of people, a lot of people I love,
a lot of people I love's parents,
a lot of folks who I still love,
but a lifetime of practice of turning a blind eye
to things that actively are against my best interest,
that are hateful and mean,
I had to find the good in order to survive there.
And it's exhausting to live a life like that.
I'm sure, I can only imagine.
So like, I guess at this stage,
I don't have to do that anymore.
Yeah, yeah.
And I don't, I'm kind of tired of it, you know?
I think I like put my time in on that.
But here's the other, this is the other truth,
those people in my life, they still are, I still love them.
Yes, and your involvement helped bring them closer
towards where you would want.
I wish, but I don't think so.
You don't think so.
Especially like seeing the trajectory over time
and seeing where those people are now
and what they're doing.
All about, yeah.
I don't think so, which is a bummer.
It does bum me out.
So is your take throwing the towel?
Like, it's just not worth, those people can't change, you know, there's no...
Well, I definitely don't think I can make anyone change, ever.
And I think that's like a lesson learned across the board for me over and over and over again.
And I think that's like a, that's a good lesson.
Like you're not going to change anyone.
Right.
So yeah, those are their beliefs
and I'm not here to change them.
And so the people that are already in my life,
I love deeply, right?
And like that's not changing,
but I don't know that I'm interested at this stage in life
of adding more people in who I need to do that cycle with.
It's not, well, Brene,
ding ding ding again, she said,
yeah, maybe it's not your job to set the table for that.
Totally agree.
I certainly have the capacity to do that.
I haven't been exhausted by that.
I haven't had to do that.
Yeah.
And then also I've watched a lot of those documentaries now.
I've yet to see the one where I thought
I was watching the white nationalist person.
And I was like, yeah, that person's really smart
and they're really educated and they had a lot of opportunity
and they were really loved.
Sure.
And so they have all those things
and they're deciding this clearly.
Well, I'm writing them off, fuck them.
You know, I'm writing them off.
Pretty often I think like as much as I deplore that group
and their thoughts, I also think, yeah, they're probably a victim
of their circumstances.
Of course they are.
Well, not all, by the way.
A lot of the people that I'm talking about
are rich, entitled white people.
Yeah, yeah, yeah.
I'm talking about these weird white nationalists
I see in these documentaries.
Sure, sure.
Or the fucking whack jobs who tried to kidnap
the governor of Michigan.
I fully agree with you.
I'm like, they are a product of their background
and their history, but even more so why I'm like,
I'm not gonna be the person to come in and change it.
I'm just not.
It's not their fault.
I still, I do stand by that it's not your fault,
but it is your responsibility.
Right, right, right, right.
And I think the stories I find most inspiring
is the episode of Blaine.
It's the Jewish gentleman who brought
Megan Phelps Roper Jewish food on her hate line.
Yeah.
Like if I talk about who I would most aspire to be
and don't even think I have the capacity to be,
it is those people.
It's the ones that I'm like, wow,
how did you find it in your heart
to be that generous to the enemy?
That's like to me the high water mark
of what a human's capable of. Or Blaine:
this man murdered my daughter
and I'm still gonna develop this friendship
with him in prison. That's who I would pray to be.
Sure.
You know, it's kind of aspirational.
Yeah, it is.
Well, there weren't obviously very many facts.
Well, he knows his facts.
He knows them all.
He knows every fact.
So there weren't many.
Is there one more you wanna go to?
No, there is something I wrote down,
but I'm not gonna say it.
Okay, because things are too contentious?
Well, no, it's just, it's too esoteric.
Okay.
Okay.
It's about The Row.
Oh, I don't care.
I don't have a problem listening to your stuff.
I enjoy it.
Sure, I just, but if it is esoteric,
and so people might not want to hear it.
But there's some interesting stuff in The Row.
Oh no, everyone will like to hear about a billion dollars.
The money, this is what makes it not esoteric.
Okay, so.
It's an American business success story.
Yes, The Row, Mary-Kate and Ashley's, has been valued at a billion dollars.
Congratulations, Gail.
A huge deal.
Congratulations.
Women be crushing this year.
I know, I love it.
You got Alex Cooper.
Yes.
You got Tay Tay.
Yeah, Beyonce.
You got Beyonce.
You got Barbie.
You got The Row.
There's so many, like all the, like Pop Girls Summer,
all the awesome women, they're crushing.
It's so cool.
I have a personal stake in this.
Like I, you tell me that, and like I go back
to being in her kitchen in New York
while she's meeting with her first round
of different artists to start working on this stuff.
Yeah, so cool.
Like I feel, it's fun for me
because I saw it in its very infancy,
and it was a very bold move to go from an enormous
clothing line at Walmart to, no, I'm gonna do
the like Hyacinth premium.
Like what a bold swing.
Well they had some steps in between.
They had like Elizabeth and James that was like
maybe like a rung below, but still fancier.
Actually, what went up-
It'd be like Hulk Hogan going,
I think I'm gonna win an Academy Award in 10 years.
Yeah, I mean, they are so impressive.
Yeah, it's awesome.
Such awesome business people.
And they're also so elusive,
like they don't talk to anyone, which I really like.
They're very, very protective of the brand.
They still own majority stake.
Yeah, they're just really protective of it.
And I like that.
Yeah.
They're cool.
Wow, I wonder if this is the only time
you would ever like cars.
Ooh.
And you already know this.
Sometimes I like cars.
Yeah, but this is one of the things
I thought was coolest about her.
Mm-hmm, oh yeah.
Is that she had like a 2002 Cadillac DTS black
and then she had a G-Wagon AMG.
And I was like, dang, this Gail knows,
she's got good car style.
She's got the style in general.
She knows her shit.
Really, really cool.
Okay, bye. Love you.
Love you.
You can listen to every episode of Armchair Expert early and ad-free right now by joining
Wondery Plus in the Wondery app or on Apple Podcasts. Before you go, tell us about yourself by completing a short survey at wondery.com slash survey.