Hidden Brain - Revealing Your Unconscious: Part 2
Episode Date: March 14, 2023
In the second part of our series on implicit bias, we explore the relationship between beliefs and behaviors. We also talk with psychologist Mahzarin Banaji about whether research on implicit bias tells us more about groups than it does about individuals.
To learn more: Project Implicit · Outsmarting Implicit Bias
How do your beliefs about the world shape your reality, and your well-being? Be sure to listen to our recent episode about primal world beliefs for insights on that question. And if you enjoy our work, please consider supporting it. Thanks!
Transcript
This is Hidden Brain, I'm Shankar Vedantam.
All of us know what prejudice looks like.
We've seen news stories about swastikas spray-painted on synagogues
or nooses drawn on classroom walls to terrorize black students.
We have heard xenophobic speeches from politicians
and watched in horror as ethnic groups around the world
have exterminated their enemies.
In the late 1990s, Harvard psychologist Mahzarin Banaji and her former PhD advisor, Tony Greenwald
of the University of Washington, developed a test of hidden bias called the implicit association test or IAT.
Unlike the very public spectacle of a burning cross on someone's front lawn, the picture
of bias painted by this test was rather subtle.
By measuring the speed of people's associations, the tests showed that large numbers of Americans
found it easier to associate white faces with positive concepts than to associate black
faces with positive concepts.
Many people were similarly quick to associate men with professional activities and slow to
associate women with such activities. Lots and lots of Americans appear to have negative associations about the elderly,
the overweight, and the disabled.
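(For readers curious about the mechanics: here is a minimal sketch, in Python, of the kind of score such a test computes. It is loosely patterned on the published D-score procedure but simplified, and all the numbers below are invented.)

```python
from statistics import mean, stdev

def iat_d_score(congruent_ms, incongruent_ms):
    """Simplified IAT effect: difference in mean response latency between
    the 'incongruent' and 'congruent' pairing blocks, scaled by the pooled
    standard deviation. Real D-scoring adds trial filtering and error
    penalties; this is only the core idea."""
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# Invented reaction times in milliseconds for one hypothetical test taker.
congruent = [612, 580, 655, 590, 630]     # e.g., white face paired with "good"
incongruent = [710, 742, 689, 760, 701]   # e.g., black face paired with "good"
print(f"D = {iat_d_score(congruent, incongruent):.2f}")  # positive: faster on the congruent block
```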
Crucially, large numbers of the people taking the tests didn't think of themselves as being prejudiced,
many prided themselves on having egalitarian beliefs.
Their test results often came as a shock.
The feeling was a feeling of dread.
I would say a feeling of having had the rug pulled out from under you.
You have to start from scratch
to now rebuild a view of yourself
that will forever be a different view of yourself.
In part one of the story,
which I strongly recommend you listen to before you listen to this episode,
we explored the origins of the psychological test.
Today on the show, we explore what it means that so many people have subtle biases when
it comes to their mental associations.
Are these biases benign and just inside people's heads, or do they cause people to act in
biased ways?
Is there anything we can do to fight our biases?
The surprising connection between our biases and our behavior, this week on Hidden Brain.
In George Orwell's dystopian novel 1984, there were lots of ways to get in trouble with
a totalitarian state.
Protesting in the street was a quick way to get seized by the authorities, but you could
also get in trouble for subtler things, like reading the wrong book or having the wrong
opinions.
The eyes and ears of the state were everywhere, and subjects were expected to not just do the right things,
but to think the right thoughts.
After Harvard University put the implicit association test on its website (you can find it at implicit.harvard.edu),
interest in the test surged. Many companies began mandating that their employees take the test
during diversity training exercises.
As we saw in part one of the story, early studies that Mahzarin and others conducted suggested
a connection between implicit biases and the behavior of individuals taking the test.
In one study she conducted with physician Alexander Green,
physicians with higher bias scores on the IAT
were less likely to prescribe clot-busting treatments
to a black patient, relative to a white patient.
But soon, other studies started to come out
that showed no association between the two.
People showing higher levels
of bias on the test did not act in biased ways. Critics of the test, including Phil Tetlock
at the University of Pennsylvania, started to argue that the test was measuring the equivalent
of Orwellian thought crimes instead of judging people on their words and actions.
I think the IAT is grounded in a reductionist view of human nature.
I conducted this interview with Phil some years ago in 2017,
as we featured him on another episode of Hidden Brain.
It depicts people somewhat as association-driven automata.
I'm not temperamentally all that comfortable with that form of reductionism, but I wouldn't reject the test on that basis. And I don't
reject the test. It's a little bit different. I mean, it's a test that is
enormously intuitively appealing. I mean, I've never seen a psychological test
take off the way the IAT has and the way it's gripped the popular imagination, the way it has,
because it just seems on its surface
to be measuring something like prejudice.
You've got the differential reaction times, right,
between the black and white stimuli, say,
and it just seems to be a bullseye.
And anybody who denies it is engaging
in some kind of scholastic quibbling.
But there is the question of whether or not people
who score as prejudiced on the IAT
actually act in the
discriminatory ways toward other human beings in real world situations. And if
they don't, if there is very close to zero relationship between those two things,
what exactly is the IAT measuring?
Meta-analyses, studies tracking a body of research rather than just an individual experiment, are one
way to tell if something is a fad or a fact.
If you have a number of studies linking a medication with positive patient outcomes, for example,
you become more certain that the medication actually works.
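(As a rough sketch of how such pooling works: a fixed-effect meta-analysis weights each study's effect size by the inverse of its variance, so larger, more precise studies count for more. The numbers below are invented.)

```python
# Invented (effect size, variance) pairs for three hypothetical studies.
studies = [
    (0.24, 0.010),
    (0.05, 0.004),
    (0.15, 0.020),
]

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so larger, more precise studies pull the pooled estimate harder.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * r for (r, _), w in zip(studies, weights)) / sum(weights)
print(f"pooled effect size: {pooled:.3f}")
```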
Meta-analyses of the IAT data found mixed results.
Some studies showed a connection between implicit bias and real-world discrimination, but
plenty of others did not.
For many critics of the test, this was vindication that the test was useless and that the hype
over the test was unfounded. Challenges to the
usefulness of Mahzarin's research were published in peer-reviewed academic journals.
Soon, Mahzarin and her colleagues decided to launch a comprehensive meta-analysis of
their own. I asked her if she found there was a correlation between implicit biases
and real world behavior. The strength of the correlation is small.
When scientists measure whether two things are correlated to each other, they use a scale
from zero to one.
Taller people tend to be heavier people, so height and weight are correlated.
It doesn't mean every tall person is going to weigh more than every short person. All it says is that there is a relationship between the two. As height
goes up, you're likely to weigh more. But there are lots of things that are not
correlated. Taller people, for example, are not better at math. The closer the
correlation between two things is to zero, the weaker the relationship
between them.
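(A small worked example with made-up numbers, using Python's standard library; statistics.correlation requires Python 3.10 or later.)

```python
from statistics import correlation  # Python 3.10+

heights_cm = [160, 165, 170, 175, 180, 185]
weights_kg = [55, 62, 64, 71, 78, 82]
math_scores = [80, 65, 90, 70, 85, 72]

# Height and weight rise together, so r lands near 1.
print(f"height vs. weight: r = {correlation(heights_cm, weights_kg):.2f}")
# Height tells you nothing about math scores, so r lands near 0.
print(f"height vs. math:   r = {correlation(heights_cm, math_scores):.2f}")
```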
And when it came to Mahzarin's analysis of the IAT?
We published a paper that probably places the correlation at the smallest it has ever
been reported to be, around 0.1.
And that's because we decided to include every single study no matter how poorly it was conducted.
So we decided we will not throw any study out.
There are studies in there that really don't belong.
Somebody did a study looking at whether race bias would predict degree of smoking.
There should be no correlation, but that study is in there and it counts as a lack of correlation. So we have gone, you know, overboard in making sure that no matter what the study we will include it,
and so we show that even when you do that, the correlation is about 0.1.
Within that set, if you begin to use five criteria for what is a good study,
criteria that any scientist would agree with, the correlation jumps up to three times that size.
I first read about Mahzarin's work and the IAT
around 20 years ago.
It's fair to say I initially thought it would show
that individuals with implicit biases
always act in biased ways.
I wrote about the test in my 2010 book The Hidden Brain.
As mixed data started to emerge in the last decade, I found myself having to question
my beliefs. There was no doubt that large numbers of people around the world showed fast
or slow associations on the tests. And the results of the tests generally matched our intuitions about the nature of prejudice.
Across many countries, people more swiftly associated men, rather than women, with concepts related
to science or leadership. Groups that were in the majority often had negative biases about minorities.
The data coming in were voluminous. This wasn't about a couple of studies and a few dozen subjects.
There were hundreds of studies and millions of test takers.
But if people's test results were only weakly correlated with real world behavior, what
did these results mean?
Did it really make sense for companies around the world to mandate their employees take
these tests?
Some were even using test results to figure out who should be involved with HR and hiring, or rising to leadership positions.
As critics and supporters use the studies as ammunition for their pet beliefs,
Mahzarin and her colleagues kept doing research.
The volume of data rolling in meant that they could do something that most social scientists working in other areas could only dream about.
They could start to analyze relationships between test results and real-world behavior, not at the level of individuals, or even at the level of companies, but at the level of cities, regions, and nations.
In 2009, they reported preliminary results on an unusual finding.
We had data from a few countries, not as many as we have today, so we are in the process
of replicating that with so much more data, because we did this a long time ago.
We knew that there was a standardized mathematics test that was taken across countries in eighth grade,
and we could get the data on the gender difference in performance on that test
across many countries.
And so for each country, you compute the gender difference.
What is the gender difference in country A?
It could be big.
If boys score a lot higher than the girls, that's a big difference.
There might be countries where boys and girls score about the same.
So you have a lot of countries with a bunch of variation in the gender difference.
And what you're looking to see is whether the countries that show the largest difference
in math performance between girls and boys are also the countries whose people are carrying in their heads
a stronger association of boys with math rather than girls with math.
And we reported that there is a robust positive correlation.
Countries with higher gender bias are also the countries where the girls are underperforming
compared to boys to a greater extent.
I want to take a moment to sit with this finding.
At the time she did this study, Mahzarin didn't think much of it.
But what she was finding was that when you give people an implicit association test, measuring
how quickly they associate concepts in mathematics with men, rather than with women, there were
regional differences,
national differences in test scores.
When you averaged implicit bias scores across entire countries, people in some countries
were faster than others when it came to associating men with math and slower in associating women
with math.
Now if you turn to how eighth grade students were doing in the standardized math test, girls
in countries with high implicit bias were doing worse than girls in countries with low implicit
bias.
The interesting thing is that if you look not at a nation but at an individual student
or an individual school, you might see little to no correlation between implicit bias test results
and student performance on the standardized mathematics test. Or you might find it in one school,
but not in the next. It was only when you stepped back and looked at the big picture
that you saw a robust correlation. It was like one of those pixelated images,
or a painting that uses the style known as pointillism.
Up close, you just see a lot of dots. It's only when you step back, you realize, oh, there's
a picture here.
That paper, while I always thought that was an interesting result, I think I wasn't smart
enough to realize that the reason we were getting a fairly substantial
correlation is because we were wiping out lots of individual-level error.
We were collapsing across many different people to come up with a much more stable score
of what is going on in the larger system, in the larger environment in which people are
sitting.
And when you take the average of that, the average
of a whole bunch of people, you're likely to pick up the actual or true correlation in
a much better way.
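(Here is a toy simulation of that logic. It is my own illustration with invented noise levels, not the researchers' analysis: each person's test score and behavior share a weak regional signal buried under large personal noise, and averaging within regions recovers the signal.)

```python
import random
from statistics import correlation, mean  # Python 3.10+ for correlation

random.seed(42)
REGIONS, PEOPLE_PER_REGION = 50, 200

indiv_scores, indiv_behaviors = [], []
region_scores, region_behaviors = [], []

for _ in range(REGIONS):
    true_bias = random.gauss(0, 1)   # the regional signal
    # Each person's test score and behavior = regional signal + large personal noise.
    scores = [true_bias + random.gauss(0, 5) for _ in range(PEOPLE_PER_REGION)]
    behaviors = [true_bias + random.gauss(0, 5) for _ in range(PEOPLE_PER_REGION)]
    indiv_scores += scores
    indiv_behaviors += behaviors
    region_scores.append(mean(scores))        # averaging cancels the personal noise
    region_behaviors.append(mean(behaviors))

print(f"person-level correlation: {correlation(indiv_scores, indiv_behaviors):.2f}")  # near 0.04
print(f"region-level correlation: {correlation(region_scores, region_behaviors):.2f}")  # near 0.9
```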
When someone gives you a test, you feel the test is saying something about you.
That is true, but also not completely true when it comes to the implicit association
test. Yes, at one level, the tests are telling you about something that is inside your head.
But the tests might be telling you something much more important about the culture in which
you are living. When we come back, what happens when we look at the implicit association test not at the
level of individuals, but at the level of cities, counties and nations?
You're listening to Hidden Brain.
I'm Shankar Vedantam.
The implicit association test became very popular after psychologist Mahzarin Banaji and her colleagues
placed the test on the Harvard website.
Millions of people took the test, hoping it would give them a glimpse into their own minds.
I took the race one, and to be honest with you, I was really surprised at how insightful it was.
Does that score mean that I do not like European Americans? No.
It's my subconscious awareness of the condition that African Americans are in in this country
at this particular point.
Is it because I can't come just to say that I'm bad?
And is it just in our nature that there has to be an us in them?
And them is going to be the bad guy.
But as researchers evaluated whether test results revealed real world behavior, they found
mixed results.
Sometimes people who showed high bias on the implicit association test acted in ways that
were biased, but at other times, they didn't.
When you looked at all the studies together, the correlation between individual implicit
bias test results
and individual real world outcomes was small.
But the torrent of data from IAT test takers around the United States and around the world
meant researchers could now start to analyze not just links between test results and individual
behavior, but the correlations between average scores in an area and real world outcomes.
Early on, Mahzarin and her colleagues discovered a curious result.
The performance of boys and girls in an eighth-grade standardized math test appeared to be linked
to average implicit bias scores in those nations.
Countries where people were quicker to associate men with science showed a wider gap
in test scores. Girls did worse on the standardized test. In time, other research along these lines
started to emerge. So, Raj Chetty, for example, my colleague in economics, is interested in not only
why upward mobility is so slow these days compared to what it was,
why the American dream has vanished, but he's also interested in for whom is the American dream
more likely and less likely. And so he took our data and said, well, we can look county by county
at IAT. So now forget the individual. Instead, identify the county and take the average IAT of all the people in that county and give it just one score.
So you collapse across all the people in the county and you come up with one IAT score for the county, and you look to see if that predicts upward
mobility for black Americans. In other words, he shows that the higher the race bias, average race
bias in a county, the harder it is for black people in that county to be upwardly mobile.
This is just one example. Now that we've understood what the correlation is, I can just rattle off for you.
You know, now I think we're up to about 17 independent studies that have been published,
that show that higher race bias in a county will predict greater lethal use of force by police
against black Americans.
The most recent study shows greater militarization of police
departments in those counties, greater threats to maternal health and infant health in the
counties that have greater bias, school disciplining differences between white and black kids that
are greater in counties that have greater race bias, traffic stops, tickets, et cetera.
And these 17 studies that I'm just mentioning
that look at average IAT scores by county or by state
or by metropolitan region or by country,
they're all averages by region.
They are just predicting up and down the spectrum
in a way in which I would never have predicted,
but it's really exciting to see because these dependent
variables are not simple little things. It's not even how well you do on a math test.
This is about whether you live or die. This is about whether you will get disciplined in
school and get kicked out. This is about whether you will live as a baby.
Can you talk a little bit about why it is we would see a stronger correlation at this
aggregate level?
So in other words, if you're analyzing my brain and saying, here's your implicit bias and
then you're evaluating me to see, do I hire a black person or a white person?
Do I hire a man or a woman?
Why is it that we would see a lower correlation with me as an individual?
But when you step out and look at the aggregate,
you have a higher correlation.
How would that be the case?
That's the power of aggregation.
Any individual score is going to vary
based on lots of things jittering around in that moment.
And more importantly, whether you behave in a way
that is biased or less biased is going
to be multiply determined by little things in the local environment.
For example, my score on race bias may be quite high.
I may be quite anti-black, but it may be that in the moment in which you're testing my
behavior, a smiling person appeared in front of me who wiped out my bias and I responded
positively to that person.
Little things like that in the environment can make the behavior move around and not allow
the particular measure in which you're interested in to show itself.
So as soon as you aggregate, for every person like me, somebody else's similar behavior
will counter it.
The best way to understand it is that when we aggregate, we are removing
individual-level noise in the data.
One analogy to this idea comes from the realm of polling. In the United States, lots of polling
is done by groups that have either a conservative or a liberal bias. Unsurprisingly, polls that
lean conservative are likely to predict conservative victories. Liberal polls are likely to predict victories for Democrats.
But something interesting happens when you average out the polls. The poll that
leans too far right gets balanced out by the poll that leans left. When you average polls,
you are likely to get answers that are much more accurate than individual polls.
The same thing happens if you ask people to make estimates of something,
say the size of the US economy. Low estimates and high estimates cancel each other out when
you average the answers, leaving you with a better approximation of the correct answer.
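(A quick numeric illustration, with invented guesses, of how individual errors cancel in the average.)

```python
from statistics import mean

truth = 100                                       # the quantity being estimated
guesses = [62, 140, 85, 125, 93, 118, 74, 131]    # invented individual estimates

individual_errors = [abs(g - truth) for g in guesses]
print(f"typical individual error: {mean(individual_errors):.0f}")    # 25
print(f"error of the average:     {abs(mean(guesses) - truth):.0f}") # 4
```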
This phenomenon is sometimes known as the wisdom of the crowd, meaning the average answer
across a group of people is often more accurate than
individual answers. Now there are two views on this. One view, my view, is
that as we remove noise, we will see higher and higher levels of correlation.
In other words, as studies are conducted more carefully,
Mahzarin is saying, she expects the correlations to get better at the individual level,
not just at the level of nations.
But she also cites a second possibility,
that the IAT is really capturing a reflection in people's minds of something that is in the larger culture.
Somebody I admire greatly and agree with in many ways,
and I'm not opposed to this, but this is Keith Payne.
He argues that these things don't operate
at the level of the individual.
You will never, no matter how much error variance you remove,
it will never get better at the individual level
because we become of a certain place
when we go into a certain area.
So he did this remarkable study that, if I can just describe really quickly,
I will tell you about it.
Keith obtained a map that had been produced by Abraham Lincoln in the 1860s,
in which Lincoln had his people plot county by county the proportion of
enslaved to free people in every county in the southern states.
And he did this because obviously he was this smart guy, a scientist almost, because he thought if I know
that, if I know the proportion of enslaved people in a county, I will be able to make better military predictions about which counties
are going to fall faster than other counties.
And the simple idea was, the greater the proportion of enslaved people, the harder they will
fight and the more they will resist giving up slavery.
So this map exists even to this day.
You can look at the map, you can see the counties and little numbers that tell us what the
proportions are. So Keith,
you know, in the 21st century goes back to this map and he says, let's correlate these two things.
IAT race bias in that county today and the proportion of enslaved to free people in 1860. And lo and behold, the correlation is quite
substantial and high. And he will say, exactly as we've been discussing,
well, how can it be? These are not the same people. Notice how surprising this is.
Keith Payne, a psychologist at the University of North Carolina at Chapel Hill, looked at implicit
bias test results for people in the 21st century.
Why would these results be connected to policies that existed more than 150 years ago?
Everyone who lived in those counties in the 1860s is now very dead.
The enslaved and free people of 1860
are not the people whose minds we're measuring.
In fact, we can't. They're gone.
And it's not even the case
that the descendants of those very people live there today.
In the United States, enough migrations have happened
that that is not the case.
So Keith's point of view, which is very interesting, is that your mind
reflects what is right around you.
And that if you were somebody who lived in Seattle and Microsoft Corporation sent you
to some place in Georgia and you arrive there, you become of that place.
If there are many Confederate statues in the town in which you live, your mind will move
in the direction of more anti-black bias.
Your children will hear certain things in school and they'll bring them to your home and as
you talk about them, you too will start to acquire that.
And these are the things that ultimately create these remarkable,
almost unbelievable correlations, and what it tells us is just the long shadow of history
and how psychology is able to pick up this incredibly long shadow of history, that we can look at
data from 1860 and we can predict today what that county's race bias is. Or we can look at the race
bias today and we can predict who those people were. I find this absolutely fascinating.
In some ways, I think I'm hearing three different models. One says bias is produced by active animosity and hostility.
People who act in biased ways mean to be biased.
The second says, no, our minds are mirrors and when we go to different places, we are
going to reflect what is out there.
But I think there's also a third model,
and I might call this the hypertension model. So if you would have measured my blood pressure
right now and measured my blood pressure two hours from now or two days from now, it's
going to fluctuate because blood pressure is not super stable. It depends on what's going
on, what's happening to me physically, my mental state. But if you find that I do have
high blood pressure, it doesn't necessarily tell you I'm going to have a heart attack
next week, or I'm going to have a stroke next month.
So in other words, it's a useful measure,
but at an individual level, it's a somewhat crude measure
of determining short term risk.
But if you would step back and say,
what's the average hypertension of all the people in California
or what's the average hypertension of all the people living in New York.
And let's say the average hypertension in California was significantly higher than the average hypertension in New York.
You could very confidently say you will have many more heart attacks and strokes in California than in New York,
even though you can't predict which individuals are going to be affected.
You can say something meaningful about the group, even if you can't be very precise about individuals.
I mean, the same thing goes for smokers and non-smokers.
I might not be able to tell you which individuals
are going to develop cancer in any given week,
but I know the group of smokers
is going to have more cancers than the group of non-smokers.
So both the mirror model and the hypertension model
suggest that if you want to understand
how unconscious biases cause biased actions, you need to look beyond the individual mind
and look at larger systems and structures.
I think you very much have it right.
And I like the example of hypertension for many reasons, one of which is that the way we
measure hypertension shows that the machine is not
terribly reliable. For all the reasons that you said, if my arm is up or down, if I've
just eaten, if I've walked, it will vary, sometimes substantially. There is error
variance in that measure. The measure is not as good as it could be, same with the IAT.
The IAT is not as perfect a measure as we would think or like it to be. So there's error variance there. However, your blood pressure
does fluctuate. My brain is not the same brain as it was two hours ago. You know, having
talked to you, a bunch of connections have now been made and my bias on some topic because we've been
talking about it could be higher or lower than it was.
In other words, the IAT is actually picking up the real state of your brain now, which
was different than it was yesterday, and therefore the real, what we will say is the reliability
is low.
So I think when we put it all together, this is one strand of why I like your hypertension
example.
The other reason I like the hypertension example is when I teach, when I say, you know,
hypertension is called the silent killer because you don't feel it.
You know, it's not like osteoarthritis or something where the pain tells you that something
is going on in your body.
But wouldn't we want to know that we have it? And don't we want to
invent gizmos that are not very reliable but can still save our lives? I think of this attempt,
and I'm not speaking about the IAT here, I'm speaking about any attempt to try to get at this kind
of implicit cognition, I think it's exactly the same thing that we're trying to do for our mind
as we do for our body.
We're trying to invent a measure that may not be very reliable, but could give us enough evidence
that we would say, you know what, knowing this, I will change my behavior.
I will do things in a different way.
And so I just love the hypertension example.
You know, one question it does raise, though, is that to the extent that these measures are
in fact telling us something more useful at the aggregate level than at the individual
level, you know, whether that's the Keith Payne idea that our minds are really reflecting
what's happening around us.
In other words, what you're picking up in the measure of me as an individual is really
my reflection of what's in the society and the culture around me.
Or, in the case of the hypertension example, my hypertension is actually more relevant as
a clue when you aggregate it with the hypertension of all the other people around me in terms of
predicting where the heart attacks and strokes are going to be.
In both those cases, does it not raise questions about the model of fixing these biases that
seems to have become very popular, where so much of the effort to fix implicit
bias has been about trying to eradicate bias from individual brains.
When you think about DEI efforts at various companies and corporations, so much of that
is we'll give you a test, we'll show you through the test that you have bias, and then
we're going to try and train this bias out of you.
If in fact the bias comes from the society, if in fact it's a reflection of the society, or in fact it's part of the larger systems and structures in which we're
part of, is it a fool's errand to try and say we can actually just fix individual minds
and hope to solve the problem? In one sense I couldn't agree with you more. I would
say that it is a fool's errand to think that we can go into a corporation, especially as it is currently done, come and give a talk on implicit this or that,
and then assume that we've checked off our box and now we don't have implicit bias.
In come Frank Dobbin and others, who say,
the people who did the NIH training, nothing happened there.
Mahzarin is referring here to work by the sociologists Frank Dobbin and Alexandra Kalev,
who find that mandatory diversity training, as practiced by many corporations today, is
not only ineffective, but frequently counterproductive.
So, that's not a surprise, because the intervention is not up to the task of actually changing
anything real.
I do believe, though, that that education is necessary.
And it's necessary not because it will change an individual person's bias,
but it will make them open to structural changes their organization will want to make.
So if I work with any group that comes to me and says, what shall we do?
I will say, I will teach them about implicit bias in a scientific way. You can't do this if it's mandatory. I will
only come if it's voluntary. And what I think we will achieve is that when you then go to them and
you say, you know what, the way we run interviews is really bad. Interviews are a terrible way to make
decisions.
We are going to start to do something differently.
We are going to vet resumes with much harder, good evidence.
We are not going to let people write their hobbies on their resumes.
We are going to do these screenings.
We are going to bring interns in for six months
and we are going to pick from that instead of these silly ways in which we did.
I believe that if that education has been done
well, that you will be able to make all these institutional level changes that will ultimately change
the level of bias because you will have fixed it by intervening in the right moments. So
I'm very clear with organizations. If you want to change people, I'm not the right person for you,
I will, in fact, agree very much with you, Shankar, in saying that that would be a fool's errand.
But I don't say don't educate them because I do believe that the education
plays the role of making individuals feel secure as to why we're going about changing our organization. If police officers
in Cambridge who I've worked with haven't been in a session with me and haven't
learned why what I'm saying is in their interest, they will resist every little
step of the way: wearing a body camera, wearing a bulletproof vest, everything. But
after a session like this, and we have a paper in which we're going to summarize
the massive shift in attitudes that we've seen in police officers prior to an educational
seminar and post an educational seminar, now they're saying, yeah, I see why this is good for me.
So I believe in teaching and I believe it's necessary, but nowhere near sufficient.
When we come back: how change happens.
You're listening to Hidden Brain, I'm Shankar Vedantam. This is Hidden Brain, I'm Shankar Vedantam. Psychologist Mahzarin Banaji had a formative
experience growing up in India. She was a member of a minority group that was all but invisible.
So the short version is that Zoroastrianism is known today as the oldest monotheistic religion in the world.
Its origins were in Central Asia, in particular in what was then the Persian Empire.
And Zoroastrianism was the state religion.
It was a very successful religion. It spread far and wide until about the 8th century,
when Islamic invasions of that part of the world began,
and over two centuries, somewhere between the 8th and 10th century,
Zoroastrians who did not wish to be converted,
took off in little boats looking for asylum,
religious asylum.
And I guess the first country that allowed them
that religious asylum was India.
It was on the west coast of India that they landed
in Gujarat.
And the story goes that the local king met them
and said to them, we are full.
There are many of us here, we can't take you in.
And at least the apocryphal story is that the captain of this little boat asked for some milk and sugar
because they couldn't speak the same language.
And used this: he put the sugar into the milk, stirred it, and explained that we would just blend in and that we would
sweeten the milk.
And the king was so happy with this demo that he apparently let them in and said, you can
practice your religion, you can have whatever beliefs you wish, but you have to speak our language
and wear our dress.
Growing up, Mahzarin often felt pulled in two directions. Ever the observer, she noticed
when this happened and what it meant. I'll just give you one example. In my own
community of Zoroastrians, but particularly in my family,
I was considered dark-skinned.
I was.
People would say things to me like,
oh, you know, Karice.
Karice means she's black.
But when I would step out of the house,
I would be considered pale-skinned in South India.
So what was I, very early?
It was a conundrum.
Am I dark skinned or am I light skinned? I think it taught me that there was clearly nothing inherent
in the physical aspect of my skin color that made me light or dark. It's the context that made it so.
In India, the Zoroastrian community is known as the Parsi community,
likely because the first Zoroastrians were associated with travelers from
Pars, a region of Iran.
Mahzarin said she always sensed the Parsi community had to stand apart from the
larger Indian
society.
It was communicated in a very sort of sideways way.
Nobody said to us, you cannot do X. But we just knew we couldn't.
If you needed something fixed, you would call, you
know, the Parsi neighbor, who would call the Parsi
friend who would come and fix it.
You didn't participate in society.
And I only learned this when I married somebody from the dominant group.
And I watched how his family operated.
And I thought, wow, that's what it means to have access.
It was just normal stuff.
My father wouldn't even collect the reimbursements of health insurance that would have come to him
because he was a government servant.
He just wouldn't participate in those sorts of things.
We wouldn't feel we had access, but we knew that we would be safe.
Nobody was going to come kill us or anything.
But we had to just stay outside the mainstream in some way.
Mahzarin told me that while she was deeply
enmeshed in Parsi culture while living in India,
it was only when she came to the United States
that she really started to understand her family's faith.
I learned a lot about my own religion
when I came to Yale as an assistant professor,
and I met Stanley Insler, who was a scholar of Zoroastrianism, and particularly of the holy book, which I can, you know,
I can rattle off many thousands of lines of it in a language called Avestan, but I
don't understand it.
And what I learned by reading Stanley's books is that Zoroastrianism takes as its core principle the recognition that the world
is constructed of good and evil, and that the job of every Zoroastrian every day is to ask,
on which side am I going to be?
And that when you review your life, that's what you look at.
How many times was I on the side of good or not?
And I think of that as both quite profound, but also somewhat ironic.
I only noticed, years after we had worked on implicit attitudes, that the fundamental dimension I study is the dimension of good and bad.
Some time ago, one of Mahzarin's students came to her with a research idea that had direct
bearing on this question of good and bad. Was it possible, Tessa Charlesworth asked, that implicit biases were actually receding,
that the United States was becoming less biased?
I was completely confident, and I even said to her, yeah, you're not going to see any change,
not in our lifetime.
I mean, change will happen, but if you look at the IAT from 2007 to today, my prediction,
no change, it'll be a flat line over time because implicit bias changes,
but not fast.
So Tessa does these lovely analyses.
And what the data show is something quite stunning.
On the sexuality test, the anti-gay bias test, bias
was quite high.
Anti-gay bias was quite high in 2007.
But with every day, every month,
since then, it has slowly been coming down.
So that in 2020, that bias has come down close to neutrality.
What Mahzarin and Tessa found was that between 2007 and 2020,
anti-gay bias decreased by nearly two thirds.
It's not yet neutral, but our model predicts that in one and a half years,
Americans will be at neutrality on that.
There are also encouraging signs when it comes to biases based on race.
Race bias has also come down on two kinds of measures,
the black white test, and also the dark skin, light skin test,
which is not race, but could be seen as another proxy for something akin to race.
Both tests show exactly the same drop in bias by 25%.
So it is not nothing, but it is not 64%,
which we know it could be if we were doing things
the way we're doing for sexuality bias.
To recap, Mahzarin and Tessa's data
found that race bias has come down by 25%
and anti-gay bias by a remarkable 64%.
That's the good news.
The bad news?
There are three other types of bias where the data haven't budged at all.
Anti-elderly bias, disability bias, body weight bias. These stigmas, I think, are going to be much harder to change. They're visible,
they're on the body, and we don't talk about them nearly enough. We're not arguing about age
bias, or disability, or body weight. In fact, body weight bias, people express quite explicitly.
That may be a part of it, but also these are going to be harder to change.
Right now if we do nothing, those biases are with us for at least 200 years. That's what our model predicts.
Let me go back to sexuality and just say one thing about it that I think is incredibly interesting. We thought, okay, it's changing, but it must be young people only or
a certain part, a certain demographic group. Gay people only, things like that.
And it turns out, no, everybody is changing. Conservatives are changing, and
liberals are changing. Elderly are changing, and young are changing. Educated and
less educated are changing.
Rich are changing and poor are changing.
So I think these results are together really exciting to us.
It tells us that change is possible at this societal level.
As we put this episode together, Republicans and Democrats in the Senate
came together to pass landmark legislation
enshrining the right to gay marriage.
I find it difficult to imagine this would have happened
if it were not for a
sea change in public attitudes, a sea change that the implicit bias test seems to have picked up.
Mahzarin points to the forces driving the change: there was change at the individual level as
grandparents reconciled themselves to their grandchildren's sexuality, changes at an institutional
level as companies began offering same-sex
benefits to workers, and change at the level of national policy in terms of laws and
Supreme Court decisions.
All three happened within a tight period of time, and when you have change at this many
different levels of society, from the individual human to the Supreme Court, that's when
you can get a 64%
drop in implicit anti-gay bias.
To be clear, the fact that there has been a dramatic drop in anti-gay bias does not mean
the pendulum cannot swing back.
There are jurisdictions across the US and around the world that are actively trying to curtail
LGBTQ rights.
There is a two-way street between what happens in our minds
and public policy.
Laws can change because of the biases in our heads,
but our biases can also change as a result of laws
and cultural shifts.
I know that you don't think of yourself as being a religious person,
but again, I'm struck by something you told me about what it means to be a good Parsi.
You know, the good Parsi recognizes that the world has good and evil and has to try and make a choice every day
about which side they're on.
Do you feel like you do that in your own life?
I mean, consciously, yes.
Almost, almost like writing in a book.
One of the things I teach about is that our ancestors
had very clear evidence every day about the harm
that they did to people who were not like them.
They would get on a horse, they would go into some neighboring
village and loot it and bring their stuff back to theirs.
So at the end of the day, if you asked our ancestors on the tundra, did you harm somebody who was not like you? They would say,
damn right, I did. They would have direct evidence. You and I live in such a protected and privileged
world that we don't have to do that every day. We don't have the experience of harming people who
are different from us. So how is it that we discriminate? And we do.
We do it in a very paradoxical way.
We do it by who we help.
And I think that this is where
have I been a good person is no longer a simple question.
Because if I help people from my own tribe,
which I'm sure I do, I should
not count that in the good column until I've done a compensating behavior in the other column,
which is very hard to do. And this is why institutions and governments have to enter. It is because
you and I, as individuals, will help. If my friend calls me up and says,
my son is not doing well, can he come and spend a summer in your lab, I will say yes.
And I don't think I want to be the kind of person who doesn't do that. But if that's happening,
then my institution needs to have a program by which people who are not the children of my friend
can visit my lab.
And this is why I think helping can often be the way in which we keep the world unequal,
and yet we don't count it as something we've done that we should not
be proud of. So you see how it's complicated. And yet every day,
when I do the kind of work and see the data that come in,
I am being transformed in what I think should go into the good and bad columns of the Zoroastrian ledger.
Mahzarin Banaji is a psychologist at Harvard University. Along with Tony Greenwald, she's the author of Blindspot: Hidden Biases of Good People.
Mahzarin, thank you for joining me today on Hidden Brain.
Thank you for having me.
Always a pleasure.
Some weeks ago, we ran an experiment, and we would like to do it again.
We're exploring the possibility of regular follow-up conversations
in which our listeners can pose their questions to our guests. If you have questions or thoughts
about our series with Mahzarin Banaji and are willing to have those questions shared with a
larger Hidden Brain audience, please record a voice memo on your phone and email it to us at ideas@hiddenbrain.org.
60 seconds is plenty.
Please remember to include your name and a phone number where we can reach you.
Again, email the questions to us at ideas@hiddenbrain.org
and use the subject line, implicit bias episodes.
Hidden Brain is produced by Hidden Brain Media. Our audio production team includes Bridget McCarthy, Annie Murphy Paul, Kristin Wong, Laura
Kwerel, Ryan Katz, Autumn Barnes, and Andrew Chadwick.
Tara Boyle is our executive producer.
I'm Hidden Brain's executive editor.
Special thanks for this episode to Sound Designer Nick Woodbury.
Our unsung hero today is listener and Hidden Brain supporter, Brendan Smith of Oakland,
California.
Brendan says he likes to listen to Hidden Brain while walking with his golden retriever,
Max.
We're really glad to be part of your walks with Max, Brendan.
Thanks so much for your support. If you found this episode thought-provoking,
and you would like to join Brendan in helping us to make more episodes like this,
please do your part to keep us thriving. Help us build more shows for new listeners.
Visit support.hiddenbrain.org.
I'm Shankar Vedantam. See you soon.