Hidden Brain - Encore of Episode 32: The Scientific Process
Episode Date: December 20, 2016

There is a replication "crisis" in psychology: many findings simply do not replicate. Some critics take this as an indictment of the entire field — perhaps the best journals are only interested in publishing the "sexiest" findings, or universities are pressuring their faculty to publish more. But this week on Hidden Brain, we take a closer look at the so-called crisis. While there certainly have been cases of bad science, and even fraudulent data, there are also lots of other reasons why perfectly good studies might not replicate. We'll look at a seminal study about stereotypes, Asian women, and math tests.
Transcript
So here's the deal. Researchers recently tried to replicate a hundred published experiments in psychology.
The Center for Open Science recruited colleagues from around the world
to try to replicate the studies, and found that most of them could not be reproduced
with the same results. In fact, I'm...
Welcome to Hidden Brain. I'm Shankar Vedantam.
Today we're going to talk about what is being called a replication crisis in science.
The replicators in this recent study failed to get the same findings as the original experiments.
From cancer medicine to psychology, researchers are finding that many claims made in scientific studies
fail to hold up when those studies are repeated by an independent group.
Later in this episode, we're going to explore one provocative study that looked
at stereotypes about Asians, women, and math tests and explain what happened when researchers
tried to reproduce the finding. We're going to use this story to explore a deeper question.
What do scientists really mean when they talk about the truth?
Let's take a moment to thank and share a message from our sponsor LearnVest. LearnVest is an online financial advice company focused on empowering people nationwide
to make good decisions with their money.
Studies show that writing down your goals makes you 49% more likely to achieve them.
That's why when you work with LearnVest, you tell them what you want to accomplish,
and they create a customized financial plan
to help you get there.
Plus, they pair you with a financial planner
to help keep you on track.
To see a sample plan and get a $50 credit,
go to LearnVest.com slash brain.
The crisis has actually been a long time coming.
In 2011, for example, Dutch researchers claimed that broken sidewalks encourage racism.
They published their findings in one of the most prestigious academic journals, Science Magazine.
A couple of years later, another article in Science showed that when a gay person shows
up at a stranger's door and speaks openly about what it's like to be gay,
this has an extraordinary effect.
It was the personal connection with the gay person who, they were trying to show, you
know, was there in person.
People who are against gay marriage changed their minds after these emotional encounters.
It was the combination of, you know, contact with a minority coupled with a discussion
of pertinent issues.
The results were written up in the New York Times, the Wall Street Journal, the Washington Post.
They were featured on public radio programs, like the clip you're hearing from Science Friday.
Finally, in 2009, researchers claimed that bilingualism, the ability to speak more than
one language, is better for your brain.
All these claims had serious problems.
The Dutch claim was based on fabricated data.
One author of the gay marriage claim asked for the paper to be withdrawn after concerns
were raised about fraud.
Both claims were retracted.
Might be true?
Might not.
We don't know, because it turns out the researchers made up the data.
The bilingual advantage paper wasn't fabricated, but it was missing important context.
The researchers had conducted four experiments. Three failed to show that bilingualism was better for
the brain. Only one experiment showed a benefit. It was the only one that was published.
Angela de Bruin was on the team that worked on the bilingual
advantage study.
It is troubling, because we like to believe that what we see is
actually the truth.
But if it's only half of the results we find, and we're in fact
hiding the other half of the results, then we will never really
find out what's going on.
At the University of Virginia, psychologist Brian Nosek
decided something had to be done.
Brian felt the problem was that too many researchers and too many scientific journals
were focusing on publishing new and unusual findings.
Too few were spending time cross-checking earlier work to make sure it was solid.
One of the key factors of science is that a claim becomes a credible claim by being reproducible, that
someone else can take the same approach, the same protocol, the same procedure, do it
again themselves, and obtain a similar result.
Brian launched an effort to reproduce dozens of studies in psychology.
He published a report in 2015.
We found that we were able to reproduce the original results in less than half of the cases
across five different criteria of evaluating whether a replication was successful or not.
Over the last year, there have been many debates about what this means.
Some critics say it proves that most studies are worthless.
At many universities, researchers feel it's their integrity, not just their scientific conclusions,
that is being called into question. At Harvard University, psychologist Dan Gilbert recently
published a paper calling Brian's conclusions into question. So I think we just have to use our heads
to figure out which kinds of things we expect to replicate directly, and which kinds of things we
would only expect a conceptual replication.
And we need to calm down when we don't see direct replication and ask whether we really should have expected it at all.
To unpack all of this, let's take a detailed look at one study and what happened when researchers tried to replicate it.
I think the story reveals many truths about the ongoing controversy.
When they were graduate students at Harvard, Todd Pittinsky and his friend Margaret Shih
often went to restaurants together.
They went for the food, but they also spent a lot of time observing human behavior.
We often, after class, would go to the cheesecake factory, and she would order strawberry shortcake.
I would typically order a salad, and the number of times that the salad was delivered to
her, and the strawberry shortcake was delivered to me.
She also likes regular coke, and I'm a diabetic, so I drink Diet Coke, and without fail,
the Diet Coke would go to her, and the regular coke would go to me.
The waiters were stereotyping Todd and Margaret.
The guy was probably ordering the less healthy stuff.
The woman was ordering salads and diet drinks.
Todd and Margaret knew there was lots of research into the effects of such stereotypes.
Now, getting a dish you didn't order is one thing, but there are more serious consequences.
Stereotypes can be hurtful, they can affect performance.
But as Todd and Margaret observed the waiters, they realized something was missing in the
research.
The previous studies had focused on the negative consequences of stereotypes.
Could stereotypes also work in a positive fashion?
We thought if we really want to understand how stereotypes operate in the world, we can't
simply look at half of it.
The young researchers brainstormed how they might study the other half of the equation.
The answer came to them as they were, yeah, eating together.
We were sitting in Harvard Square over ice cream and we said what we need is a group where
the stereotypes go in very different directions.
They wanted to study a situation where stereotypes could have both positive and negative effects.
And Margaret Shih happens to be an Asian American and a woman, and we started talking about math identities,
and we kept going back and forth and back and forth, and then literally at the same moment we said,
well, why don't we study Asian women and math?
The design of the experiment was ingenious and simple.
There are negative stereotypes about women doing math and positive stereotypes about Asians and math.
So what happens when you give a math test to women who are Asian?
We hypothesized that when you make different identities salient, you should expect different
stereotypes to be applied.
Todd and Margaret figured that if they reminded Asian women about their gender, they would
see the negative stereotype at work.
But what would happen if they subtly reminded the volunteers about their Asian identity?
The researchers recruited Asian women as volunteers and asked some of them to identify their gender
on a form before taking a math test.
Earlier research had shown that when you make gender salient in this way, this triggers the
negative stereotype about women and math.
Todd and Margaret reminded other volunteers, selected at random, about their Asian heritage.
They wanted to make these volunteers remember the stereotype about Asians being good at math.
After all the volunteers finished the tests, the researchers analyzed their performance.
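To make the design concrete, here is a minimal sketch, in Python, of how a between-groups comparison like this one is commonly analyzed: volunteers are randomly assigned to a priming condition, everyone takes the same test, and the two groups' scores are compared with an independent-samples t-test. The numbers below are invented for illustration; this is not the original study's data or analysis code.

# Minimal sketch (not the researchers' code): comparing math scores
# across two priming conditions with an independent-samples t-test.
# All scores below are made up for illustration.
from scipy import stats

asian_prime  = [0.54, 0.61, 0.58, 0.66, 0.70, 0.52, 0.63, 0.59]  # accuracy, Asian-identity prime
female_prime = [0.43, 0.49, 0.55, 0.40, 0.51, 0.47, 0.44, 0.50]  # accuracy, gender prime

t_stat, p_value = stats.ttest_ind(asian_prime, female_prime)
print(f"mean, Asian prime:  {sum(asian_prime) / len(asian_prime):.3f}")
print(f"mean, gender prime: {sum(female_prime) / len(female_prime):.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A sufficiently small p-value is read as evidence that the two conditions differ. As the rest of the episode argues, any single such result is one data point, not a final answer.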
Todd was working down the hall from Margaret one day when he heard her call out to him.
She just shouted, holy cow, it worked.
So I just sort of ran down there and we started looking at the output together.
The study found that when the volunteers were reminded that they were women, they did
worse on the math test.
When they were reminded that they were Asian, they did better.
Same women, same math test.
Negative stereotype, negative result.
Positive stereotype, positive result. The study was an instant
sensation. Psychologist Brian Nosek. This is one of my favorite effects in
psychological science. Something that seems like it shouldn't be flexible, how
well we perform in math, is flexible as a function of the identities that we
have in mind and stereotypes associated with those identities. Asians being good at math, women being not as good at math. The study
quickly became a staple of college textbooks, says psychologist Carolyn Gibson.
It is a pretty amazing finding, and I was taught about that study for the first time
as an undergrad. It's been used as an example in social psychology courses for
years since it was published in 1999, and it's
used as a good example for stereotype threat and stereotype boost.
But from a scientific perspective there was one big problem. It had never been
replicated exactly. Nobody had ever followed the steps that they
followed and replicated their results, but it's been used to support further studies many times over the past 15 years.
Brian Nosek agreed, someone needed to replicate the original study.
He was spearheading a mammoth effort to reproduce dozens of studies in psychology.
Along with a panel of reviewers, he selected this study for replication and asked Carolyn Gibson at Georgia Southern University to conduct it.
Brian wanted the replication to closely match the conditions of the original study.
If you don't do that, you're really conducting two different studies.
After launching the replication, he had second thoughts about its location in the South. And the reviewers thought this looked like a case where the location might matter.
Asians in the southern US might be a more distinct minority than Asians in the northeast or in the west.
And so we recruited a second team to do a replication simultaneously at UC Berkeley, a West Coast university,
where Asians are much more prominent members of the community.
The team in Berkeley was headed by Alice Moon.
Alice was a fan of the original paper.
When I heard about it, I just thought it was like one of the very cool demonstrations in social psychology.
And so that's why I always liked this paper.
She followed the protocol of the original Harvard experiment.
She recruited Asian women, reminded them of the female side of their identity,
or the Asian side of their identity, and then gave them a math test.
So, what happened?
When we compared just as the original paper did, when we compared the participants who were in
the Asian identity salient condition with the participants in the female
identity salient condition, we found that there was no difference in their
math performance. The celebrated study had failed to replicate. When Brian Nosek
announced the finding about this and dozens of other studies that
could not be replicated, it caused an uproar.
Newspaper articles called it a crisis.
Critics leveled accusations of fraud and scientific misconduct.
In a 6,000-word cover story, the conservative magazine Weekly Standard said that liberals
had been making up research into how stereotypes affect women and people of color.
The Berkeley study, however, was not the only replication of Todd Pittinsky and Margaret
Shih's paper.
Remember how Brian had two groups conduct replications?
I asked Carolyn Gibson at Georgia Southern University what she found when she ran the experiment
on Asian women and math.
When primed with their Asian identity, Asian females did better on a math test compared to those
who had been primed with their female identity, and those primed with their female identity
did significantly worse.
Carolyn has no doubt about the meaning of what she found.
I believe that it further supports the original finding and that it gives even more robust
evidence to this idea, mostly because we followed the same method as the original study
and because we collected more participants.
And so we have a more powerful study.
At Berkeley, Alice is unsure.
I do believe that stereotypes in general do have effects on our lives.
But in terms of this particular finding about whether stereotypes can facilitate people's
academic performance, I guess it has made me question
whether or not that finding is true. Okay, so which is it? Should we trust the results of
the Berkeley study and say that Todd Pittinsky and Margaret Shih's finding was disproved,
or should we trust the Georgia study and say the finding was confirmed? What happens when
scientific studies disagree with one another?
The popular narrative of the replication crisis suggests that scientists are like dueling
gladiators.
If two scientists come up with different findings, it must mean one of them is wrong or worse,
one of them must have faked her data.
When we come back, we'll take a look at why this idea misunderstands how science is
supposed to work.
Our statistical techniques are probabilistic and not definitive.
And so we absolutely need replications.
But replications in our current academic climate
are also serving the purpose of trying
to vet out academic fraud and are serving as a detection
technique.
And those two are very different missions for replications.
Stay with us.
Support for NPR comes from Eli Lilly and Company.
For 140 years, Lilly has united caring with discovery
to make life better for people around the world.
Today, they're working to discover
life-changing medicines in the areas of diabetes, cancer,
autoimmune diseases, and Alzheimer's disease among others.
Learn how the people of Lilly turn inspiration into action at lilyforbetter.com.
Support also comes from the Amazon original series, The Man in the High Castle, which
imagines a world where the Allies lost World War II,
and America is ruled by Nazi Germany and imperialist Japan.
But revelations in secret prophetic films prove our future belongs to those who change it.
Based on the award-winning book by Philip K. Dick, executive produced by Ridley Scott
and winner of two Emmy Awards, stream the new season now on Amazon Prime Video.
This is Hidden Brain, I'm Shankar Vedantam.
We're taking a look today at how science works
and the so-called replication crisis
in the social sciences.
As I listen to the news reports about the controversy,
I found myself drawing an analogy with my own profession.
Journalism.
Here's what I mean.
A few years ago, a reporter for The New York Times was caught fabricating stories.
Instead of traveling to various locations and interviewing people, he simply made stuff up.
The newspaper went back and re-reported the stories Jayson Blair had written.
When the facts didn't match, the reporter was fired.
Imagine for a second what would happen if we re-reported every story by every reporter
at the New York Times. Even when reporters are doing a perfectly good job, the old and new
stories might not match. A source might not say exactly the same thing again. Sometimes,
if the circumstances have changed, a source might say something completely different. So when two reporters don't produce the same story,
it could be that one of them is making stuff up. But much more likely is that both of them are right.
Now, I know what you're thinking. Journalism is storytelling, science is about data.
But let's look closely at what happened in the replications that Carolyn Gibson and Alice
Moon did of Todd Pittinsky and Margaret Shih's study. In the original study, women administered the experiment.
In Georgia, the facilitator was also female. But in Berkeley, where the replication failed,
both male and female facilitators administered the study.
Could that have made a difference?
Let's be clear. So it was not an exact replication. So here's an example.
It mentions clearly in the paper, and I don't know whether this factor is important or not, that in one study
the experimenters were male, and in the other study the experimenters were female.
I have no idea whether that's a factor
that could explain the difference between the two studies.
And so, let's be clear about it:
it's not an exact replication.
This is Eric Bradlow from the University of Pennsylvania.
He's eminently qualified to talk about this stuff.
I spent four years here at Wharton studying
statistics and mathematics.
Went on to get my PhD in statistics.
And for the last 20 years, I've been applying statistical methods to lots of problems, but
I consider myself a mathematical social scientist.
Eric believes that requiring studies to achieve statistical replication, to match more or
less perfectly, before you conclude that either is true, is like requiring two reporters to cover a basketball game and come back with nearly identical stories.
Exact replication is one of those mythological ivory tower things that doesn't exist.
What we really need to think about is if the study doesn't replicate, why doesn't it replicate,
and even if it doesn't replicate exactly, it may actually reinforce the original finding.
In other words, you may be more certain.
This isn't just true about studies and psychology.
Eric told me that NIH researchers once found that lab mice, given a sedative, took 35 minutes to recover.
When the experiment was repeated, the mice took 16 minutes to recover.
The scientists scratched their heads, it made no sense.
It took a while to figure out that something that shouldn't have made a difference did.
In between the two experiments, wood shavings in the animal cages were changed.
Turns out that red cedar and pine shavings speed up the rate at which the sedative is
metabolized.
Birch or maple shavings don't.
This is not to say that repeating experiments
is useless or pointless.
It's incredibly valuable.
But replications primarily help us understand
the nuances around a phenomenon.
They're not very useful as a tool to detect fraud.
Just because you get different results doesn't mean
you shouldn't trust them.
How much are they within a margin of error of each other?
Are there other variables that would make it so that study done at University A and the
study done at University B wouldn't yield exactly the same thing?
I think that's a better way of looking at it than say, if you don't get exactly the same
results or even results that are very nearly the same, you can't trust them.
I think that's a superficial level of science.
I think you need to go below that
So when you yourself look at a study that has not replicated, or you
look at what's sometimes called a failed replication,
do you not, at the back of your mind, say, well, this disproves the first
study? Do you actually never think that way?
Never is a strong word. You know, I'm
thinking I have to think of that James Bond movie when Sean Connery said,
I will never do James Bond again.
And then 15 years later, he came out with a movie called Never Say Never Again.
No, no, I would never say that, but I would say the following.
Let's imagine that you do a study and that you find that, you know, people that take an
SAT prep course do 15% better on the SAT.
And let's imagine that someone else does a study
and the answer's only 3%.
Now, there's two possibilities.
One is that the first study, for whatever reason,
overestimated the effect.
That's entirely possible.
And therefore, 3% is less than 15%.
But note, if you combine those two studies together, your finding might
actually be stronger in the sense I'm now more sure that SAT prep helps performance on
the SAT.
Now, the effect size may shrink from 15% to 11%, but also notice I've possibly now doubled
or tripled my sample size, so my uncertainty goes down, and now I may even be more sure
that SAT prep helps, maybe not to the degree that it helped in the first study, but still
I'm more sure that it's actually effective.
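Eric's argument about combining studies can be made concrete with a bit of arithmetic. Here is a minimal sketch of fixed-effect, inverse-variance pooling, one standard way to combine two estimates. The effect sizes and standard errors are made up to echo his hypothetical; they are not taken from any real SAT study.

# Hypothetical illustration of pooling two study estimates
# (fixed-effect, inverse-variance weighting). Numbers are invented.
import math

est1, se1 = 15.0, 4.0  # study 1: +15% effect, standard error 4
est2, se2 = 3.0, 3.0   # study 2: +3% effect, standard error 3

w1, w2 = 1 / se1**2, 1 / se2**2               # weight each study by its precision
pooled = (w1 * est1 + w2 * est2) / (w1 + w2)  # lands between the two estimates
pooled_se = math.sqrt(1 / (w1 + w2))          # smaller than either study's alone

print(f"pooled effect:         {pooled:.1f}%")
print(f"pooled standard error: {pooled_se:.1f}")

The pooled effect comes out smaller than the first study's 15%, but the combined standard error is smaller than either study's on its own, which is exactly Eric's point: more data can shrink the estimated effect while increasing confidence that the effect is real.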
When you think about different branches of science, though, aren't there branches of
science where you can expect the same thing to happen very predictably over and over again
when you look at particle physics, for example, you would expect that if you fire, you know, 20,000 protons out of a gun, that they're basically
going to do the same thing pretty much every time.
Well, it's been a while. You're testing the boundary of my memory of my particle physics class
when I took it here at Penn. But my understanding is, of course, and this is what statisticians
study, right?
We study the concept of randomness.
And so every science, every discipline, unless you're talking about an equal sign, like
E equals MC squared.
E doesn't approximately equal MC squared.
It actually equals.
Most physical laws and things aren't equal signs.
There's approximate signs, and so that means there's randomness to it.
I think if you fired 20,000 protons,
you would see that there's a deviation
in the way they collide with other particles
and there's randomness.
I think the same thing is true in the social sciences.
You bring in 500 subjects, you bring them in at university A,
you bring them in at university B.
There's randomness in people's answers.
Of course, you would hope the overall patterns
would be similar. But this belief
that you're gonna get exactly the same findings, I'm not sure that's something science should
be striving for. Any individual study is just that, an individual study. It isn't the truth.
Every observation is a point. It's a dot. And we observe dots. And then we observe more dots.
And if those dots replicate, great, then we have more belief.
Science is about the evolution of knowledge, right?
But the process is never ending.
There will always be more things to uncover, more nuances.
We get more certain about what it is we know,
and we also get more certain about what are called boundary conditions, or moderators. Like, for example,
Maybe this effect holds in urban areas versus not.
Maybe it holds in California and not in Alabama.
Maybe it holds for people that hold these stereotypes,
and maybe it doesn't hold for people that don't.
That's, to me, that's an advance of science.
We have found what's called a main effect,
which is, you know, stereotypes have an effect on outcomes
or priming has an effect on outcomes.
And then we say, oh, and by the way, it doesn't hold in these conditions. That's not a failure to replicate.
That's a more nuanced view of the original finding.
At Harvard, Dan Gilbert says you can expect some studies to replicate nearly perfectly every time,
but in other cases, the very thing you're studying is changing, so exact replications aren't possible.
There are many findings in psychological science that we would expect to replicate quite
exactly years later and on different populations.
Eye-blink conditioning is a very nice example.
If I blow in your eye enough, you're going to start blinking as I purse my lips.
And that's not going to be very different across cultures, across times, across age groups.
Other kinds of findings certainly are not.
One of my favorite experiments in social psychology shows that when young men who are from the North or the South of the United States are insulted,
they react very differently, because Northerners and Southerners have very different codes of honor.
Now you can't take that experiment and expect to do it in Italy or expect to do it 25 years
from now.
It's an experiment that's of its moment and of its time.
Every researcher I spoke with told me there's lots of agreement within the scientific community.
There are certainly many scientific studies that are poorly designed.
There are researchers who do shoddy work.
There is great pressure at universities and scientific journals to publish striking findings.
But the solution to all these problems, say Eric Bradlow, Brian Nosek and Dan Gilbert,
is more and better science.
Eric Bradlow.
The truth will come out.
More dots will come out. And if it turns out that what I published
just isn't true, not because I did anything fraudulent, but because of sample size or the
way I collected the data, then you know what? Science will eventually figure out that
what I'm saying is not true.
Brian Nosek is bemused that his findings about replicability have been taken to mean that
the studies that fail to reproduce are worthless.
He started a new system where researchers register protocols for their studies and commit
to sticking to them.
Scientific journals commit to publishing the findings of these studies, regardless of whether
the results are sexy.
Science is the slow march of accumulating evidence, and it's very easy to want a simple
answer.
Is it true? Is it false? But really, replication is just an opportunity to accumulate more evidence
to get a more precise estimate of that particular effect.
To most people, the debate over scientific truth is an abstract issue. Most of us turn to
scientists for answers.
Should I drink a glass of red wine in the evening?
Is this drug safe to give to my ailing mother?
Should I give my kid a dollar every time she does well
at school?
In reality, science is more in the question business
than the answer business.
There's a reason nearly every scientific paper ends
with a call for more research.
Especially when it comes to human behavior, nearly every conclusion you can draw about human beings has tons of exceptions.
Are people selfish?
Yeah, except millions act altruistically every day.
Are humans kind?
Yes, except that few species are capable of greater cruelty.
If you want answers that never change, definitive conclusions and final truths, odds are, you
don't want to ask a scientist.
This episode of Hidden Brain was produced by Kara McGuirk-Allison, Max Nesterak, and Maggie
Penman. Our staff includes Renee Klahr, Jenny Schmidt, and our supervising producer Tara
Boyle. Our unsung hero this week is Camille Smiley, who truly lives up to her name. Camille
is the executive assistant for our department and one of our favorite people. Whether you
want to bounce around ideas, order office supplies, or just procrastinate next to her
desk, Camille is always game to help out.
For more Hidden Brain, you can find us on Facebook and Twitter and listen for my stories
on your local public radio station.
And speaking of your local public radio station, this is the season for giving.
If you enjoy this program, please consider supporting your local station and tell them
hidden brain sent you.
It'll take just a few minutes, and it's tax deductible.
Go to stations.npr.org.
And thanks.
I'm Shankar Vedantam and this is NPR.
As you look back on the past year, you can listen back, too.
New NPR podcasts and old favorites are all waiting for you on the NPR One app.
Dive back in and listen to Embedded, Invisibilia, Code Switch, or even the Hidden Brain archive.
It's perfect for a long road trip or a break from the holiday parties.
Listen anytime on the NPR One app or at npr.org slash podcast.