Hidden Brain - Episode 32: The Scientific Process
Episode Date: May 24, 2016
Lots of psychology studies fail to produce the same results when they are repeated. How do scientists know what's true? ...
Transcript
So here's the deal. Researchers recently tried to replicate a hundred experiments in psychology
that were published in leading journals. The Center for Open Science recruited colleagues from around
the world to try and replicate each one, and found that most of them could not be reproduced
with the same results.
Welcome to Hidden Brain. I'm Shankar Vedantam. Today we're going to talk about what has
been called a replication crisis in science.
The replicators in this recent study failed to get the same findings as the original experiments.
From cancer medicine to psychology, researchers are finding that many claims made in scientific studies
fail to hold up when those studies are repeated by an independent group.
Later in this episode, we're going to explore one provocative study that looked
at stereotypes about Asians, women, and math tests and explain what happened when researchers
tried to reproduce the finding. We're going to use this story to explore a deeper question.
What do scientists really mean when they talk about the truth? Before we get to that story, I want to give you some context. The crisis has actually been a
long time coming. In 2011, for example, Dutch researchers claimed that broken sidewalks
encourage racism. They published their findings in one of the most prestigious academic journals,
Science magazine. A couple of years later, another article in Science showed that when a gay person shows up at a stranger's door
and speaks openly about what it's like to be gay, this has an extraordinary effect.
So it was the personal connection between the gay person and the person they were trying to persuade, you know, there in person.
People who are against gay marriage change their minds
after these emotional encounters.
It was the combination of, you know,
contact with a minority coupled with a discussion
of issues pertinent.
The results were written up in the New York Times,
the Wall Street Journal, the Washington Post.
They were featured on public radio programs,
such as the clip you're hearing on Science Friday.
Finally, in 2009, researchers claimed that bilingualism,
the ability to speak more than one language, is better for your brain.
All these claims had serious problems.
The Dutch claim was based on fabricated data.
One author of the gay marriage claim asked for the paper to be withdrawn
after concerns were raised about fraud.
Both claims were retracted.
Might be true? Might not. We don't know, because it turns out the researchers made up the data.
The bilingual advantage paper wasn't fabricated, but it was missing important context.
The researchers had conducted four experiments.
Three failed to show that bilingualism was better for the brain. Only one experiment showed
a benefit. It was the only one that was published. Angela de Bruin was on the team that worked
on the bilingual advantage study.
It is troubling, because we like to believe that what we see is actually the truth. But
if it's only half of the results we find, and we're in fact hiding the other half of the
results, then we will never really find out what's going on.
At the University of Virginia, psychologist Brian Nosek decided something had to be done.
Brian felt the problem was that too many researchers and too many scientific journals
were focusing on publishing new and unusual findings.
Too few were spending time cross-checking earlier work to make sure it was solid.
One of the key factors of science is that a claim becomes a credible claim by being reproducible.
That someone else can take the same approach, the same protocol, the same procedure, do
it again themselves and obtain a similar result.
Brian launched an effort to reproduce dozens of studies in psychology.
He published a report in 2015.
We found that we were able to reproduce the original results in less than half of the cases
across five different criteria of evaluating whether a replication was successful or not.
Over the last year, there have been many debates about what this means.
Some critics say it proves that most studies are worthless.
At many universities, researchers feel it's their integrity, not just their scientific
conclusions, that is being called into question.
At Harvard University, psychologist Dan Gilbert recently published a paper calling Brian's
conclusions into question.
So I think we just have to use our heads to figure out which kinds of things we expect to replicate directly,
and which kinds of things we would only expect a conceptual replication of.
And we need to calm down when we don't see direct replication and ask whether we really should have expected it at all.
To unpack all of this, let's take a detailed look at one study and what happened when researchers
tried to replicate it. I think the story reveals many truths about the ongoing controversy.
When they were graduate students at Harvard, Todd Pittinsky and his friend Margaret Shih often went
to restaurants together. They went for the food, but they also spent a lot of time observing human behavior.
We often, after class, would go to the Cheesecake Factory, and she would order strawberry
shortcake.
I would typically order a salad, and the number of times that the salad was delivered to
her, and the strawberry shortcake was delivered to me.
She also likes regular Coke, and I'm a diabetic,
so I drink Diet Coke,
and without fail, the Diet Coke would go to her
and the regular Coke would go to me.
The waiters were stereotyping Todd and Margaret.
The guy was probably ordering the less healthy stuff.
The woman was ordering salads and diet drinks.
Todd and Margaret knew there was lots of research into the effects of such stereotypes.
Now, getting a dish you didn't order is one thing, but there are more serious consequences.
Stereotypes can be hurtful, they can affect performance.
But as Todd and Margaret observed the waiters, they realized something was missing in the
research.
The previous studies had focused on the negative consequences of stereotypes.
Could stereotypes also work in a positive fashion?
We thought if we really want to understand how stereotypes operate in the world, we can't
simply look at half of it.
The young researchers brainstormed how they might study the other half of the equation.
The answer came to them as they were, yeah, eating together.
We were sitting in Harvard Square over ice cream and we said what we need is a group where the
stereotypes go in very different directions. They wanted to study a situation where stereotypes
could have both positive and negative effects. And Margaret Shih happens to be an Asian American
and a woman, and we started talking about math identities
and we kept going back and forth and back and forth and then literally at the same moment
we said, well why don't we study Asian women and math?
The experiment they designed was ingenious and simple.
There are negative stereotypes about women doing math and positive stereotypes about
Asians and math.
So what happens when you give a math test to women who are Asian?
We hypothesized that when you make different identities salient, you should expect different
stereotypes to be applied.
Todd and Margaret figured that if they reminded Asian women about their gender, they would
see the negative stereotype at work.
But what would happen if they subtly reminded the volunteers about their Asian identity?
The researchers recruited Asian women as volunteers and asked some of them to identify their gender
on a form before taking a math test.
Earlier research had shown that when you make gender salient in this way, this triggers the
negative stereotype about women and math.
Todd and Margaret reminded other volunteers, selected at random, about their Asian heritage.
They wanted to make these volunteers remember the stereotype about Asians being good at math.
After all the volunteers finished the tests, the researchers analyzed their performance.
Todd was working down the hall from Margaret
one day when he heard her call out to him. She just shouted, holy cow, it worked. So I just
sort of ran down there and we started looking at the output together.
The study found that when the volunteers were reminded that they were women, they did worse
on the math test. When they were reminded that they were Asian, they did better. Same women,
same math test. Negative stereotype, negative result. Positive stereotype, positive result.
The study was an instant sensation. Psychologist Brian Nosek.
This is one of my favorite effects in psychological science. Something that seems like it shouldn't
be flexible, how well we perform in math, is flexible as a function of the identities that we have in mind and stereotypes associated
with those identities. Asians being good at math, women being not as good at math.
The study quickly became a staple of college textbooks, says psychologist Carolyn Gibson.
It is a pretty amazing finding and I heard about that study for the first time as an
undergrad.
It's been used as an example in social psychology courses for years, since it was published
in 1999, and it's used as a good example of stereotype threat and stereotype boost.
But from a scientific perspective, there was one big problem.
It had never been replicated exactly.
Nobody had ever followed the steps that they followed and replicated their results,
but it's been used to support further studies many times over the past 15 years.
Brian Nosek agreed, someone needed to replicate the original study.
He was spearheading a mammoth
effort to reproduce dozens of studies in psychology. Along with a panel of
reviewers, he selected this study for replication and asked Carolyn Gibson at
Georgia Southern University to conduct it. Brian wanted the replication to
closely match the conditions of the original study. If you don't do that, you
really are conducting two different studies. After launching the replication, he had second thoughts about
its location in the south.
And the reviewers thought this looks like a case where the location might matter. Asians
in the southern US might be a more distinct minority than Asians in the Northeast or in the West.
And so we recruited a second team to do a replication simultaneously at UC Berkeley, a West Coast
university, where Asians are much more prominent members of the community.
The team in Berkeley was headed by Alice Moon.
Alice was a fan of the original paper.
When I heard about it, I just thought it was like one of the very cool demonstrations in social psychology,
and so that's why I always liked this paper. She followed the protocol of the original Harvard
experiment. She recruited Asian women, reminded them of the female side of their identity,
or the Asian side of their identity, and then gave them a math test.
So what happened? When we compared, just as the original paper did, the
participants who were in the Asian-identity-salient condition with the participants in the female-identity-salient condition, we found that there was no difference in their
math performance.
The celebrated study failed to be replicated.
When Brian Nosek announced the finding about this and dozens of other studies that could
not be replicated, it caused an uproar.
Newspaper articles called it a crisis.
Critics hurled accusations of fraud and scientific misconduct.
In a 6,000-word cover story, the conservative magazine Weekly Standard said that liberals
had been making up research into how stereotypes affect women and people of color.
The Berkeley study, however, was not the only replication of Todd Pittinsky and Margaret
Shih's paper.
Remember how Brian had two groups conduct replications?
I asked Carolyn Gibson at Georgia Southern University what she found when she ran the experiment
on Asian women and math.
When primed with their Asian identity, Asian females did better on a math test compared to those
who had been primed with their female identity, and those primed with
their female identity did significantly worse.
Carolyn has no doubt about the meaning of what she found.
I believe that it further supports the original finding and that it gives even
more robust evidence to this idea, mostly because we followed the same method as the original study
and because we collected more participants.
And so we have a more powerful study.
At Berkeley, Alice is unsure.
I do believe that stereotypes in general do have effects on our lives,
but in terms of this particular finding
about whether stereotypes can facilitate people's academic performance, I guess it has
made me question whether or not that finding is true. Okay, so which is it? Should we trust the
results of the Berkeley study and say that Todd Pittinsky and Margaret Shih's finding was disproved?
Should we trust the Georgia study and say the finding was confirmed?
What happens when scientific studies disagree with one another?
The popular narrative of the replication crisis suggests that scientists are like dueling
gladiators.
If two scientists come up with different findings, it must mean one of them is wrong, or
worse, one of them must have faked her data.
When we come back, we look at why this idea misunderstands how science actually works.
Our statistical techniques are probabilistic and not definitive, and so we absolutely
need replications.
But replications in our current academic climate are also serving the purpose of trying to vet out academic fraud and are serving as a detection technique, and
those two are very different missions for replications.
Stay with us.
Support for this podcast and the following message come from the all-new Toyota Prius.
Thanks to a new double wishbone rear suspension and a lower center of gravity, the 2016 Prius makes any getaway
more thrilling, and with modern, striking lines and standard bi-LED headlights, it has an edge
at every angle. With sleek styling and a surprisingly exhilarating ride, the 2016 Prius is
anything but expected. The all-new 2016 Toyota Prius,
it's what's next.
Support also comes from Weebly. You don't have to be a web designer or know code to create
a fantastic website with Weebly. Created for people with the courage to start their own
business and the dream to be their own boss. Choose from professionally designed, mobile-friendly
themes, then just drag and drop to quickly build and publish your site, and update your site on any device.
Get started for free at weebly.com slash brain.
W-E-E-B-L-Y dot com slash brain.
This is Hidden Brain. I'm Shankar Vedantam.
We're taking a look today at how science works and the so-called replication crisis in
the social sciences.
As I listened to the news reports, I found myself drawing an analogy with my own profession,
journalism.
Here's what I mean.
A few years ago, a reporter for the New York Times was caught fabricating stories. Instead of traveling to various locations and interviewing people, he simply made stuff up.
The newspaper went back and re-reported the stories Jayson Blair had written.
When the facts didn't match, the reporter was fired.
Imagine for a second what would happen if we re-reported every story by every reporter
at the New York Times.
Even when reporters are doing a perfectly good job, the old and new stories might not match.
A source might not say exactly the same thing again.
Sometimes, because the circumstances have changed, a source might say something completely different.
So when two reporters don't produce the same story, it could be that one of them is making stuff up.
But much more likely is that both of them are right.
Now, I know what you're thinking. Journalism is storytelling, science is about data.
But let's look closely at what happened in the replications that Carolyn Gibson and Alice Moon did of Todd Petinsky's study.
In the original study, women administered the experiment.
In Georgia, the facilitator was also female.
But in Berkeley, where the replication failed,
both male and female facilitators administered the study.
Could that have made a difference?
Let's be clear, it was not an exact replication.
So here's an example. It mentions clearly in the paper, and I don't know whether this factor is important or not, that in one study the
experimenters were male, and in another study the experimenters were female. I have no idea whether that's a factor that could explain the difference between the two studies. And so let's be clear: it was not an exact replication.
This is Eric Bradlow from the University of Pennsylvania.
He's eminently qualified to talk about this stuff.
Spent four years here at Wharton studying statistics and mathematics.
Went on to get my PhD in statistics.
And for the last 20 years, I've been applying statistical methods to lots of problems,
but I consider myself a mathematical social scientist.
Eric believes that requiring studies to achieve statistical replication, to match more or
less perfectly, before you conclude that either is true, is like requiring two reporters to
cover a basketball game and come back with nearly identical stories.
Exact replication is one of those mythological ivory tower things that doesn't exist.
What we really need to think about is if the study doesn't replicate, why doesn't it
replicate, and even if it doesn't replicate exactly, it may actually reinforce the original
finding.
In other words, you may be more certain.
This isn't just true of studies in psychology.
Eric told me that NIH researchers once found that lab mice, given a sedative, took 35 minutes
to recover.
When the experiment was repeated, the mice took 16 minutes to recover.
The scientists scratched their heads.
It made no sense.
It took a while to figure out that something that shouldn't have made a difference did.
In between the two experiments, wood shavings in the animal cages were changed.
Turns out that red cedar and pine shavings speed up the rate at which the sedative is metabolized.
Birch or maple shavings don't.
This is not to say that repeating experiments is useless or pointless.
It's incredibly valuable.
But replications primarily help us understand
the nuances around a phenomenon. They're not very useful as a tool to detect fraud.
Just because you get different results doesn't mean you shouldn't trust them. Are they
within a margin of error of each other? Are there other variables that would make it so that a study
done at University A and a study done at University B wouldn't yield exactly the same thing?
I think that's a better way of looking at it than say, if you don't get exactly the same
results or even results that are very nearly the same, you can't trust them.
I think that's a superficial level of science.
I think you need to go below that.
So when you yourself look at a study that has not replicated, or you look at what's sometimes
called a failed replication,
do you not, at the back of your mind, say, well, this disproves the first
study? Do you actually never think that way?
Never is a strong word. Never say never, you know. I have to think of that
James Bond movie, when Sean Connery said, I will
never do James Bond again, and then 15 years later
he came out with a movie called Never Say Never Again.
No, no, I would never say that, but I would say the following.
Let's imagine that you do a study and that you find that, you know,
people that take an SAT prep course do 15% better on the SAT.
And let's imagine that someone else does a study and the answer is only 3%.
Now, there's two possibilities.
One is the first study for whatever reason
overestimated the effect.
That's entirely possible.
And therefore, 3% is less than 15%.
But note, if you combine those two studies together,
your finding might actually be stronger in the sense,
I'm now more sure that SAT prep helps performance on the SAT.
Now, the effect size may shrink from 15% to 11%, but also notice I've possibly now doubled
or tripled my sample size, so my uncertainty goes down, and now I may even be more sure
that SAT prep helps, maybe not to the degree that it helped in the first study, but still
I'm more sure that it's actually effective.
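To make that pooling logic concrete, here is a minimal sketch of one standard way to combine two estimates, inverse-variance weighting. The numbers are illustrative assumptions, not figures from any study in this episode; the standard errors are simply chosen so the pooled estimate lands near the 11% Eric mentions.

    # A minimal sketch of inverse-variance pooling, with made-up numbers.
    # The effect sizes and standard errors below are illustrative assumptions,
    # not results from any study discussed in this episode.

    def pool(estimates, std_errors):
        """Inverse-variance weighted mean of several estimates, plus its standard error."""
        weights = [1.0 / se ** 2 for se in std_errors]
        total = sum(weights)
        pooled = sum(w * e for w, e in zip(weights, estimates)) / total
        pooled_se = (1.0 / total) ** 0.5
        return pooled, pooled_se

    # Study 1: a 15% improvement with a 3.0% standard error.
    # Study 2: a 3% improvement with a 4.2% standard error.
    effect, se = pool([0.15, 0.03], [0.03, 0.042])
    print(f"pooled effect = {effect:.1%}, standard error = {se:.1%}")
    # Prints roughly: pooled effect = 10.9%, standard error = 2.4%
    # The pooled standard error is smaller than either study's alone, so we are
    # more certain the effect is real, even though the estimate shrank.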
When you think about different branches of science, though, aren't there branches of
science where you can expect the same thing to happen very predictably over and over again
when you look at particle physics, for example, you would expect that if you fire, you know,
20,000 protons out of a gun, that they're basically going to do the same thing pretty
much every time.
Well, you're testing the boundary of my memory of my particle physics class when I took it here at Penn.
But my understanding is, of course, and this is what statisticians study, right?
We study the concept of randomness. And so every science, every discipline, unless you're talking about an equal sign, like
E equals MC squared.
E doesn't approximately equal MC squared. It actually equals. Most physical laws and things aren't
equal signs. They're approximate signs, and so that means there's randomness to it. I think if you
fired 20,000 protons, you would see that there's a deviation in the way they collide with other particles
and there's randomness. I think the same thing is true in the social sciences.
You bring in 500 subjects, you bring them in at University A,
you bring them in at University B,
there's randomness in people's answers.
Of course, you would hope the overall patterns would be similar,
but this belief
that you're going to get exactly the same findings,
I'm not sure that's something science should be striving for.
Any individual study is just that, an individual study.
It isn't the truth.
Every observation is a point, it's a dot, and we observe dots, and then we observe more
dots, and if those dots replicate, great, then we have more belief.
Science is about the evolution of knowledge, right?
But the process is never ending.
There will always be more things to uncover, more nuances.
We get more certain about what it is we know, and we also get more certain about what are
called boundary conditions or moderators like, for example, maybe this effect holds in
urban areas versus not.
Maybe it holds in California and not in Alabama.
Maybe it holds for people that hold these stereotypes, and maybe it
doesn't hold for people that don't.
That's, to me, that's an advance of science.
We have found what's called a main effect,
which is, you know, stereotypes
have an effect on outcomes,
or priming has an effect on outcomes,
and then we say, oh, and by the way,
it doesn't hold in these conditions.
That's not a failure to replicate.
That's a more nuanced view of the original finding.
At Harvard, Dan Gilbert says you can expect some studies to replicate nearly perfectly every time,
but in other cases, the very thing you're studying is changing, so exact replications aren't possible.
There are many findings in psychological science that we would expect to replicate quite exactly,
years later and on different populations.
Eye-blink conditioning is a very nice example.
If I blow in your eye enough, you're going to start blinking as I purse my lips.
And that's not going to be very different across cultures, across times, across age groups.
Other kinds of findings certainly are.
One of my favorite experiments in social
psychology shows that when young men who are from the North or the South of the United States are
insulted, they react very differently, because Northerners and Southerners have very different
codes of honor. Now you can't take that experiment and expect to do it in Italy, or expect to do it 25 years from now. It's an experiment that's of its moment and of its time.
Every researcher I spoke with told me there's lots of agreement within the scientific community.
There are certainly many scientific studies that are poorly designed.
There are researchers who do shoddy work. There is great pressure at universities and scientific
journals to publish striking findings.
But the solution to all these problems, say Eric Bradlow, Brian Nosek, and Dan Gilbert,
is more and better science. Eric Bradlow. The truth will come out. More dots will come out.
And if it turns out that what I published just isn't true, not because I did anything fraudulent,
but because of sample size or the way I collected the data, then you know what, science will eventually figure
out that what I'm saying is not true.
Brian Nosek is bemused that his findings about replicability have been taken to mean that
the studies that fail to reproduce are worthless.
He started a new system where researchers register protocols for their studies and commit
to sticking to them.
Scientific journals commit to publishing the findings of these studies, regardless of whether the results are sexy.
Science is the slow march of accumulating evidence, and it's very easy to want a simple answer: is it true, is it false?
But really, replication is just an opportunity to accumulate more evidence to get a more precise estimate of that particular effect.
To most people, the debate over scientific truth is an abstract issue.
Most of us turn to scientists for answers.
Should I drink a glass of red wine in the evening?
Is this drug safe to give to my ailing mother?
Should I give my kid a dollar every time she does something worthwhile at school?
In reality, science is more in the question business than the answer business.
There's a reason nearly every scientific paper ends with a call for more research.
Especially when it comes to human behavior, nearly every conclusion you can draw about human beings
has tons of
exceptions.
Are people selfish?
Yeah, except millions act altruistically every day.
Are humans kind?
Yes, except that few species are capable of greater cruelty.
If you want answers that never change, definitive conclusions and final truths, odds are, you don't want
to ask a scientist.
The Hidden Brain podcast is produced by Kara McGuirk-Allison, Maggie Penman, and Max Nesterak.
You can follow us on Facebook, Twitter, and Instagram, and listen for my stories on your local
public radio station.
If you like this episode, consider giving us a review on iTunes.
It will help other people find the podcast.
I'm Shankar Vedantam and this is NPR.
Thanks for listening to Hidden Brain.
If you haven't already, please try the NPR One app
for your phone. You can hear Guy Raz's interview with TED curator Chris Anderson, where they
discuss the art of public speaking. You can search for shows, find stories from your local
station, and listen to great podcasts. NPR One is in the app store now. Also, Invisibilia season 2 returns on June 17th.
This season, the show goes to a prison, an oil rig,
a McDonald's in Russia, and a beach in New Jersey
to explore the worlds of work, family, and government.
Catch up on season one anytime and listen
to the season two preview at npr.org slash podcast
and on the NPR One app.