The Journal. - The End of Facebook’s Content Moderation Era
Episode Date: January 9, 2025
Meta CEO Mark Zuckerberg announced this week that Facebook, Instagram and Threads would dramatically dial back content moderation and end fact checking. WSJ's Jeff Horwitz explains what that means for the social media giant.
Further Reading:
- Social-Media Companies Decide Content Moderation Is Trending Down
- Meta Ends Fact-Checking on Facebook, Instagram in Free-Speech Pitch
Further Listening:
- Meta Is Struggling to Boot Pedophiles Off Facebook and Instagram
- Is Fighting Misinformation Censorship? The Supreme Court Will Decide.
- The Facebook Files
Transcript
[♪ THEME MUSIC PLAYING]
So are you aware that there's like this trend at the New Year
where people come up with lists for what's in and what's out?
I am aware of this. It is a very sad phenomenon, Ryan.
You really don't have an in and out list for yourself?
I don't have an in and out list for myself.
Well, if you were to come up with one for, let's say, Facebook, what would be on its
in list and on its out list?
I mean, in would be getting along with the new administration and out would be content
moderation.
Our colleague Jeff Horwitz covers Meta, Facebook's parent company.
For nearly a decade, Meta and many other
social media companies have taken an active role
in policing content on their platforms,
taking down posts with hate speech,
or sticking fact-checking labels on viral content.
But now, for Meta, that era is over.
Hey everyone.
I want to talk about something important today, because it's time to get back to our roots
around free expression on Facebook and Instagram.
Earlier this week, Meta CEO Mark Zuckerberg posted a video announcing big changes to how
Facebook, Instagram, and Threads will moderate content.
The bottom line is that after years of having our content moderation work focus primarily
on removing content, it is time to focus on reducing mistakes, simplifying our systems,
and getting back to our roots about giving people voice.
So it's sort of this wholesale departure slash rejection of the idea that the platform
should even govern itself, rather than a question of, is it doing a good enough job?
It's like, nah, this isn't our job.
Welcome to The Journal, our show about money, business, and power.
I'm Ryan Knudson.
It's Thursday, January 9th.
Coming up on the show,
Meta's major retreat on content moderation.
Let's go back in time for a minute.
Can you tell me the story of where Facebook's content moderation system came from?
I mean, in any truly substantive sense, it was post 2016.
So there was this succession of scandals and crises from fake news in the United States.
Facebook now under fire. Critics say it allowed fake news to spread on the platform, potentially...
To Russian election interference.
ABC News has obtained some of the posts and ads congressional investigators say the Russians planted on Facebook as part of the Kremlin...
...to the genocide in Myanmar.
Facebook has been accused by U.N. investigators and human rights groups of facilitating violence
against the minority Rohingya Muslims in Burma.
All of these things were sort of pointing in the direction of the company needs to do more.
And for a few years, the company really did.
Initially, Zuckerberg was reluctant to moderate content beyond what was legally required.
In November 2016, he said Facebook needed to be, quote, extremely cautious about becoming arbiters of truth ourselves.
But there was so much pressure on Facebook,
from lawmakers, legacy media outlets, and advertisers,
that Zuckerberg eventually gave in.
And the company spent billions of dollars
building out a massive content moderation machine.
At the peak of Facebook's content moderation system,
like when it was doing the most, what
did that look like?
What did it actually do?
A few things.
There was first just an army of human moderators, mostly outsourced.
Then there were efforts to improve the quality of news on the platform, like to sort of weight
the system in favor of outlets that users said
they trusted rather than like kind of random fly-by-night headlines. And there were also
automated systems sort of scouring the platform for things that were violations of the platform's
rules and demoting them if not outright removing them.
In addition to those outsourced moderators, who reviewed user posts for things like hate speech,
Facebook also started using fact checkers
to identify and label false or misleading articles.
Fact checkers were in some ways the beachhead for all of this stuff,
and ironically, I think it was one of the things
that they felt was the least controversial originally,
which is that they were going to contract with like
respected known entities that adhered to journalistic standards, had accreditation.
News organizations like the Associated Press and others.
Yeah, like the Associated Press, like PolitiFact, Reuters I believe was for a while in there,
Snopes.
So these were all entities that like, were basically
supposed to just find viral content that was false and flag it. And that would
serve two purposes. One is it would notify users who saw the post that there
were at the very least some serious reservations about its accuracy. And the
second thing is that it was gonna get used
by Facebook to sort of slow down the spread of things.
Didn't mean it would go away or let, you know,
they take it down.
It just meant that like,
they weren't gonna actively promote it.
And so this was like, I think a big part
of the company's efforts to deal with fake news
on its platform was this initiative.
How effective was this system?
Did it seem to actually work to slow down the spread of misinformation and hate speech
on Facebook?
There's two ways to answer that, right?
The first one is that, no, it never worked great.
The second answer is that it was all the platform had and it was therefore invaluable.
Fact checks were always slower than virality.
By the time a fact checker got around to publishing something saying, this is false, here are
the citations, usually that thing had gotten most of the traffic it was going to get.
The lie gets halfway around the world before the truth puts its pants on, right?
Yeah.
And that also applied to Facebook.
And the lie travels a lot faster when it's running on,
you know, a social media platform's recommendation system, right?
But what you're saying is that it was still better than nothing.
It still did...
This was the core of the defense.
Yeah.
The other issue with Meta's content moderation systems
was that the rules around what people
could and couldn't say became very detailed and nuanced, and the company struggled to
make rules that prevented hate speech but still allowed users to express themselves.
For instance, Facebook's rules didn't let users say things like, I hate women.
But such a comment might be okay if it was referring to a romantic breakup.
Adjudicating this stuff was like really difficult.
And also whatever a human might decide about things like that, trying to get an algorithm
to make a similar decision was like nearly impossible.
So it was, I think, a very frustrating process,
particularly for a bunch of very tech-minded people,
because these were not problems that lent themselves
to just building a solution, a technical solution,
and setting it loose and moving on to the next problem.
They were kind of chronic societal context-based,
all the things
that tech does not succeed at, if that makes sense.
Inevitably, the system made a lot of mistakes.
Many people had posts taken down or suppressed
that actually didn't violate the rules
and it frustrated users.
Even Zuckerberg himself got frustrated,
according to people familiar with the matter.
In late 2023, he posted about tearing his ACL and got far less engagement than he expected,
because the platform deprioritized viral posts about health.
Conservatives, especially, felt like Facebook's content moderation systems unfairly targeted
them.
There's the question of like, why are Christian posts about abortion getting taken down by fact checkers?
Why are so many conservative news outlets being penalized for false news?
Why wasn't I allowed to say this particular word about that particular group when I can
say it on the street and it's perfectly legal?
And so there was a lot of concern that Facebook was over-enforcing on the right.
And every time there was a moderation mistake, and a lot of them happen, you know, there
was the implication that this was Facebook acting to harm conservatives.
These social media outlets censor public figures who are conservatives.
Facebook had purposely and routinely suppressed conservative stories from trending news.
If you share my podcast on Facebook, I got hundreds of emails from people who said that
this post has been marked as spam.
People try to type my name into the search box on Instagram
or the Discover box.
My account does not show up.
It's relentless. I'm thinking about quitting Facebook.
Over the years, Facebook made lots of tweaks to its content moderation system.
For example, in 2019, it said it would stop fact-checking political ads.
But overall, the system stayed in place.
How did Mark Zuckerberg seem to feel about all these efforts?
Mark never loved this stuff. Mark I think was always deeply skeptical of human intervention of any variety in the system.
And then Donald Trump's victory in the 2024 election gave Zuckerberg an opportunity to
make a big change.
I think it's kind of part of an overall, I would say, shift in Silicon Valley in which
dominant companies have all been
making extremely nice with the incoming administration.
And obviously Donald Trump had made clear that he very much cared about whether Facebook
was stacked against him and that he did not trust the company and its liberal employee
base to be fair, and that he might go after Mark Zuckerberg personally
if Mark Zuckerberg didn't oblige.
Trump writes in the caption that Zuckerberg plotted against him during his reelection
campaign and warns Zuckerberg, quote, if he does anything illegal this time, he will spend
the rest of his life in prison.
Again, I don't know that anyone had to twist Mark Zuckerberg's arm about this.
He was personally aggrieved by it.
It was costly and expensive
and he never had an interest in doing it in the first place.
So, what about the content moderation system
was there to like?
What these new changes mean for Meta's platforms.
That's next.
I started building social media to give people a voice.
In his video earlier this week, Zuckerberg said that politics was a factor in the company's
decision to dial back on content moderation.
The recent elections also feel like a cultural tipping point towards once again prioritizing
speech.
So we're going to get back to our roots and focus on reducing mistakes, simplifying our
policies and restoring free expression on our platforms.
And he is explaining that the company spent a number of years trying in good faith to
placate its critics in the legacy media over things like fake news, et cetera, and that,
you know, it did its best, but that it really went too far.
We built a lot of complex systems to moderate content.
But the problem with complex systems is they make mistakes.
Even if they accidentally censor just 1% of posts, that's millions of people.
And we've reached a point where it's just too many mistakes and too much censorship.
Zuckerberg said Facebook would end its fact-checking program and replace it with Community Notes,
a user-generated system similar to what X does.
We're going to simplify our content policies and get rid of a bunch of restrictions on topics
like immigration and gender that are just out of touch with mainstream discourse.
What started as a movement to be more inclusive has increasingly been used to shut down opinions
and shut out people with different ideas, and it's gone too far.
There are some very specific changes to rules that are hard to read as anything other than
the company trying to get out of the way of future controversies.
So, for example, the question of whether you can call transgender people "it."
That is specifically a carve-out, you can now do that; it doesn't violate the
hate speech rules on Facebook anymore.
But it used to, previously it did.
Yeah, it used to.
Likewise, you can say that homosexuality is a mental illness.
That previously would have been a violation.
Likewise, you can liken women to household objects and liken them to personal property,
right?
And like you could see where some of the cultural fights are that the company is like very directly
responding to in those specific rule changes.
And the specific rule changes are like the least important part of all of this.
But I think they're a pretty good indication of the idea that the company is very happy
to sort of bend not just its moderation structure, but also the rules themselves
to this new era as Mark Zuckerberg referred to it.
Meta isn't getting rid of content moderation entirely.
There are still rules that prohibit what Zuckerberg called, quote,
legitimately bad stuff,
including terrorism and child exploitation.
Zuckerberg also said that the team that reviews U.S. content
will be relocated from California to Texas
because Zuckerberg said there was less of a concern
that the employees there would be biased.
So what do all these changes at Meta say about
just how difficult it is to moderate
content on the internet?
I mean, it's obviously difficult and these are not, you know, you don't solve misinformation,
right?
You know, trying to like somehow make the internet always factual is like clearly not
going to work out.
But you know, there's the question of like, do you keep trying at this stuff? Like, does the platform sort of, like, take some responsibility for efforts to mislead
and try to address those?
Or like, is that just kind of not in scope?
So I think this is, like, less of a question of, like, the feasibility of doing it than
it is a question of whether anyone wants to do it at all.
I see.
And so for Zuckerberg at Meta, it seems like he's decided that this is something that he
doesn't really want to do.
Exactly.
Cutting out fact checking, getting away from content moderation, prioritizing not making
false moderation calls over doing effective moderation in the first place.
These are things that, like, Mark actively wanted to do.
Honestly, from the first Trump term, not, you know, this wasn't a new sudden desire or willingness.
It was that I think the circumstances were such that it was time to make this happen.
Where do you think Facebook is going to go from here?
Do you think this is the end of this story when it comes to
Facebook's content moderation efforts,
that this will be the regime that will last long into the future?
Or is it possible that this could all change again in a few years?
Of course this will all change again at some point.
The EU is going to have strict rules,
so how much of this stuff applies to the EU is kind
of still to be determined.
Whether getting rid of fact checkers is something that the EU will tolerate.
And I think something else that is a question is what the user experience is.
I think something that a lot of people who worked at Facebook on integrity issues have
long believed is that as much
as the company resented basically having this sort of bunch of safety minded nerds telling
them you shouldn't do that, that the safety work was essential to making the platform
like a livable place.
Not necessarily always clean and well lit, perfectly civil, but livable.
And I think that it will be interesting to see to what degree they were correct and to
what degree, you know, this was all just sort of vanity and ineffectual.
I think we'll certainly see what Facebook and Instagram look like in this new era as
well and, you know, see how far it goes and what that means for users.
That's all for today, Thursday, January 9th.
The Journal is a co-production of Spotify and The Wall Street Journal.
Additional reporting in this episode by Alexa Corse and Meghan Bobrowsky.
[♪ music continues to play]
Thanks for listening. See you tomorrow.