Factually! with Adam Conover - Why Facebook Refuses to Fix the Misinformation Crisis It Created with Karen Hao
Episode Date: April 21, 2021. Facebook pushes dangerous misinformation to billions of people every day. So why can't it… stop? This week, MIT Technology Review's Senior AI Reporter, Karen Hao, joins Adam to detail her blockbuster report on how Facebook's internal AI teams were instructed to stop fighting misinformation because doing so interfered with Facebook's growth. Read her reporting at: https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/ Learn more about your ad choices. Visit megaphone.fm/adchoices See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Transcript
You know, I got to confess, I have always been a sucker for Japanese treats.
I love going down a little Tokyo, heading to a convenience store,
and grabbing all those brightly colored, fun-packaged boxes off of the shelf.
But you know what? I don't get the chance to go down there as often as I would like to.
And that is why I am so thrilled that Bokksu, a Japanese snack subscription box,
chose to sponsor this episode.
What's gotten me so excited about Bokksu is that these aren't just your run-of-the-mill grocery store finds.
Each box comes packed with 20 unique snacks that you can only find in Japan itself.
Plus, they throw in a handy guide filled with info about each snack and about Japanese culture.
And let me tell you something, you are going to need that guide because this box comes with a lot of snacks.
I just got this one today, direct from Bokksu, and look at all of these things.
We got some sort of seaweed snack here.
We've got a buttercream cookie. We've got a dolce. I don't, I'm going to have to read the
guide to figure out what this one is. It looks like some sort of sponge cake. Oh my gosh. This
one is, I think it's some kind of maybe fried banana chip. Let's try it out and see. Is that what it is? Nope, it's not banana. Maybe it's a cassava
potato chip. I should have read the guide. Ah, here they are. Iburigako smoky chips. Potato
chips made with rice flour, providing a lighter texture and satisfying crunch. Oh my gosh, this
is so much fun. You've got to get one of these for yourself. And get this: for the month of March,
Bokksu has a limited edition cherry blossom box, and 12-month subscribers get a free kimono-style
robe. And get this: while you're wearing your new duds, learning fascinating things
about your tasty snacks.
You can also rest assured that you have helped to support small family run businesses in
Japan because Bokksu works with 200 plus small makers to get their snacks delivered straight
to your door.
So if all of that sounds good, if you want a big box of delicious snacks like this for yourself,
use the code factually for $15 off your first order at Bokksu.com.
That's code factually for $15 off your first order on Bokksu.com. I don't know the way. I don't know what to think. I don't know what to say. Yeah, but that's alright. Yeah, that's okay. I don't know anything.
Hello, welcome to Factually, I'm Adam Conover, and let's talk a little bit more about misinformation today. You know that it's out there. You know that it's bad for you. You know that you're
getting it anyway. Social media spits a constant stream of vaccination misinformation, lies about
our democracy, hate speech, and garden-variety people yelling at each other into our eye holes
and ear holes every single day. Misinformation makes up the chunks in the
toxic soup we sip every time we open our apps. Now, we've talked about this on the show before.
A couple of weeks back, we had Mike Caulfield, incredible, wonderful media literacy researcher
and educator who told us how we can fight back against it in our own lives, how we can use his
SIFT method to separate good information from bad
information and help our friends and neighbors do the same. But let's talk now about who's
responsible for this misinformation. There's a little bit of a question about that, isn't there?
I mean, these social media companies that we're getting all this misinformation from, well,
they don't want us to think they're at fault, right? I mean, they'd like to avoid responsibility
for the horrible effects of the content they promote.
And, you know, why wouldn't they?
Not taking responsibility is way easier and cheaper than, you know, doing the right thing.
But I'd argue they are at fault and they are responsible because the amount of control they have over what vast numbers of Americans and people worldwide see and believe
is truly stunning. See, the internet today is way different than it was when it got started,
or at least when I got started on it a couple decades ago. When companies like Facebook or
YouTube were getting going, well, you'd just post a thing. You'd upload a video, you'd make a post.
Some people would watch it. If they liked
it, they'd email it to their friend and say, hey, check out this video of this guy singing the
Numa Numa song in his house. It's funny. You'd watch it. That'd be it. It was hard to imagine
any single post being that big of a deal, right? But today, social media is increasingly dominant.
People spend, on average, about two and a half hours on social media every day across the planet.
That's a worldwide number.
That includes about 40 minutes of Facebook per person and a whopping billion hours of YouTube every day with a B.
Okay? So social media companies aren't just neutral platforms where we share funny videos.
They are now essential players
in where people get and share important information.
About one-fifth of Americans rely on social media
to get their news.
A fifth, that's more than rely on local or network TV news.
But needless to say,
no matter what you think about the 11 p.m. if it bleeds, it leads,
man shot his wife and ate his dog, live at 11, whatever you think about that kind of news,
the news on social media is a lot worse. Pew found that those who rely on social media for news
are less likely to get the facts right about the coronavirus and politics and more likely
to hear some unproven claims. That's a quote from Pew.
So even though more people are getting their news from social media than TV, the quality of that
news is, in a word, shittier. And unlike the TV news, social media companies are constantly trying
to avoid responsibility for the garbage they promote on their platforms. See, these companies would have
us believe that they're just platforms, right? They just give us a way to upload a video and
host it for free or to start a group to chat with our friends. Oh, they don't control what we see.
They just give us a way to talk to each other. But this is disingenuous at best and a lie at worst.
It might have been true in 2005, right?
Numa Numa guy uploads the video,
people email it around, that's all that happens.
But today, these sites work very differently.
Today, all of these companies monitor
the content that's posted.
They monitor how we engage with it
and they make deliberate choices
to push forward some posts and bury others.
Whatever makes the user spend more time on the platform, that is what they push at us.
And sure, some of this is done by algorithms.
It's a computer doing it, yes.
But those computer programs are not forces of nature.
They were written by people at the companies
and prioritizing business goals that the companies have, okay?
They didn't wash up on a beach one day.
The people at these companies are responsible for the algorithms
and thus responsible for the results.
But these few massive companies, Facebook, Amazon, YouTube, Twitter,
they dominate our media ecosystem.
They are media companies just like the giants of a couple decades ago were.
These are the ABC, NBC, CBS, and Fox of our present day age.
So when they say that it's very hard or close to impossible to stop hate speech and misinformation,
that's not true.
They can control what's on their platforms. They just choose not to.
Now, that is my personal opinion based on the facts that I have seen and my own judgment of
the matter. I hope you found it convincing, but you don't have to take my word for it to quote
LeVar Burton, a man who means a great deal to me. No, instead, you can just listen to the direct
evidence we've got for you here on the show today. My guest today
is Karen Hao, a reporter at MIT Technology Review. She got tremendous access to a Facebook
AI team trying to fight misinformation on the platform, which was then directed away from that
task in order to keep Facebook big, growing, and profitable. This is about as close to a smoking
gun as we're going to get.
It is an incredible story, and she's an incredibly talented and brave reporter who wrote about this
to great acclaim just a few weeks ago. We are so excited to have her on the show. Please welcome
Karen Hao. We're here with Karen Hao. Karen, thank you so much for being here.
Thank you so much for having me.
So tell me about, let's jump right into it.
Tell me about this piece that you have in the MIT Technology Review about Facebook and AI, how you came to write it.
And what was the big surprise for you when diving into the piece?
So this piece is a nine-month investigation into the responsible AI team at Facebook.
And what is interesting is, when I spent the nine months trying to figure out what this team does, what I realized was the story is
actually about what it doesn't do. So I thought, you know, if Facebook has a responsible AI team,
it must be working on the algorithms that have sort of been criticized over the years for amplifying
misinformation, for exacerbating polarization, these kinds of things. And the team doesn't
do that. And so the crux of the piece is sort of about this team and its failures.
But it's also about this revelation that Facebook has studied and known about the fact that
its recommendation algorithms promote and amplify misinformation, hate speech, extremism, all these
things for years. But its team doesn't do anything about that. And in other parts of the company,
it's sort of halted or weakened initiatives that were actively trying to address these issues,
specifically because addressing these issues would hurt the company's growth.
So that's kind of like, it kind of encapsulates what I was surprised by is I just thought going
into this story, when I learned that Facebook had a responsible AI team, that there was a good faith effort at the company to address many of the challenges that
it's publicly been talking about as very technically challenging and things that they're
hard at work on. But there just, there is no real coordinated effort to actually do this.
Yeah, it's pretty stunning. I mean, in the piece you're speaking with, you have a lot of
access to the head of the Responsible AI program or the extremely highly placed folks in AI at
Facebook. And it's sort of, if I can get into the meta piece a little bit, it sort of sounds like
Facebook was like, oh, this is a great opportunity to show how great this program is and how seriously
we're taking it. We're going to talk to this journalist and let them know that we
really care about, you know, safe AI, responsible AI, fair AI, whatever you want to call it.
But then when you actually engaged with them, you realized, wait, but this team is not doing
the thing that we all think that they're supposed to be doing, that they were sort of saying that they were going to address at some point.
Exactly. Exactly. I think, and the challenge of writing this piece was actually coming to that
realization because it's hard to really identify when something is missing. It's much easier to
write about the things that are present. And so, but I, but I kept having,
while I was reporting the piece and talking with this team, I kept having this nagging feeling that,
you know, if I talk to the average person in the street and say, Hey, Facebook has a responsible AI team, what do you think they do? That it would be completely disconnected from the way that they
were describing their work and their responsibilities. And it was through,
it was like eight months into my nine months of reporting
that it finally clicked for me that,
wait a minute,
that there is,
it's not,
I'm not going crazy by thinking
that the average person would completely misinterpret the Responsible AI team's work.
There's actually legitimate reasons why people would think that the
responsible AI team does one thing. And there are legitimate reasons why Facebook is not actually
doing that thing, but still using the term responsible AI as a branding mechanism.
Wow. There's so many angles we could get into this from, but let's start at it from the one
you just said. I mean, if you asked me walking down the street, if you ambushed me with a
microphone and said, Facebook's got a responsible AI team, what are they working on?
I would say they're probably working on misinformation, QAnon, you know, people
undermining, you know, election results. You know, maybe I've heard about the fact that the UN has
implicated Facebook in like the, you know, a genocide in Myanmar that like misinformation being spread.
I mean, you can tell me more about that than I probably know.
But I would think, OK, you know, we know we know that there are these algorithmic problems and that that would be what Facebook is trying to address.
And those are real. So let's start there. Those are real problems, correct?
Like, I'm not making that up.
That's not.
Yes.
OK, tell me a little bit.
Yeah, those are those are real problems that Facebook itself was grappling with when they
created this team.
So this team was created in the aftermath of the Cambridge Analytica scandal.
And at that time, there were multiple angles at which Facebook was being criticized.
One for the actual scandal.
Can you remind me what that was? Yeah.
Yeah, there was this political consultancy that was using the personal data of tens of millions of Americans without their consent to influence how they voted.
And specifically, they were using the user targeting algorithms
that Facebook already had on their platform and weaponizing them to get the right content,
often misleading content, in front of very specific people so that they could sway how
they thought about different political candidates. And most infamously, they did this for Donald
Trump's campaign. But there was also this conversation around Russian interference at the time, like, Russian hackers are also weaponizing these user targeting algorithms to sway the election in Trump's favor. And, um, there were also the conversations around filter bubbles, like the fact that, um, a lot of, like, half of America was shocked that Trump was elected in the first place, and people realized that they were completely unaware of some of the conversations that were happening in the other half of America.
That was also all about these algorithms kind of tailoring the content so specifically to you and your interests that you kind of lose awareness of other people's interests and the other debates that are happening.
So that was the bigger context in which the Responsible AI team was then created.
So this was very much on Facebook's radar when they decided to put resources into a
so-called Responsible AI team.
And there's also the issue of polarization, right? That the more that, you know, we've realized, and according to reporting,
Facebook themselves know that when you are trying to maximize engagement, that their algorithms,
which are designed to maximize engagement, or time spent on the site, end up pushing people more polarized content
and actually make the people themselves more polarized at the end of the day. Is that correct?
Yeah, exactly. Like there have been efforts, not coordinated, but sort of bottom-up efforts at
Facebook where individual employees or teams will start studying what actually is the effect of Facebook's
algorithms on this question of polarization. And I spoke with an engineer who was on a team that
was studying this problem and conducted myriad studies on this thing and basically found that
because of the way that Facebook's content recommendation algorithms will tailor things to
what you like, what you want to share, what you click on and maximize that kind of engagement,
it will just keep feeding you content that gets you pigeonholed further and further into your
beliefs and like really helps you dig your heels into your beliefs on things like he was saying,
like this isn't just a presidential election,
something big like that.
This can be like a local school board election.
Yeah.
Like they could like measure
that you would get more and more polarized
on your local school board election
because the content that you kept being fed
was sending you into a rabbit hole
where you weren't actually getting
other information, other signals that might challenge those beliefs anymore.
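To make the feedback loop Karen is describing concrete, here is a minimal illustrative sketch in Python. It is not Facebook's code; the topics, click probabilities, and ranking rule are all invented. It only shows how a feed that ranks purely by observed engagement keeps narrowing what one hypothetical user sees:

```python
import random

random.seed(0)

TOPICS = ["school_board_side_a", "school_board_side_b", "local_sports", "weather"]

# Hypothetical user: slightly more likely to click one side's content to begin with.
click_prob = {"school_board_side_a": 0.6, "school_board_side_b": 0.4,
              "local_sports": 0.3, "weather": 0.2}

# The ranker's only signal: how often this user has engaged with each topic so far.
engagement_counts = {t: 1 for t in TOPICS}

def rank_feed():
    """Rank topics by observed engagement: the 'maximize engagement' objective."""
    return sorted(TOPICS, key=lambda t: engagement_counts[t], reverse=True)

for day in range(30):
    shown = rank_feed()[:2]                  # show only the two "most engaging" topics
    for topic in shown:
        if random.random() < click_prob[topic]:
            engagement_counts[topic] += 1    # clicks feed straight back into the ranking

print(engagement_counts)
# After a few weeks the feed is dominated by whatever the user already leaned toward,
# and the other perspectives effectively stop being shown at all.
```

The design point is the one Karen makes: nothing in the loop is malicious, but because the only objective is engagement, the system digs the user further into whatever they already click on.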
Yeah. I mean, it's just that basic thing of, and this has always been my intuition. And then I've,
you know, you write it really starkly in your reporting. So I feel a little gratified,
but this idea that, you know, what we engage with tends to be the things that make us angry or upset that piss us off. Like I get, you know, I'm interested in labor issues.
So I get mad every time I see an article that says Jeff Bezos, you know, the National Labor
Relations Board just said that Amazon fired a bunch of workers for organizing or something
like that. Right. Those stories always make me angry. And I always click on them. I click on them. I retweet them. I'm not on Facebook personally,
but whatever platforms I'm on, I share them, et cetera. And, you know, they get me agitated.
And so therefore I get more of those things which make me agitated because that's what the algorithm
is designed to do. It's giving me whatever I interact with. I interact with the things that
make me mad, gives me more of those things. And then since I'm always going in the angry direction, I'm like
falling sort of down a rabbit hole. And this is exactly, but this is what's actually happening.
Like Facebook themselves know about this dynamic. This is, I'm not making this up.
Yes, this is actually happening. This is like internal studies, internal research that has
been done that has repeatedly confirmed
that this is a thing that happens.
Wow.
I mean, yeah, there's this graph here in your piece
that you write that Mark Zuckerberg himself was using
that shows that like the engagement of,
as it becomes closer and closer
to what Facebook prohibits goes up.
Like it's like a line graph that's flat with the level of engagement.
And then right before it becomes something that Facebook would ban,
presumably because it's Holocaust denial or something like that.
The things that are most engaged with are the things that are almost like
Facebook illegal because they're so inflammatory.
That's wild that they know that there.
Yeah. And it's interesting because when
Mark actually published that chart, he published it in 2018 when he did a series of public Facebook
posts that were about how he's going to fix Facebook. And then this particular installment
was focused on like, how am I going to use content moderation to fix Facebook? And he published this chart and basically said, I mean, this is just human nature. People like engaging in outrageous
stuff. So regardless of where we place this policy line, regardless of where we draw the line for
what content is banned on the platform, it's always going to show this swoop upwards of engagement as
we approach that line. But what he doesn't really acknowledge and
is sort of the way that Facebook often talks about these things is there's this implicit
assumption that there's no other way to design Facebook other than to maximize engagement. So
they're like, oh, yeah, this is just a human nature problem and there's nothing we can really
do to solve it. So this is how it is. And it's like, wait a minute, you were the one that chose to maximize engagement, which is what incentivizes your algorithms to keep propagating this hateful, extremist misinfo content to more and more people, because that's the content that gets the most engagement.
So they always kind of like use, I don't know, they just talk about these things in ways that shirk their own responsibility in the matter and pretend that it's nothing that they can do.
There's nothing that they can do about it.
Right. Oh, this is just what people like to do for eight hours a day to the exclusion of
everything else because of the slot machine system that we've created specifically in order to keep
them sitting in their chairs, on their phones, in front of their computers for that period of time. They just like doing this thing that we've designed to do exactly this.
Yeah, it's a little, it's kind of a fucked up point of view.
You don't need to editorialize.
I'll editorialize.
You're the journalist.
You keep playing close to the vest. But Facebook, in their public announcements,
in this blog post that Zuckerberg made in presumably him talking to Congress
and all this sort of thing,
they talk about taking seriously this issue of misinformation, algorithmic polarization, these problems.
And by the way, I think all of us have experienced the negative effects of this in our own lives.
All of us have a relative or a neighbor who's been sort of like driven mad by the algorithm and
is like ingesting these weird ideas. And we all,
you know,
this is like a,
an issue of national concern.
So Facebook says that they want to address this.
They then bring on an AI team, or they say,
we're going to solve this with our AI team and they proceed to not solve it.
What happened instead?
Um,
it's a complicated question.
So Facebook, I, just to take a step back, like Facebook has three AI teams. Um, and I think part of, part of Facebook's, I don't know if this is intentional
or unintentional, but it seems to me that Facebook's tactics, um, around communicating
about its company involves some organizational confusion,
where it can sort of just evoke like our AI team is working on this, but they won't really specify which AI team,
what they're actually doing, how it relates to the other teams.
But they have three AI teams.
One that is a fundamental AI research lab that just does basic science and has absolutely nothing to do with the platform. It doesn't actually work on any platform issues. They also have an applied
research team that's supposed to, when the basic science research team serendipitously comes across
some kind of AI technology that might be useful for Facebook, the applied team kind of is supposed to then pluck that out of the lab and
put it into Facebook's products. So there's like the example that Facebook loves to give is they,
the fundamental lab had figured out some way to translate languages really well using AI.
And then now that is the main thing that Facebook uses to translate. Like when you, when you're like scrolling through and your friend posts something in a different language and it says like translate this text, like that is the AI that's powering that feature.
But the responsible AI team is the third team. We've already talked about everything that doesn't do, but it basically is now specifically working on fairness, transparency, and privacy.
These three things that they've deemed as responsible AI.
Those are all good nouns, but they don't seem to be the nouns we were talking about.
So what's interesting is, like, fairness and privacy are both things that, um, there is sort of impending regulation to address, and actually transparency as well, like GDPR, um, which is like the European Union's big regulation for how to think about regulating AI, how to think about regulating data systems.
They kind of evoke these three ideas, like these systems should be fair, these systems should be
transparent, these systems should be private. And so it's not actually a coincidence that Facebook
is like working on these three specific things. But the earliest thing that they started working
on that I was kind of digging into was their fairness work.
And fairness in the AI context refers to the fact that algorithms can unintentionally be
discriminatory. And Facebook has actually been sued by the US government for its ad targeting
algorithms perpetuating housing discrimination, where its ads will learn that they should only show houses for sale to white users and houses for rent to black users.
And that is illegal and very clearly a violation of equal access to housing opportunities.
These are very real problems and they're legitimate problems that Facebook has,
but it's not an either or situation where you can only work on one thing and not the other.
You can definitely work on fairness issues and you can work on misinformation. And so there's
like a very clear reason why Facebook chooses to work on one versus the other versus not the other.
And that's because they work on things that really support Facebook's growth,
but they don't work on things that undermine Facebook's growth. Right. So what I what I kind
of realized with this fairness stuff is they really started ramping up this work around the
time when a Republican led Congress was starting to escalate their rhetoric around tech giants having anti-conservative bias.
And like Trump was like tweeting hashtag stop the bias in the lead up to the 2018 midterm elections.
And these tech companies were starting to get overwhelmed by attacks from the public,
the conservative public, conservative user base saying like, you're censoring us,
your ranking algorithms aren't promoting our content.
Your content moderation algorithms are deleting our content.
And so then Mark Zuckerberg, like a week after Trump tweeted this hashtag stop the bias, called a meeting with the head of the Responsible AI team and was like,
we need to figure out this AI bias thing. We need to figure out how to get rid of any kind of bias
in our content moderation algorithms. And for me, I mean, it was, Facebook never admitted that Mark
asked anything related to anti-conservative bias in that meeting. But for me, the timing of the
meeting was just like so perfect because it's the first time that he ever met with the head of the responsible
AI team. And this was seven, six or seven months after it had been created. And after that, they
just basically started really aggressively working on this thing. So too, I imagine, to then be able to definitively say, we do not have anti-conservative bias, because our algorithms are fair. But, okay, this is a long way from the issue at hand that, you know, again,
everyone is talking about, Congress is concerned about misinformation,
polarization from Facebook's algorithm. They create an AI team that they say that is going
to work on that problem. And instead, what that team works on is making sure that their AI is
unbiased. And then specifically, it's focusing not on the issue of racial bias, gender bias or anything else, but bias against conservatives on Facebook, which is we're now very far away from the original idea.
And in fact, doesn't that goal actually conflict with the original goal of misinformation? Because I'm not going to say that every conservative who is concerned about, you know, their views being suppressed is spreading misinformation.
But I know for a fact that some people who spread misinformation on social media, when they are stopped from doing that, they say, well, there's a bias against conservative speech.
It's like, no, no, no, you were spreading misinformation about the election or about QAnon.
That's what the QAnon people say when they are kicked off a platform.
They say this is an example of anti-conservative bias.
So it seems like this is now Facebook working on the opposite of what the problem was.
Yeah.
Okay.
Pretty, pretty much.
Yeah.
I mean, so like going back to the, like so the the funny thing is facebook never has never
actually really said that the response way i came specifically is working on misinformation it has
said the ai team or we we are building ai to work on this stuff and that kind of goes back to what
i was saying of like it doesn't really specify which team is working on what um and then you
just automatically assume the response way i team is working on it because the name is responsible AI. But there is another team that it's applied
applied research team that is working on catching misinformation. And we can get into that later.
But then the responsible AI team, yeah, they are working on bias and it is from the upper levels of management is motivated based off of my reporting.
I believe it was motivated by this anti-conservative bias.
But for the people on the team, I think they kind of perhaps also saw an opportunity of, well, if we build tools to get rid of anti-conservative bias, then we might also use the same tools to try and get rid of racial bias, try and get rid of gender bias.
So they sort of had good intentions of like, well, let's just like hitch on to the ride
and try and like do something good now that we have the leadership buy-in to do this.
But then the issue is what you get at where there are legitimate ways
that this like notion of fairness or this like pursuit of fairness for growth or for
ridding anti-conservative bias will then also undermine efforts to clean up misinformation
on the platform. So there were other parts of the company outside of the Responsible AI team
that sort of around the same time
that the Responsible AI team was working on this
were already using the idea of fairness
or the idea of anti-conservative bias
to stop efforts to use the AI algorithms
to get rid of misinformation.
So there's this policy team led by Joel Kaplan.
And there was this one engineer who described to me
or one researcher who described to me,
they would work on developing these AI models,
these AI algorithms for catching misinformation,
like anti-vax misinformation.
They would test it out.
It would work really well. It measurably reduced
the amount of anti-vax misinformation that was on the platform. They would then go to deploy it.
And then the policy team would say, wait a minute, this specific algorithm is affecting
our conservative users more than liberal users. And that is anti-conservative bias. So you need to change the algorithm so that it affects both groups equally
so that it's a fair algorithm.
And then the researcher was like,
that just made the algorithm meaningless.
So we did all this work and it doesn't,
it results in nothing.
It means it does nothing if it treats every single person exactly equally on the platform. Well, the whole point of it is to suppress misinformation, and some people spread more misinformation than others. If it doesn't penalize users who spread more misinformation, because it's trying to, quote, be unbiased, it is going to literally do nothing.
It's like giving every student in the class a C rather than, like, giving the better ones an A.
I mean, it's the participation trophy of algorithms
is what it is.
How about that?
To take a popular conservative talking point.
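To put rough numbers on the point being made here, below is a toy worked example in Python. The figures are invented for illustration, not drawn from Facebook's data; it just shows how an equal-impact constraint can gut a misinformation filter when one group simply posts more of it:

```python
# Hypothetical posts flagged by a working misinformation model, per 10,000 posts per group:
flagged = {"group_a": 300, "group_b": 60}

# "Equal impact" constraint: the filter must affect both groups by the same amount.
# Without flagging clean posts, the only way to satisfy that is to cap removals
# at the smaller group's total.
equal_cap = min(flagged.values())

removed_without_constraint = sum(flagged.values())                          # 360 posts
removed_with_constraint = sum(min(n, equal_cap) for n in flagged.values())  # 120 posts

print("removed without constraint:", removed_without_constraint)
print("removed with equal-impact constraint:", removed_with_constraint)
# Two-thirds of the detected misinformation stays up, purely so the per-group
# impact numbers come out identical.
```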
It goes beyond what I was saying before.
You're saying that they literally created
a useful bit of AI that started weeding out dangerous misinformation, medical misinformation, for example, about vaccines.
And then a different unit in Facebook that was concerned about the reaction in the conservative community said, let's not use this algorithm, you know, canceled because we're worried about how conservatives will react.
Like, that's what happened at Facebook.
Yes.
And this is just one example.
There were many, many, many examples.
And this was such a huge problem that, like, the team that worked on creating these algorithms had serious retention issues, because their work was never being used. They would do all this work, put all this investment in, and then it would be scrapped because it was demonstrating, quote-unquote, anti-conservative bias. Which, by the way, there have been studies since that have looked into: does Facebook actually have anti-conservative bias?
And from the assessment of what kind of content thrives on Facebook, there's no actual evidence to suggest that there's a suppression, a systematic suppression of conservative content.
Conservative content actually thrives more on Facebook than liberal content. So the top 10 Facebook publishers are what?
It's like Ben Shapiro, Dan Bongino or whatever his name is.
Like all the, you know, those Fox News does extremely well.
Those are the most successful pieces of information.
It's just that, you know, the people who are publishing them
are also constantly claiming that they are,
help, help.
I'm being oppressed. Like, you know, and Facebook seems very reactive to that, perhaps because, again, maybe this is me editorializing, perhaps because that is where they, ah, there's very strong reason to believe that that's relevant. And also the fact that, you know, it took a lot of very wishy-washy stances on moderating away certain types of misinformation or hate speech when Trump was in office. And then they made their biggest content moderation decision when it became clear that Trump was leaving office, aka removing Trump from the platform. So there is a lot of evidence that, like, Facebook has sort of played this dance of just keeping the people in power happy so that they don't make themselves vulnerable to regulation that would hinder its growth.
Right, like, okay, we'll finally remove the president once the president's no longer in power, because now he doesn't have any power to actually penalize us, like now there's been a regime change. But hey, maybe, you know, maybe if he wins again, if he runs again, oh, back on the platform he goes, because they'll, you know, be obeying power once again. Yeah. Tell me about the piece of it, though,
where in addition to, you know, the fact that Facebook is focused so much on anti-conservative bias,
opposing anti-conservative bias, that they kneecap their own effort to, you know, make sure
their algorithms aren't
polarizing people and spreading misinformation. That's one piece. But it seems to me the even
bigger piece is Facebook's addiction to growth that you write about, that they constantly want
to grow. They constantly want more misinformation. Actually, you know what? We have to take a really
short break. So I want you to tell me about this right after we get back. We'll be right back with
more Karen Hao.
Okay, we're back with Karen Hao. So before I so elegantly went to break in a way that was completely pre-planned and not at all chaotic, I was asking you about how Facebook's addiction to growth gets in the way of them fighting algorithmic misinformation and polarization.
Can you tell me about that?
Well, going back to this chart that Mark Zuckerberg published, where he was showing that, like, engagement grows as content approaches the line, if cleaning up misinformation means giving up that growth, then maybe it should just not clean up the misinformation. And so that's,
that's sort of like, there's this like pervasive issue where a lot of employees at Facebook,
it's not like people are evil at Facebook. It's not like there are people intentionally being like, we're like destroying society. It's like Facebook is a very metrics
driven company. And there are a lot of employees that are doing their small part of the puzzle
in this like giant corporation. And the goals of like how they're rewarded,
how they're paid, how they're promoted,
all of those things are tied to engagement metrics
or business metrics that the company maintains.
And so when you have, like, each employee that's working on trying to optimize for the specific metric that they've been told will help them get promoted, it sort of creates this mass emergent effect across the company of the company just doing everything growth at all costs, pursuing growth at all costs.
And it creates incentives then for people who work on misinformation to maybe not do it sometimes, or people who want to genuinely do good on the platform and, like, fix some of these issues, when
they're told by leadership, that's not really a good project for you to pursue. It's very reasonable
that then they would be like, okay, well, I'm not going to keep bashing my head on something that
leadership has actively told me not to pursue. I'm going to like switch to working on something else so that I can achieve my quarterly goals and
get promoted. So yeah, there's this whole culture of growth. I think it causes a lot of
this, a lot of people at the company to just end up working on things that are not actually core
to the issues of the
platform, but on more like tangential things that the leadership directs them to do. Yeah. I mean,
the old adage, right, is that you get what you measure and Facebook measures growth above all
else and engagement as a way to get to that growth. And they don't really seem to measure
like algorithmic misinformation or polarization.
They're measuring those things to a certain extent.
But if their number one priority
is gonna continue to be growth
and then someone is working on,
okay, I'm working on a project
that's gonna stamp out misinformation,
but then that project is also reducing growth a little bit
or reducing engagement a little bit,
then that is not going to be prioritized.
They're going to say, you know, that's really interesting,
but maybe don't work on that. Is that sort of what you're saying?
Yeah. Yeah.
So, to be more concrete about how this happens on a day-to-day level there: engineers at Facebook have sort of the ability to create algorithms that they deploy onto the platform for various things, whether that's cleaning up misinformation or changing the way that content is ranked in your newsfeed or targeting you with ads. They all have the ability to train these algorithms, deploy them, and then kind of tweak and keep optimizing the way that the platform works.
And there's a very rigorous process for evaluating these algorithms and which algorithms actually
make it into the live production of the platform. And the primary evaluation is how does it
actually affect the company's top line engagement metrics? How does it affect likes, shares,
comments and other other things? And the way that they do that is they will create a training
algorithm. They'll then like test it on a subset of users on Facebook and then use that experiment to measure whether or not those
particular users then had reduced engagement. And if there's like, if there's reduced engagement,
then, more often than not, the algorithm is completely discarded. And sometimes there will be
discussions where, okay, it reduced engagement, but it like did really, really well on reducing misinformation.
So like that trade-off is a good trade-off
and we're going to make that trade-off.
But like when the algorithm does that,
it's no longer this automated process of like,
okay, check, we're going to deploy it.
There's actually like a conversation with like multiple stakeholders
in different parts of the organization
that then have to like hash out whether or not this is worth it.
And then different people
will have different opinions.
And most of the time,
the conclusion is it's not worth it.
And then the team has to go back
to the drawing board
and train a new algorithm
that will try to achieve
all the same things
as its first algorithm
without actually depressing
the engagement.
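As a rough sketch of the launch gate Karen describes, here is some illustrative Python. The metric names, numbers, and decision logic are assumptions for the sake of the example, not Facebook's actual system; it only captures the idea that a candidate algorithm which lowers engagement is not shipped automatically, no matter how much misinformation it removes:

```python
def evaluate_candidate(control: dict, test: dict) -> str:
    """Decide whether a candidate ranking algorithm ships, based on an A/B test."""
    engagement_delta = (test["engagement"] - control["engagement"]) / control["engagement"]
    misinfo_delta = (test["misinfo_prevalence"] - control["misinfo_prevalence"]) / control["misinfo_prevalence"]

    if engagement_delta >= 0:
        # No engagement cost: the change can go out through the normal automated path.
        return "ship"
    # Engagement dropped: no automatic launch. Per the reporting, this goes to a
    # cross-team discussion, and most of the time the trade-off is rejected.
    return (f"escalate to review (engagement {engagement_delta:+.1%}, "
            f"misinfo {misinfo_delta:+.1%})")

control_metrics = {"engagement": 100.0, "misinfo_prevalence": 1.00}
candidate_metrics = {"engagement": 98.5, "misinfo_prevalence": 0.70}

print(evaluate_candidate(control_metrics, candidate_metrics))
# -> escalate to review (engagement -1.5%, misinfo -30.0%)
```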
I mean, the picture
that you're painting is that
the algorithms can't be the
solution to this problem, because the problem at root is that the same thing that we're begging Facebook to address is the exact thing that their business model produces. I mean, as we've said, the chart that Zuckerberg showed everybody shows us that the exact shit that we want to stop is what brings them the most engagement and growth.
And so like, it seems like to an extent, it is a zero sum game that by reducing the stuff that
we don't want to have the misinformation, the polarization, we're going to be reducing their
engagement. And they have specifically constructed a business model that relies on
maximizing engagement. And so to a certain extent,
are we asking a crack dealer to stop selling crack and saying, Hey,
this crack is killing people. And the crack dealer is like, Oh, I agree.
I agree. I got to get a handle on that. And then they're like, well,
I'll put a task force together and see if I can study, you know,
but at the end of the day, it's like, no, you need them to stop selling crack, and they're not going to. I mean, I'm, sorry, I don't want to bring the language of the war on drugs into this, I now feel a little bit, you know, conflicted about that, but you see the point I'm making.
Yeah, it's a good analogy. I think what I sort of realized in the process of reporting this particular story is
like self-regulation, it just doesn't work because it's not that like, you know, I think the way that
people often cover Facebook is like, Mark just gets to make whatever the, whatever decisions he
wants. And then like the company moves the way that he moves, which is true to a certain extent, but also Facebook exists within its own system,
which is capitalism.
And the way that capitalism incentivizes companies to operate
is very much to continue growing
and to continue pursuing profit.
So if we only have certain incentives
that make Facebook do certain things
and we don't have counter incentives
from regulatory bodies to then give Facebook a different signal for what they should be doing, then it's just
going to keep chasing growth and chasing profit. That's, I mean, yeah, there's not anything. Yeah. What else would they do? But yeah, we need to do it.
They're not going to do it themselves. We need to, as a society, make some rules around, you know, what this thing is, this new pernicious thing that they've created. But is that not why Facebook is now trying to change the subject? They're saying, oh, they're seeing, all right, there's going to be regulation. We see it on the horizon. It's happened in Europe around privacy. What if it happens around misinformation too?
So let's make a big deal about how we're doing something about it, but shift the conversation. So we're not actually
talking about misinformation. We're working on AI bias, which is a comfortable topic that there's
been a lot written about that conservatives are mad about too. And maybe we can just direct
everybody, oh, look what we're doing with AI bias. They can avoid regulation on the issue that is the real issue, but if we addressed it, it would actually reduce their growth and their profits.
Exactly, yeah. And I think Facebook does this a lot. They kind of redirect the public's attention and talk about things in a way that makes very simple problems sound very complicated.
Like when I was writing this piece, my editor-in-chief said this really good point,
which is like, this piece is, I was getting, I was like, oh my God, this is so convoluted.
Like I'm trying to explain to people what AI bias is,
but then how it's like different from misinformation, blah, blah, blah.
And he was like, actually, it's quite simple.
And the only reason why it feels complicated is because Facebook is trying to overcomplicate
it.
Facebook has just had certain problems for years now that people have been criticizing
it about and it's not doing anything about it.
That's like very simple.
Yeah.
If it's not as difficult as Facebook is making it seem, do you feel that if they really wanted to, they could address misinformation on the platform?
Because there is the issue of if they're trying to do it with AI in the first place, well, can't misinformation peddlers just get around the AI, learn, oh, you know, instead of QAnon, I say Pianon, and now we'll get ahead of the algorithm for a little bit, or whatever it is.
Is there a way to moderate their way out of the problem with AI or not, or is there a more fundamental problem at play here?
I think, so to answer the first question, could Facebook actually fix this problem?
Yes, I absolutely think that they could.
Does their current approach of using AI to try and moderate away the problem actually work?
No, I don't think it ever will.
And that's just because of, like, the fundamental limitations of AI itself.
Like, you would need to have a nuanced understanding of human language in order to effectively moderate misinfo. And if you were to survey AI experts about this, the average amount of time that they believe it'll take for us to get to AI that actually has nuanced human understanding, um, it's like upwards of decades. So I don't think we have time.
Not just to understand like how human language works,
but also like, say you're trying to make an AI
that's going to stamp out vaccine misinformation.
Well, it needs to not only understand human language,
it needs to like understand how vaccines work
so that it can say,
oh, vaccines can't actually change your DNA
because it's an RNA vaccine.
And here's how RNA works. And I've read all the papers on this and I know that, you know, this is not true and
that this is the new tactic that, you know. And yeah, yeah. And it needs to understand like
culture and history because people use cultural and historical references all the time in their
language that then insinuate certain things that are not explicitly said. It needs to understand
sarcasm, which is when like, you know, like from an AI's perspective,
it's like, what do you mean that you're saying literally the opposite of what you mean?
How do you actually like it's just that's not possible.
But I think the way that Facebook would address this issue, first of all, I think I sort of
increasingly started to believe that it's just not possible for it to address it at the current scale that the company exists. But also it's,
it's the business model. It's the fundamental assumption that they need to keep maximizing
engagement. That is the root of these problems. And if it were to change that assumption and
change the way that it recommends content on
the platform, whether that's the post in your newsfeed or the ads that you click or the groups
that you're recommended to join, like all of those recommendation systems, if the fundamental
objective of those recommendation algorithms was not engagement, but something else, then
they would significantly reduce a lot of the hateful content and misinformation content
spread on the platform.
Yeah, but they're not about to do that because they're going to, I mean, that's what they're
focused on.
Is there a point at which they could ever not be focused on engagement and growth above all else? I mean, they already have like, what, a good third to half of
people in the world on Facebook. Yeah. I mean, I mean, if they didn't, if they stopped focusing
on that, I think the company would sort of cease to exist. It would just, yeah, I don't know. Or
it would be smaller. I don't know. It's like, how would Facebook actually work if it didn't focus on that? Who knows. But yeah, it'd probably be a lot healthier for everyone.
So you feel that what we need is some outside, like, rules of the road, like regulation of some kind, or that is the way to address the problem, uh, to some degree.
Yeah.
I do think that there needs to be external regulation of this issue.
Um, what that regulation might look like is definitely outside of my expertise. But I think I'm optimistic that it seems like
there's now enough political will on both sides of the aisle to actually think about how do we, whether it's antitrust law, whether it's rewriting Section 230, like how do we actually regulate Facebook in a way that will allow the company to still exist and provide us the services that we enjoy without all of the bad stuff.
Yeah.
It's endlessly fascinating to me,
like the,
cause you know,
I grew up in the,
in the early internet boom,
you know,
I was on the internet starting like 1996 and oh my God,
there's so much possibility.
Anything can happen on here.
And I came to realize,
oh,
that feeling was just because it was,
it's an entirely new area and there were no laws about anything.
And now we've been doing it for, you know, 30 years and we're starting to realize, oh, it looks like we kind of need some laws about just like you do with anything.
You know, we invented railroads and after a while we need some laws about the railroads to make sure shit doesn't go really
bad.
We're sort of in the same place again.
And to a certain extent,
it seems like Facebook and these other companies are trying to pretend that
we're not and trying to like stave off the inevitable as long as possible.
So no,
no,
no,
we'll,
we'll do it.
We'll fix it.
We'll fix it.
But unless they actually do,
which they seem incapable of,
yeah, we're going to need to, like, have a conversation about it and figure out, okay, we can't have people trying to undermine our elections. We can't have a company whose entire business model mainlines the distribution of misinformation about public health and democracy. We can't have that.
Yeah. Yeah. I think the point that you made about like when the Internet first started, people were like, this seems fun.
Like that's that's actually so true, because at the time, like the people who were founding the Internet,
their philosophy was that the virtual world existed separate from society, and therefore there didn't need to be rules of the road. You know, it's a virtual environment, it's a sandbox, whatever happens in this universe is not going to affect the physical world. And obviously that's become increasingly untrue. Like, we've realized that that's just a faulty assumption. Yeah. And that the virtual stuff that happens translates into physical world things like a genocide or like the Capitol riots.
And those are very legitimate reasons now that I think lawmakers are finally like it's finally a concrete enough thing that lawmakers are like, oh, yes, this is territory that we need to be regulating.
Yeah. And, you know, we have a culture and a constitution of free speech in America. We need
to not be interfering with that in a way. But there needs to be a balance here between, you
know, making sure that we're not programmatically causing bad things to happen while, you know, people can say their piece, but that we're not, like, pushing harmful misinformation to people. Did you get a sense in your reporting that people at Facebook actually care about this issue? Like,
do you feel Mark Zuckerberg cares about it? I think that's probably
a separate question from, you know, do you feel he cares about it? And do you feel that like,
you know, there are folks working on this problem at Facebook who are like,
God damn it, this is a real problem, but my hands are being tied here.
Yes, I think there are a lot of people that really care and whose hands are tied.
It's interesting, because I think there are sort of three profiles that I've found of the type of person that works at Facebook, which is, I think, an endlessly fascinating question: why do people work at Facebook in the first place? And one of the categories is people who genuinely believe that change can happen more effectively from the inside. And there are a lot of people at Facebook that very much believe that and, um, are working really hard to try and change things. But then many of them ultimately leave, because then they become cynical and realize that they're not actually changing things from the inside.
With the question of whether Mark cares about this, I don't think he doesn't care about this. But I think the way it's been described to me is that Mark is just, in general, very libertarian
and is much more nervous about Facebook being, quote unquote, an arbiter of truth than the fact that there's rampant misinformation.
Like, I think it's more terrifying to him to give Facebook the powers to arbitrate truth than to just leave it in a bad state.
And so it's not that I think he actively doesn't care. It's just that his value system
is sort of different from many other people in society. But in my view, that's an abdication,
right? That these companies, Facebook more than any other, but also Twitter and these other
companies, they have a belief that is
incorrect that they are not media companies. They see themselves as platforms where anybody can post
anything and like, oh no, you can say what you want to say and then people will see it and we're
just the pipes, but they're not. They exert massive influence. In fact, they are the only
ones who exert any influence on what people see. I can post whatever I want on Facebook.
The only thing that determines who sees it is Facebook's algorithm.
And that is not in substance different from NBC in 1970 deciding who sees what on television.
And the difference between NBC in 1970 and Facebook today is that NBC, the people who ran it, believed that they had influence over
what the public saw and they gave a shit about it. And part of the reason they gave a shit was
the government was like, you're going to lose your license to broadcast unless you do this in a
responsible manner. There are a lot of problems in the way they did that gatekeeping too. Back then,
there were a lot of problems with the media environment then, but that is the analogous, you know, position that
Facebook is in today, but now at, like, a 10 times bigger scale, because they're global.
People are spending a lot more time on it. Like my view of all these companies got a lot more
simple once I realized, oh, YouTube, Twitter, Facebook, these are media companies, but the
difference between them is they get all the media for free.
People just post it.
They don't have to pay anybody, right?
Yes.
They just get it all for free, but they're acting like that means they're not responsible for it, even though they distribute it to the public and they are therefore responsible for it.
They're like, oh, no, the person who posted it did.
But, yeah, it's like a fundamental misunderstanding of what the fuck it is they're doing.
So I'm on a rant here.
What sort of reaction did you get to this piece?
I mean, this was a fair bit of a blockbuster, I feel like, when it came out.
Did you get a reaction from Facebook to the piece?
I'm curious.
I did.
So the CTO of Facebook started responding to me on Twitter.
Really?
And yeah.
And his first response, which I thought was really funny, was, I'm afraid that this piece will convince people that AI bias is not an issue and deter them from working on it.
And there was this other Twitter user that then like later commented, it's really weird that your
piece calls out the fact that Facebook is using AI bias as a fig leaf to cover up the fact that
they're not doing anything else. And then in response to that, the CTO was like, but we're
doing AI bias work. And I was like, yes, correct. Like that is very weird.
But it's sort of, I mean,
speaking with some former employees at Facebook,
executives only engage on things
when they feel genuinely threatened.
So it was basically a confirmation to me
that A, I'm on to something.
Like the CTO actually felt the need to respond.
And B, he wasn't able to say anything that undermined my reporting.
Yeah.
And so it kind of just reinforced the fact that, like, it is true.
Yeah.
That's a weird trend right now in, you know, the covering of these companies.
Same thing happened to Amazon where executives start replying to people on Twitter and saying, well, that's not true.
And then it's quickly shown to be true.
The peeing in bottles thing on Amazon.
Yeah.
Yeah.
Like someone needs to tell these executives, stay out of your mentions. Like you don't need to, you don't need to get into it on Twitter of all places.
You guys, I thought they were, why didn't they post, why didn't they Facebook you about it?
Why didn't they tweet at you about it?
Yeah, it's also interesting.
I think they did.
So the CTO also did an interview with Casey Newton afterwards to try and present their narrative in a more formalized, respected, journalistic way.
And the narrative that the CTO painted there was, oh, I was so upset at this piece, because if you're going to attack any team at Facebook, please don't make it the Responsible AI team. And it was a complete mischaracterization of my piece as well, where I was like, I actually did not attack this team at all. I talked about how it was composed of people who are genuinely trying to do the right thing but whose hands are tied. So, yeah, it's been interesting to see, in the aftermath, the way that Facebook's PR machine works. Part of my story is that they have this very carefully crafted PR machine that tries to mislead the public.
And it was just another demonstration of that.
Yeah, they were trying to sell you a specific story of what it is that they were doing: we are taking this problem seriously, and the problem is AI bias, and look at what a great job we're doing. And you saw through that, did your job as a journalist, and told an actual story about what's going on there. And they weren't happy about that, is what it sounds like.
They were very unhappy. They were very unhappy. And yeah, it's interesting. I had a lot of other journalists reach out to me afterwards who had also covered Facebook and faced these things. And they were like, yeah, this is just a pattern: Facebook will give you lots
of access and then be extremely displeased with you
when you don't actually write their exact narrative down on paper.
And I don't know if that's because Facebook is aware that it's doing that
and just that's part of their PR tactic
or if they fundamentally misunderstand what independent journalism means.
But yeah, it's just the nature of covering that company. It reminds me, and this is completely random, this memory just flashed back to me, of a scene from a Saved by the Bell episode that always stuck with me. There's this scene where Jessie Spano is interviewing Principal Belding for the newspaper, and he thinks it's going to be a really nice interview, and she goes, what happened to the missing petty cash that was siphoned from the school budget? And his face gets really sad and he says, I thought this interview was going to be about my pet turtle, Pokey. And for some reason that stuck with me. That's what happened. They were like, we thought it was going to be about AI bias. We didn't know you were gonna talk about the real problem at Facebook. We thought it was
going to be a nice interview. In terms of how this issue and this specific story that you wrote about Facebook plays into the larger questions, you know, among other internet companies, among AI in general, how do you feel about that? Are there larger issues that this points to?
Yeah, there's been this ongoing conversation
within the AI community,
which is the community that I cover
and sort of live and breathe,
about, you know,
we're building this very powerful technology
where we're just beginning to see
some really dire unintended consequences of it.
And yet, this space and our understanding of this technology is very dominated by the tech giants.
Because in order to even build this technology, you need a lot of resources, both a ton of cash to actually hire the people who have the expertise to build this technology, as well as a ton of computational power, like massive computers,
massive servers that can actually crunch the data to then train these algorithms. And
right before my piece published, in December of last year, there was this whole fallout around Google and their AI efforts and their equivalent of a responsible AI team, which is called the Ethical AI team, with revelations that Google actively censors its Ethical AI team's work, and other researchers' work at the company, when it criticizes the technology that Google is building. And so when my piece came out, there was sort of this additional evidence that yet another tech giant is actively trying to distort our understanding of this technology
and what it means to build it ethically, what it means to build it responsibly.
And even when there are good, well-intentioned people at these organizations that are leading
these efforts, they either get fired or they're completely hamstrung and can't make the progress
that they need to make.
So I think, to me, it sort of demonstrates, for the scientific community and for regular people, whose lives are affected by algorithms in so many ways now, that there's a little bit of this scary
algorithms are affecting a lot of things in our lives now um there's a little bit of this scary
thing that's happening behind the scenes that we don't
actually have full transparency into the way that this technology is going to shape us and the way
that it could harm us because of the very carefully, closely kept research and communication
about this research at these companies. I mean, AI, the nature of the research, the nature of what it produces is often AI algorithms that
produce results that are surprising to the people who made them because of how opaque AI can be.
You train an algorithm and you find out what it does. And so there's that level of opacity. But then there's
the fact that all the places that are working on AI are places like Google, Facebook, presumably
Apple, Microsoft, the Department of Defense. These massive organizations that are working on AI for a very specific purpose: to maximize ad revenue, to kill people better.
You know,
I'm sure there's work being done at universities,
but you know,
the fact is that like,
Oh,
Tesla is another example,
right?
Where they talk a lot about, here's what the AI does,
but the way that they present what the AI does is very at odds with its actual purpose
and its actual capabilities.
You know, Tesla's an example where they've promoted this idea
that, you know, fully self-driving cars
are right around the corner.
And then as soon as you look at what the cars actually do
and what their technology that they're developing
actually does, there's a huge gap there.
Yeah.
You know, where they're promoting a certain idea to the public of, here's what you should think of when you're thinking of AI. Elon Musk saying, oh, we should be worried about killer robots and I'll make sure we don't have them. But the actual development that is being done on these things is behind the most closed of all closed doors. It's being done by a couple of massive companies and organizations that, you know, have a very specific interest at heart, and it's not necessarily society's.
And it's not just misinforming the public. It also misleads policymakers who are actually trying to figure out how to regulate this technology, because there are very few people that they can go to
that are actually independent researchers not being paid by tech companies or employed by
tech companies. Even in academia, there's like so much influence from these tech giants, Google, Facebook, Apple, Microsoft, IBM. But because this technology requires so much
money and so many resources to develop, universities cannot actually fund it themselves.
So they have to seek funding from other places, aka the tech giants. And so for policymakers to
actually get a good understanding of what this technology actually is, and what we should be concerned about so that we can literally codify guardrails to prevent that, who are they talking to? It's really hard for them to actually talk to someone who doesn't have that conflict of interest.
Yeah. Well, what would you like to see happen around these issues vis-a-vis Facebook
or the broader AI community in general? I know you said it's above your pay grade to come
up with what the actual policy would be, the federal policy that we would hope Congress would
make. It's above my pay grade too. But what would you like to see happen in the next year or two, you know, on a lower level, that would just improve a couple of these problems?
Do you have any wishes or hopes for this?
So this is how I like to try to end the interview, by coming up with something: what can be done?
I think, okay, so this is a little bit far-flung from our conversation, but the thing that I would love to happen in the next year is if the Biden administration put up funding for AI research through the National Science Foundation, through the arm of the government that is focused on basic science research and not defense and not other things, just put up money that doesn't have strings attached
that's really focused on actually understanding this technology and the effects of it so that
researchers can be independent and independently scrutinize this stuff without working for tech
companies. And then what I kind of assume will happen, based off of my general reporting, is that our understanding of AI will start to shift pretty dramatically, because we will start to have more people, more papers being produced, more research being done that will actually show what this technology is and what we need to be concerned about.
And that then provides the scientific foundation for addressing all these problems that we're talking about, regardless of if they are or aren't at tech companies.
Yeah, that would be the government taking the role in scientific progress that it traditionally has taken, of really studying the issue. With the NSF, it's not politicians making the decisions; it has scientific leadership who could be setting priorities. That would be a huge improvement.
Absolutely. Well, my God, thank you so much for coming on the show to talk to us about this, and for doing the independent reporting that pissed Facebook off.
If you made the CTO of Facebook a little uncomfortable, I think that's probably a good day.
And we can thank you for doing a service.
I think at the very least, make them sweat.
You want to make them sweat a little bit.
And I'm so thankful to you for doing that and for coming on the show to talk to us about this, and would love to have you back next time you, you know, blow the lid off something. Thank you so much, Adam. It's been great talking to you. Well, thank you once again to Karen Hao for coming on the show.
If you enjoyed that interview as much as I did, hey, please leave us a rating or review wherever you
subscribe or go to factuallypod.com slash books to check out the books written by our past guests.
Purchase one or two. If you do, you'll be supporting the show and you'll be supporting
your local bookstore. I want to thank our producers, Chelsea Jacobson and Sam Rodman,
Andrew Carson, our engineer, Andrew WK for our theme song, the fine folks at Falcon Northwest
for building me the incredible custom
gaming PC that I'm recording this very
episode for you on.
You can find me at Adam Conover wherever you get
your social media. If you have a suggestion
of a topic you'd like to hear on the show,
shoot me an email at factually at adamconover.net.
I do read your
emails and it is one of the joys of my
day. Until next week, we'll see you on Factually.
Thank you so much for listening.