Who Trolled Amber: Episode 3 - Into the Dark
Episode Date: March 5, 2024

Disinformation experts come back with some startling results. The story begins to cross continents as the team find evidence of a global campaign. The first four episodes of Who Trolled Amber are now available and further episodes will be released weekly. To binge listen to the entire series, become a Tortoise member or subscribe to Tortoise+ on Apple Podcasts.

To find out more about Tortoise:
Download the Tortoise app for a listening experience curated by our journalists
Subscribe to Tortoise+ on Apple Podcasts for early access and ad-free content
Become a member and get access to all of Tortoise's premium audio offerings and more

If you want to get in touch with us directly about a story, or tell us more about the stories you want to hear about, contact hello@tortoisemedia.com

Reporter and host: Alexi Mostrous
Producer and reporter: Xavier Greenwood
Editor: David Taylor
Narrative editor: Gary Marshall
Additional reporting: Katie Riley
Sound design: Karla Patella
Artwork: Jon Hill & Oscar Ingham

Hosted on Acast. See acast.com/privacy for more information.
Transcript
ACAST powers the world's best podcasts.
Here's a show of glamour and scandal and political intrigue
and a battle for the soul of a nation.
Hollywood Exiles, from CBC Podcasts and the BBC World Service.
Find it wherever you get your podcasts.
ACAST helps creators launch, grow, and monetize their podcasts everywhere.
Acast.com
Tortoise
Just a warning before we start.
This series contains strong language and descriptions of violence.
Oh yeah, I think it's 100% inauthentic.
I have no doubt about that.
We're back in London and we've got Ron Schnell's dataset.
Ron collected almost a million tweets in the run-up to the Depp-Heard trial.
I hope this trove of information will reveal clues about who trolled Amber.
And my producer Xavier and I have finally found an expert who is willing to look into it.
My name is Zhouhan.
Zhouhan Chen. I'm the founder of the SafeLink Network, where we're building
technologies to detect anomalies on the internet. During my PhD I got to work with the security teams
at Twitter, Amazon and Google, detecting different bad actors. Johan is the man behind something called Information Tracer,
this pretty cool website I've been using a lot on this investigation.
Information Tracer is an AI-powered tool
to help people search and investigate different social media campaigns.
Johan has the credentials we need, but I also want a second opinion.
Hey, Kai Cheng, how are you?
Great, how are you?
I'm okay, happy new year.
I have a PhD in informatics.
Right now, I'm a postdoctoral researcher.
Kai Cheng Yang is a big name
in the world of misinformation.
He created a program called Botometer.
A machine learning algorithm that can find social bots on Twitter.
Do you remember that's what Ron Schnell used when he was working on the case?
I fed in hundreds of thousands of anti-Amber Heard tweets and got a lot of data.
Kai Cheng's program is pretty well known. Elon Musk used Botometer in a lawsuit
against Twitter's former owners
when he was trying to back out of the deal to buy it.
It's in their court document
because they were really ready to go to trial.
But I have to say,
the way they use the tool is not,
how to say, correct.
Both Johan and Kai Cheng
are experts at detecting misinformation.
If anyone's going to find bots, if anyone can tell us whether they were working at scale
or even acting in coordination, it's these guys.
But it's going to take them a while to properly analyze so many tweets.
A couple of weeks, at least.
So for now, you've got Alexi and Xavier instead.
We don't know how to code or run algorithms,
but that's okay,
because this story has never just been about data.
Johan and Kai Cheng are looking for patterns across the dataset.
Their analysis will show us the overall role
that bots played in the Amber case.
But I'm just as interested in the specifics.
I want to find individual users who are not who they say they are.
So Zav and I lock ourselves in a dark room,
we switch on our monitors and crank open Ron's database.
And soon, one particular account stands out. An account which doesn't just tweet
about Depp, but about politics. It's not a real user, it's not a real person, but I know for sure,
yes. I'm Alexi Mostrous, and this is Who Trolled Amber? Episode 3, Into the Dark.
This is what happens.
I'm looking through the Ron Schnell data to find accounts which have tweeted a lot about Johnny Depp.
And one of these stands out as particularly interesting.
This account isn't behaving like a typical bot account.
It's not posting ads for cryptocurrencies
or retweeting other people's hashtags.
Instead, it posts hundreds of tweets in Spanish about Chilean politics.
Until November 2020, when it suddenly switches topics
and starts tweeting about Johnny Depp.
The account's name is Felipe Irarrázaval.
He says he's a graduate of the University of Chile and he has
a master's degree in law. Except a couple of years after the account is created, Felipe changes his
profile to say he studied commercial engineering at a completely different college. That's red flag
number one. Felipe is pretty right-wing. You can see from his tweets that he's
obsessed with law and order in Chile. And of course, that's not unusual in itself,
but I can't find any trace of the real Felipe outside of Twitter.
There are a couple of people with the same name living in Santiago who have very similar profiles,
but when I contact them, they say they have nothing to do with the account. If the Twitter Felipe is a real person,
then off the platform, he's a ghost. That's red flag number two.
I've been studying for a while now all the dark side of digital technologies regarding politics.
So that's misinformation, coordinated inauthentic activities such as bot networks and stuff like that.
I call Professor Marcelo Santos.
He's an expert in both misinformation and in Chilean politics.
I want to see what he thinks of Felipe.
Can you just give me your impressions of this account?
Okay, first of all, a couple of things.
We were talking about when... A few years back, Marcelo was the head of this big study
into political disinformation in Chile.
He uncovered thousands of bots and trolls
trying to manipulate public opinion before big votes.
There were sets of bots or bot-like users that I called the amplifiers.
They were just, you know, rooting for the conservative users.
And some other sets of bots that were the polarizing bots
that were rooting here and, you know, talking shit.
Most of these suspect accounts were created
just after a series of protests erupted in the country.
In October 2019, we had this, what we call,
the social uprising or social outburst,
where people went to the streets,
first triggered by a rise in the subway fare.
The protests start with students, but end up encompassing every part of society.
For weeks, the streets are alive with rioting and tension.
I believe that five subway stations were destroyed. I mean, they were burned.
So we had curfew, we had military on the streets, there were clashes with the police every
day. Every Friday, some of us went to this square, and I'm talking that ranged from 200,000 to 1
million people, with very violent repression by the police. We didn't go to the protests
because it was really violent.
Misinformation ramps up when Chile stages a referendum and then a general election.
Most of the bot accounts that Marcelo identifies support right-wing candidates.
And specifically, one man called José Antonio Kast.
José Antonio Kast is the representative of the far right. Kast had this big network of bot-like activity, promoting him and demoting the others. Kast is particularly opposed to a new constitution which would make Chile a much more liberal society.
His posts on this topic are retweeted by a network of suspected troll and bot accounts,
along with a particular hashtag marking opposition to the constitution.
Rechazo.
Rechazo means rejection in Spanish.
And approve is apruebo.
So that was a campaign of apruebo against rechazo.
So that hashtag rechazo, which appears in the user you were monitoring,
is related to the right-wing sympathizers.
When I look at Felipe's account,
he fits this pattern exactly.
He posts dozens of pro-Kast tweets,
and he liberally uses the hashtag rechazo. Rejection.
To Marcelo, all of this is very suspicious.
And he notices other clues about Felipe which I'd missed,
like the date that the account was created.
The D-Day of the social uprising was the 18th of October. Two days later,
this account is created. So that is quite suspect. And the language
used in the tweets themselves. It doesn't add up from
whatever perspective you see.
In Marcelo's opinion, Felipe is not
who he says he is.
Not a lawyer or an engineer from Santiago, but someone else entirely.
Possibly someone sitting in a troll farm in Mexico or in the Philippines,
being paid to secretly sway opinions online.
How likely, roughly, do you think it is,
from everything you know about this account, that it is some sort of inauthentic account?
Oh yeah, I think it's 100% inauthentic. I have no doubt about that.
Up until this point, Felipe has never tweeted about Johnny Depp or shown any interest in celebrity stories.
But on November 6, 2020, he stops posting about the politics of Chile and out of nowhere joins the Amber Heard hate party.
He starts tweeting the same hashtags in English again and again and again. Justice for Johnny Depp. Justice for Johnny Depp.
What a great indignation.
Johnny Depp is in the position of his wife.
Amber Heard is an abuser.
It has very low activity for a long time,
and then all of a sudden it has 55 tweets about, you know,
how Amber is super guilty.
Almost all the tweets have the same structure.
A couple of hashtags, basically hashtags like
Johnny Depp is innocent, Amber Heard is an abuser,
and then a link for a tweet.
My question is, if Felipe was sent to stoke culture wars in Chile,
why would he suddenly start tweeting about a celebrity trial in the United States?
Marcelo can only speculate.
I have no evidence. I have just inferences.
So it could be perfectly an agency in Mexico that has a lot of accounts
that was hired by the right wing to do that specific work,
and then was hired by another agency to do that other specific work. That's not unusual.
Not only is Marcelo saying that Felipe is a fake account, one of hundreds created to stoke up
tensions in Chile's politics, he's suggesting that at some point at the end of 2020,
the fake Felipe was repurposed,
away from right-wing rabble-rousing and towards Johnny Depp.
This story's horizons are broadening.
What started as a celebrity story
now seems to be crossing over with international politics.
I suppose when it comes to online manipulation, the target doesn't really matter.
Whether you want to promote a celebrity or a politician, the tools are the same.
And I had one more slightly unsettling thought.
If it hadn't been for Ron's data, Felipe's activities would still be hidden.
He deleted his account in 2021, wiping away all his tweets.
Which raises the question: how many more Felipes are out there, shaping events and then covering up their tracks?
I get off the call and I go and find Xavier to tell him about Marcelo.
As it turns out, he has found out a lot more.
The first thing that I found were all these accounts posting in Thai.
So normally when a tweet goes viral, you're expected to have lots of retweets,
lots of likes, lots of replies.
What's happening with these accounts is that they're only ever posting once about Depp
and they're getting a huge number
of retweets, thousands and thousands. And the crucial bit is that the accounts have basically
no followers, and barely anyone is actually engaging with the tweets. No one is replying
to the tweets. So if you take a look at this one, it's congratulating Johnny Depp after the US trial.
The tweet has 22,000 retweets, only 11 replies, and the account has only a couple dozen
followers. Here's another one, an account which has two followers, tweets once about Depp, gets 9,000
retweets. And then this one, 6,500 retweets, no replies whatsoever. My best guess is that these
accounts are being retweeted in an automated way, but by bots that are possibly not sophisticated enough
to actually engage with the tweets or reply to the tweets.
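The pattern Xavier describes, viral retweet counts on accounts with almost no followers and almost no replies, can be sketched as a simple check. This is an illustrative sketch only; the thresholds and function are my assumptions, not the team's actual method.

```python
# Illustrative sketch: flag "suspicious amplification", i.e. tweets that go
# viral despite the account having almost no audience and no conversation.
# All thresholds below are assumptions chosen for illustration.

def is_suspiciously_amplified(retweets, replies, followers,
                              min_retweets=1000, max_followers=100,
                              max_reply_ratio=0.01):
    """True if a tweet went viral despite a tiny audience and almost no replies."""
    if retweets < min_retweets:
        return False  # not viral at all
    if followers > max_followers:
        return False  # the account has a real audience
    # Organic viral tweets attract replies; pure retweet floods tend not to.
    return (replies / retweets) <= max_reply_ratio

# The three examples from the episode:
print(is_suspiciously_amplified(retweets=22000, replies=11, followers=24))  # True
print(is_suspiciously_amplified(retweets=9000, replies=0, followers=2))     # True
print(is_suspiciously_amplified(retweets=6500, replies=0, followers=50))    # True
```

The reply-to-retweet ratio is the key signal: real viral tweets spark conversation, whereas automated retweet floods leave the reply count near zero.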
Another thing I found was a network of accounts tweeting about Depp in Spanish.
They're all liking and retweeting the exact same anti-Amber Heard posts.
And the most interesting thing I spotted in those accounts
was that they would post the same tweet again and
again. And the only thing they would change is the string of numbers and letters that they'd added to
the end of the tweets. So to show you an example here, 8th of November 2020, this account posts
hundreds of identical tweets on exactly the same day, all pro Johnny Depp, anti Amber Heard. And
the only thing that is changing in those
tweets is the string of numbers and letters at the end. I talked to a disinformation expert,
and he said that he sees this kind of thing a lot. And what accounts are normally doing,
he thinks, is trying to avoid spam filters. So they will slightly alter the tweets to get around
the filters. And he actually sent me a link to a YouTube video in Arabic, which shows how you can automate this process. So I think actually with both the
Thai accounts and with the Spanish accounts, what we're seeing is automation.
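The trick the expert describes, reposting the same text with only a trailing string of letters and numbers changed, leaves a detectable fingerprint: strip the suffix and the tweets collapse into one group. A minimal sketch, assuming the junk token is a trailing alphanumeric run (the regex and the sample tweets are invented for illustration):

```python
import re
from collections import Counter

# Illustrative sketch: group tweets that are identical except for a trailing
# junk token (random letters/numbers appended to dodge duplicate-content
# filters). The suffix regex and sample tweets are assumptions.

def strip_suffix(text):
    """Remove a trailing run of 5+ letters/digits, the assumed junk token."""
    return re.sub(r"\s+[A-Za-z0-9]{5,}$", "", text.strip())

tweets = [
    "Justice for Johnny Depp! qX7f29a",
    "Justice for Johnny Depp! mT40zzB",
    "Justice for Johnny Depp! p99RkLw",
    "A completely different tweet.",
]

groups = Counter(strip_suffix(t) for t in tweets)
for text, count in groups.items():
    if count >= 3:
        print(f"{count}x near-identical: {text!r}")
```

This is a crude heuristic: a tweet that legitimately ends in a long word would also be trimmed, so a real pipeline would compare full-text similarity rather than a single regex.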
That's amazing. I mean, I also found some really weird stuff. There's this one account
which has tweeted 370,000 times since 2021 and has liked more than 400,000 posts. I've worked out that's
a post every two minutes for 24 hours a day for three years. There's this other account that's
tweeted 7,000 times in a short period, wasn't following anyone at all, but it was part of a
network that I found that seems to be using
fake pictures. All of these pictures could be traced back to Instagram models. This network
of pro-Johnny Depp accounts had taken those pictures. There was so much weird stuff. Now, I
also wanted to look outside of Twitter, because there's a lot of anti-Amber hate on different
platforms. I found this video on YouTube of Amber giving an interview
to Access Hollywood. And the video had 120,000 comments. And I asked a data journalist that
works at Tortoise called Katie Riley to download all of those comments so we could see what was
being said. She found that 60,000 of these 120,000 comments had exactly the same phrase in them. So
the phrase was, she is not a victim. And that exact phrase appeared in more than half of the
120,000 comments posted under this video. In my mind, there's no way that that could have happened
organically. Okay, here's some of what we found.
A political troll who suddenly switches allegiances to attack Amber Heard.
Spanish-speaking bot networks posting hundreds of pro-Depp tweets.
Thai accounts which tweet once and go viral.
And tens of thousands of identical messages left under Amber Heard videos on YouTube.
To me, this all points towards a global campaign.
If that's right, it either means it was coordinated by someone pretty powerful,
someone with access to disinformation resources in different countries,
or it means that multiple people were responsible, and that these guys somehow
melded together and merged with genuine Depp fans to create a hybrid campaign.
But our examples are still a bit piecemeal. I feel like we're standing in front of a
giant screen that's shielding us from the truth, and we're poking little holes in
it, trying to see through each of them one by one. When what we need is someone to pull back the whole sheet.
Hello, I'm Giles Whittell, Tortoise's Deputy Editor. On the News Meeting podcast, we try to
make sense of what should be leading the news with three guests who each pitched the story
they think matters most. And once a month, we record a live episode in our newsroom. The next
one is on the 27th of March, and I'm going to be joined by the brilliant author and podcaster
Elizabeth Day. To come to the event and tell us what you think should lead the news, go to
tortoisemedia.com forward slash book. That is tortoisemedia.com forward slash book.
ACAST powers the world's best podcasts. Here's a show that we recommend.
Hi, I'm Una Chaplin, and I'm the host of a new podcast called Hollywood Exiles. It tells the
story of how my grandfather, Charlie Chaplin,
and many others were caught up in a campaign
to root out communism in Hollywood.
It's a story of glamour and scandal and political intrigue
and a battle for the soul of a nation.
Hollywood Exiles, from CBC Podcasts and the BBC World Service.
Find it wherever you get your podcasts.
ACAST helps creators launch, grow, and monetize their podcasts everywhere.
ACAST.com
Tortoise Investigates is sponsored by Wondery.
If you're enjoying this show, we think you'll love The Spy Who,
a new podcast from Wondery.
The life of a spy is anything but glamorous.
It's a place of paranoia and infiltration, sabotage and manipulation.
From Wondery, the network behind British Scandal,
Doctor Death and Ghost Story, comes The Spy Who.
A new narrative podcast series exploring covert spy stories that you were
never meant to hear and revealing the incredible true tales of operatives playing to very different
rules. Narrated by Game of Thrones actor Indira Varma and Homeland's Raza Jafri, each season goes
deep into the real-life story of a special agent,
unearthing daring missions, double-crosses and dangerous liaisons.
Like Dushko Popov, the spy who inspired 007,
or Amen Dean, the spy who betrayed Bin Laden.
To learn more, search and follow The Spy Who on the Wondery app or wherever you listen to podcasts.
Or you can binge full seasons of The Spy Who ad-free with Wondery Plus. My wife, she's from Myanmar. We met in the United
States. A year into COVID, there was a military coup in Myanmar and the country went into civil unrest.
Johan has seen firsthand the impact that misinformation can have in the real world.
And what happened is before the coup, the military in Myanmar was spreading misinformation
on Facebook, but no one really called them out, because they were tweeting in the Burmese language, which is not analyzed by the system that Facebook built at the time.
In 2018, military personnel in Myanmar created hundreds of fake accounts on Facebook.
They posed as pop stars, models, or other celebrities.
Their aim was to secretly spread hatred
against the country's Rohingya minority.
Since that point,
Johan has dedicated his working life
to fighting misinformation and manipulation online.
And now he's got some answers for us.
There are some common indicators of suspicious behaviors in
the entire data set. When you get a data set like this, as a misinformation researcher,
what are your first steps in terms of analyzing it? Just talk us through that process. I want to look globally who is the top
spreader and what tweet is the most retweeted or quoted or mentioned. From those top tweets,
I start to investigate who are interacting with those tweets. As I drill down, I start to investigate different campaigns.
Johan found that the dataset contains a lot of anonymous accounts.
Which means each account, they never use real names or real profiles.
He also found a lot of suspicious amplification. Which is when a tweet receives far more engagement than
it normally would have received. Johan is talking about accounts which have almost no followers,
but when they start tweeting about Johnny Depp, they suddenly get thousands of retweets.
Like the Thai account Xavier found. Johan identified other indicators too, like common content.
When different accounts use identical tags,
such as hashtags or URLs to promote a certain narrative.
A change in language.
Where those accounts were tweeting on one topic
and suddenly they're talking about Johnny Depp.
And finally, spamminess.
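Johan's five indicators (anonymous accounts, suspicious amplification, common content, a change in language or topic, and spamminess) could be combined into a crude per-account score. This is a sketch under assumed field names and thresholds; none of these numbers come from Johan's actual analysis.

```python
# Illustrative sketch of the indicator checklist as a per-account score.
# Every field name and threshold below is an assumption for illustration,
# not the real methodology.

def inauthenticity_score(account):
    score = 0
    if account.get("anonymous"):
        score += 1  # no real name or real profile
    if account.get("max_retweets", 0) > 100 * max(account.get("followers", 0), 1):
        score += 1  # suspicious amplification: reach far beyond the follower base
    if account.get("shared_hashtag_ratio", 0) > 0.8:
        score += 1  # common content: same hashtags/URLs as the rest of the network
    if account.get("sudden_topic_switch"):
        score += 1  # change in language or topic
    if account.get("tweets_per_day", 0) > 100:
        score += 1  # spamminess
    return score

# A hypothetical account resembling the ones described: every indicator fires.
suspect = {"anonymous": True, "followers": 40, "max_retweets": 9000,
           "shared_hashtag_ratio": 0.95, "sudden_topic_switch": True,
           "tweets_per_day": 120}
print(inauthenticity_score(suspect))  # 5
```

No single indicator proves anything on its own; it is the combination that makes an account suspect, which is why a score rather than a single boolean is the natural shape.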
In Johan's opinion, the Ron Schnell data is full of inauthentic activity.
And here's the big takeaway.
Across the whole data set,
I think at least 50% of tweets were generated by inauthentic accounts.
This is pretty astonishing.
According to Johan, 50% or more of anti-Amber Heard tweets came from bots or trolls,
either from automated accounts or from accounts pretending to be someone they're not.
If that's right, it means that bots and trolls played more than a minor role in Depp v Heard.
Johan's analysis suggests they were driving the debate.
I should pause here to say something about dates, because I think it's significant.
The tweets in Ron's database date from April 2020 to January 2021.
That's about 15 months before the US trial begins in Virginia.
And I'm thinking, if half of all these tweets are inauthentic,
that means that bots got involved well before the jury took their seats in that Virginia courtroom.
It seems like someone laid the groundwork in advance of the US trial,
so that by the time it began and people really started paying attention,
Amber had already lost the case in the court of public opinion.
Johan found something else too. The inauthentic accounts he identified,
they don't just sit in their silos and encourage each other. They get real fans involved too.
I think the inauthentic accounts will start the campaign first.
The bots will tweet a hashtag so much that it's picked up by Twitter's algorithms.
And then it will show on the what's trending panel.
Then genuine accounts will see the hashtag trending.
And then they will start to engage with those tweets.
But of course, Johan isn't the only person who's been looking at the dataset.
Kai Cheng has too, the founder of Botometer, and his conclusions appear on the surface to be a little different. I found a lot
of bots, right? By a lot, I just mean the numbers, right? But compared to the total number of accounts
involved in this discussion, maybe it's a small portion, but there are still a lot. To remind you, this is Johan's take. At least 50% of tweets were
generated by inauthentic
accounts.
Contradictory, right? But actually,
I think it makes sense.
What seems to have happened is that
a lot of real accounts, run by
real people, tweeted about the
Depp case, which is exactly what
you'd expect given the insane amount
of media coverage it got.
But most of these genuine accounts only posted once or a couple of times,
whereas the bots and the trolls were much more prolific.
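The arithmetic behind this is simple: a small minority of hyperactive accounts can dominate tweet volume even when most accounts are genuine. A toy calculation with invented numbers, not the study's figures:

```python
# Toy numbers (invented, not the study's figures): a small minority of
# prolific bot accounts can still produce the majority of tweets.

genuine_accounts = 90_000   # each posts about the case once or twice
bot_accounts = 10_000       # each posts dozens of times

genuine_tweets = genuine_accounts * 1.5
bot_tweets = bot_accounts * 20.0

bot_account_share = bot_accounts / (genuine_accounts + bot_accounts)
bot_tweet_share = bot_tweets / (genuine_tweets + bot_tweets)

print(f"bots are {bot_account_share:.0%} of accounts")  # bots are 10% of accounts
print(f"but produce {bot_tweet_share:.0%} of tweets")   # but produce 60% of tweets
```

So the two findings are not contradictory: counting accounts gives one answer, counting tweets gives another.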
I see a lot of the bots that's like super active. The contents of a message are the same,
right? They're just replying the same message to different accounts.
That's how you can get to a situation where the majority of accounts in Ron's database were likely authentic,
while the majority of tweets were not. Kai Cheng also spotted particular examples of inauthentic
activity, accounts we'd never seen before, like Karlyn Vincent. It has roughly 1,000 posts.
All those posts are actually replies
to different official accounts of brands,
especially Warner Brothers,
posting something like,
Warner Brothers support abusers such as Amber Heard
and then have the hashtag saying justice for Johnny Depp.
So the same type of message, one thousand times.
So that's a typical bot behavior, right?
He found accounts which developed a sudden interest in the Depp case out of nowhere.
This one is also very interesting.
If you look at the timeline, it has three stages of different behaviors.
So before 2019, it was mainly posting about Japanese anime,
and then it went dormant.
And then in 2020, November,
all of a sudden, these accounts start to post
and retweet large amounts of pro-Johnny Depp tweets.
If you hire those bots for a certain purpose,
these bots will start to post the content you request.
When you stop paying,
then the bots stop. And here's what's really significant about what Kai Cheng found.
Examples of coordinated behavior. Suspect accounts working together to troll Amber.
We identified individual accounts, but this goes a step further. I found another example, which is a group of accounts,
I think based on their behavior, those are all bots.
And what they do was just reply the same message.
This brand supports domestic violence against men to different brands.
All these accounts tweet the same message. They send this message
to anyone associated with Amber. Her studio, Warner Brothers, her main sponsor, L'Oreal,
The Sun newspaper, even JK Rowling. Repeatedly. And usually it's on the same day. This is a
typical what we call coordinated inauthentic behavior. I ask Johan to run the phrase,
this brand supports domestic violence against men,
through Information Tracer,
his tool designed to detect social media manipulation.
That night, he sends me a voice note.
There are about 100 accounts generating more than 1,000 tweets and retweets that include this identical
term within a matter of hours, and all of those accounts look suspicious.
Johan's tool shows that over a thousand identical tweets are sent in a matter of hours on the 7th of November 2020.
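The coordination signal described here, many accounts sending an identical message within hours of each other, can be surfaced by grouping posts by exact text and counting distinct accounts inside a time window. A sketch with invented data and thresholds:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative sketch: surface coordinated inauthentic behavior by finding
# identical messages posted by several distinct accounts inside a short
# window. The data and thresholds are invented.

def coordinated_groups(posts, min_accounts=3, window=timedelta(hours=6)):
    """posts: iterable of (account, text, timestamp).
    Returns texts used by >= min_accounts distinct accounts within the window."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((ts, account))
    flagged = []
    for text, items in by_text.items():
        items.sort()  # chronological
        for i, (start_ts, _) in enumerate(items):
            accounts = {a for ts, a in items[i:] if ts - start_ts <= window}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged

msg = "This brand supports domestic violence against men"
posts = [
    ("acct1", msg, datetime(2020, 11, 7, 10, 0)),
    ("acct2", msg, datetime(2020, 11, 7, 10, 30)),
    ("acct3", msg, datetime(2020, 11, 7, 12, 15)),
    ("acct4", "unrelated tweet", datetime(2020, 11, 7, 11, 0)),
]
print(coordinated_groups(posts))
```

Requiring distinct accounts, not just repeated posts, is what separates one spammy user from a coordinated network.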
And that's super interesting.
Because that's not the only suspicious activity happening around this time.
It's also when Felipe, the Chilean right-winger, suddenly develops his interest in Depp.
It's when the Japanese anime accounts and the Spanish bot accounts
start tweeting hundreds of times about the case.
And it's when Karlyn Vincent
begins relentlessly trolling companies like Warner Brothers.
All this activity happens in the same 48-hour period.
And there's something that happens
right at the beginning of this that I think might be relevant.
On the 6th of November, Depp announces on Instagram that he's been fired from the Fantastic Beasts movie,
a week after losing the UK trial.
Why was there so much coordinated bot activity in the 48 hours following this announcement?
Was this just an angry fan
creating a bot campaign on his home computer? Or was this Depp's team launching a fight back
against the judgment? In the early 19th century, a French criminal called Eugène-François Vidocq
decided to become a private detective.
Vidocq was possibly the world's first ever example of a sleuth.
He went on to found France's first criminal investigative agency,
and his story inspired several writers, including Victor Hugo.
He also pioneered the use of agent provocateurs,
secret agents who would go undercover to encourage suspects to commit a crime.
And that's how I've come to see the bots and trolls in the Amber Heard case.
They didn't create all the hate towards her, but they were in the mix, stoking things up, encouraging and inciting
ugly elements which were already present, making sure that by the time you became aware of the
story, your social media feed was already filled with one-sided vitriol.
I feel like I've learned a lot from Johan and Kai Cheng.
We've got much closer to answering the question of what happened.
But that still leaves the other question.
Who trolled Amber?
Who commissioned these bots and trolls in the first place?
Ron's dataset has given us almost too many leads.
It shows that the campaign against Amber has come from all over the world.
I feel a bit stuck, so I do what I normally do when I need an answer.
I ring Daniel Mackey, a former spy who put me on this story in the first place.
How are you? Happy New Year.
Happy New Year. I'm good. I'm very well.
I mean, work stress is work stress, but, you know, life overall is pretty good.
Basically, the top line from Johan, one of the researchers, was that, in his opinion,
more than half of all the tweets in the database using anti-Amber Heard hashtags were posted by inauthentic accounts.
Wow. Wow. And what led him to that conclusion?
And that doesn't surprise me in the slightest,
but that's interesting. That's a substantial finding.
Both Johan and Kai Cheng found suspicious behavior
across the entire data set.
Maybe I can go through a couple of examples quickly.
Yeah, absolutely. There were quite a few repurposed accounts that they found.
Right, so the accounts are either being leveraged by the same actor working for different clients,
or they were being bought, sold, and used by different actors.
That is bread-and-butter disinformation and
inauthentic activity. And then they found something that they called, specifically,
coordinated inauthentic behavior. So there was this network of accounts that posted
exactly the same message about Amber Heard, hundreds of times or a thousand times in one day or
whatever? Yeah, yeah, exactly. It was literally, actually, you got it almost bang on. It's like
a thousand two hundred times on one particular day. These guys that you got to do this analysis sound
like they know what they're doing. That's how we used to do that analysis back in my
old life. That's pretty cool. That's very validating to hear, because this is the
theory that we discussed a while back. And so to hear that their analysis so directly speaks to
the thing that we were in our gut saying, this looks really weird.
And like, yeah, that's really satisfying to hear, to be honest.
If more than 50 percent of overall conversation is potentially inauthentic or coming from inauthentic sources, does that suggest to you that it is significantly higher than the sort of level of bot activity that you would normally expect to see?
Absolutely. 100%. We're looking at something here that feels beyond just the general din of the crowded bar.
This is somebody getting up on stage, ripping off their pants, throwing eggs at people in the audience. I suppose a related question, do you think that this sort of inauthentic activity would
have the power to influence the real world debate?
100%. Absolutely. I mean, this was a case that was in part prosecuted in the court of public
opinion. It was a sort of modern day OJ or Lewinsky situation where these new media platforms are being used.
Even in my personal life, I had people flagging.
I saw this and I saw this piece of content or whatever.
They show it to me and I go, that is inauthentic.
Why it is so divisive is because it takes natural human interaction, social connection and community making.
And it basically amplifies the interests of a few
specific parties. It's actually fundamentally anti-democratic. It's truly, it is like pouring
gasoline on a fire. Okay, so here's an idea. You should look into other inauthentic campaigns.
I'm willing to bet that if you look at other inauthentic campaigns,
you will find that there are commonalities between them and this one. None of what these guys have done here is reinventing the wheel. The internet is mostly imitative rather than
iterative. And so by examining who they're imitating, you might actually be able to
narrow down who's running this. As soon as I get off the phone, I start to look for other cases.
I want to find one where online bots have imposed themselves onto a narrative.
That's when I come across one of the weirdest stories I've ever heard.
And I didn't expect it would give me a clue about who trolled Amber.
Turns out, I was wrong.
Next time on Who Trolled Amber, we go through the looking glass.
I call it my Alice in Wonderland life, since it started.
A homicide case gives us new clues.
Do you think that anyone connected to you might have paid for bots or trolls to support
your case? Yeah, I think so. Yeah. And a former FBI agent raises an intriguing possibility.
They want to showcase the failures of the US. That's why you would have state actors
or other groups in this.
This is a propaganda war.
We contacted Johnny Depp
while making this podcast,
but he didn't respond.
Thank you for listening to Who Trolled Amber?
Who Trolled Amber is written and reported
by me, Alexi Mostrous,
and by Xavier Greenwood.
The producer is Xavier Greenwood.
Additional reporting by Katie Riley.
Sound design is by Karla Patella.
The narrative editor is Gary Marshall.
The editor is David Taylor.
Thank you for listening to Who Trolled Amber?
If you're enjoying the series, please take a moment to give it a rating
and recommend it to your friends and family.
It really does make a difference.
While you wait for next week's episode,
search for Tortoise and hear more from our award-winning newsroom
wherever you get your podcasts.
You can binge the entire series by subscribing to Tortoise Plus or the Tortoise app.
Tortoise.