Russia’s information war is being waged on social media platforms

Within days of Russia’s invasion of Ukraine, several social media platforms, including Facebook, Twitter and YouTube, announced they had dismantled coordinated networks of accounts spreading disinformation. These networks, made up of fabricated accounts disguised with fake names and AI-generated profile pictures, or of hacked accounts, shared eerily similar anti-Ukrainian talking points, suggesting they were controlled by centralized sources linked to Russia and Belarus.

Russia’s Internet Research Agency used similar disinformation campaigns to amplify propaganda around the 2016 US elections. But their extent was not clear until after the election, and at the time they were carried out with little pushback from social media platforms. “There was a feeling that the platforms just didn’t know what to do,” says disinformation researcher Laura Edelson, a PhD candidate in computer science at New York University. Since then, she says, platforms and governments have become more adept at combating this type of information warfare and more willing to deplatform bad actors who deliberately spread disinformation. Edelson spoke to Scientific American about how an information war is waged as the conflict continues.

[An edited transcript of the interview follows.]

How do social media platforms combat accounts that spread misinformation?

These types of misinformation campaigns – where they specifically mislead users about the source of the content – are very easy for platforms to fight, because Facebook has this real-name policy: misleading users about who you are is a violation of Facebook’s platform rules. But there are [other] things that shouldn’t be hard to weed out – which Facebook has always struggled with – and those are players like RT. RT is a Russian state-backed media outlet. And Facebook really struggled to figure out what to do with that. That’s why it was so impressive to see [Facebook and other platforms] really start taking action against RT last week, because it’s been going on for so long. And also, frankly, [social media platforms] have had government cover: governments in Europe have banned Russian state media, and that has allowed Facebook, YouTube and other big platforms to do the same. In general, banning anyone, but especially banning media outlets, is not a step one should take lightly. But RT and Sputnik [another Russian state-backed media outlet] are not ordinary media: they have polluted the information space for so long.

What else can be done to combat harmful misinformation?

One of the things the United States did very well at the start of this conflict – and why, at least from a misinformation [control] point of view, the first week went very well – is that the US government was very aggressive in releasing information about what it knew about the realities on the ground in Russia and Ukraine. That was really helpful in creating a space where it was difficult for Russia to spread misinformation about those same things. Because the US government was very open, it didn’t leave much room; there was no information vacuum that the Russians could fill. And the Ukrainian government has been extremely savvy in telling the story of the Ukrainian resistance. There are certainly times when it has crossed into propaganda. But in general, it has made sure that the world sees the Ukrainian resistance and the fight that the Ukrainian people are prepared to wage. This [helps] people see what is happening and understand that the people fighting there are real people who not so long ago were not fighters. They were civilians, and now they are defending their country.

I think those two things are going to be difficult to sustain over time. But if they are not sustained, the window for Russian disinformation will open. A challenge we are all going to face is that this war will not be over in the next few days, but the news cycle cannot sustain this level of focus on these events. It’s shocking to say, but in three weeks you’ll go hours without thinking about it. And that’s when people’s guard will come down. If someone tries to broadcast some kind of [disinformation] – maybe the Russians fabricate a fake Ukrainian atrocity or something – that’s when the world will be vulnerable to this stuff. And that’s when we’re going to have to remember to ask: “Who is telling this story? Do we trust them? How verifiable is this account?” This is going to be part of how the conflict is conducted going forward. But this is something new for all players, and everyone is going to have to get used to maintaining their ground game in information warfare, not just kinetic warfare.

Some people have also pointed to an apparent reduction in other forms of misinformation, such as vaccine-related conspiracy theories, since Russia’s internet infrastructure and payment networks were restricted by sanctions. What’s going on there?

I haven’t seen any large-scale analysis published on this. That said, there have been quite a few anecdotal reports that misinformation in other areas has decreased markedly over the past week. We cannot say for sure that this is due to the loss of internet access in Russia. The conclusion is not that everything that disappeared came from Russia. The reasonable conclusion to draw from these anecdotal reports is that Russia’s internet infrastructure was an essential part of the toolbox of those spreading disinformation. A lot of pieces of this economy are run out of Russia: botnets, for example; networks of people who sell, buy and resell stolen credit card information; a large part of the economy around purchasing stolen [social media] accounts. That’s because Russia has long tolerated a lot of cybercrime: either it turns a blind eye, or many of these groups work directly for, or as contractors of, the Russian state.

How can people avoid falling for or spreading false information?

The bottom line is that people shouldn’t have to. It’s like saying, “My car doesn’t have a seat belt. What can I do to protect myself in the event of an accident?” The answer is: your car should have seat belts, and it shouldn’t be your job. But unfortunately it is. With that caveat, you have to remember that the most successful misinformation succeeds by appealing to emotion rather than reason. If misinformation can tap into that emotional pathway, you’ll never question it, because it feels good, and feeling good is next door to feeling true. So the first thing I recommend is this: if something makes you emotional, especially if it makes you angry, before you share it or interact with it, really ask yourself, “Who is promoting this, and do I trust them?”

What’s the most important thing platforms need to do to install those metaphorical seat belts?

I think the most important thing platforms should do, especially in these times of crisis, is [recognize they] must not promote content on the basis of engagement alone. Because you have to remember that disinformation is really engaging. It’s engaging for some of the reasons I’ve talked about: a very emotional appeal, things that bypass reason and go straight to your gut. That is a really effective tactic for deception. So this is when platforms need to reinforce the importance of content quality over content engagement. That’s the first thing they could do, and almost everything else pales in comparison.
