Social media platforms have pledged to curb extremism. Buffalo puts them to the test


CNN

By Clare Duffy and Donie O’Sullivan, CNN Business

After Saturday's mass shooting in Buffalo, New York, Big Tech platforms scrambled to stop the spread of a video of the attack filmed by the suspect, as well as a document allegedly also produced by him in which he sets out his beliefs.

Major social media platforms have tried to improve how they respond to the sharing of this type of content since the 2019 mass shooting in Christchurch, New Zealand, which was broadcast live on Facebook. Within 24 hours of that attack, Facebook said it had removed 1.5 million copies of the video. Online extremism experts say such content can act as far-right terrorist propaganda and inspire others to carry out similar attacks; the Buffalo shooter was directly influenced by the Christchurch attack, according to the document he allegedly shared.

The stakes in dealing with the rapid dissemination of such content are high. “This fits into a pattern we’ve seen time and time again,” said Ben Decker, CEO of digital investigative consultancy Memetica and an expert on online radicalization and extremism. “At this point, we know the consumption of these videos creates copycat mass shootings.”

Yet social media companies are facing challenges in responding to what appears to be a deluge of copies of the Buffalo video and shooting document posted by users.

Big Tech’s response

Saturday’s attack was streamed live on Twitch, an Amazon-owned video streaming service that’s particularly popular with gamers. Twitch said it removed the video two minutes after the violence began, before it could be viewed widely, but not before it had been captured and re-uploaded by other users. The video has since been shared hundreds of thousands of times on major social media platforms and has also been posted on more obscure video hosting sites.

Spokespersons for Facebook, Twitter, YouTube and Reddit all told CNN they had banned sharing of the video on their sites and were working to identify and remove copies. (TikTok did not respond to requests for comment on its response.) But the companies appear to be struggling to contain the spread and manage users looking for loopholes in their content moderation practices.

CNN observed a link to a copy of the video circulating on Facebook on Sunday evening. Facebook appended a disclaimer that the link violated its Community Standards, but still allowed users to click through and watch the video. Facebook parent company Meta said it removed the link after CNN asked about it.

Meta designated the event as a “terrorist attack” on Saturday, prompting the company’s internal teams to identify and delete the suspect’s account and to begin removing copies of the video and document, as well as links to them on other sites, according to a company spokesperson. The company added the video and document to an internal database that automatically detects and removes copies if they are re-uploaded. Meta also banned content that praises or supports the attacker, the spokesperson said.

A copy of the video was also hosted on Streamable, a lesser-known video service, and was removed only after it had been viewed more than 3 million times and its link shared on Facebook and Twitter, according to The New York Times.

A Streamable spokesperson told CNN the company is “working diligently” to remove copies of the video “promptly.” The spokesperson did not respond when asked how the video reached millions of views before being removed.

Copies of the document allegedly written by the shooter were uploaded to Google Drive and other smaller online storage sites and shared over the weekend via links to those platforms. Google did not respond to requests for comment on the use of Drive to share the document.

Challenges in countering extremist content

The repeated re-uploading of the video and document shows how difficult it is for platforms to permanently remove this kind of content once it begins to spread, according to Tim Squirrell, head of communications at the Institute for Strategic Dialogue, a think tank dedicated to countering extremism.

But the big consumer tech platforms also have to contend with the fact that not every internet platform wants to take action against this type of content.

In 2017, Facebook, Microsoft, YouTube and Twitter founded the Global Internet Forum to Counter Terrorism, an organization designed to promote collaboration to stop terrorists and violent extremists from exploiting their platforms; it has since expanded to include more than a dozen companies. Following the Christchurch attack in 2019, the group pledged to prevent such attacks from being broadcast live on their platforms and to coordinate in tackling violent and extremist content.

“Now, technically, it failed. It was on Twitch. It then started proliferating within the first 24 hours,” Decker said, adding that platforms still had work to do to effectively coordinate the removal of harmful content during crises. Still, the work the major platforms have done since Christchurch meant their response to Saturday’s attack was faster and more robust than the response three years ago.

But elsewhere on the Internet, smaller sites such as 4chan and messaging platform Telegram provided a place where users could congregate and coordinate to repeatedly upload the video and document, according to Squirrell. (For its part, Telegram says it “expressly prohibits” violence and is working to remove footage of the Buffalo shooting.)

“A lot of the threads on the 4chan message board were just people clamoring for the stream over and over again, and once they got a seven-minute version, just reposting it over and over” on bigger platforms, Squirrell said. As with other content on the internet, videos like the one from Saturday’s shooting are also often quickly manipulated by extremist communities online and incorporated into memes and other content that can be harder for consumer platforms to identify and remove.

Like Facebook, YouTube and Twitter, platforms such as 4chan rely on user-generated content and are shielded (at least in the US) from liability for much of what users post by a law known as Section 230. But while mainstream Big Tech platforms are pushed by advertisers, social pressure and users to fight harmful content, smaller, more fringe platforms aren’t driven by a desire to protect ad revenue or attract a large user base. In some cases, they want to be online homes for speech that would be moderated elsewhere.

“The consequence of that is you can never complete the game of whack-a-mole,” Squirrell said. “There will always be somewhere, someone passing around a Google Drive link or a Samsung cloud link or something else that allows people to access it… Once it’s in the ether, it is impossible to remove everything.”

The-CNN-Wire
™ & © 2022 Cable News Network, Inc., a WarnerMedia company. All rights reserved.
