On January 6 anniversary, social media platforms are on high alert

By Clare Duffy, CNN Business

January 6 marked a clear turning point for the large social media companies, proving that they would, under certain circumstances, be willing to deplatform a sitting US president. But some experts fear they haven’t done enough to address the underlying issues that have allowed Trump supporters and others on the far right to be misled and radicalized, and to organize using their platforms.

Ahead of the first anniversary, Facebook parent company Meta, Twitter and YouTube say they are monitoring their platforms for harmful content related to the Capitol riot.

“We have strict policies that we continue to enforce, including banning hate organizations and removing content that praises or supports them,” a Meta spokesperson told CNN, adding that the company had been in contact with law enforcement agencies, including the FBI and Capitol Police, around the anniversary. As part of its efforts, Facebook is proactively monitoring content praising the Capitol riot, as well as content calling for carrying or using weapons in Washington, DC, according to the company.

“We continue to actively monitor threats on our platform and will respond accordingly,” the Meta spokesperson said.

Twitter has convened an internal task force with members from various parts of the business to ensure the platform can enforce its rules and protect users around January 6, a Twitter spokesperson told CNN.

“Our approach before and after January 6 [2021] has been to take strong enforcement action against accounts and Tweets that incite violence or have the potential to cause offline harm,” the spokesperson said, adding that Twitter also has open lines of communication with federal officials and law enforcement.

YouTube’s Intelligence Desk, a group tasked with proactively finding and moderating problematic content, is monitoring content and behavior trends linked to the Capitol riot and its anniversary. As of Wednesday, the company had not detected any increase in content containing new plots related to January 6 or the 2020 election that would violate its policies, according to spokeswoman Ivy Choi.

“Our systems are actively pointing to high-authority channels and limiting the spread of harmful election-related misinformation,” Choi said in a statement.

The efforts come after Facebook, Twitter, YouTube and other platforms faced heavy criticism over the past year for social media’s role in the crisis. The companies, meanwhile, have largely argued that they had solid policies in place even before the Capitol riot, and that they have only stepped up protections and enforcement since.

As rioters escalated their attack on the Capitol on January 6 – entering the building, ransacking congressional offices, overpowering law enforcement – social media platforms scrambled to do what they could to stem the fallout, first labeling then-President Trump’s posts, then deleting them, then suspending his account entirely.

But some experts question whether the platforms’ moderation approach has changed significantly over the past year.

“While I certainly hope they would have learned from what happened, if they did, they haven’t really communicated about it publicly,” said Laura Edelson, a researcher at New York University who studies online political communication.

This is of particular concern, Edelson says, because there could be a resurgence of disinformation about the attack and the conspiracy theory that the election was stolen around the one-year mark of the insurrection. “A big part of the narrative inside the far-right movement is that one, [the insurrection] wasn’t that bad, and two, it was actually the other guys who did it,” she said.

In interviews leading up to the January 6 anniversary, some Trump supporters in Washington DC told CNN they believed the Democrats or the FBI were responsible for the attack.

Facebook’s response to January 6

Facebook, now a division of Meta, took the most heat of any social media platform around January 6, in part because of internal documents leaked by Facebook whistleblower Frances Haugen that showed the company had rolled back, before January 6 of last year, protections it put in place for the 2020 election. Haugen said in an SEC filing that the company only reimplemented some of those protections after the insurrection began.

A few days after the Capitol riot, Facebook banned “Stop the Steal” content. And internally, researchers analyzed why the company failed to prevent the movement from growing, documents since released by Haugen (and obtained by CNN from a congressional source) revealed. Meta has also taken steps to “disrupt militarized social movements” and prevent QAnon and militias from organizing on Facebook, Meta Vice President of Integrity Guy Rosen said in an October blog post on the company’s efforts around the 2020 election.

Meta has rebuffed Haugen’s claims and tried to distance itself from the attack. Nick Clegg, the company’s vice president of global affairs, told CNN in October that it was “ridiculous” to blame the riot on social media. “The responsibility for the violence of January 6 and the insurrection of that day rests entirely with those who inflicted the violence and those who encouraged it,” Clegg said.

But researchers say the company still struggles to crack down on disinformation and extremist content.

“We haven’t really seen the kind of substantial changes to Facebook’s content moderation that they have talked about publicly or that have been detectable from the outside,” Edelson said. “It seems they still use pretty rudimentary keyword-matching tools to identify problematic content, whether it’s hate speech or misinformation.”

Meta noted in a September blog post that its AI systems have improved at proactively removing problematic content such as hate speech. And in its November community standards enforcement report, the company said the prevalence of hateful content, measured as a share of overall content viewed, declined for the fourth consecutive quarter.

A new report released Tuesday by the technology research and advocacy group Tech Transparency Project (TTP) found that content linked to the “Three Percenters,” an extremist anti-government militia movement whose adherents have been indicted in connection with the January 6 attack, is still widely available on Facebook, with some of it using the word “militia” in group names or featuring well-known symbols associated with the movement. As TTP researchers examined this content, Facebook’s “suggested friends” and “related pages” features recommended accounts or pages with similar imagery, according to the report. (TTP is funded in part by an organization founded by Pierre Omidyar.)

“As Americans approach the first anniversary of the insurrection, TTP has found many of the same disturbing patterns on Facebook, as the company continues to ignore militia groups that pose a threat to democracy and the rule of law,” the report says, adding that “Facebook’s advertising algorithms and tools often promote this type of content to users.”

“We have removed several of these groups for violating our policies,” Meta spokesman Kevin McAlister said in a statement to CNN regarding the TTP report.

Facebook says it has removed thousands of groups, pages, profiles and other content related to militarized social movements and has banned militia organizations, including the Three Percenters, and noted that the pages and groups cited in the TTP report had relatively few followers.

The other players

To be sure, the disinformation landscape extends far beyond Facebook, including to more marginal platforms such as Gab, which gained popularity in the aftermath of January 6 thanks to pledges not to moderate content as larger companies faced calls to crack down on hate speech, disinformation and violent groups.

In August, the House select committee investigating the deadly January 6 riot at the Capitol sent letters to 15 social media companies, including Facebook, YouTube and Twitter, seeking to understand how disinformation and efforts by foreign and domestic actors to overturn the election spread on their platforms.

Six days after the attack, Twitter said it had removed 70,000 accounts spreading conspiracy theories and QAnon content. Since then, the company says it has removed thousands more accounts for violating its policy against “coordinated harmful activity,” and it also says it prohibits violent extremist groups.

“Engagement and focus within government, civil society and the private sector are also essential,” the Twitter spokesperson said. “We recognize that Twitter has an important role to play, and we are committed to doing our part.”

YouTube said that in the months leading up to the Capitol riot it removed channels belonging to various groups later associated with the attack, such as those linked to the Proud Boys and QAnon, for violating existing policies on hate, harassment and election integrity. During the attack and in the days that followed, the company removed livestreams of the riot and other related content that violated its policies, and YouTube says its systems are more likely to point users to authoritative sources of election information.

“Over the past year, we have removed tens of thousands of videos for violating our US election policies, the majority before reaching 100 views,” YouTube’s Choi said. “We remain vigilant ahead of the 2022 elections and our teams continue to closely monitor and swiftly tackle electoral misinformation.”

– CNN’s Oliver Darcy contributed to this report.

The-CNN-Wire
™ & © 2022 Cable News Network, Inc., a WarnerMedia Company. All rights reserved.


