On the anniversary of January 6, social media platforms are on high alert

CNN

By Clare Duffy, CNN Business

January 6 was a watershed moment for major social media companies, proving that they would, under certain circumstances, be prepared to deplatform a sitting US president. But some experts fear that not enough has been done to address the underlying issues that have allowed Trump supporters and other far-right groups to be misled, radicalized and organized on their platforms.

Ahead of the one-year anniversary, Facebook parent company Meta, Twitter and YouTube say they are monitoring their platforms for harmful content related to the Capitol riot.

“We have strong policies that we continue to enforce, including banning hate organizations and removing content that praises or supports them,” a spokesperson for Meta told CNN, adding that the company had been in contact with law enforcement agencies including the FBI and Capitol Police around the anniversary. As part of its efforts, Facebook proactively monitors content praising the Capitol riot, as well as content calling on people to carry or use weapons in Washington DC, according to the company.

“We continue to actively monitor threats on our platform and will respond accordingly,” the Meta spokesperson said.

Twitter convened an internal task force with members from various parts of the company to ensure the platform could enforce its rules and protect users around the one-year mark of Jan. 6, a Twitter spokesperson told CNN.

“Our approach before and after January 6 [2021] has been to take strong enforcement action against accounts and Tweets that incite violence or are likely to cause offline harm,” the spokesperson said, adding that Twitter also has open lines of communication with federal officials and law enforcement.

YouTube’s Intelligence Desk, a group responsible for proactively finding and moderating problematic content, monitors trends in content and behavior related to the Capitol riot and its anniversary. As of Wednesday, the company had not detected an increase in policy-violating content pushing new conspiracy theories related to Jan. 6 or the 2020 election, according to spokeswoman Ivy Choi.

“Our systems actively point to high-authority channels and limit the spread of harmful misinformation for election-related topics,” Choi said in a statement.

The efforts come after Facebook, Twitter, YouTube and other platforms faced heavy criticism over the past year for social media's role in the attack. The companies, meanwhile, have broadly argued that they had strong policies in place even before the Capitol riot and have only stepped up protections and enforcement since.

As rioters escalated their attack on the Capitol on Jan. 6 — breaching the building, trashing congressional offices, overpowering law enforcement — social media platforms rushed to do what they could to stem the fallout: first labeling then-President Trump's posts, then deleting them, then suspending his account altogether.

But some experts wonder whether the approach to moderation has changed significantly over the past year.

“While I certainly hope they would have learned from what happened, if they did, they haven’t really communicated about it publicly,” said Laura Edelson, a researcher at New York University who studies online political communication.

That’s particularly concerning, Edelson says, because a resurgence of misinformation about the attack, and of the conspiracy theory that the election was stolen, could emerge around the one-year mark of the insurrection. “A lot of the narrative inside the far-right movement is, one, this [the insurrection] wasn’t that bad, and two, it was actually the other guys who did it,” she said.

In interviews leading up to the Jan. 6 anniversary, some Trump supporters in Washington DC told CNN they believe the Democrats or the FBI were responsible for the attack.

Facebook’s response to January 6

Facebook, now a unit of Meta, has taken perhaps the most heat of any social media platform over January 6, in part because of internal documents leaked by Facebook whistleblower Frances Haugen showing that the company had rolled back protections it put in place for the 2020 election ahead of January 6 last year. Haugen alleged in a filing to the SEC that the company did not reinstate some of those protections until after the insurrection began.

A few days after the Capitol riot, Facebook banned “Stop the Steal” content. And internally, researchers analyzed why the company failed to prevent the movement’s growth, according to documents since released by Haugen (and obtained by CNN from a congressional source). Meta has also taken steps to “disrupt militarized social movements” and prevent QAnon and militia groups from organizing on Facebook, Meta’s Vice President of Integrity Guy Rosen said in an October 2021 blog post on the company’s efforts around the 2020 election.

Meta has pushed back against Haugen’s claims and sought to distance itself from the attack. Nick Clegg, the company’s vice president of global affairs, told CNN in October that it is “ridiculous” to blame the riot on social media. “The responsibility for the violence of January 6 and the insurrection that day lies entirely with those who inflicted the violence and those who encouraged it,” Clegg said.

But researchers say the company still struggles to tackle misinformation and extremist content.

“We haven’t really seen any substantial changes in Facebook’s moderation of content that they’ve talked about publicly or that have been externally detectable,” Edelson said. “From the outside, it looks like they’re still using pretty rudimentary keyword matching tools to identify problematic content, whether it’s hate speech or misinformation.”

Meta said in a September blog post that its AI systems have improved at proactively removing problematic content such as hate speech. And in its November Community Standards Enforcement Report, the company said the prevalence of hate speech views, relative to views of other types of content, declined for the fourth consecutive quarter.

A new report released Tuesday by the tech advocacy and research group Tech Transparency Project found that content related to the “Three Percenters,” an anti-government extremist militia movement whose adherents have been charged in connection with the January 6 attack, is still widely available on Facebook, with some groups using “militia” in their names or including well-known symbols associated with the movement. As TTP researchers reviewed this content, Facebook’s “suggested friends” and “related pages” features recommended accounts and pages with similar imagery, according to the report. (TTP is funded in part by an organization founded by Pierre Omidyar.)

“As Americans approach the first anniversary of the insurrection, TTP has found many of the same troubling patterns on Facebook, as the company continues to ignore militia groups that pose a threat to democracy and the rule of law,” the report says, adding that Facebook’s “algorithms and advertising tools often promote this type of content to users.”

“We removed several of these groups for violating our policies,” Meta spokesperson Kevin McAlister said in a statement to CNN about the TTP report.

Facebook says it has removed thousands of groups, pages, profiles and other content tied to militarized social movements, has banned militia organizations including the Three Percenters, and noted that the pages and groups cited in the TTP report had relatively few followers.

The other players

Certainly, the misinformation landscape extends far beyond Facebook, including to more fringe platforms such as Gab, which gained popularity after January 6 thanks to promises not to moderate content as the big platforms were called on to crack down on hate speech, misinformation and violent groups.

In August, the House Select Committee investigating the deadly Jan. 6 Capitol riot sent letters to 15 social media companies, including Facebook, YouTube and Twitter, seeking to understand how misinformation and efforts to overturn the election by foreign and domestic actors existed on their platforms.

Six days after the attack, Twitter said it had removed 70,000 accounts spreading conspiracy theories and QAnon content. Since then, the company says it has removed thousands more accounts for violating its policy against “coordinated harmful activity,” and says it bans violent extremist groups.

“Engagement and focus from government, civil society and the private sector is also essential,” the Twitter spokesperson said. “We recognize that Twitter has an important role to play and we are committed to doing our part.”

YouTube said that in the months leading up to the Capitol riot, it removed the channels of various groups later associated with the attack, such as those linked to the Proud Boys and QAnon, for violating existing policies on hate, harassment and election integrity. During the attack and in the days that followed, the company removed live streams of the riot and other related content that violated its policies, and YouTube says its systems are now more likely to direct users to authoritative sources of election information.

“Over the past year, we’ve removed tens of thousands of videos for violating our US election policies, the majority before reaching 100 views,” YouTube’s Choi said. “We remain vigilant ahead of the 2022 election and our teams continue to closely monitor and promptly address election disinformation.”

–CNN’s Oliver Darcy contributed to this report.

The-CNN-Wire
™ & © 2022 Cable News Network, Inc., a WarnerMedia company. All rights reserved.
