Social media platforms’ ‘flawed policies’ amplify voter fraud allegations: report

Social media companies have weak misinformation policies and have been enforcing them inconsistently ahead of the 2022 midterms, according to a new report released Monday.

The report, from New York University’s Stern Center for Business and Human Rights, faults Meta, Twitter, YouTube and TikTok for failing to take a proactive approach to combating misinformation, including a growing trend of election denial and false allegations of fraud.

The report’s authors say the absence of a proactive approach poses a threat to upcoming elections.

Although social media companies say they are committed to combating election misinformation, the report states that “flawed policies and inconsistent company enforcement are driving the continued amplification of election denial, particularly in key battleground states.”

The report highlights Facebook‘s continued exemption of politicians from its fact-checking program as a factor amplifying the spread of election denial.

It also criticizes Twitter for inconsistent, stop-and-start enforcement of its civic integrity policy, which the report says has allowed election denial to “build momentum since the start of 2021.”

The report also focuses on video content, saying YouTube has allowed its platform to be “exploited by proponents of misinformation” and that TikTok is “increasingly plagued by political misinformation.”

Paul Barrett, author of the report, said the risk is heightened by the number of Republican candidates who have embraced election denial.

“In a sense, the problems are intensifying, even though it’s a slow year, even though it’s not a presidential election. And I don’t think the platforms have appreciated how election denial seems to have become a sort of permanent part of Republican politics,” he told The Hill.

“Rather than being extremely vigilant, it seems that the platforms are just going through the motions here. They released statements and said these are our policies – and these are basically the same policies they’ve had in the past,” he said.

The platforms responded to the researchers by defending the policies they have put in place.

Meta spokesman Tom Reynolds said in a response quoted in the report that the company’s systems are designed to “reduce misinformation, not amplify it.”

“Any other suggestion is wrong. We use a combination of artificial intelligence, human review and input from partners – including fact checkers – to address problematic content, which again is not aligned with our business interests,” Reynolds said.

The report said YouTube did not provide an official response, but the company earlier this month published an announcement about its efforts to combat misinformation, including a commitment to apply its policies “consistently for everyone, regardless of the speaker’s public figure status.”

A TikTok spokesperson said in a statement cited in the report that the company prohibits and removes election misinformation and works with fact-checkers to assess content.

Twitter, which has come under scrutiny after a whistleblower recently alleged widespread security flaws, told the report’s authors that it triages resources to focus on election-related misinformation in the United States and around the world.

To address the problem, the report recommends greater transparency around the platforms’ algorithms, whether provided voluntarily by the platforms themselves or compelled by government legislation.

It also calls for independent audits of the platforms, enhanced fact-checking, removal of “obviously false content” and consistent policies.

The report also says platforms need to focus more on the “next threat,” acting proactively rather than reactively.

“Being able to figure out what issues are on the horizon and what they want to do about them before they hit their platforms widely would be a huge improvement,” Barrett said.
