A New York Times story about the “divide” between Facebook and the Biden administration over COVID-19 “misinformation” illustrates the vagueness of that category and the dangers of suppressing it at the government’s behest. While administration officials often claim they are simply encouraging the platform to enforce its own rules, their idea of misinformation is not necessarily the same as Facebook’s, and this divide shows how the government imposes censorship by proxy, pushing to broaden the definition of intolerable speech.
“We have engaged with Facebook since the transition on this issue,” White House spokesman Mike Gwin told the Times, “and we have made clear to them when they have failed to live up to our, or their own, standards and have actively elevated content on their platforms that misleads the American people.” Given that the Biden administration has the power to make life difficult for social media companies by pursuing litigation, drafting regulations, and backing new legislation, Facebook et al. have a strong incentive to follow the government’s “standards” rather than their own.
Facebook told White House officials it was grappling with content that was not explicitly false, such as posts that cast doubt on vaccines but did not clearly violate the social network’s rules against health misinformation, the Times reports. “Facebook allows people to discuss their experiences with vaccines, such as pain or side effects after receiving a shot, as long as they do not explicitly endorse falsehoods.”
DJ Patil, the chief technology officer of Biden’s transition team, had no patience for that distinction. “Seriously?” Patil texted during a “Team Biden” video call. “We have to get past the talking points. People are literally dying.” That conviction culminated in Biden’s July 16 charge that Facebook et al. are “killing people” by failing to police speech the way he thinks they should.
Surgeon General Vivek Murthy, in his July 15 advisory calling for a “whole of society” effort to tackle the “urgent threat to public health” posed by “health misinformation,” made it clear that the administration expects social media platforms to remove statements it deems “misleading,” even when they might be true. “Claims can be highly misleading and harmful even if the science on an issue isn’t yet settled,” he said.
If a Facebook user says we do not yet have data on the long-term side effects of COVID-19 vaccines, for example, that statement would be true but unhelpful, and therefore likely “misleading” in Murthy’s book. Likewise if someone points out that the vaccines have not yet received full approval from the Food and Drug Administration.
A Times story about “virus misinformation” further illustrates the point. According to Zignal Labs, which provided the data underlying the article’s report that “coronavirus misinformation has increased online in recent weeks,” this category includes claims that vaccines “don’t work,” that they “contain microchips,” and that they “cause miscarriages.” Fair enough. But Zignal Labs also counts as “misinformation” the argument that “people should rely on their ‘natural immunity’ instead of getting vaccinated,” which may be bad advice but is not a factual assertion.
Citing a report by Media Matters for America, the Times notes that “many of the most popular posts” in Facebook groups “dedicated to vaccine discussion” did not involve “explicit lies.” One example: an image of a character from Scooby-Doo [Fred Jones] unmasking a ghost labeled “Delta variant.” Fred says, “Let’s see what makes you scarier than all the other variants.” Beneath the mask: the media.
Such media criticism is not merely something other than an “explicit lie”; it is arguably accurate, even by the Biden administration’s lights. But according to the Times, it is still problematic:
Like the comments on many other pages, those under the Scooby-Doo post included unfounded claims. They also included calls for violence.
“China is completely to blame,” one comment said. “We will have to fight them eventually, so I am advocating a pre-emptive nuclear strike.”
The Times seems to suggest that some Facebook blowhards are engaged in incitement by calling for war with China, implying that their comments may not be constitutionally protected. The paper also implies that expressions of opinion should be deleted, even when they are not verifiably false, if they might encourage others to make “unfounded claims.” Yet all of this speech, including demonstrably false claims about COVID-19, is unquestionably protected by the First Amendment. While social media companies like Facebook are not constrained by the First Amendment and have a right to moderate posts as they see fit, the situation changes when those decisions are shaped by the demands of the federal government.
Even when the net of “misinformation” is not cast wide enough to encompass media criticism and foreign policy commentary, there are plenty of opportunities for erroneous determinations of truth. As Reason’s Robby Soave recently noted, even well-intentioned fact-checkers can get it wrong, spreading misinformation in their zeal to identify it.
PolitiFact classified as “false” an Instagram post claiming that “the effectiveness of my immune system” against COVID-19 is “99.98 percent.” But that statement is consistent with the Centers for Disease Control and Prevention’s “current best estimate” of the COVID-19 infection fatality rate (IFR) for children and teenagers, and it is close to the CDC’s estimate for adults under 50. Even the estimated IFR for 50-to-64-year-olds implies that more than 99 percent of people in that age group who are infected with the COVID-19 virus will survive. The IFR estimate for people 65 or older is much higher, implying a survival rate of 91 percent.
The Instagram user went off the rails by conflating the COVID-19 survival rate with vaccine effectiveness rates, which indicate how much vaccines reduce the risk of particular outcomes, such as infection, hospitalization, and death. But PolitiFact did not simply note the error of that apples-to-oranges comparison; it asserted that “the COVID-19 survival rate does not exceed 99 percent.” Based on the CDC’s numbers, the statement PolitiFact deemed false appears to be true for everyone except the oldest age group.
When PolitiFact gets something wrong, critics have an opportunity to correct it. But when social media platforms suppress speech because of the government’s behind-the-scenes “demands,” the normal process of testing claims and counterclaims is short-circuited. It also drives people with mistaken beliefs deeper into echo chambers where their statements are less likely to be challenged. The Times notes the proliferation of vaccine misinformation on alternative platforms such as Gab, BitChute, and Telegram, which is a predictable result of the campaign to push vaccine skeptics away from mainstream services such as Facebook, Twitter, and YouTube.
Biden administration officials may think they are saving lives by pressuring social media platforms to ban users whose posts might discourage vaccination. But such efforts only reinforce vaccine resistance by fostering the suspicion that anti-vaxxers are telling a truth the government does not want us to hear. The alternative, addressing the concerns of vaccine-hesitant Americans by citing evidence that contradicts their fears, is admittedly harder, and success is by no means guaranteed. But at least it offers a chance to persuade people, which is how disputes are supposed to be resolved in a free society.