Can Social Media Apps Really Suppress Racism Online? – Doha News


A wave of online racism has sparked debate over how governments should regulate social media companies.

Last week, the world watched England and Italy face off in the Euro 2020 final. The match went to a penalty shoot-out, with Italy eventually beating England and lifting the trophy. While many English fans were disappointed, the behavior of some of their peers was significantly more disappointing than the outcome of the match.

Images online showed fans trashing the streets, storming stadiums and engaging in brawls. The scenes have led many in Qatar to worry about welcoming English fans to the country for the World Cup next year.

Read also: Fans react to Italy’s Euro 2020 victory amid concerns over ‘chaotic hooliganism’ in Qatar

Worse yet, black English soccer players faced immense racism after the match, drawing criticism from social media users around the world and prompting condemnation from senior officials as well as from UEFA.

“To those who directed racist abuse at some of the players, I say shame on you, and I hope you crawl back under the rock you came out of,” British Prime Minister Boris Johnson said.

English footballer Tyrone Mings [Twitter]

As the backlash continues, Johnson has announced a plan to ban racist fans from attending games. An existing law allows courts to bar fans who make racist remarks inside the stadium, but it does not extend to racism online.

Are social networks responsible?

The wave of online racism against black England players has sparked a global conversation about how to fix the problem, with many questioning whether social media giants like Facebook and Twitter are doing enough to stamp out hate on their platforms.

Facebook and Twitter say they are quickly removing racist comments online, with Twitter claiming to have deleted more than 1,000 racist tweets targeting England players in the latest round of abuse.

Many were quick to point out that social media companies have the tools they need to tackle racism, but they are just not doing enough.

On Twitter, users asked why Instagram couldn’t flag racist comments the same way it flags every post mentioning Covid-19.

In recent months, Instagram has gone to great lengths to detect posts and stories mentioning the coronavirus, but it has not extended this technology to detecting racist posts.

The popular photo-sharing app uses optical character recognition (OCR), a tool that allows it to scan images for text. This lets it detect coronavirus-related words even when the user never typed them, because they appear in the image itself.
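For illustration only, here is a minimal sketch of how OCR-based keyword flagging might look in Python, using the open-source pytesseract library as a stand-in for Instagram’s private systems; the watch list and file name are hypothetical, and real moderation pipelines are far more sophisticated.

# Toy sketch of OCR-based keyword flagging. FLAGGED_TERMS and the
# image file name are illustrative assumptions, not Instagram's.
from PIL import Image
import pytesseract

FLAGGED_TERMS = {"covid-19", "coronavirus"}  # hypothetical watch list

def image_mentions_flagged_term(image_path: str) -> bool:
    """OCR the image and check the extracted text for flagged terms."""
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    return any(term in text for term in FLAGGED_TERMS)

if image_mentions_flagged_term("story_upload.png"):
    print("Attach an information label before publishing this story.")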

Read also: Do I have to declare if a photo is retouched? Norway thinks so

This technology could equally be applied to detect racist posts and immediately flag them with a warning, yet Instagram has not put such safeguards in place.

Instagram’s failure to remove racist comments doesn’t end there. Many users reported abusive posts to the platform, only for its automated moderation to decide that the content did not go against community guidelines.

Instead of seeing the reported content removed from the app, users are regularly met with this frustrating automated response:

“Our technology has found that this comment probably doesn’t go against our Community Guidelines. Our technology isn’t perfect, and we’re constantly working to improve it.”

While Instagram admits its technology isn’t perfect, it has stopped short of assigning human moderators to verify whether user reports are valid. As a result, racism remains alive on the platform.

Twitter, on the other hand, has taken steps in recent months to limit the visibility of potentially abusive content on its platform. Instagram could take inspiration from two of these changes:

1. Prompt users before they post harmful content

Twitter prompts users to view tweets with potentially harmful language [Twitter Blog]

Twitter now shows users a prompt before they post content flagged as offensive, giving them a chance to think before they tweet. Instagram could implement a similar feature when it detects a potentially abusive comment.

Twitter’s research found that 34% of users presented with this prompt edited or deleted their tweet. It also found that users who had been prompted composed 11% fewer offensive replies in the future.
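As a rough sketch of how such a pre-posting check could work, assuming a toy word-list scorer and an invented 0.5 threshold in place of Twitter’s actual (private) classifiers:

# Toy "think before you tweet" flow. score_offensiveness() and the
# threshold are illustrative stand-ins for a trained toxicity model.
OFFENSIVE_TERMS = {"idiot", "trash", "loser"}  # illustrative only

def score_offensiveness(text: str) -> float:
    """Toy scorer: fraction of words that appear on the watch list."""
    words = text.lower().split()
    return sum(w in OFFENSIVE_TERMS for w in words) / len(words) if words else 0.0

def submit_post(text: str) -> None:
    """Prompt the user to reconsider before publishing a flagged post."""
    if score_offensiveness(text) > 0.5:
        choice = input("Want to review this before posting? [e]dit / [d]elete / [p]ost: ")
        if choice == "e":
            text = input("Edit your post: ")
        elif choice == "d":
            print("Post discarded.")
            return
    print(f"Posted: {text}")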

2. Hide flagged comments

Twitter hides comments that “interfere with the conversation” [Twitter Blog]

Instagram’s moderation approach means its technology only removes comments it is certain are harmful. Since the technology is not perfect, many abusive comments stay online. Deleting comments that aren’t actually racist frustrates users, but so does leaving genuinely racist ones up. Instagram is therefore caught in a dilemma, trying to strike the right balance on what counts as abusive enough to delete.

On Twitter, some replies to a tweet are hidden behind a “Show more replies” button. The wording does not imply the content is abusive, in case Twitter’s systems are wrong, but it still keeps that content out of the main conversation.

Instagram could take inspiration from Twitter here, hiding potentially abusive content without necessarily labeling it as such.
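A minimal sketch of that tiered approach, assuming each comment already carries an abuse score from some classifier; the 0.9 and 0.5 thresholds are invented for illustration, not the platforms’ real values.

# Tiered moderation: remove only near-certain abuse, fold borderline
# content behind a "Show more replies"-style control, leave the rest.
def moderate(abuse_score: float) -> str:
    """Map a classifier's abuse score to a moderation action."""
    if abuse_score >= 0.9:
        return "removed"   # high confidence: delete outright
    if abuse_score >= 0.5:
        return "hidden"    # uncertain: hide without accusing the author
    return "visible"       # likely fine: show normally

print(moderate(0.95))  # -> removed
print(moderate(0.6))   # -> hidden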

Another option for Instagram is to hire more human moderators. While the app often blames algorithms for its moderation failures, social media users are now demanding that the platform take more responsibility for removing abusive content online.

Read also: Hundreds of Facebook employees mobilize to fight Palestinian censorship

Should social media require ID for verification?

Meanwhile, a more controversial option has been suggested to tackle abuse.

Read also: How private is your private data?

A petition launched in the UK a few months ago called for verified ID to be required to open a social media account.

Since the recent wave of abuse in the aftermath of the Euro final, the petition has gained traction, collecting more than 600,000 signatures. While such a measure would likely reduce online abuse, it would also significantly reduce user privacy online.

In response to the petition in May, the UK government said the benefits would not outweigh the harms:

The government recognizes concerns about online anonymity, which can sometimes be exploited by malicious actors seeking to engage in harmful activities. However, restricting the right of all users to anonymity, by introducing mandatory user verification for social media, could have a disproportionate impact on users who rely on anonymity to protect their identity.

It should be noted that this response was made two months ago and recent public pressure may force the government to change its position.

Do you think governments should have more control over social media companies? Is it reasonable to require users to verify themselves with ID, or is it too extreme? Let us know in the comments.

Follow Doha News on Twitter, Instagram, Facebook and YouTube


