Social media apps are not safe for kids. Big Tech can change that.


  • Social media platforms don’t prioritize children’s online safety enough, but we’ve been asking the wrong things of them.
  • Children face real dangers online, including issues like bullying, sexual predators, and more.
  • Big Tech needs to partner with parents to give them greater control over their children’s online worlds.
  • Brian Bason is a father of two and founder and CEO of Bark Technologies.
  • This is an opinion column. The thoughts expressed are those of the author.
  • See more stories on the Insider business page.

Recent announcements from social platforms like TikTok, YouTube, and Instagram about their increased child safety efforts have garnered a lot of attention among parents looking for new ways to help protect their children online. And while these are all positive steps, these efforts will not solve the larger issue of child safety online. In the end, the measures announced give only an illusion of security.

We absolutely need social platforms to play a vital role in children’s online safety (reporting CSAM, eliminating bad actors, ensuring transparency in their data practices, etc.), but the central flaw in the current narrative is the expectation that each social platform can, and will, independently build enough tools to keep children safe. Parents, not Big Tech, ultimately shoulder this responsibility; they just need the access that empowers them to do so.

Big Tech’s No. 1 focus will never be child safety

It’s no secret that the primary duty of a major social platform is to generate shareholder value by serving ads to its users and optimizing user counts and engagement metrics. Platforms are inherently discouraged from spending significant energy on child safety features that would reduce usage by one of the larger groups on their platform. Updates made in the name of child safety are usually of the “check the box” variety, as social platforms are not nonprofits dedicated to providing safe online spaces for children.

Some platforms offer versions of their apps designed specifically for young children (that is, users to whom they cannot yet serve ads under the Children’s Online Privacy Protection Act, a law protecting the privacy of children under 13), such as Facebook Messenger Kids and the rumored upcoming Instagram for Kids. Setting aside the question of whether we want to encourage more social media use among younger kids, these types of apps generally work quite well for children ages 6-10.

But the kids’ versions are just not appealing for a tween or teen to use: they are limited in functionality and awkwardly tied to their parents’ accounts on the platform. To graduate to the adult version of an app, all a child has to do is fake their date of birth. With no age verification, they can then explore the entire app and chat with strangers.

Alternatively, some platforms provide “parental controls” for children using the adult versions of their apps, but in most cases kids can simply turn them off at any time without their parents’ permission. And even when enabled, parental controls are inherently blunt instruments, often severely limiting the usefulness of the app. While disabling specific features can certainly be helpful when a (usually younger) child is only consuming content, it has little impact on the bigger challenges kids face once they’ve moved on to using platforms as tools for communicating with others.

The dangers children face online are real

At Bark, the online safety company I founded, we’ve learned firsthand that when kids are online, the chances of them encountering an issue are extremely high. In Bark’s 2020 annual report, we analyzed more than 2.1 billion messages across SMS, email, and more than 30 social media and messaging platforms, and found:

  • 41.4% of tweens and 66.6% of teens were involved in a self-harm or suicidal situation.
  • 70.9% of tweens and 87.9% of teens encountered nudity or sexual content.
  • 76.7% of tweens and 82.0% of teens experienced bullying as a bully, victim, or witness.

And platform-level parental controls, even if implemented well, would have helped in almost none of these situations. Parental controls just don’t help when a child is being bullied, pressured into sending nude photos of themselves, expressing suicidal thoughts, or receiving unwanted pornography via DM or text message.

Granted, on a small handful of platforms, a parent could use the built-in parental controls to completely disable DMs, but that has the side effect of making the app largely useless for older kids, and it simply moves the problematic communication to another platform with less restrictive controls. More importantly, this naive “block or allow” approach builds a wall between child and parent; there is no feedback loop through which the parent can offer support.

Put families in charge of their own data

Rates of teen and preteen suicide, depression, online bullying, and predation have all increased dramatically over the past decade, yet parents suffer from a huge lack of awareness when these issues arise. It’s too great a burden to expect our children to always know when and how to ask for help (statistically, they don’t), and history has shown that expecting platforms to independently and proactively implement sufficient tools for parents and children is not a viable option.

So what do we really need? While I certainly encourage platforms to keep working to deliver better tools, the first thing we need from platforms is neither controversial nor difficult to deliver: enabling users to truly own and access their data. Once again, the biggest challenges our children face (the rise in teen and preteen suicide, mental health issues, online predators, and bullying) are all buried in the content of their messages.

Without the ability to move data out of a platform’s walled garden, caregivers remain unaware of both the major issues and the teachable moments they need to guide their children toward responsible use of technology.

Just as we can choose to send our data to other services (for example, your biometric data to Fitbit or your financial data to Mint), families need access to their data so they can use specialized online safety tools and become aware of the situations their children face online.

While “data portability” doesn’t sound as powerful as “parental controls,” it is the key to helping parents truly be parents in the digital age. Without it, a parent’s only two options are A) not letting their kids use technology at all (i.e., the “don’t ride that bike, you might fall” approach), or B) giving their children unrestricted access to a host of dangers without guidance.

Ultimately, the solution isn’t to demand more “parental controls” from tech companies that don’t have an incentive to provide them in the first place. It is time for them to give us the means to fully protect our children.

Brian Bason is a father of two and founder and CEO of Bark Technologies, Inc., an online safety company that helps protect more than 5 million children in the United States.

