The youth market researchers at Ypulse just reported that 83% of millennial parents agreed with their survey statement that “social media platforms should do more to police and prevent cyberbullying.”
Their concerns are certainly justified, because cyberbullying is “awful but lawful” – not generally a kind of online harm that law enforcement can address. But is it worse than the offline version of this social aggression? Actually, no, according to a just-released study by seven researchers at four universities. The authors were looking at different levels of negative impact of online-only, in-person-only and online + offline bias-based victimization on US 13-21 year-olds – “negative impacts” including emotional distress, school-related problems and physical symptoms.
They focused on bias-based victimization – hate speech, harassment and bullying focused on race, gender identity, sexual orientation or disability – because of American young people’s high exposure. Half of 13-18 year-olds in one national study reported having experienced some form of it; 63% of the 13-21 year-olds in this new study reported being targeted by at least one type of bias-based victimization, and 38.7% two or more types.
Toxic mix
What they found was that online-only incidents occurred less frequently and affected victims less negatively than either in-person-only or blended (in-person plus online) victimization – and, not surprisingly, being victimized both online and offline caused the greatest distress.
That’s cold comfort for parents, certainly. But if we’re going to reduce this type of harm, we have to understand it. All of us – from platforms to parents to school staff to policymakers – need to know that, where youth are concerned, cyberbullying is rarely just online or disconnected from what’s going on at school. What we see online is only the most visible part – rarely the whole story. School life is the true context of what shows up online, which means apps and platforms are seriously handicapped in addressing it effectively. Their content moderation systems and staff don’t have the (offline) context needed to tell the difference between what’s actually harmful and what isn’t, and algorithms can tell even less than human moderators; they may never catch up with the ever-new terms and images for hate, humor, sarcasm, anger, exclusion or teasing among peers.
What kids and parents need
So what can be done about this? The solution – which has to be as global as the platforms and even more multicultural and multilingual than they are – has many moving parts, including research like this, content moderation and smart regulation (or “super-regulation”). But one piece of the solution is way behind and incredibly important. It can actually help kids and parents. It provides the offline context platforms need. It’s a global digital help network.
Here’s what this solution, a collective of trusted helper organizations around the world, looks like….
Individual helplines:
- Everyone in your country or region knows about the Internet helpline – trusts that it knows what it’s doing. It’s independent of industry but trusted by it and has a direct line to content moderators behind the apps and platforms used where you are.
- If its operators see that your problem isn’t just online and can’t be solved through content deletion alone, they refer you to other sources of help in your country – emergency or otherwise – tailored to the problem you’re contacting them about. Maybe that’s law enforcement, maybe it’s mental healthcare. Helplines primarily refer problems to the appropriate problem-solver, unless they have the professional expertise in house.
- In any case, your helpline is staffed by professionals – people, including youth (maybe volunteers), representative of your country or culture, who speak your language and are trained in active listening and other communication skills that make you feel cared for and can elicit the offline context that platforms need in order to act.
- Its agents understand how apps, games and Internet platforms work, are familiar with terms of service/community rules and know what content violates them, escalating only the cases that do – reducing the “false positives” (reports that aren’t actionable, for many reasons) that plague the platforms.
- Your helpline also has the support and guidance of…
A global helpline network that…
- Vets, coordinates, supports and trains the network’s member helplines in best practice approved by industry content moderation leaders, so that helplines and industry can cooperate without interruption for rapid response to the problems users report.
- Is seen by Internet companies as a trusted partner – a collaborative of “trusted flaggers” – and is supported financially by them.
- Ensures that all member helplines follow the best practices the network establishes and evolves through its work with industry and regulators – thereby maintaining the trust of both industry and users that makes this whole system work.
- Works with regulators to educate them and complement their part of the solution.
- Works with cross-industry bodies and NGOs addressing specific harms – such as the Technology Coalition and the Internet Watch Foundation, respectively, both of which address online child sexual abuse.
- Keeps up with changing technology and its potential negative impacts on vulnerable groups and communities and promotes safety by design and practice.
- Keeps up with research like the above, as it emerges, to ensure ongoing relevance and best practice.
I mentioned that this solution is way behind, but it’s not completely absent. Its development has been piecemeal, country by country, and now needs to be a conscious, collective effort. We don’t have an Internet helpline in the US. We have amazing specialized helplines – for suicide prevention, for specific real-life harms such as dating abuse and eating disorders, and for specific communities such as LGBTQ+ youth. We even have TakeItDown.org now. But we have nothing like the helpline I described above that would qualify to join a global network of help, trust and goodwill powerful enough to help the Internet industry help its users deal with awful but lawful content.
This fleshing out of the missing piece of full, effective user care is my own, but the original idea of a global network of Internet help was not. It was brought to me a decade ago by some very smart, caring Internet help leaders in two other countries. After they shared this idea of a global network, leaders in another country joined us in support. It seems we were ahead of our time. I don’t think we are anymore. We have the amazing WeProtect Global Alliance, the industry’s Tech Coalition for countering child sexual abuse material online, the Global Internet Forum to Counter Terrorism and now StopNCII.org and TakeItDown.org – all global, all doing great work to mitigate illegal harm. Now it’s time to address the lawful content that’s awful for our children.
Related links
- Sampler of existing helplines, all of different origins and sources of support: Europe’s helplines and hotlines (the former for “awful but lawful” content, the latter for illegal content), Britain’s Professionals Online Safety Helpline (for schools, law enforcement and social services) and Revenge Porn Hotline, Australia’s eSafety Commissioner’s Office and New Zealand’s Netsafe service. Here too is the European Commission’s page about the helpline/hotline networks.
- About a new network of regulators. This makes great sense, and if we – the Internet users of the world – can benefit from a global network of government regulators policing Internet companies, it’s logical that we’d also benefit from the more immediate, individually customized help of local helplines networked together.
- What about the Oversight Board? It’s a very different animal from a helpline. If you think of Internet user care as a prevention-intervention spectrum, where interventions run from the most immediate to the most delayed or gradual, the decisions of the Oversight Board sit at the “most delayed” end. The Board is basically a content moderation court of appeals, so its decisions come well after problems surface – after the original decisions on what to do about a problem. So this is not what we’re talking about above. Users need immediate help with harmful content, and that has never really been available. Abuse reporting to apps and platforms has never been particularly responsive, largely because of all the mistaken or simply non-actionable reports platforms receive and the lack of context for what’s being reported.
- Bigger picture: My 8-part prescription at the end of 2021 for the future of user care and where independent user care (networked helplines) comes in
- Lessons learned from piloting a helpline in the United States