US kids and parents need a toll-free number to call or text for help in getting harmful online content taken down. After studying the kinds of help that youth and parents in Europe, the UK, Australia and New Zealand already have, we at The Net Safety Collaborative piloted a proof of concept for it – with independent evaluation – in the last decade.
Now, with so many state and federal laws aimed at protecting children passed or in the pipeline, it’s way past time our policymakers took up this missing piece in the child online protection toolbox and required it by law. It would help all parties to the harmful but legal content challenge: kids, parents and schools, as well as platform content moderators.
What US kids need and deserve
My friend and colleague, Prof. Sameer Hinduja of the Cyberbullying Research Center, just published two must-read blog posts about all the legislation, the first one looking at the problems with what has been proposed or passed so far and the second offering six components of a comprehensive law that “would have the greatest positive impact” for young Internet users.
I agree that all six components are critically needed, and I respectfully suggest a 7th crucial one – or maybe a second piece to his 5th element that calls for the establishment of “an industry-wide, time-bound response rate when formal victimization reports (with proper and complete documentation and digital evidence) are made to a platform.”
The second part would help platforms attain that response rate: an independent source of help that supports platform content moderators by confirming harm reported by kids and caregivers, thereby reducing guesswork and “false positives” in the platform’s abuse-reporting system. False positives plague platforms because they’re not actionable – e.g., reports about content that doesn’t appear to violate platform rules or “community standards,” reports that are mis-filed, or reports that basically abuse the abuse system (users reporting someone they’re trying to get in trouble or kicked off the platform). In short, a helpline can provide the “real world” context that platform content moderators can never have – and without it, they usually can’t see how traumatizing particular content is for a child.
Real life context
By talking with the child, their parent or a teacher, a helpline gets that offline context and can confirm for the platform that the content is harmful. It just needs to be a “trusted flagger,” or trusted partner, of the platform – an external, statutory part of the overall abuse-reporting system – in order to be of real help to content moderators as well as kids. Help goes both ways when a helpline is also a trusted flagger.
In Europe, “trusted flaggers” are now codified in law, in the Digital Services Act that just went into effect for the world’s largest platforms. Researchers in Australia and Switzerland recently looked into whether trusted flaggers can help and found that they “can indeed reduce the spread of harmful content.”
Research grounding
There is US-based research confirming that this kind of help is needed. Researchers at the University of New Hampshire studied the two main options Americans have for Internet “help-seeking”: abuse reporting and the police. Looking at “11 different types of technology-facilitated abuse,” they found “very low rates of reporting” (7.3% and 4.8% to platforms and police, respectively). They also found that “only 42.2% said the website did something helpful” and only 29.8% found police helpful. The authors clearly state that better help is needed in both cases. But as you can see, I’m suggesting another independent third party that would fill gaps neither apps nor cops can truly fill. Police can’t really help with content that’s “awful but lawful,” platforms lack that offline context – for example, what’s happening in a peer group at school – and a helpline is all about meeting those needs.
This is not to say that there aren’t already trusted flaggers in the US – nonprofit helper organizations and hotlines that have relationships with certain platforms. It’s just that there’s no transparency around who they are, whom they help and which platforms they work with. “Some flaggers are more equal than others,” as researchers put it in the Yale Journal of Law & Technology. A law that sets up a central source of help that all kids can find, requires support from all platforms and ensures transparency about their separate and joint work would fix that problem.
What the law might include
Ok, so to summarize: Ideally, a US helpline law would require a centralized go-to source of Internet help – call it an Internet Help Center – familiar to all US kids, parents and educators. It would include:
- A call/text center that both includes and refers out to expertise in child development, mental healthcare, children’s digital practices and interests, K-12 school culture and other key aspects of US kids’ everyday lives (ideally with young people serving as agents, interns or on-call advisers to adult agents)
- Emergency referrals to law enforcement (knowing when to call 911) and the National Center for Missing & Exploited Children’s CyberTipline, as well as referrals to other forms of support for children, including social services and specialized hotlines and helplines for vulnerable groups
- Assurance for users of either prompt action or a prompt explanation, based on information provided by platform moderators, of why their reports can’t be actioned
- An office that either qualifies other organizations as trusted flaggers, as do the “digital services coordinators” established in Europe’s DSA, or publishes a list of Internet industry trusted flaggers and tracks industry compliance with this law
- An office that coordinates relationships with platform contacts or content moderation managers and maintains a continuously updated, confidential list of those contacts
- Assurance that all vulnerable groups are served by the trusted flaggers on that list
- A requirement that platforms include in their transparency reports the number of reports received from the Internet Help Center and all trusted flaggers, and the percentage of them actioned (a rough sketch of what that data could look like follows this list)
- A requirement that platforms algorithmically promote the Internet Help Center’s and specialized helplines’ contact information (as platforms have long done with the Suicide Prevention Lifeline)
- A requirement of deep knowledge, on the help center’s side, of platforms’ community standards, rules and terms of service, to ensure that only violating content is escalated to platforms.
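Just to make the data side of this a bit more concrete, here is a minimal sketch in Python of what a helpline’s escalation to a platform and the matching transparency-report figures could look like. Everything in it is hypothetical – the field names, structure and numbers are mine for illustration only, not drawn from any platform’s actual reporting API, the DSA or Prof. Hinduja’s proposal.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only – not any platform's real API or data format.

@dataclass
class FlaggerEscalation:
    """One report escalated to a platform by a trusted-flagger helpline."""
    report_id: str
    platform: str            # e.g. "ExampleApp" (hypothetical)
    content_url: str         # link to the reported content
    rule_cited: str          # the community standard the content appears to violate
    offline_context: str     # what the helpline learned from the child, caregiver or school
    evidence: list[str] = field(default_factory=list)  # screenshots, message logs, etc.

@dataclass
class TransparencySummary:
    """The kind of aggregate figures a platform transparency report could disclose."""
    flagger_name: str
    reports_received: int
    reports_actioned: int

    @property
    def actioned_rate(self) -> float:
        """Percentage of this flagger's reports that led to action."""
        if self.reports_received == 0:
            return 0.0
        return 100 * self.reports_actioned / self.reports_received

# Example with made-up numbers: 240 helpline escalations in a quarter, 214 actioned.
summary = TransparencySummary("Internet Help Center", 240, 214)
print(f"{summary.flagger_name}: {summary.actioned_rate:.1f}% of reports actioned")
```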
The above may not fully solve the sheer scale problem that content moderation represents – probably nothing will – but research shows it will help young Internet users in this country more than they’re being helped right now. Algorithmic moderation is getting better and better at preemptively easing the demands on post-facto content moderation so that maybe, just maybe, more human moderators can be devoted to working with human helpline agents on content that is legal but traumatizing to kids. Let’s make this happen for our kids. It’s time, don’t you think?
Related links
- “On Trusted Flaggers” in the Yale Journal of Law & Technology
- “Trusted Flagger Programmes: Guidelines and Best Practice” from the UK Council for Internet Safety – includes principles and expectations for platforms as well as trusted flaggers (much of the guidance our helpline pilot used is represented in this 2-page document)
- The “trusted flaggers” part of Europe’s Digital Services Act
- “Who’s Afraid of the DSA?” – details in Tech Policy Press about which companies must be in compliance now and how
- Human help needed: “Research has shown that tools based on artificial intelligence struggle to detect online harmful content. Authors of such content [such as teen-age harassers] are aware of the detection tools, and adapt their language to avoid detection,” researchers report.
- The Crimes Against Children Research Center on why help is needed beyond reporting to platforms and police
- Lessons we learned from piloting a social media helpline for US schools