It was the 2024 version of public shaming – relentless on-camera questioning designed to send a message rather than hear answers. It seemed the lawmakers already had their answers. I get their frustration that…
- social media platforms can’t just “install” the digital version of car seats and seat belts or create product labeling like that on a cigarette package
- these companies and their products are different from any company regulated before – not media companies, not just tech or telecom companies, but some sort of hybrid that, at least in the case of those with billions of users, is part of a new global social institution as well as a corporation designed to benefit shareholders and investors
- the lawmakers themselves haven’t been able to pass consumer protection laws that address all of the above (though they probably will pass some soon) – laws that satisfy constituents who are parents who have lost children, without harming children who don’t have engaged parents.
But it was a relief, after watching that nearly 4-hour Senate Judiciary Committee hearing and the ensuing news coverage, to see some thoughtful responses from people who have been studying these issues for years. For example:
- The scholarly but accessible paper “Techno-legal Solutionism: Regulating Children’s Online Safety in the United States” about, among other things, the problems with the proposed Kids Online Safety Act (KOSA) – by researchers María P. Angel and danah boyd.
- A commentary in Tech Policy Press from industry online safety policy expert and author Alicia Blum-Ross about what the senators and bills such as KOSA are still missing: clarity on what they mean by terms like “reasonable measures” (for child online safety), an understanding of how content moderation and safety policy development actually work and, I would add, overarching legislation for the whole ecosystem of “consumer protection,” or user care (see below for more).
Why? Because, as Blum-Ross writes, each of the apps or “platforms” represented in the hearing is different in how risk and safety work on it; definitions and practices need to be standardized across all companies and government entities; AI is essential for detecting harmful content across hundreds of languages in nearly all countries; and humans are just as essential for, say, distinguishing between flirting and grooming in a phrase like “you’re so beautiful,” or between an inside joke and a cruel put-down in the same words.
There is hope
She also asks the vital question of just how effective it is for state or federal laws to focus heavily on parental controls.
“Even in a best-case scenario of caring families and easy-to-use tools, decades of research has shown that parental controls alone are ineffective at helping children navigate online, and are a shaky sole foundation for a legislative or parenting strategy,” writes Blum-Ross, a parent herself, citing the research of psychologists at the London School of Economics.
She adds a hopeful note, though: “Spurred by regulation and public pressure, but also by tech workers ourselves, especially given hiring of more child safety experts and a workforce newly old enough to have teenagers, the last few years have seen more innovation for teen safety than the whole of the decade before.” So keep the pressure on, lawmakers, but please end the theatrics that cancel people and shut down honest discussion, and please acknowledge colleagues who have taken in the complexity of regulating corporations that host data representing the speech, behavior and everyday lives of you and your constituents, as well as billions of other people around the world.
Young people’s expertise
Blum-Ross also argues for bringing in teens’ own views on how to design for their safety and wellbeing. “Young people are highly motivated to have their say about online safety, and are crucial for holding us all (tech workers and policymakers) to account as we design standards that put in place better protections without sacrificing access to tools that are deeply valued.”
I agree. How could anyone honestly not? But to understand what teens tell them, lawmakers first need to get unstuck. They need to get past the tech-solutionist “belief that technological changes alone can solve for digital well-being,” as researchers at Data & Society put it.
The lawmakers are certainly right that platforms need to keep doing more, but do they get that social media platforms will never be able to do enough – any more than consumer product manufacturers can, and probably less, because in social media people are both product and producer? There could never be enough content moderators or perfect algorithms to catch and address all harm happening in real time, at scale, in every language and country. One major reason is that platforms don’t have offline context. Taking kids as an example, the real context of what’s happening online between two schoolmates is not the app or platform; it’s kids’ everyday lives – what’s happening at home and at school. The conditions affecting that content are contextual to those offline environments, individual to the child and situational in time.
What legislation would help?
I suggest three things:
- Third-party “consumer protection”: You could call them “trusted flaggers,” the term used in Europe and coined by Google in the last decade. It’s a formalized, standardized system of Internet user care that fills the gaps neither industry nor government can fill – one that Internet users in many countries already have, but we don’t, and one US lawmakers never seem to have considered. Trusted flaggers have a whole article devoted to them in Europe’s Digital Services Act. Though they’re not called this, we do have some “trusted flaggers” in this country – organizations that help Internet users by expertly flagging harmful content to platforms – but there is no standardization, no transparency from industry about its work with them and no legal requirement that platforms work with them at all, much less how. I provide more details here and here.
- Privacy law. We need this yesterday for many reasons, not least to ensure that everybody’s data is protected in all state and federal efforts to protect children and others at higher risk.
- An overarching online safety law, rather than a random collection of laws addressing different parts of the risk/safety ecosystem. Online safety is many things: safety from child sexual exploitation, hate speech and harassment, cyberbullying, identity theft, fraud, theft of digital, intellectual and financial property, etc. All this needs coordination if government is to address it effectively. The law could include something like an idea touched on several times in Wednesday’s Senate hearing: the Digital Consumer Protection Commission proposed by Sen. Elizabeth Warren and co-sponsored with her colleague across the aisle, Sen. Lindsey Graham. I say “something like” because the proposal doesn’t fully address safety; it’s more about regulating companies than protecting “consumers.”
Before any such entity is established, it would need to ask and address these questions: What legislative blind spots are there? What do at-risk or harmed users really need to stay safe – in the case of children, from their own perspective as well as the views of parents, researchers, educators, content moderators (not just CEOs) and lawmakers? How can we make content moderation serve kids and other protected classes better? What are its limitations? What help can we offer users beyond what platforms and their systems can provide?
Could lawmakers consider looking at how Europe’s Digital Services Act might be effectively adapted to US society? It addresses content moderation, transparency and user protection as well as fair competition. I didn’t think I’d ever advocate for a new federal agency but, after watching a few of these hearings, I’m about there.
What I could not support is a Commission that didn’t center – and frequently hear from – the protected classes it would serve. Or that didn’t actually center user care. My friend, researcher and author danah boyd says it better than I ever could: “In all of these discussions, we keep centering technology. Technology is the problem, technology should be the solution…. [But] if you care about children, CENTER THEM.”
A note added later: If policymakers are interested, here are two examples of how to center young people’s views and experience:
- “Mental Health & Digital Skills: 8 Findings by Young People” for their peers and policymakers (at home, at school and in government). Published last fall for World Mental Health Day, the resource explains teens’ “experiences of mental health online and offers messages that young people want young people and adults … clinicians, parents, teachers and policymakers to understand.” In addition to the teens’ findings, the resource links to “help and support available in English, Portuguese, Dutch, French, Norwegian, Finnish, Polish, German and Italian,” as described by two leading researchers in the ySKILLS consortium of 15 universities and European Schoolnet, which participated in the research and publication.
- The development of General Comment 25, which – after consultations with young people in 27 countries on six continents – brought the UN Convention on the Rights of the Child into the digital age.