The “control paradigm” around child online safety, which researchers on three continents have been calling out for at least five years, is in full bloom now. The term comes from a 2019 book by four Australian scholars about how control and surveillance have come to define digital safety, inclusion and citizenship. (Does that give you pause – especially around the citizenship piece?)
Among the signs, “bills filed [or already passed] in at least nine states and at the federal level.” According to an Education Week report, the bills “generally have three primary goals: compel social media companies to verify [all] users’ ages; bar social media companies from using algorithms to recommend content to young users; and restrict minors from using social media either through age requirements, parental permission requirements, or curfews and time limits.”
Lawmaking that misinforms
The latest example is an alarmist opinion piece by US senators in the Washington Post promoting the proposed Protecting Kids on Social Media Act, which would, among other things, ban anyone under 18 from social media without parental consent and require all users, including adults, to be age-verified.
One significant problem with their statement is that it's misinformation. The senators equate social media use with screen time and, as researcher David Stein puts it, "portray social media as the primary reason why kids spend inordinate amounts of time in front of screens," when, actually, "social media are a negligible contributor to screen time among preteens (18 minutes [out of 4.4 hours] a day) and only a minor contributor among teens (1.5 [out of 8.4] hours). Teens and especially preteens spend the vast majority of their screen time watching TV [e.g., Netflix] or videos and playing games…. The senators never even mention YouTube or TikTok or Netflix," Stein continues. "Instead they excoriate Instagram, Facebook, and Snapchat." So how protective is this legislation, senators? People, including children, don't even need accounts to view video on TikTok or YouTube.
The child, the context, the activity
Yet even back when we were worried only about viewing on screens, scholars found that “the relationship is always between a kind of television and a kind of child in a kind of situation,” Prof. Sonia Livingstone quotes scholars of the time (1961) as writing (emphasis theirs). “To paraphrase Schramm et al., no informed person can say simply that screen time is bad or good for a person,” she writes. About today’s media, we’ve known for a long time that a child’s psychosocial makeup and home and school environments are better predictors of online risk than any tech or media they use.
Certainly it's important to understand how our children's media use is affecting them, but we need to ask ourselves what the best ways to attain that understanding are, based on each child's needs. In most cases, it's likely through communication with them, not monitoring and controlling them, both of which erode parent-child trust and a child's agency and developing media, digital and social literacy. We now have research from many scholars in multiple countries that points to the risks of blanket parental control and constant monitoring – even of age verification, which risks the privacy and identity security of people of all ages – as well as to the importance of parents prioritizing media mentoring and modeling and setting household rules that foster wellbeing for everybody in the family. A research group in Australia even created a "living lab" in which parents and kids could learn how to nurture online safety together, finding that young people's tech skills are "a resource for parents who want to enhance their own digital literacy to support their children's online safety."
As for scholars who have cautioned against control paradigm approaches, besides the authors of Youth in Digital Society: Control Shift, they include Prof. Nathan Fisk, author of the groundbreaking book Framing Internet Safety (2016); Profs. Michael Adorjan and Rosemary Ricciardelli in Canada, authors of Cyber-risk and Youth Digital Citizenship, Privacy and Surveillance (2018); and Prof. Andy Phippen and research psychologist Maggie Brennan in the UK and Ireland, respectively, authors of Child Protection and Safeguarding Technologies: Appropriate or Excessive ‘Solutions’ to Social Problems? (2020).
Why control is a problem
One-size-fits-all laws are problematic out of the gate because, from the US's youth online risk research, we've known for 15 years that kids and teens are not all equally at risk, that the kids most at risk online are those most at risk offline, and that the most common online risk kids face is social-emotional, in the form of online hate, harassment and bullying. Online harm is individual and contextual, the context being largely offline.
So let’s look at bullying, for example – and what mitigating it has to do with control. Insights surfaced by two experts over a decade ago have (obviously) stuck with me ever since. Prof. Ian Rivers in Scotland found that the single most significant predictor of suicide risk among [bullying and cyberbullying] bystanders is powerlessness – “learned helplessness” – the opposite of agency, which is the capacity to take action and protect peers. “Rivers and colleagues also found higher rates of absenteeism and substance abuse, along with depression and anxiety among school pupils who had witnessed bullying,” the US Department of Education reported. If we teach children that adult control is how kids stay safe, we are reinforcing helplessness and therefore anxiety and depression.
In a podcast around that time, educator and author Rosalind Wiseman talked about how parents can help their kids when the latter have been targeted by bullying. They might say something like, "'I'm so sorry this happened to you – thank you so much for coming and telling me'," she said, "because your kid is taking a risk to tell you about this. Most of the time they think that going to an adult will make it worse [which is why research shows only 10% of teens report cyberbullying to their parents]. Then you say, 'and together we're going to work on this – we are going to think through how we can do this so you can feel that you've got some control over a situation where your control has been taken away from you.'"
Bad trajectory, good trajectory
So with these laws, we are not on a good trajectory. They could contribute to the youth mental health crisis because they’re all about control and surveillance. There are at least three things wrong with that:
- It depresses kids (by teaching learned helplessness)
- It disempowers kids (by reserving power for the adults)
- It teaches kids that control and surveillance (rather than communication, social-emotional skills and looking out for each other) are what keep people safe.
What we need are laws that fund school psychologists and social workers, media literacy education and social-emotional learning. Because this trajectory, carrying the message, "don't even try to make things better, you don't know how; we do," leads to either helplessness or defiance. Few parents would welcome either outcome, though defiance might actually be better for kids' mental health than helplessness. Working with kids respectfully, supporting their agency, confidence and skilled participation, is best of all.
- Added later: From TechPolicy.Press, May 22: "144 State Bills Aim to Secure Child Online Safety As Congress Flounders," and, zooming in on the broadest such law yet, "Louisiana Passes Bill That Would Require Parental Consent for Kids' Online Accounts" in the New York Times (June 8), which reports that the just-passed but not-yet-signed law "would prohibit online services — including social networks, multiplayer games and video-sharing apps — from allowing people under 18 to sign up for accounts without parental consent. It would also allow Louisiana parents to cancel the terms-of-service contracts that their children signed for existing accounts on popular services like TikTok, Instagram, YouTube, Fortnite and Roblox."
- Senators, please read this: "The Rise and Fall of Screen Time," by psychology professor Sonia Livingstone at the London School of Economics, really puts to rest the idea that time on a screen can tell us anything useful about a child's wellbeing now or in the future. It touches on the trouble the concept has brought families, the history of the concept, the agendas and misconceptions around it, and the so-called scholarship that infers causation from correlation, "neglecting to examine a range of more likely causes of childhood ills, and over-interpreting descriptive statistics."
- Screen time limits don't actually work. A March 2023 study looked at the time-limiting features on phones and in apps such as Instagram, TikTok and YouTube. The researchers found that, "rather than leading consumers to spend less time on an activity … setting a time limit can have the opposite effect. This occurs because consumers implicitly treat time limits like budgets, perceiving time up to the limit as earmarked for the activity and facilitating such spending…. Consequently, setting a time limit (vs. not) can increase time spent."
- About a large-sample study that turned up two distinct "classes" of adolescents – "family-engaged," representing almost two-thirds, and "at-risk" – and how parents in each group approached technology in their homes
- Dr. Sanjay Gupta, a dad of teens and CNN's chief medical correspondent, on parenting in digital times
- A perfect example of the kind of (state) legislation I’m talking about above.
- The power of dignity: Consider grounding bullying and cyberbullying prevention in dignity – the foundation of universal human rights, of course including children’s rights
- Researchers on assertions that the smartphone may be “destroying” younger generations
- About balancing external with internal safety “tools”