What a year it has been for child online safety, right?! There was the adoption of General Comment 25, bringing all things digital into the global Convention on the Rights of the Child; the draft Online Safety Bill and Parliament’s response in the UK; the release of Australia’s eSafety Commissioner’s Safety by Design for the tech industry and investors; the Age Appropriate Design Code coming into force in the UK; Apple announcing, then pausing, “enhanced child protections”; seemingly innumerable US Senate hearings on safety and harm in social media; Facebook becoming just one of the products tucked under an umbrella named “Meta”; and of course there were the whistleblowers, Sophie Zhang and later Frances Haugen, plus a journalist sounding a lot like one, Quintin Smith of the “People Make Games” YouTube channel, shining a light on problems at Roblox (of course, games and gaming platforms are social media too).
Whistleblowers play an important role. This year’s and, in Zhang’s case, last year’s have shed light on problems that are, at best, shadowy to most people: problems at the platforms the whistleblowers know. It’s an in-depth, platform-by-platform approach. Whistleblowing can be helpful to parents whose children spend most of their time on a single media property. But policymakers and others looking to increase social media safety for all users need to look at the forest, not just individual trees, no matter how big the trees are.
Systems thinking is needed now. Because we have a systems problem on our hands. Even Meta, a global company with multiple platforms and billions of users, is a system within a larger system. So is social media. It’s a global ecosystem whose problems and vulnerabilities need systems thinking to be solved – and it too is a system among other systems: governments, institutions, traditional media, etc.
A whistleblower can put forth solutions and act as a catalyst for bringing stakeholders together, but has only a limited perspective. A policymaker can certainly spotlight constituents’ problems, blame social media and maybe even help pass legislation, but policymakers are only one stakeholder group. Systems problem-solving needs big-picture thinking by as many stakeholders, perspectives and forms of expertise as possible.
So let’s cut to the chase. What does a systems-thinking lens show us needs to happen in the new year for safer social media for all?
Systems thinking-type solutions for 2022
- Cross-industry standards: In his opening statement before a US Senate subcommittee this month, Instagram head Adam Mosseri called for “an industry body” that would set standards for keeping minors safe. He said that the standards set by this body “need to be high and the protections universal,” and that “companies like ours should have to earn their Section 230 protections by adhering to these standards.” This is systems thinking: global, cross-industry and cross-sector (input from stakeholders in civil society and government). Probably quite obviously, I feel this needs to happen in 2022, and any work that lawmakers do on Section 230 reform would, I hope, give careful consideration to this linkage between compliance with standards and the statute’s protections. Sen. Richard Blumenthal appeared to dismiss this proposal as industry “self-regulation,” but it would not be if protections against liability were tied to compliance.
- Cross-industry and -sector civic integrity easing the burden on content moderation. A brilliant move this year was the establishment of the Integrity Institute by Jeff Allen and Sahar Massachi, two former Facebook integrity workers and data scientists, probably former colleagues of Frances Haugen. “For too long, integrity work has been a public service trapped within private entities. We’ve been identifying ways to build better cities and fighting to get them implemented, but if our proposals cut against other company goals, they might not see the light of day,” writes Massachi in MIT Technology Review. “If social media companies are the new cities, we are the new city planners,” Allen and Massachi write in their Founders’ Letter. “We build the speed bumps, plan the sewage systems, and even design the physics of the city, so that everyone stays safe and the platforms don’t just rely on manual intervention as the main line of defense.” I hope one of their speed bumps will help companies slow down and weigh good and bad consequences of recommender systems, or algorithmic recommendation, for the wellbeing of users and societies. For example, defining mis/disinformation and ensuring that recommender systems don’t recommend it would also ease the burden on content moderation! This is the year to take a closer look at algorithmic recommendation and safety.
- Cross-industry and -sector work on content moderation: When harmful behavior and transactions don’t violate a platform’s Community Standards or Terms of Service, the content moderators behind that platform (including the many who are contract workers in distant countries) can’t act on that content. So as important as content moderation is, it’s not enough to keep kids (or any user) completely safe. But even if moderators’ bosses allowed them to take harmful content down, there’s another problem: offline context, which content moderators almost never have. This is where help for users external to platforms – and to the industry as a whole – comes in (see the next bullet). Cross-industry, platforms need to work together with important new bodies such as the Trust & Safety Professional Association to provide ongoing professional development and mental healthcare for content moderators themselves, as well as standards for user care on the platforms. As for cross-sector work, content moderation needs the understanding and innovative thinking of researchers and systems thinkers to improve user care by platforms – for example, legal scholar Evelyn Douek’s proposals for both procedural and structural “fixes” here, including industry standards and a diversity of approaches that honors platforms’ diversity.
- Industry-supported independent user care. “Independent” is the operative word. The user care people behind the platforms, such as content moderators, can’t act on most of the abuse reports that come in, because the reported content either doesn’t violate the platform’s rules or the moderators lack the offline context to tell it’s harmful and so couldn’t act even if they were allowed to take it down (most online harm is psychosocial). They need people on the ground, sometimes called “trusted flaggers,” who can confirm the content is harmful. Vulnerable users, in turn, need support for the online part of the harm they’re experiencing, e.g., harassment or cyberbullying (because deleting the content can sometimes help, if not solve, relational problems). These are the “middle layer” services that provide context to workers in the cloud and support for users on the ground. The term often used for them is “Internet helplines.” Ideally, they’re familiar with platforms’ Community Guidelines (Terms of Service or rules) and escalate cases for removal, cutting through contextual confusion for moderators; they either include professional mental healthcare expertise or refer vulnerable users to professionals; and they understand both child and adolescent development and how young people use the Internet. There are many examples, including single-country Internet helplines throughout Europe, Netsafe in New Zealand and the eSafety Commissioner’s office in Australia. There should be one for users in every country. The latest and most cross-platform example is what Meta is spearheading in India: a new helpline that works in Hindi and 11 other Indian languages to get nude and sexually explicit non-consensually shared images taken down. It’s operating in partnership with the UK’s Revenge Porn Helpline and, remarkably, other platforms, Indian Express reports. A great development would be a global network of Internet helplines that establishes standards and provides training for new member helplines.
- Greater youth participation: This is a stakeholder group that has a 32-year-old global human rights convention behind it and, by that mandate, must be in the mix. Children’s rights, or the UN Convention on the Rights of the Child (CRC), is the logical framework for balancing and maximizing child safety and child participation. We’re seeing more and more consultation with youth – for example, Western Sydney University’s with young people in 27 countries for General Comment 25 and the European Commission’s October 2021 #DigitalDecade4YOUth report from 71 consultations with 750 kids and teens – as well as acknowledgment of the need for youth voice in forums about youth (e.g., this year’s global forum on AI for Children). So 2022 is the year we in the US, where many of the social media and metaverse companies are headquartered, need to get serious about joining the rest of the world in ratifying the CRC. We’re the only country on the planet that hasn’t. But we do have examples of youth consultation in various fields: youth advisers to the investors at Telosity, youth advisers to Harvard University’s Berkman Klein Center, youth advisers to the Media & Mental Health Initiative and #GoodforMEdia.org at Stanford University’s Psychiatry Dept, youth co-researchers with Western Sydney University’s Young & Resilient Research Center and youth advisers to startups for youth via the Headstream accelerator, to name a few.
- Revisit unintended consequences: Make a study of unintended consequences at scale. Researchers are probably already doing this but, regardless, regulatory teams need researchers’ input. What I mean is, little of what’s being exposed now about Roblox, for example, was ever intended or even imaginable when it was founded, and neither the platform nor societies have fixes for these unintended consequences. At today’s level of success, with vast numbers of people of all ages using it, what was once a kind of commercial blend of MIT’s Scratch and Minecraft (before Microsoft acquired the latter) has a whole new set of conditions. Roblox is not unique in this way. Discord is another example – see this week’s story in the New York Times that barely hints at the multi-cultural harm mitigation challenges that have grown along with this social media property that now has more than 150 million active users each month, 80 percent of them outside North America. Maybe this unintended consequences work will be a major focus of the meta-level civic integrity work mentioned in No. 2 up there; I hope so.
- Apply offline laws wherever applicable online. Of course this has been happening for years, but now, in the early days of the metaverse and as successful platforms keep getting bigger, the scope needs to be as wide as possible. Societies need to look at how well this is being done now and convene policymakers, legal experts and researchers to review child labor laws, contract law, gambling, finance, royalties, copyright, false advertising, etc., to make sure they cover social media and the metaverse.
- Ecosystem education for users of all ages: Parents and policymakers, not only youth (from early childhood as appropriate), need education in how the internet came about, what’s happening to it now, what all of its moving parts are, the diversity of the parts, how children use them, children’s digital rights, what Internet governance is and might be, how machine learning algorithms work (like the kind that recommend content for good and ill) and the three literacies needed to navigate it all. A tall order, I know, but essential for informed systems thinking and collaboration.
Examples of people and organizations taking a systems approach:
- All Tech Is Human, a nonprofit organization bringing together people from many professions, disciplines, cultures and countries to help align technology with the public interest. ATIH is a partner in the HX Project (think HX, or human experience, instead of just UX for “user experience”). [Disclosure: I’m working with ATIH because I see it as a systems-thinking organization.]
- Rebooting Social Media, a “pop-up think tank” at Harvard University’s Berkman Klein Center “convening participants across industry, government, civil society and academia in focused, time-bound collaboration.”
- Existing cross-industry bodies, for example, the Technology Coalition for addressing child sexual exploitation online, the Global Internet Forum to Counter Terrorism, and C2PA (the Coalition for Content Provenance and Authenticity) for fighting misinformation and disinformation
A safe conversation
This is a weird time, and not just in terms of public health. It feels too late to be changing things up; yet it’s early days – and not just early days for the metaverse that both corporations and governments are talking about. Early days for making HX (“human experience”) as good as UX (“user experience”). We’re looking at unprecedented structural problems: publicly traded companies that look, act and have the impacts of global social institutions, not only corporations. We don’t have models for what to do next. Hierarchical, even autocratic, governance and peer-to-peer media make for odd bedfellows, and we’re talking about governance and a jurisdiction that cover every country and government on the planet. How can we not think in terms of systems?
Yet “we are not in a place where we can even have conversations yet around how to remedy a bunch of these problems,” Haugen told the New York Times’s Kara Swisher in a podcast. I propose that one reason we are not in that place yet is that we can’t seem to get unstuck from a dehumanizing discourse and from piecemeal, compartmentalized approaches to the problem. That’s partly because societies are defaulting to the adversarial, good guys/bad guys (us against “Big Tech”) approach we see in both the news media and congressional hearings. What we didn’t see in the whistleblower coverage late this year is what Haugen told Swisher in that podcast: “Facebook…is doing some of the most important work in the world, and it’s not fixed yet….
“People rarely change because you’ve villainized them,” Haugen added. “I invite anyone who wants to stop feeling angry to have another way forward, because I think we can accomplish a lot more.” I think she’s right about that.
So, besides diversity, the key ingredient for accomplishing a lot more? Humility. On the part of all stakeholders. All the participants in the room need to feel safe.
Here’s wishing you all a safe, happy 2022!
Related links
- “Tools for Systems Thinkers: The 6 Fundamental Concepts of Systems Thinking,” by Leyla Acaroglu in Medium.com. See also TheSystemsThinker.com, “What is Systems Thinking?” at Southern New Hampshire University and this entry in Wikipedia. [Apologies to scholars and professionals in systems thinking if the above is simplistic. The discussion certainly needs your perspective.]
- “Technology is also in a liminal phase where the promise of what might be coming next coexists with the complicated reality of what is happening now,” writes New York Times columnist Shira Ovide in “Tech Won. Now What?” I certainly agree. Do you? Tell me if so or not, here – or on LinkedIn, Medium or Twitter.
- In her column last week on Roblox, parent and “Pushing Buttons” columnist Keza MacDonald at The Guardian writes, “Kids and teens form and find communities there, in the same way I did on game forums in the early ‘00s. They explore their identities or learn about making games. I do not begrudge the kids their fun, and I’m not going to sit here and belittle the joy and meaning that they find in Roblox. It’s a strange venue for it, but it’s real.” And therein lies the complexity we’re dealing with. It seems all that’s good, all that’s bad and all that’s just neutral in everyday life is in our media environment now.
- First the Facebook whistleblowers, then Roblox in depth and now Discord: the New York Times zooms in this week on Discord’s struggles with keeping young users safe. You can bet no platform will be spared from the growing scrutiny.
- “A Former Facebook Executive Pushes to Open Social Media’s ‘Black Boxes’” in the New York Times – an illustration of what looks to be fruitful collaboration between policymakers and industry insiders
- To understand the history, “boom-time philosophy” and cultural ground out of which today’s “Big Tech” companies sprang, read Status Update: Celebrity, Publicity and Branding in the Social Media Age, by Alice Marwick, PhD, at the University of North Carolina.
- For legal scholars’ views on the intersection of social media and society, this video panel with Evelyn Douek (Columbia University and Harvard University), David Kaye (University of California, Irvine) and Eric Goldman (Santa Clara University) is a must-watch.
- Our own lessons from piloting a social media helpline in the US at SocialMediaHelpline.com
- A little personal history in the metaverse
Disclosure: I serve on the Trust & Safety advisories of Meta, Snapchat, Twitter, Yubo and YouTube, and the nonprofit organization I founded and run, The Net Safety Collaborative, has received funding from some of these companies.