In a must-read article for anyone interested in safety and free speech online, some of the social media industry’s most seasoned content moderators – the apps’ and sites’ safety managers and free speech decisionmakers – went public for the first time.
“Their stories reveal how the boundaries of free speech were drawn during a period of explosive growth for a high-stakes public domain, one that did not exist for most of human history,” write Catherine Buni and Soraya Chemaly, the authors of the piece at TheVerge.com.
Content moderators’ stories – and this article – reveal a lot more, including how hard it is for human beings, much less the software that supports their work, to decide for users representing all ages, languages, cultures and countries…
- What “safe” means for a community that includes children and many other protected classes
- What “free speech” means online and whether it’s different there
- What’s “newsworthy” – what violent or graphic content should be allowed to stay visible because its viewing could change the course of history
- When content has crossed the line from artistic to harmful
You’ll see how all this works behind the apps and services billions of us use – the sheer scale of the work and the toll it can take on the mental health of the people doing it. And that last point is so important.
“YouTube’s billion-plus users upload 400 hours of video every minute. Every hour, Instagram users generate 146 million ‘likes’ and Twitter users send 21 million tweets,” Buni and Chemaly write. “The moderators of these platforms — perched uneasily at the intersection of corporate profits, social responsibility, and human rights — have a powerful impact on free speech, government dissent, the shaping of social norms, user safety, and the meaning of privacy.”
Many moving parts
It’s important, I strongly feel, for us individual users to understand not only how unprecedented this work, these decisions, and their impacts are, but also how essential it is not to look at any single aspect of this picture – safety or speech rights or newsworthiness – in isolation. They’re all vitally important to all of us, and what we do about each has bearing on the rest of this strange, unfamiliar, constantly changing phenomenon that we all – from users to moderators to policy makers – are part of. What we do in the name of protection rights affects participation and expression rights. We can’t forget that when we’re setting rules or writing laws. Everything from wily workarounds to serious harm can follow from the unintended consequences of policy making that fails to factor in research and this whole picture.
Work in progress
Content moderation, or community management, is also a work in progress – a global one. It’s “not a cohesive system, but a wild range of evolving practices spun up as needed, subject to different laws in different countries, and often woefully inadequate for the task at hand,” Buni and Chemaly write. And approaches vary just as widely, from 4chan’s to Facebook’s to a month-old messaging app’s (do read the article for those stories).
So very human
The biggest takeaway of all, though, is the human factor. The real contribution of “The Secret Rules of the Internet” is that it sheds light on the very human work going on behind what we too often think of as technology.
Related links
- “The global free speech experiment for participants of all ages” (May 2013)
- “Flawed early laws of our new media environment” (Dec. 2013)
- “Proposed ‘rightful’ framework for Internet safety” (July 2014)
- “Digital citizenship’s missing piece” (Sept. 2015)
- “Counter speech: New online safety tools with huge potential” (Dec. 2015)
- “Our humanity, not our tech, is the key to fixing online hate” (Feb. 2016)