In a must-read article for anyone interested in safety and free speech online, some of the social media industry’s most seasoned content moderators – the safety managers and free speech decisionmakers behind the apps and sites we use – go public for the first time.
“Their stories reveal how the boundaries of free speech were drawn during a period of explosive growth for a high-stakes public domain, one that did not exist for most of human history,” write Catherine Buni and Soraya Chemaly, the authors of the piece in TheVerge.com.
Content moderators’ stories – and this article – reveal a lot more, including how hard it is for human beings, much less the software that supports their work, to decide on behalf of users of all ages, languages, cultures and countries…
- What “safe” means for a community that includes children and many other protected classes
- What “free speech” means online and whether it’s different there
- What’s “newsworthy” – what violent or graphic content should be allowed to stay visible because its viewing could change the course of history
- When content crosses the line from artistic to harmful
You’ll see how all this works behind the apps and services billions of us use – the sheer scale of the work and the toll it can take on the mental health of the people doing it. That last point deserves special attention.