This may not be the Internet safety look-back on 2018 you’d expect, what with all the news about data breaches, “fake news,” “tech addiction,” algorithmic bias, election manipulation, hate speech, etc., etc. Not a pretty picture.
But it’s also not the whole picture. By definition, the news reports airline crashes, not safe landings. And if bad news really was the rule rather than the exception in 2018, then the positive developments are the real news, right? So here are some developments worth noting from NetFamilyNews’s 2018 tech and media newsreel (with Part 2 next week on some interesting ideas and proposals for digital safety solutions):
- An important book on cyberbullying vs. dignity: In Protecting Children Online? (MIT Press, 2018), author and researcher Tijana Milosevic for the first time places the subject of cyberbullying where it belongs: in the framework (and slowly growing public discussion) of dignity. Why there and not in “Internet safety”? Because “dignity is what is violated when bullying and cyberbullying take place—when a child or a teen is ridiculed because of who or what they are,” Dr. Milosevic writes. “Dignity is sometimes conceptualized as the absence of humiliation,” and, though bullying can be private or one-on-one, cyberbullying, because it is media-based, takes the form of public humiliation almost by definition. Dignity is particularly effective as an antidote to social aggression because it removes the differentiations that fuel it: social comparison, social positioning and power imbalances.
“Dignity is an inalienable right, which, unlike respect, does not have to be deserved or earned,” according to Milosevic, citing the work of scholars and practitioners from the fields of political science, education, conflict resolution and clinical psychology. This cross-disciplinary thinking is a major step forward for Internet safety for the very reason that what happens online can’t be separated out from bullying, harassment and hate speech offline and is primarily about our humanity and sociality, rather than our technology. She also draws a direct link between dignity and digital rights in children’s lives, and I’ll come back to the rights part in one of my first 2019 posts.
- Real “screen time” clarity, finally: Screen time is not a thing. It’s many things, researchers tell us, which contrasts pretty significantly with lots of scary headlines and many parents’ harsh inner (parenting) critic. Here’s a headline actually drawn from academic research: “We’ve got the screen time debate all wrong. Let’s fix it.” As Wired reported, citing researchers at Oxford University, the University of California, Irvine, and the National Institutes of Health, “time spent playing Fortnite ≠ time spent socializing on Snapchat ≠ time spent responding to your colleague’s Slack messages.” See also “Why the very idea of screen time is muddled and misguided” and “The trouble with ‘screen time rules’” from researchers in the Parenting for a Digital Future blog – and I love just about everything Dr. Mimi Ito writes about parenting, including “How dropping screen time rules can fuel extraordinary learning” in the same blog.
- Safety innovation in social norms: A powerful tool for social-emotional safety and civility that humans have shaped for thousands of years, social norms are just beginning to be associated with safety in communities, from schools (see this from Prof. Sameer Hinduja) to online communities. And now this tool is being deployed by some social platforms for their users’ safety. I wrote about a couple of examples in the massively popular live video sector of social media here, but briefly, I mean giving users tools to set the tone, create a sense of belonging, establish norms, then resolve issues in their own communities online based on that work. It’s about platforms giving users more control, not ceding responsibility. Some platforms, such as giant Facebook and startup Yubo, are proactively deleting more harmful content than ever, rather than acting only in response to users’ reports. We can contribute to that trend’s momentum by reporting online abuse ourselves and encouraging our children to report content that disturbs or hurts them – showing them they’re part of the solution. We know they are not passive consumers online; they have agency and intelligence, and one way they can exercise their rights of participation is in protecting their own and their peers’ safety in the apps they use. Equipping them for this is part of social-emotional learning (SEL), another “tool” that made real headway in adoption by schools in many states this past year and is being discussed more and more in other countries as well. SEL teaches skills that support children’s empathy development, good social decision-making and recognition of their own and their peers’ dignity and perspectives (here are examples of this work in six states in a 2018 report from the Collaborative for Academic, Social and Emotional Learning, and much more on social norms and safety here).
- Unprecedented multi-perspective discussion – even in policymakers’ hearings. I wrote about one historic example – the first-ever formal House of Commons committee hearing held outside the UK – here. There was grandstanding, sure, but also truly substantive testimony from a rich range of views and expertise: those of scholars, news executives and reporters, as well as platform executives (note toward the end what CBS chief White House correspondent Major Garrett said about our children’s generation). I’m convinced we will not move the needle in making this new media environment truly work for us until we get all stakeholders at the table talking rationally and respectfully. Old-school shaming, fear-mongering and adversarial approaches will not serve us.
- An important new book on content moderation. The ability to get harmful online content deleted has long been the main focus of “online safety.” This was the year it became clear that content moderation is both less and more than a source of online safety – we need it, but we certainly shouldn’t rely on it completely. One person’s “free speech” is another’s harm. It’s highly contextual. “It is essential, constitutional, definitional,” writes Tarleton Gillespie in his important new book Custodians of the Internet. “Moderation is in many ways the commodity that platforms offer.” It defines a platform, our experience of it and even the nature of our media environment. And it defines even more: “We have handed over the power to set and enforce the boundaries of appropriate public speech to private companies,” writes Dr. Gillespie, a principal researcher at Microsoft Research New England, in the Georgetown Law Technology Review. And we’re talking about “appropriate public speech” in every society on the planet. These are not just platforms or Internet companies, they’re social institutions, a point made by scholar Claire Wardle in the parliamentary hearing I mentioned above and by journalist Anna Wiener in The New Yorker (see this). That fact calls for new, not just more, forms of risk mitigation and regulation – TBD in my next installment.
- Platforms discussing content moderation themselves – publicly. Another first was the rich, cross-sector discussion about this on both coasts this year. At two conferences called “CoMo at Scale,” one at Santa Clara University in California, the other in Washington (the latter all recorded here), social media platform executives gathered with scholars, user advocates and the news media and discussed their content moderation tools and operations publicly for the first time. Techdirt reported that “one of the great things about attending these events is that it demonstrated how each internet platform is experimenting in very different ways on how to tackle these problems. Google and Facebook are trying to throw a combination of lots and lots of people plus artificial intelligence at the problem. Wikipedia and Reddit are trying to leverage their own communities to deal with these issues. Smaller platforms are taking different approaches [such as nearly real-time intervention when users disregard the rules]. Some are much more proactive, others are reactive. And out of all that experimentation, even if mistakes are being made, we’re finally starting to get some ideas on things that work for this community or that community….”
- Platforms’ improved transparency. There’s a long way to go, but they’re investing in it. This year they put out increasingly granular numbers on what content is coming down. That’s partly due to laws like Germany’s just-enacted anti-online hate law NetzDG (though that too is not all good news, according to The Atlantic). What’s different is that Facebook now includes numbers on proactive deletions vs. reactive ones, and Twitter includes deletions made in response to users’ requests, not just governments’. Also for the first time this year, Facebook included data on bullying and harassment violations, saying that in the third quarter (the first time it provided numbers for this category), it took down 2.1 million pieces of such content, 85.1% of it reported by users – demonstrating the importance of users making use of abuse reporting tools (here are Facebook’s and Twitter’s transparency reports). This greater transparency is so important. But it’s not the ultimate goal, right? It’s a diagnostic tool that gets us to a better treatment plan – where the treatment demands a range of skills and actions, both human and technological, inside the platforms and across society. Safety in this user-driven media environment is a distributed responsibility. When platforms say this, it’s seen as self-serving, but it’s simply a fact of our new media environment. The platforms have their responsibility, on both the prevention and intervention sides of the equation. But there’s a limit to what they can do, and transparency allows users and policymakers to find and fill the gaps and figure out solutions that work for the media environmental conditions we’re only just beginning to get used to.
So that’s it – not for this series, just for this year. These bright spots are by no means comprehensive; they’re just the developments that stood out to me most this year. What’s exciting is that they come with some really interesting ideas for developing solutions to the problems that got so much scrutiny and news coverage this year. That’s what’s coming up next, first thing in 2019: some creative ideas for solutions. I’ve placed them in two buckets, one labeled The Middle Layer and the other labeled Agency & Rights. Stay tuned. And meanwhile…
Happy New Year!!
Related links
- Taking a measure of public opinion: “The public is more aware than ever of some of the negative consequences of the technologies that have changed their lives,” Axios reported last month, but “about 40% of Americans still feel that social media is a net positive for society” and “63% of respondents say they sleep with their phone in or next to their bed.” Hmm.
- Another great book of 2018: Even though “screen time” is in the title, The Art of Screen Time, by NPR’s Anya Kamenetz, was a parental haven from the storm of scary social media-related headlines last spring. I reviewed it here (don’t miss the part about dandelions and orchids, cuz you want your kid to be a dandelion, if possible).
- Some of the finest journalism I’ve seen on content moderation this year showed up in “Post No Evil,” a podcast from RadioLab (and Motherboard.com published an in-depth article on the subject under the headline “The Impossible Job“). Interestingly, looking back on his reporting for “Post No Evil,” RadioLab producer Simon Adler told On the Media, “I know everyone wants to hate Facebook…but this is an intractable problem that they have to try to solve but they never knew they were creating. And I walked away from this reporting feeling they will inevitably fail, but they have to try, and we should all be rooting for them. Because the alternative is more situations like those in Sri Lanka or Myanmar where they are goofing up and people are dying because of it.”
- Fine investigative work on content moderation that pre-dated Gillespie’s book: a 2014 article in Wired by Adrian Chen; an award-winning 2016 article in The Verge by Catherine Buni and Soraya Chemaly, “The Secret Rules of the Internet”; and work by J. Nathan Matias that led to his 2017 PhD dissertation.
- On ethics & social norms in content moderation: Former Twitch moderator Claudia Lo gave an insightful talk last spring on her graduate work at MIT around moderating content at the mind-bogglingly fast pace of live-streaming video are all fascinating.
- “People are retreating from the public square” on the Internet and into private spaces like messenger apps. Is that cyclical? Listen to “The Future Will Not Be Podcast” with Matt Silverman, Evan Engel and Alex Fitzpatrick. “Achieving the media literacy of the 21st century is the crux of the future shock…. How can we achieve it if the media is moving so fast?… I don’t think that media literacy as we know it is possible.” Maybe not if it’s locked into curricula, but there are other means of developing it – see my post about an important media literacy experiment that was conducted in Ukraine, “Media literacy may take a village now.” Besides, if we talk with our children about their media use, they will help us (and vice versa).
- About what happened last spring: some of my own writing on rewriting the social contract and when big data got personal.
- Global Kids Online just published its 2018 year-in-review post, reporting that, since 2016, surveys have been conducted with more than 15,000 kids and teens and their parents in nearly a dozen countries on five continents. New research is happening right now in Albania, Canada, Montenegro, New Zealand and the Philippines.