This could almost be a sidebar to what I wrote earlier this month about the new middle layer of user care that’s organically developing for the new conditions of today’s media environment – a layer of care that’s independent of government and corporations and lies between “the cloud” and long-established care on the ground. But this news is too big for a sidebar! [Disclosure: In addition to the trust & safety advisories of other social media platforms, I serve on Facebook’s Safety Advisory Board but was not briefed beforehand on this announcement. The ideas expressed here are entirely my own.]
Facebook, Google, Twitter and maybe a few other platforms are actually not just platforms. Or tech companies. Or media companies. The consensus is growing that they’re a new kind of social institution. So think about Facebook’s announcement this week that it’s developing a global independent Oversight Board in that light: One of our planet’s new social institutions just announced that it’s creating yet another new kind of global institution: an independent oversight body for decision making about our social media content.
Giant step + baby step
Yes, I actually believe it’s that big a deal. Last year was about a very large swath of the planet’s population – not just people in India, Myanmar and Sri Lanka who’d been severely victimized by social media content – waking up to society-level downsides to our new media reality. As well as to the fact that the mechanisms countries and companies have in place to manage the impacts of new media aren’t adequate.
I was struck by what Radiolab producer Simon Adler told On the Media host Brooke Gladstone last August about what he took away from his in-depth reporting on Facebook's content moderation challenges, "Post No Evil":
I know everyone wants to hate Facebook … but this is an intractable problem that they have to try to solve but that they never knew they were creating. And I walked away from this reporting feeling they will inevitably fail, but they have to try, and we should all be rooting for them.
I agree with him. It is an intractable problem, and these new social institutions – no matter how big they are – can't solve it by themselves. So this week's announcement by Nick Clegg, Facebook's new VP of global affairs & communications, looks like the next big step in Internet safety's evolution. It's a necessary one, so thankfully Facebook stepped up. So far, this looks to be the first independent, global, multi-stakeholder chunk of the "middle layer" that, to date, has been discussed and built out only in an ad hoc way, country by country – and mostly by governments and NGOs, not platforms.
It's also a baby step – toward changing the current conditions of content moderation by moving at least some of them out to an external body whose decision makers hopefully will have context on the appeals that will be made by users all over the world. Maybe that will help. We don't know yet, and neither does Facebook, which is probably why it calls the charter for this new Board a "draft" and plans to take the discussion out on the road to "Singapore, Delhi, Nairobi, Berlin, New York, Mexico City and many more cities" over the next six months. It's also a baby step because – though it's being called an "oversight" body, which sounds more like a regulatory one – apparently it's only about appeals of content moderation decisions already made by Facebook – not oversight and not responses to users' abuse reports. The latter remain in house, so independent helplines will still be needed. For now.
But whatever it ends up doing, the biggest question is how to set up what amounts to a new global institution. Other key ones are: How does one body represent, much less serve, the whole world? How does it interface with governments? Does it only address issues not addressed by national laws? What kind of support staff does it need, is that staff part of Facebook or independent too, and – if the former – what part of Facebook does it "live" in and what rules govern its work with this external body?
This is only the beginning of a fascinating new dimension and discussion of Internet safety — not to mention Internet governance — worldwide. It’s necessary, and courageous, pioneering work on Facebook’s part. It has some commonalities with the ideas of Prof. Gillian Hadfield and researcher Tarleton Gillespie I shared in my last post, and it’ll be interesting to see how those ideas and other experts’ will be folded in — to see how this concept evolves as Facebook learns from the discussions it plans to hold. Just setting up meaningful discussions with multiple perspectives and stakeholders speaking in many languages in the same room in each of those cities will have its challenges.
It certainly won’t be easy for Facebook to find and then hold to the signal amid all the noise it’ll encounter in this process. As the discussion grows, I predict 3 things will happen (among undoubtedly many more):
- Cross-industry: People will want this kind of appeals process at other apps and platforms, so it necessarily becomes a cross-industry body that Twitter, Snap, Google (YouTube), Microsoft (LinkedIn and Xbox Live), possibly Amazon (Twitch) and Tencent will join and support. I believe this is very possible because of great cross-industry work in the past in jointly addressing network security and child sexual exploitation. Those efforts are models for future collaborative work.
- Simplification: An independent appeals process is necessary but not enough. Users will likely struggle to distinguish between content moderation (getting harmful content removed) and moderation appeals (appealing removal decisions) — between the work of “deletion centers” like those in Germany and that of the Oversight Board. They will naturally look for “one-stop shopping,” where content moderation’s concerned. So….
- Multiplication: For reasons of capacity, diversity, representation and practicality, governments and NGOs will ask Facebook to develop more Oversight Boards: regional and eventually national ones. That’s more possible with cross-industry support, with an association of such Boards setting standards of operation and best practices and training new Boards coming on line.
I welcome your views on these predictions – food for thought and discussion, I hope. In any case, as the Oversight Board is described now, we'll immediately see some overlap between its work and that of the NGOs and governmental entities that already deal with social media content moderation. So it will be fascinating to watch how national governments + international corporations + both national and international NGOs work together to simplify the structure of user care worldwide. Because working together is a must. Blaming, shaming and adversarial defaults will not serve us if we want to advance safety for all Internet users.
- See the Draft Charter for details on how the Board will work, how many will be on it, how long they’ll serve, etc. It’s clear and written almost like a FAQ and includes things Facebook considered going in.
- In light of this announcement, don't miss a conversation recorded before Facebook made it: In a thought-provoking interview on The Ezra Klein Show, entrepreneur and technologist Anil Dash told Klein: "This is a new thing. Humans have never encountered this before. We've never designed a system resilient to this. Everything's been disrupted – democracy, public discourse…. So now what? I think people are underestimating this. These are not corporations in the conventional sense…. These large-scale platforms have changed the world in a way that's unprecedented. And we've decided as a culture that regulation's not going to fix it." Is he right? Obviously, I think so. But please weigh in!
- Other views on this development at The Verge and New York Magazine, both of which call the Board a kind of "system of justice" or "quasi-constitutional checks and balances" (probably because of Mark Zuckerberg's past reference to a kind of "supreme court" – see Lawfareblog.com). And a commentary by editorial board member Molly Roberts at the Washington Post makes the most sense to me of the coverage so far.
- Fortune’s picks: It’s interesting to see how sadly U.S.-centric Fortune.com’s candidate list is (with the exception of Tim Berners-Lee, of course). To be fair, Fortune only proposes 5 people for the 40 seats FB says it’s looking to fill, but it’s good Clegg stated up front that the discussion is heading to multiple continents.
- Radiolab’s outstanding in-depth podcast about content moderation at Facebook, “Post No Evil” (you’ll find producer Simon Adler’s conversation with Brooke Gladstone at the end of this equally outstanding On the Media show about Twitch)
- Earlier posts with links to others' comments on these new global social institutions: this one on the middle layer, our big data wakeup call (Cambridge Analytica), the ad hoc development of a new global social contract, and takeaways from the first formal meeting of a committee of the UK Parliament in the U.S.
- My more abbreviated version of this at Medium.com
Disclosure: As a nonprofit executive, I've served on the safety advisories of companies such as Google and Facebook for a number of years. The ideas expressed here—informed by that work, as well as 20+ years of writing about youth and digital media—are entirely my own.