NetFamilyNews.org

Kid tech intel for everybody

How this new app might well be safer…

October 6, 2022 By Anne

…and what that has to do with content moderation

Fizz just might come to be known as the kinder social media app. I know, you’re probably thinking, “Yeah sure.” And I do understand. But, from some great reporting by TechCrunch, I picked up on two design features that, together, could give this new app the edge where user safety’s concerned. They are that it has…

  • The local factor. When someone joins, their experience of Fizz is just about their own university campus. They’re joining with other students in their school community, because they have to have their school’s .edu email address to join. Presumably that includes staff and faculty, but that’s not clear yet.
  • Peer moderation. This is the kicker. Other apps focused on location certainly didn’t have any edge on civility, but Fizz isn’t just moderated by people at the app’s back end or offices. It pays students on the user’s own campus to moderate the content posted by peers at their school.

Now, the key to making sure peer moderation keeps Fizz harmless and anxiety-free is who Fizz hires to do that local moderation work. Will the app screen for emotional intelligence and communication skills in hiring? Let’s hope!

The importance of peers

But why is peer moderation so important? If you’re on Twitter, you may’ve seen me tweet about how crucial offline context is for content moderation. The safety edge is all about that offline context.

See, the problem is that the content moderators behind social media platforms, the people who make decisions about what posts and comments are harmful – or not – have no “real world” context for the reported content they’re seeing on their screens.

For example, if someone posts a picture of a plain-old pig, moderators far away who’ve probably never been to that school (or maybe even that country) have no way of knowing whether the poster is suggesting something about the campus police, implying something mean about a roommate or spreading the word about Saturday evening’s fraternity pig roast.

Silly example, but you get the idea. Content moderators looking at a gazillion abuse reports a day have no way of getting that offline context – especially not in the seconds they have to make a take-down decision.

Why content moderation is so hard

We know that, where teens and young adults are concerned, cyberbullying and hate speech are very specific to their social life at school, so it’s almost impossible for app moderators – much less machine-learning algorithms – to get it right.

The vast majority of abuse reports that apps get are what the industry calls “false positives” – not actionable by a platform for any number of reasons. For example, maybe the user is abusing the system to bully someone (reporting them to get them banned); is reporting something that may feel cruel to them but doesn’t violate Terms of Service or community rules and so can’t be deleted; is reporting the content inaccurately (so the report doesn’t trigger human review); or is just reporting an unflattering photo or something else they just don’t like (which also doesn’t break the rules and can’t be actioned).

Content moderation, whether by humans or algorithms, is ridiculously complicated and nuanced. That’s because social content is real-world contextual, individual (often unique to the people involved) and situational (unique to what’s happening at a particular moment in time). Plus social norms and communication are constantly changing. Young users in particular are always innovating, creating their own responses to popular culture in speech, art, interaction and workarounds.

Youth challenges algorithms (!)

Machines aren’t great at nuance. Algorithmic moderation – the kind that catches violating content before it’s seen and definitely doesn’t wait for users to report the problem – is new, too. Platforms used to be entirely reactive, only (sometimes) responding to abuse reports. But where algorithmic moderation is concerned, it’s pretty impossible to teach an algorithm to “learn” from, i.e., find patterns in, data that is constantly changing. Anything close to a pattern keeps changing, which equals no pattern. And for the big platforms, we’re talking about data in just about every language on the planet. They may localize the algorithm to an individual country, but every country still has its own sub-cultures in terms of user ages, ethnicities, dialects, etc.

The same goes for human moderators, except that they also have mental health that needs to be cared for. They deal with the same lack of real-world context as well as the same nuance and complexity. They still have no way to get at intent – to know whether a reported post is a joke among friends, a mean joke, outright cruel, contextually appropriate or just no big deal.

That’s why it’s refreshing to hear about an app that’s building knowledge of local context into its content moderation. If a student reports what appears to be community-standards-breaking content and the moderator is conscientious, the moderator can find out the offline (campus) context for that post, so they’re likely to make an accurate, or at least informed, decision about whether that content needs to come down. This is why I’m an advocate for internet helplines such as the networks of them in Europe and Netsafe in New Zealand. We need one in the United States. But as long as we don’t have one, we’re dependent on the apps themselves to do the best they can. Which is usually not great.

What else would really help

If Fizz the startup can figure out how to monetize and grow, it will become one model for how to design for online safety. [For a bit of history: You may (or may not) remember that a college campus is where Facebook got its start, and look what happened there! Back then, user safety was barely an afterthought. Facebook was certainly thinking about it when I joined its Safety Advisory Board in 2009, the year the advisory was formed. Now the company says that half of its some 80,000 employees worldwide are devoted to the safety of its apps’ users.]

With all the regulatory scrutiny, from California to DC to London to Sydney to Wellington, a startup now has to be thinking about safety. Sadly (at least if Fizz’s peer moderation succeeds), there isn’t a Fizz for people in middle and high school. And so far it’s only on a handful of campuses in the US (see the TechCrunch piece for details). So teens are going to have to wait for civility-focused social apps that lower parental anxiety levels. What would help is if the US had a social media helpline – like the ones in Australia, Brazil, Europe and New Zealand – that they could call if they ran into trouble online. But that’s another story. For now, I’m rooting for Fizz.

Related links

  • “Why Online Speech Gets Moderated”: The Washington Post provides a great primer in Q&A format, courtesy of Bloomberg. It’s focused on Twitter, for reasons of newsworthiness, but the info goes for all social media platforms. It’s also mostly about the US context but touches on what’s happening in China, Europe, India and Russia as well.
  • About TikTok’s earliest days: Musical.ly, pre-TikTok in the US
  • From Snapchat’s early days, how it was more a break from the self-presentation and performance fatigue that social media had come to be for teens, here and here; what set Snapchat apart from other apps back then; and more generally about anonymity vs. self-presentation fatigue here
  • News barely noticed: About two social apps that went away: Secret’s demise in 2015 from TechCrunch and a post on Reddit that the Google Play app store removed Secret last spring; and, from their early days, a serious safety issue at Secret that surfaced and what set Whisper apart in its early days – with plenty of other coverage in those posts’ “Related links”

