Richard Graham, a psychiatrist in London and a fellow youth advocate, tagged me in a post on LinkedIn today, and I’m glad he did. I was getting ready to post about the remarkable “ySKILLS” report published a few weeks ago, and his post not only touched on his journey as a co-author but showed how talking with young people about their lived experience with tech and media can help their peers, parents and mental health practitioners.
That kind of help takes collaboration across sectors such as research, schools, health care providers and nonprofit organizations (more efficiently called “charities” in the UK) – collaboration that I feel is essential to maximizing digital-age wellbeing. I’ll give you an example of one of those helping outcomes in this post, then, in my next post, tell you a little about the ySKILLS report itself (because I think both are super informative in different ways).
The research outcome Richard was posting about is actually the first in a series of products – “How to Understand Social Media Algorithms” – to be published at stem4.org.uk, the UK teen mental health nonprofit where he is clinical director.
So I’m going to steal a little thunder from my next post and write about this important article and the piece of the report that sparked it. First, it’s important to point out, as ySKILLS does, that a lot of young people not only understand how social media algorithms work but also know how to game them for their own wellbeing. For example…
“TikTok has a nice function – there are three dots, and you can click ‘Not interested’. It does something with the algorithm – if you keep doing that then the videos stop. You can take some control over some posts.” – a 19-year-old Norwegian
“If you definitely struggle with some stuff or, for example, an eating disorder or something, you might want to just mute the word. In that way, it won’t be showing up on your feed and you won’t see the word. And that way you’re actively helping yourself and your online world will get all positive.” – a 14-year-old Briton
So smart. Now we just need to up the number of teens with this level of awareness and skill and – to do that – we adults really need to understand how algorithms work too. Richard’s article will help. I mean, at this point in the digital age, many of us are familiar with the way apps – from Facebook to TikTok to Netflix to LinkedIn – recommend videos to watch, people to follow, things to buy, etc., based on all the personal data we feed their recommendation algorithms, right?
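To make that a little more concrete, here’s a minimal, purely illustrative sketch in Python – my own toy example, not any platform’s actual code – of the basic idea: every topic we engage with becomes data, and that data decides what gets ranked to the top of our feed next time.

```python
# A toy illustration (not any real platform's algorithm) of how past
# engagement data can drive what gets recommended next.
from collections import Counter

# Topics this user has engaged with (watched, liked, shared) in the past.
watch_history = ["fitness", "fitness", "diet", "comedy", "fitness", "diet"]

# Count how often each topic appears -- the "profile" the app learns.
topic_affinity = Counter(watch_history)

# Candidate posts waiting to be ranked, each tagged with a topic.
candidates = [
    {"title": "10-minute ab workout", "topic": "fitness"},
    {"title": "What I eat in a day",  "topic": "diet"},
    {"title": "Stand-up clip",        "topic": "comedy"},
    {"title": "Local news update",    "topic": "news"},
]

# Rank each candidate by how much the user engaged with its topic before.
ranked = sorted(candidates, key=lambda post: topic_affinity[post["topic"]], reverse=True)

for post in ranked:
    print(topic_affinity[post["topic"]], post["title"])
```

Notice what the sketch doesn’t know: how the viewer is feeling today. It only knows what they clicked on before, which is why yesterday’s curiosity can keep resurfacing tomorrow.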
Those recommendations are the new serendipity of the newspaper front page or all those old supermarket tabloid headlines. They definitely help us discover stuff we never knew we were interested in (some of it good). But we need to be clear that they can also send us and our children down dark rabbit holes of mis/disinformation, extremism, hate, sexual violence and – depending on how one feels about oneself at some point in a day or week – just depressing social comparison. This is where algorithmic recommendation can trigger or reinforce mental health challenges.
The article at stem4.org.uk represents pioneering discussion in the adult world not just of how algorithms have negative as well as positive effects but also of how they can become “callous algorithms”: “out of sync with and insensitive to [a] young person’s state of mind or ability to cope, leading to … unwanted re-exposure … and setbacks in their mental health.”
To encourage you to click through to the article, I’ll just share two of its “4 critical [algorithmic] impacts” that I’d like to zoom in on:
- Algorithms can never be aligned with what’s going on in a young person’s head and life in the moment they “recommend” social media content. That’s because machine learning algorithms can’t be up-to-the-moment when a young person checks in on their favorite app. The algorithm “learns” by being fed data from the past, and people change from day to day, context to context and certainly over time. Even within a single day we can have resilient moments and vulnerable moments, when we can be “triggered,” right? But it can really help to know what to “feed” the recommendation algorithm, at least to minimize, if not block, content that causes distress, as the teens above demonstrated (there’s a small sketch of this right after this list).
- Seeing a lot of algorithmically recommended content that’s distressing can “normalize” it – cause the viewer to think, for example, that “it’s normal (or beautiful) to be so thin” or that “cyberbullying is common” or “most kids share nudes” – when none of that is true. And we know from the social norms research that, when people learn that most of their peers don’t engage in disordered eating, social cruelty or sexting, their behavior conforms to that new understanding, and they don’t follow suit (e.g., see “Kids deserve the truth about cyberbullying”).
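What the teens quoted above are doing, in effect, is handing the ranking system explicit negative signals. Here’s another purely illustrative sketch (again mine, not any platform’s real code) of how muted words and “Not interested” flags can filter what an algorithm is allowed to put in a feed:

```python
# Toy sketch of user-side controls: muted words and "Not interested" topics
# filter what the recommendation algorithm is allowed to show.

ranked_feed = [
    {"title": "10-minute ab workout", "topic": "fitness"},
    {"title": "What I eat in a day",  "topic": "diet"},
    {"title": "Stand-up clip",        "topic": "comedy"},
    {"title": "Local news update",    "topic": "news"},
]

muted_words = {"diet"}               # a word muted to protect recovery (the 14-year-old's strategy)
not_interested_topics = {"fitness"}  # topics flagged via "Not interested" (the 19-year-old's strategy)

def allowed(post):
    """Return True only if the post survives the user's own filters."""
    if post["topic"] in not_interested_topics or post["topic"] in muted_words:
        return False
    # Also drop anything whose title mentions a muted word.
    return not any(word in post["title"].lower() for word in muted_words)

for post in filter(allowed, ranked_feed):
    print(post["title"])  # only the comedy clip and the news update get through
```

The specifics (topics, titles, the two-line filter) are invented for illustration, but the principle is the one described above: the more deliberately we feed the algorithm, the less room it has to re-expose us to content we’re trying to leave behind.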
Please check out the article. There is so much necessary digital-age wisdom in it and in all the work touched on above: a blend of young people’s lived experience, online and offline, clinical psychology, digital literacy and public health.
Related links
- “Collaborative obfuscation” is what you might call it. In a 2020 article, CNET describes how young people created a whole “obfuscation network” to confuse Instagram’s algorithm to maximize the privacy of everyone in the network. This is offline + online collaboration for online privacy. It’s pretty sophisticated stuff, so check out the article to see how they did it.
- “Zooming in on social norms”
- Under “What’s ahead” in my last post, I talk about a good direction in which algorithms are headed: decentralized algorithmic content moderation. The more control we have as individuals and families over the algorithms that feed us content, the better. Each family needs controls of its own, and I believe this is the direction in which we’re moving.
- And more on algorithms for keeping us safe (I also wrote about safety technology like this in a chapter on online safety, its history and technology, in a book published last year by the International Association of Privacy Professionals; email me via anne[at]netfamilynews.org if you’d like a free copy of the chapter).