Wondering about the “Facebook Emotional Manipulation Study” that’s been so much in the news these past few days? If so, you’re not alone. What’s causing this firestorm? Consumer research is nothing new, and neither, we all know, is academic research. But several aspects of this study are very different, and they’re causing a great deal of uneasiness, if not outrage. What’s different here are…
- The media involved. This has several implications: 1) Market research has always been about people as consumers – consuming products as well as media in a mass-media environment. This study is about participatory media, where the data being gathered and analyzed is the content of our lives and relationships. The manipulation rightly feels very personal because it is. 2) That arguably makes research subjects more vulnerable when changes are made around what we post about our innermost thoughts, intimate relationships and everyday lives. 3) People sign up to use social media services – they “buy in” – so, absent policy or social change, businesses have long assumed that “by using this you consent,” since users can always choose to leave. That’s why some services have treated use as a kind of contract into which users enter.
- The cross-sector nature of the project. This was a joint academic/business research project – social media user data analyzed by academic researchers. So whose standards of practice apply? Public companies are accountable to their shareholders; academic research is governed by universities’ IRBs (institutional review boards, which ensure the safety of research subjects). Who governs cross-sector research?
- Where we are in history. We’re in the middle of concentric circles of concern. The smaller circle is the privacy or “Big Data” panic we’re now experiencing. The larger, more general one is a media shift on the scale of the one set in motion by the printing press, which led to the social changes we call the Renaissance and the Reformation in the West – only accelerated and global this time, because a huge proportion of the planet’s population is now networked. Social change makes people uneasy. New media that cause social change represent a lot of unknowns, and people fear both change and unknown quantities.
“The truth is that you’ve been a lab rat for at least as long as you’ve used online media. You just didn’t notice before,” writes sociologist Whitney Erin Boesel in a thorough, thoughtful commentary on this development in Time.
We’ve definitely noticed. The presence of the words “Facebook” and “manipulation” in the title of the study certainly helped – playing into the hands of all social media critics and showing how powerful words really are.
But this controversy can and should be turned into an important opportunity: to co-create an IRB of sorts that governs business-academic, or simply cross-sector, research and – at the very least – a code of ethics for such research. This is very much needed, for three reasons (if you think of more, please email me via anne[at]netfamilynews.org or post them in Comments below):
- Because, as I mentioned in the first difference above, research involving data about users’ thoughts, relationships and everyday lives most probably increases their vulnerability (research on this is needed!), especially for users who are already emotionally vulnerable.
- Because vulnerable users and protected classes are part of every service’s user base. These include children and victims of harassment, domestic violence, stalking, etc. – people who are arguably more vulnerable than the subjects of many purely academic studies.
- Because this research is important. We have much to learn about ourselves, our sociality, our relationships, our vulnerabilities, our cultures and our communities from our collective expression and activities in social media.
This new interdisciplinary research needs the protection of sound policy and ethical guidance, shaped by practitioners of both market and academic research, ethicists, experts in at-risk populations, social media users and social media companies.
- Highly selective outrage. From pioneering social media researcher danah boyd’s commentary on this story: “Guess what? When people are surrounded by fear-mongering news media, they get anxious. They fear the wrong things. Moral panics emerge. And yet, we as a society believe that it’s totally acceptable for news media — and its click bait brethren — to manipulate people’s emotions through the headlines they produce and the content they cover…. Somehow … it’s … acceptable to manipulate people for advertising because that’s just business. But when researchers admit that they’re trying to learn if they can manipulate people’s emotions, they’re shunned.”
- Low vulnerability in this study. “So did Facebook manipulate my feelings?” was No. 5 of the Washington Post’s very good “9 answers about Facebook’s creepy emotional-manipulation experiment.” And the answer is: “In all likelihood, Facebook didn’t manipulate your feelings personally. First off, the experiment only affected a tiny fraction of users [roughly 0.04%, or 689,003 people]. And even that tiny fraction didn’t really feel the change: the study documents only small changes in user behavior — as small as one-tenth of a percent. Here’s how Adam D.I. Kramer, one of the study’s authors, defended his methodology in a public Facebook post: ‘…the actual impact on people in the experiment was the minimal amount to statistically detect it — the result was that people produced an average of one fewer emotional word, per thousand words, over the following week.'”