“Facebook will soon be on privacy probation, thanks to a proposed settlement with the Federal Trade Commission,” PC World reports. It’s referring to a “consent agreement” about the FTC’s “reason to believe” that Facebook has committed “fraudulent, deceptive, [or] unfair business practices,” as the FTC put it in its press release – though “most of the issues raised in the complaint have been previously addressed by Facebook and relate to changes made in 2009,” according to the Future of Privacy Forum. After a 30-day comment period, the FTC will issue a “consent order” that will carry the force of law.
Users get the truth. An obvious but important rule to have on record: no misrepresentations of the privacy or security of users’ info. [Though in his blog post about this, Facebook CEO Mark Zuckerberg wrote that “overall, I think we have a good history of providing transparency and control over who can see your information,” he followed that with “I’m the first to admit that we’ve made a bunch of mistakes.”]
Users “opt in.” Sort of. Facebook has to get users’ consent before making any changes to existing privacy practices (e.g., making their friends list public), so users opt in. “In other words, it’s the end of opt-out privacy settings,” according to PC World, but I think that’s overstating it. What the FTC seems to mean is that, if Facebook changes its current handling of user data, it needs to get users’ “expressed consent.” It doesn’t necessarily mean all future features will be opt-in.
Deleted data is invisible data. If you’ve deleted your account, Facebook has 30 days to make sure no one can see anything of the info you’d posted in it. An obvious plus on the surface, but it’s not clear what this means if you and a friend who hasn’t deleted his/her account are both tagged in a photo. Maybe just your tagged name goes away. File this under “Privacy is a shared proposition,” because it really is in a social media environment, where a lot of things are less clear-cut than they used to be.
In-house privacy vetting. The FTC calls it a “comprehensive privacy program” that FB will have to set up to address privacy issues around new products and services. Zuckerberg said in his blog post that Facebook had appointed two privacy officers to spearhead that effort, and that “we do privacy access checks literally tens of billions of times each day to ensure we’re enforcing that only the people you want see your content.”
Privacy auditing for 20 years – within 180 days of the FTC’s forthcoming consent order and then every two years, third-party certification that Facebook’s complying with the FTC order. Google started audits like that after its settlement with the FTC over Buzz.
In his blog, Zuckerberg cited the similar agreements between the FTC and Google and Twitter, saying “these agreements create a framework for how companies should approach privacy in the United States and around the world.” I think that’s true (more on that in a moment). His last three words are important because Facebook claims to be used in every country, and we’ve all seen significant evidence of that. Its overseas regulator is the Data Protection Commission in Ireland, where FB’s international headquarters is located; the DPC is right now auditing Facebook Ireland’s operations in what Facebook calls “a routine process that the DPC is entitled to carry out for any organization that is established as a data controller in Ireland.” The company says “we have welcomed this audit by the DPC as helpful in demonstrating Facebook’s compliance with the requirements of European data protection law.”
Don’t develop a false sense of security
Because this is social, user-driven media we’re all talking about, I hope none of this gives users a false sense of security. Sanctions, fines, audits, etc. against a social-media host site, no matter how big it is, can’t control all the other parties to the equation: users of all ages, app makers, advertisers, etc. What these companies host is expressions of hundreds of millions of users’ everyday lives, posted in real time, 24/7, all over the world. As my ConnectSafely co-director Larry Magid points out in his coverage of this development, “as with any digital information, what’s posted online can always be copied and pasted so … never post anything that could get you [or your fellow users] into trouble or embarrass you now or in the future,” and keep teaching that to young social media users at your house or school.
What this development does do is mark a new phase of the new media era. In the last phase, Web sites had to have privacy policies – a step forward, even though most were incomprehensible because written by lawyers. In this phase, we’ll see clearer language, easier-to-use controls, more control, and more transparency. How can I say that? Because tolerance for the lack of those has been going down and just dropped noticeably with this announcement, and companies with any visibility (and – seriously – watch out for little ones flying under the radar of public awareness) will increasingly have their feet held to the fire of raised expectations. New corporate-level social norms are kicking in, which I believe will, over time, mean fewer and fewer anti-social media companies!
[Later added: So what do I mean by “anti-social media companies”? Companies that don’t collaborate with their users and empower them to co-create and co-maintain media experiences that benefit users as well as all other parties to the media experience they host, including advertisers, app developers, and third-party sites and services. Anti-social media companies benefit from participatory media without themselves adopting and demonstrating participatory principles (think Gandhi, Mandela, Martin Luther King, and what so much of the world hopes will result from Tahrir Square – itself a work in progress to which social and traditional media companies continue to contribute). Increasingly pro-social and pro-participatory corporate practices, ideally self-regulatory because governments struggle to understand and embrace social media, are part of the media shift we’re all experiencing. As more and more of the world’s hundreds of millions of social media users become aware of this pro-/anti-social contradiction and of their power as drivers and producers of the media these companies host, they will exercise that power, with increasing impact on corporate-level social norms. To me, this is only logical. Does it make sense to you? Let me know in Comments, and thanks!]
Related links
- “How much should people worry about the loss of online privacy?” – a debate in the Wall Street Journal featuring social media researcher danah boyd, Washington attorney and author Stewart Baker, author and professor Jeff Jarvis, and Open Society Institute fellow Christopher Soghoian. Here is danah’s blog post with her complete answers to the Journal’s questions.
- USA TODAY on “How Facebook tracks you across the Web”
- Information Week’s “Facebook Settles FTC Charges, Admits Mistakes”
- Following up on my comment above about a false sense of security, here’s more on this new media environment we find ourselves in and why we titled a national task force report “Youth Safety on a Living Internet.”
Steve Lee Ignacio says
I am cynical and untrusting of Facebook. I don’t want them data mining all my information. So I register with a throw-away email address, I use a pseudonym, and I fill all the personal info fields with nonsensical data. I don’t post photos or push the “like” button everywhere I go. Still, it’s not a big deal, as the longer I have a Facebook account, the less interested I am in it and the less I use it. I wonder if some people on Facebook are simply more interested in being as popular as possible than in being private and discreet.