
Thursday, December 10, 2009

FTC's milestone report on virtual worlds

This is pioneering stuff on the part of the US government. The Federal Trade Commission today sent to Congress its close study of 27 online virtual worlds – 14 for children under 13 and 13 aimed at teens and adults – looking at the level of sexually explicit and violent content and what the VWs were doing to protect children from it. I think it's important for parents to keep in mind when reading the study or just the highlights here that "content" in virtual worlds means user-generated content (which is why, in "Online Safety 3.0," we put so much stress on viewing children as stakeholders in their own well-being online and teaching them to be good citizens in their online and offline communities). Here are some key findings:

  • The FTC found at least one instance of either sexually or violently explicit content in 19 of the 27 worlds – heavy (sex or violence) in five of them, moderate in four, and "only a low amount in the remaining 10 worlds in which explicit content was found."
  • Of the 14 VWs for kids under 13, 7 contained no explicit content, 1 had a moderate amount, and 6 had a low amount.
  • Nearly all the explicit content found in the kids' VWs "appeared in the form of text posted in chat rooms, on message boards, or in discussion forums."
  • The Commission found more explicit content in VWs aimed at teens or adults, finding it in 12 of the 13 in this category, with a heavy amount in 5 of them, moderate in 3, and a low amount in 4 of the 13.
  • Not just text: Half the explicit content found in the teen- and adult-oriented virtual worlds was text-based, while the other half appeared as graphics, occasionally with accompanying audio.

    The report goes into the measures the 27 surveyed VWs take to keep minors away from explicit content, including "age screens" designed to keep minors from registering below a site's minimum age (what the FTC calls "only a threshold measure"); "adults only" sections requiring subscriptions or age verification (see "'Red-light district' makes virtual world safer"); abuse reporting and other flagging of inappropriate content; human moderation; and some filtering technology. "The report recommends that parents and children become better educated about online virtual worlds" and that virtual-world "operators should ensure that they have mechanisms in place to limit youth exposure to explicit content in their online virtual worlds." In the two pages of Appendix A (of the full, 23-page report + appendices), you'll find a chart of all the virtual worlds the FTC reviewed. [See also my VW news roundup last week and "200 virtual worlds for kids."]
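    To make the "age screen" idea concrete, here's a minimal sketch (hypothetical names, not any site's actual code) of the kind of birthdate gate the report describes - it simply compares a self-reported birthdate with a site's minimum age, which is exactly why the FTC calls it "only a threshold measure": anyone willing to misstate a birthdate gets past it.

```python
from datetime import date

MINIMUM_AGE = 13  # hypothetical site minimum; many kids' sites use 13

def age_in_years(birthdate, today):
    """Whole years between birthdate and today."""
    years = today.year - birthdate.year
    # Subtract one if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def passes_age_screen(birthdate, today):
    """A neutral 'age screen': compare a self-reported birthdate to the minimum.
    Nothing here verifies the birthdate is true - that's why it's only a threshold."""
    return age_in_years(birthdate, today) >= MINIMUM_AGE

# A self-reported birthdate of July 4, 1995 passes on Dec. 10, 2009 (age 14);
# one from 1998 (age 11) does not.
print(passes_age_screen(date(1995, 7, 4), date(2009, 12, 10)))  # True
print(passes_age_screen(date(1998, 7, 4), date(2009, 12, 10)))  # False
```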

    This is a great start. As purely user-driven media, virtual worlds are a frontier for research on online behavior. The FTC was charged by Congress "merely" with determining the level of harmful content, not behavior – I suspect because adults continue to think in a binary, either-or way about extremely fluid environments that are mashups of content and behavior. Where is it really just one or the other, what is "content" in social media, and how do we define "harmful"? We also need to define "virtual worlds": some of these properties are largely avatar chat, some are games (with quests), and some are worlds that contain games but no quests. Still, we've got some great talking points and very useful data to build on.


    Friday, October 02, 2009

    'Red-light district' makes virtual world safer

    San Francisco-based Linden Lab, which runs Second Life, has sequestered adult content and activity in the virtual world on a new continent called "Zindra." Residents of the virtual world have to verify that they're adults before they can search for anything on Zindra or go there (here's the page that explains how the age-verification process works). The entire "world" is now classified as either "Adult," "Mature," or "PG." As Linden Lab explains these, "Adult" is what most of us think of as adult content or activity – sexually themed or explicit, inappropriate for minors. "Mature" seems to be more about the shopping and socializing, or non-serious, side of virtual life, where there's nothing really inappropriate for kids to see but also where grownups don't particularly want to mix it up with 13-to-17-year-olds (who themselves would probably prefer Teen Second Life for socializing). Linden Lab describes the "Mature" classification this way: "Social and dance clubs, bars, stores and malls, galleries, music venues, beaches, parks (and other spaces for socializing, creating, and learning) all support a Mature designation so long as they don't host publicly promoted adult activities or content." "PG," obviously, is for everyone – the label for all educational and business activity (virtual classes, meetings, talks, etc., where only time zones are a barrier for gatherings of people planet-wide).

    "The other day, when I logged back in after quite a few weeks," writes digital-media maven Chris Abraham in AdAge.com about checking back in after all this happened, "Second Life told me so in so many words that if I want to party, I need to explicitly commit myself to that lifestyle; otherwise, I had better just be happy with PG-13. Second Life didn't kick out the brothels and porno theaters, it just put them on a different plane of existence." All of which makes high school classes and other educational programs (see links below) in Second Life much safer and more feasible now (e.g., this from ABC News Brooklyn on science class in Second Life).

    For visual aids, here's a 3-minute video interview with Second Life creator Philip Rosedale, with short clips from in-world, and a PG-13-rated look at Zindra (on its opening day, 7/4/09).

    Related links

  • Machinima of Rochester Institute of Technology's virtual campus in Second Life (machinima is video taken in-world, so it looks like animated film)
  • "US Holocaust Museum in Second Life"
  • "The Virtual Alamo" museum in Second Life
  • A video at Teachers.tv in the UK about student projects in and with virtual worlds and my post about it
  • "School & social media"
  • "Young practitioners of social-media literacy"


    Monday, May 18, 2009

    Teens, age segregation & social networking

    "Kaitlyn" doesn't use Facebook to hang out with school friends because it's "for old people!" she told danah boyd. She and her friends use MySpace, but Kaitlin does mix it up with her own relatives (grownups) in Facebook. "She sees her world as starkly age segregated and she sees this as completely normal," danah writes. "'Connor,' on the other hand, sees the integration of adults and peers as a natural part of growing up." They're three years apart in age (Kaitlyn 14, Connor 17) and Connor's in a slightly higher economic bracket, but in her blog post about her conversations with the two, danah writes that "the biggest differences in their lives stem from their friend groups and the schools they attend.... [Connor] told me that in Atlanta, most schools are 60% or more black but his school was only 30% black. And then he noted that this was changing, almost with a sense of sadness. Kaitlyn, on the other hand, was proud of the fact that her school was very racially diverse. She did complain that it was big, so big in fact that they had created separate 'schools' and that she was in the school that was primarily for honors kids but that this meant that she didn't see all of her friends all the time. But she valued the different types of people who attended.... Connor's friends are almost entirely white and well-off while at least half of Kaitlyn's friends are black and most of her friends are neither well-off nor poor." So Kaitlyn appreciates ethnic and racial diversity, Connor age diversity. Are these differences reflected in social network sites? To some degree, and we all wonder which is more causative offline socio-economic and -cultural differences or online ones (how much of a factor is Facebook's origin in an elite Ivy League school?). danah also wonders about inclinations or aversions to age segregation: "There's nothing worse than demanding that teens accept adults in their peer space, but there's a lot to be said for teens who embrace adults there, especially non-custodial adults like youth pastors and 'cool' teachers. I strongly believe that the healthiest environment we can create online is one where teens and trusted adults interact seamlessly. To the degree that this is not modeled elsewhere in society, I worry." I agree with her - and worry that efforts by adults not following social-media research to impose age verification will create an artificial age divide on the social Web. For a broader sweep of observations on teen social-media users, see danah's response to questions in Twitter mostly from adults.


    Friday, January 23, 2009

    Restricting teen access: Unintended consequences

    Age verification has been the potential online-safety solution of choice for state attorneys general. I know I've written about this plenty, but I have to add something that really struck me in reading all the technology submissions to the Internet Safety Technical Task Force: the only way any of these technologies would really work for children is if their parents chose to use them. Only bottom-up, not top-down, adoption can really work. In other words, no government can effectively mandate their use because no government can control the global Internet or its global population of users. For example, if a government were somehow to restrict social networking only to adults, its restrictions could only affect social sites based in its country; its teens could simply go to social sites based in another country (there are so many English-language ones outside the US). This was a key factor cited in a recent European Commission report. But back to opt-in parental controls. There are many kinds - from filtering to monitoring to site moderation to ID verification in specific sites for which parents sign up their kids. All of these can work for children with engaged, informed parents who know what's age-appropriate for each of their kids. They don't work very well for kids who aren't fortunate enough to have that kind of attentive parental support, kids who - for good or bad - find more support online than at home, if they even call it "home." Those are the youth recognized in the research summarized in the Task Force report as most at risk online as well as offline. Those are also the young people for whom age verification could have very negative unintended consequences. It's those possible consequences that have barely begun to be considered and that concern my ConnectSafely co-director Larry Magid and me. We sent a memo about them to our fellow Task Force members (it's summarized on p. 262 of the full report, which can be downloaded at the site of Harvard Law School's Berkman Center for Internet & Society), and Larry delineated them in his CNET blog.


    Tuesday, January 13, 2009

    Key crossroads for Net safety: ISTTF report released

    Online safety has reached a major crossroads, here in the US. The Internet Safety Technical Task Force's report is being released tonight, and to me (a Task Force member), it represents a stark choice all stakeholders have going forward: continue down the road of fear-based online-safety education or together match all messaging to what the research says - be fear-based or fact-based.

    Having observed and participated in this field for more than 11 years, I think it's understandable how we got here. The US's public discussion, fueled by mostly negative media coverage, has been dominated by law enforcement. Starting in the mid-'90s, police departments, representing the only really accessible, on-location expertise in online safety, filled an information vacuum. They and members of the growing number of state Internet Crimes Against Children Task Forces were the people who spoke to schoolkids and parents about how to stay safe online, and their talks, naturally, were largely informed by criminal cases. When online-safety education is carried out by experts in crime - those who see the worst uses of the Internet on a daily basis - fear is often the audience's take-away. That's not to say there aren't amazing youth-division officers who really understand children and technology giving online-safety talks - there are; we have one, Det. Frank Dannahey in Connecticut, on our Advisory Board - but their voices have so far been drowned out by the predator panic the American public has been saddled with.

    Meanwhile, over the past decade, a broad spectrum of research has been published about both online youth risk and young people's general everyday use of all kinds of Internet technologies, fixed and mobile. And now it's all reviewed and summarized in this report (downloadable here), one of three major accomplishments of the Task Force - the other two being the national-level discussion it represented, involving key stakeholders, and its acknowledgment of the international nature of the Internet, which is essential to any policy discussion about it.

    One of the researchers' most important findings - information really helpful to parents, finally - is that a child's psychosocial makeup and the conditions surrounding him are more important predictors of online risk than the technology he uses. Not every child is equally at risk of anything online, including predation. The research shows 1) only a tiny minority of online youth are at risk of sexual exploitation resulting from Net activity, and these are at-risk kids in "real life," and 2) online risk of all forms - inappropriate behavior, content or contact, by peers or adults - has been present through all phases of the Web and all interactive technologies kids use; it doesn't show up only in social-network sites. It's rooted in user behavior, not in crime.

    As an online-safety advocate who talks to parents all the time, I kept wanting to say to the attorneys general - since they announced their online-safety prescription, age verification, 2.5 years ago at a DC conference on social networking that I attended - that focusing solely on predation, or crime, doesn't help parents. Parents need the full picture - all the risk factors and danger signs, the positives and neutrals, too, not just the negatives - in order to guide their kids.

    I think any parent gets why the full picture is needed. Most parents know they can't afford to be like deer in the headlights, paralyzed by the scary evidence coming from those focused on crime (and those covering them in the media). Kids sensing irrational fear want to get as far away as possible. They know it can cause parents to overreact and, based on misinformation, shut down the perceived source of danger. That sends them underground, where much-needed parental involvement and back-up isn't around. How, I kept wanting to ask the AGs, who are parents themselves, does that reduce online kids' risk? To young people, taking away the Internet is like taking away their social lives, and there are too many ways kids can sneak away - to overseas sites beyond the reach of any US regulation, to irresponsible US sites that don't work with law enforcement, and to and with other technologies, devices, and hot spots parents don't know about - including friends' houses, where their rules don't apply.

    Certainly the attorneys general have played an important watchdog role, here in a country where a discussion about industry best practices hasn't even begun. Now, with the release of a full research summary, maybe that discussion can start. That's possible because, with a national report that says the most common risk kids face is online bullying and harassment - bad behavior, not crime (and their own aggressive behavior more than doubles their risk of victimization) - and with the Task Force's technical advisers concluding that no single technology can solve the whole problem "or even one aspect of it 100% of the time," we're moving closer to a calm, rational societal understanding of the problem. The Task Force ended up working toward a diagnosis rather than filling a prescription for one of the symptoms (certainly the scariest one).

    With the release of the Task Force report, online safety as we know it is obsolete. The report lays out more than enough reasons to take a fact-based approach to protecting online kids - to stop seeing and portraying them almost exclusively as potential victims and work with them, as citizens and drivers of the social Web, toward making it a safer, more civil and constructive place to learn, play, produce and socialize.

    Related links

  • The ISTTF report download page - with links to PDFs of the full report, executive summary, research summary, and all other appendices
  • "Net threat to minors less than feared" from my ConnectSafely.org co-director Larry Magid at CNET
  • "Report Calls Online Threats to Children Overblown" in the New York Times
  • "Internet Child Safety Report Finds No Easy Technology Fix" in the Wall Street Journal
  • Over in the UK, "Bullying biggest online threat to children" at the Financial Times
  • "Teen frustrated that parents restrict access to social-networking sites" in the Lawrence (Ks.) Journal-World
  • Past blog posts on age verification in NetFamilyNews


    Friday, January 09, 2009

    Data breaches way up

    Whether or not age verification would help keep kids safe online, as state attorneys general suggest, it would require the collection of children's personal information into some database(s) somewhere. Consider that possibility against the news of where we are with the security of personal information in databases right now. "Businesses, governments and educational institutions reported nearly 50% more data breaches last year than in 2007, exposing the personal records of at least 35.7 million Americans," the Washington Post reports, citing a report from the Identity Theft Resource Center of San Diego. Nearly 37% of the breaches happened at businesses and about 20% at schools, the Center found. See also "Social networker age verification revisited" and "Europe on age verification, social networking."


    Friday, November 21, 2008

    Europe on age verification, social networking

    As the Internet Safety Technical Task Force wraps up its year of studying potential tech solutions for youth risk on the social Web, some perspective from across the Atlantic seems timely. The ISTTF's report, which we worked on together this week as Task Force members, goes at the end of the year to the 49 attorneys general who formed the ISTTF. The European Commission last summer held a public consultation on social networking, age verification, and content rating "to gather the knowledge and views of all relevant stakeholders (including public bodies, child safety and consumer organisations, and industry)." More than six dozen entities responded (links to their individual comments are included here).

    Reports on those stakeholders' 70+ comments were presented at the EC's Safer Internet Forum in September. Here are the EC's conclusions on social networking and age verification, two subjects of particular interest to the US's state attorneys general and the ISTTF (so I'm zooming in on these two):

    1. Summary of European views on age verification

  • Bottom line: "There is no existing approach to Age Verification that is as effective as one could ideally hope for."
  • Flaws a reality: "Each individual method carries its own flaws, as does any combination of methods used."
  • "Universal" really means "universal": The effectiveness of age-verification systems already in place in the UK and Germany is "largely undermined by the availability of sites offering similar services" in countries where there is no age verification in place. It can only be effective if it is "universally accepted, inclusive, secure and relatively inexpensive."
  • Avoid false sense of security: "Concerns were also raised about the false sense of security that might be provided and the adverse effects on safety this might have."

    2. Conclusions from report on social networking

  • Significant consensus. "There was an important degree of consensus between respondents across most questions."
  • The peer-to-peer risk. "Bullying and other threats which young users inflict upon each other may be more likely to arise than threats from adults."
  • Communication not confrontation. "Parental involvement in their children's online activity is important, but principles of privacy and trust should dictate how parents help children to stay safe."
  • Education > regulation. "Education and awareness are the most important factors in enabling minors to keep themselves safe."
  • Industry self-regulation > legislation. "Industry self-regulation is the preferred approach for service providers to meet public expectations with regard to the safety of minors. Legislation should not place burdens on service providers which prevent them from providing minors with all the benefits of social networking."
  • Mandatory safety minimums maybe. "Available safety measures vary greatly from one provider to another and mandatory minimum levels of provision may need to be established."
  • More research needed. "Much is known about potential risks, but more research on the nature and extent of harm actually experienced by minors online is needed."

    Related links

  • From this week's US news: "Age verification: An attorney general's concern" in the New York Times and my blog post about it
  • "Age verification debate continues; Schools now at center of discussion" at Adam Thierer's tech policy blog


    Monday, November 17, 2008

    Age verification: An attorney general's concern

    The headline chosen by the European Commission's QuickLinks blog certainly cuts to the chase: "No Adults Allowed. (Marketers Welcome)." What it links to is a timely New York Times piece about the potential unintended consequences of the age verification that state attorneys general are calling for (consequences that would not please many parents). What the headline refers to is the alleged business model of some of the 2 dozen+ companies who want to help (and involve US schools in helping) verify American children's ages - apparently for the purpose of protecting them online but also reportedly to make a business out of selling data they gather on kids to marketers. Kids' social sites, virtual worlds, and other services would pay the age-verification vendor a "commission for each [child] member" a school signs up; "the [kids'] Web site can then use the data on each child to tailor its advertising," the Times reports. One of the age-verification companies the Times talked to, eGuardian, says kids are exposed to ads anyway (well, in some, not all, kids' sites), it just makes sure they're appropriate. The question is, how can that "appropriate advertising" be guaranteed? There's a pretty sexualized media culture and a lot of obesity in this society anyway, to name only a couple of issues. One of the remarkable things about this piece is the quote at the bottom from Connecticut Attorney General Richard Blumenthal, a leading proponent of age verification, saying that verifying kids' ages online to promote marketing to them would be very concerning. This is the first qualifying statement about age verification we've seen from the attorneys general since they started calling for its implementation more than two years ago.


    Friday, October 10, 2008

    Online ID verification in South Korea

    The world's most connected country - South Korea, where 97% of the population has broadband Internet access - is conducting an experiment in Internet control that the world (especially the US) might do well to watch. I say "especially the US" because we're having a discussion here (at the Internet Safety Technical Task Force) about online verification of minors' ages (see this about that). The Guardian reports that Seoul is trying to "curb online anonymity and debate." New legislation, some of which is "due to pass" next month, would require all forum and chatroom users to make verifiable real-name registrations (South Koreans have national ID cards). The legislation would also make all news sites subject to the same restrictions as newspapers and broadcast media, answerable to the Korean Communications Standards Commission regulatory body, and give the Commission "powers to suspend the publication of articles accused of being fraudulent or slanderous, for a minimum of 30 days. During this period the commission will then decide if an article that has been temporarily deleted or flagged should be removed permanently." The Guardian suggests that includes blog posts, which is a problem: "Seoul's previous experience with such censorship suggest that unless the government hires thousands more people to staff the commission, which is already behind in processing some 2,000 internet-related objections, just addressing the initial complaints will be unworkable, untenable and unenforceable." The other problem is that the Korean government would also have to block all sites based overseas because it couldn't make them card Koreans at their virtual doors. Here's more from the Korea Times.


    Friday, September 26, 2008

    The ISTTF: Chicken or egg?

    "ISTTF" stands for Internet Safety Technical Task Force, the result of an agreement last January between 49 state attorneys general (minus Texas) and MySpace. The emphasis is on the word "technical," because the attorneys general basically charged the task force, of which I'm a member, with reviewing technical solutions to online youth risk - "age verification" technology being their stated predetermined solution of choice. Why? Because they're law enforcement people. They deal with crime - not all these other subjects that have come up in online-youth and social-media research - so they probably feel that this is all about crime and technology, so some technology that separates adult criminals from online kids, or that somehow identifies every American on the Web, is what will make the Internet safe for youth.

    The problem is, we now know - via a growing body of research - that young people's use of technology for socializing is not limited to MySpace, to social networking in general, or even to the Web. Youth don't even focus on what technology or device (phone, chat, blogs, IM, Skype, computer, Xbox Live, Club Penguin, World of Warcraft, etc.) they use when they're socializing. They just communicate, produce, and socialize. So the "problem" is not technology. We're dealing with behavior, learning, adolescent development, social norm development, and identity formation, here. What technology is going to give adults (those who want it) control over that, or somehow sequester American youth into American sites that are compelled to verify ages, or separate adults and children across the entire universe of increasingly mobile, device-agnostic communications, media-sharing, and social activity?

    Besides, we also know now that only a tiny percentage - well under 1% - of US youth are at risk of being victimized by the kinds of crimes the attorneys general put the Task Force together for, and this minority is, unfortunately, already at risk in "real life." Technology probably doesn't have much of a chance at curing the age-old struggles of troubled youth - certainly not ID verification technology.

    The other thing we know, though we adults don't think about it a whole lot, is that the "problem" is changing - fast (it actually won't be that long before our teenagers are parents!). Because nobody's brains are fully developed till their early 20s, teens need our input, but we need theirs too. For the most part, youth understand what's happening with tech and the social Web, they're the drivers of it, they're changing (growing up), and technology is changing faster than we can keep up with it, so we don't have anything close to a static "problem" to get a fix on, much less to fix.

    Which leads me to the chicken/egg question. The first day we heard at least a dozen presentations by purveyors of various technologies, many of them focused on verifying either ages (very hard with US minors, who under federal privacy law have very little verifiable personal information in public records) or identities. By the end of the day I couldn't shake off the unnerving picture of a roomful of baby boomers (digital non-natives, including me) - many of whom barely understand the "problem," much less the full picture of young social Web participants, and some of whom stand to gain a great deal from selling the Task Force on a particular technology for nationwide adoption - trying to assert control over the unruly social Web. The understanding is growing, not least because the Task Force has a research advisory board as well as a technical one, and the former is right now completing a review of all research on youth online safety to date - the first of its kind. This is brilliant! So what's wrong with this picture? Seems to me the research comes first, then - as we understand the problem - we begin to look at what the solutions should be.

    The second day we heard from a Rochester Institute of Technology sociology professor with a background in law enforcement. His is an important study (I'll blog about it more next week) because it looks at Internet use by more than 40,000 Rochester-area students all the way from kindergarten up through 12th grade, and it offered the Task Force insights into the peer-on-peer, noncriminal but negative and sometimes unethical and illegal side of the online-safety question. But youth were referred to in an extremely negative, adversarial way, with first- and second-graders described as "perpetrators" and "offenders." For example, the "four types" of middle-school "online offenders," he said, are "generalists, pirates, academic cheaters, and deceiving bullies." As useful as the data is, I don't feel this is productive language to use when trying to change behavior or inspire children about digital citizenship (see my description of an amazing such project at Bel Aire Elementary School in Tiburon, Calif., here).

    So there you have one person's (rambling) perspective. There are others available now - that of Adam Thierer of the Washington, DC-based Progress & Freedom Foundation and a more radical one from CNET blogger and Berkman fellow Chris Soghoian. [The Task Force is hosted and chaired by the Berkman Center for Internet & Society at Harvard Law School.]

    Your views are always welcome - in our forum here, posted in this blog, or via anne[at]netfamilynews.org. With your permission, I'd love to publish your views for the benefit of all readers.


    Wednesday, September 10, 2008

    Microsoft's age-verification concept

    Microsoft has created a euphemism to go with its age-verification plan: "digital playgrounds," where kids get digital ID cards so they can hang out in adult-free places online. It's part of Microsoft's Trustworthy Computing initiative, which has involved other companies in a consortium aimed at tackling the Internet identity problem. The problem is "how to make the Internet safer not just for children, but also for adults wanting to conduct business, make transactions, and communicate with the confidence that the people they are interacting with really are who they say they are," CNET reports. What makes it so tough to solve is the need to authenticate people's identities without jeopardizing their privacy - especially children's, whose personal info is protected by US federal law. "Under the [Microsoft] scenario related to children, digital identity 'cards,' or credentials, could be based on either national identity documents created at birth or on identity documents schools use to determine age and identity for school registration, with parental permission. The data could be limited to age and proof of authenticity, and the credentials should be encrypted and require use of PIN numbers." As Internet News points out, dozens of other companies and groups will be presenting their proposed solutions to the Internet Safety Technical Task Force later this month. [See also "Age verification: Key question for parents," "UK data security breach & kids," "Social networker age verification revisited," and other items on the subject.]
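    For illustration only - none of this is Microsoft's actual design, and all names are hypothetical - here's a minimal sketch of the idea as CNET describes it: a credential that carries nothing but an age claim and proof of authenticity (an issuer signature), and that releases even that only after a PIN check.

```python
import hashlib, hmac, json

ISSUER_KEY = b"demo-issuer-secret"  # stand-in for an issuer's signing key

def issue_credential(over_13, pin):
    """Issue a minimal credential: one age claim, a PIN hash, and an issuer
    signature - no name, address, or other personal data."""
    claim = {"over_13": over_13}
    body = json.dumps(claim, sort_keys=True).encode()
    return {
        "claim": claim,
        "pin_hash": hashlib.sha256(pin.encode()).hexdigest(),
        "signature": hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest(),
    }

def present_credential(cred, pin):
    """Release the claim only if the holder knows the PIN and the issuer's
    signature checks out; otherwise release nothing at all."""
    if hashlib.sha256(pin.encode()).hexdigest() != cred["pin_hash"]:
        return None
    body = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return cred["claim"] if hmac.compare_digest(expected, cred["signature"]) else None

cred = issue_credential(over_13=False, pin="4321")   # e.g., a credential issued for a 12-year-old
print(present_credential(cred, "4321"))  # {'over_13': False}
print(present_credential(cred, "0000"))  # None - wrong PIN, nothing released
```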


    Tuesday, July 01, 2008

    Data insecurity on the rise

    Here's one reason why verification of online children's ages or identities is a slightly scary concept: data breaches are up. What does this have to do with online kids? If age verification is required of Web sites, children's personal information would have to be stored in a database somewhere, so that Web sites' "bouncers," or ID-checking technology, will have a collection of information against which to check the info kids provide. The problem is, "businesses, governments and universities reported a record number of data breaches in the first half of this year, a 69% increase over the same period in 2007," Washington Post security writer Brian Krebs reports, citing research from the San Diego-based Identity Theft Resource Center. Interestingly, hacking was "the least-cited cause of data breaches in the first six months of 2008.... Instead, lost or stolen laptops and other digital storage media remain the most frequently cited cause of data breaches." See also "UK data security breach & kids." And I seem to be seeing more news of data breaches all the time, the latest involving Google employees - see CNET.
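    To see why that design makes privacy people nervous, here's a toy sketch (hypothetical names, not any vendor's actual system) of what the "bouncer" check implies: the site-side verifier can only work if someone, somewhere, maintains a store of children's names and birthdates to compare against - exactly the kind of database that keeps showing up in breach reports.

```python
from datetime import date

# The uncomfortable prerequisite: a stored registry of children's records.
# (Toy in-memory dict here; a real deployment means a database someone must secure.)
CHILD_REGISTRY = {
    ("jordan", "smith"): date(1996, 3, 14),
}

def bouncer_check(first, last, claimed_birthdate):
    """An ID-checking 'bouncer' can only confirm what the registry already holds."""
    on_file = CHILD_REGISTRY.get((first.lower(), last.lower()))
    return on_file is not None and on_file == claimed_birthdate

print(bouncer_check("Jordan", "Smith", date(1996, 3, 14)))  # True - only because the record exists
print(bouncer_check("Jordan", "Smith", date(1995, 1, 1)))   # False
```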


    Wednesday, November 21, 2007

    UK data security breach & kids

    A massive security breach involving the personal information of "virtually every child in Britain" has occurred in the United Kingdom, The Guardian reports. It "could expose the personal data of more than 25 million people - nearly half the country's population," CBS News reports. The data concerns "families with children, including names, dates of birth, addresses, bank account information and insurance records." Two computer disks containing the data were sent via ordinary mail between two government departments and were apparently lost in transit. The breach was announced to the House of Commons yesterday by Alistair Darling, Britain's equivalent to our treasury secretary. He said this wasn't the first time Britain's tax agency had experienced such a breach. There was, however, no evidence that the data had fallen into criminal hands. This is a clear illustration of how risky it would be to have a national database of children's personal information in the US, which is what would be required in order to establish children's age verification online (for more on this, see "Social networker age verification revisited").


    Friday, October 26, 2007

    Social networker age verification revisited

    Parents often ask us why on Earth social-networking sites can't just block teens altogether - verify their ages or something. After all, it's all over the US news media that attorneys general are calling for age verification. Well, we have been replying for months that it just wouldn't work (e.g., see "Verifying kids' ages: Key question for parents"). But don't take it from us this time. The UK-based Financial Times has an editorial on this saying the exact same thing. Why wouldn't it work? "The practical problems are considerable. Fourteen-year-olds do not have drivers' licences and credit cards that can be checked via established agencies. The sites could insist on verifying the parents, but anyone who believes that a teenager will not 'borrow' his father's Visa has never been 14 years old." Also, think about how hard it is to accurately verify kids' ages in person, at the door of a nightclub, much less over the anonymous Internet with no physical evidence or view of the person's face.

    And then what would the result be? "The consequences of successful age verification, meanwhile, would be even worse," the FT continues. "Minors would be driven off mainstream sites such as MySpace and Facebook and on to unaccountable offshore alternatives or the chaos of newsgroups," which is what we tell parents all the time - because kids are experts at finding workarounds. "There they would be far more vulnerable than on MySpace, which now makes efforts to keep tabs on its users." In other words, parents probably want their kids in sites that have customer service departments that actually respond to abuse reports and parents' complaints. MySpace has an email address just for parents (parentcare@myspace.com), as well as ones for educators and law enforcement. [For more on age verification, see a blog post from Adam Thierer of the Progress & Freedom Foundation, complete with a podcast he did with other experts on the issue. The FT also this week published a summary of where social-networking sites, attorneys general, and all the rest of us are on all this.]


    Wednesday, May 16, 2007

    Can online kids be verified?

    This question keeps coming up because politicians keep insisting it has to happen and ID verification professionals keep saying it’s not possible. And it’s not, actually, unless or until personal information on minors is as available as personal information on adults. By personal info, I mean credit records, mortgages, mother’s maiden name, social security number, etc., all pulled together in the kind of database credit bureaus have. There is no such database on minors for any ID or age-verification technology to check against. And does this society, particularly parents, want such a national database on children to exist, given all the database hacking and theft in the news in recent years and given the attractiveness of squeaky-clean minors’ credit records to ID thieves? In fact, there is a federal law that protects children’s personal info in the US. So, certainly, online adults’ ages and identities can be verified, but not children’s. Jacqui Cheng recently blogged about this in ArsTechnica.com, referring to a one-day conference that thoroughly vetted the options and aired many perspectives, hosted by the Washington-based Progress & Freedom Foundation; here’s the transcript. And speaking of children’s privacy and databases, check out “Half a million kids’ DNA on UK police database” in the UK’s The Register. It reports that the DNA data of 4.1 million people are now in the database, more than 520,000 of them people under 16. Britons can have their info removed (and presumably their children’s), but only 115 did last year. The comments at the bottom of the article offer a good look at the privacy implications.


    Friday, May 11, 2007

    State laws on age verification

    Though people on both sides of the social Web’s age-verification debate have great intentions, opponents really seem to know more about what’s actually possible than proponents do. Proponents say things like, “if we can put a man on the moon, we can verify someone’s age,” the New York Times reports in an article about states proposing legislation requiring verification. Opponents or skeptics view it as overkill, what I’d call a baby+bathwater result (one opposing state legislator told the Times such a law is more like a sledgehammer where a “small mallet” would work better). ID verification companies say it’s not possible without a national database of children’s personal information (civil liberties and consumer privacy organizations would have some things to say about that – not to mention many parents). Child-safety advocates say it could potentially provide a false sense of security for parents and greater risk – if kids simply go to another site parents don’t know of that is less responsible to public opinion and parents’ requests than MySpace or other popular sites laws would cover. What the article doesn’t get into is all that’s in the bathwater these proposed laws are trying to address but don’t even begin to touch (see “Predators vs. cyberbullies” as well as “Verifying online kids’ ages”).
