- In December 2009, Facebook changed its website so that certain information users may have designated as private – such as their Friends List – was made public. It didn't warn users that this change was coming or get their approval in advance.
- Facebook represented that third-party apps that users installed would have access only to the user information they needed to operate. In fact, the apps could access nearly all of a user's personal data – data the apps didn't need.
- Facebook told users they could restrict sharing of data to limited audiences – for example with "Friends Only." In fact, selecting "Friends Only" did not prevent their information from being shared with third-party applications their friends used.
- Facebook had a "Verified Apps" program & claimed it certified the security of participating apps. It didn't.
- Facebook promised users that it would not share their personal information with advertisers. It did.
- Facebook claimed that when users deactivated or deleted their accounts, their photos and videos would be inaccessible. But Facebook allowed access to the content, even after users had deactivated or deleted their accounts.
- Facebook claimed that it complied with the U.S.- EU Safe Harbor Framework that governs data transfer between the U.S. and the European Union. It didn't.
I remember very well when the first one happened. It pissed me off because someone got to see information I thought was private, after I had gone to the trouble of working through Facebook's previous 50 privacy settings.
My family had a major falling out with my father, to the point where my sister and I had a restraining order against him when I was 7 and she was 9. I haven't seen him or talked to him in 16 years. After Facebook deleted my privacy settings, I had a message from him sitting in my inbox the next day. My sister called me, crying, because she had gotten the same message. Both of our accounts were previously unsearchable on the site, with all of our data private. As soon as Facebook rescinded that privacy, our father could tell what cities we live in (in one of our cases, a _very_ small town), that my sister had gotten married, that we had both changed our first and last names, and that one of us had a child. He found me through my mom's friends list, and he found my sister through mine. All of which were previously hidden.
Nothing came of it besides an unwanted "please call me" message from him, but it's not a far reach from there to actually being located physically and confronted. We sent this man to jail and changed our names to keep away from him, and Facebook, in spite of their "privacy" settings, let him get a glimpse back into our lives.
An article a few years ago talked about what privacy means to rich kids who have everything. Go to a $40k per year boarding school that mummy and daddy pay for? What do you have to hide? Not much.
Then the kids get older and decide "no secrets for anybody!" What's the harm in sharing your life? It's a net win. If you see James got a new turbo jet ski, won't you want to work harder to get one too? Sharing can save the world.
They can't seem to imagine a time when you might want to keep a secret. Maybe you're helping someone avoid being found. Maybe you're helping someone through a bad time in their life. Then, with a profit-oriented privacy change, you end up in the parent commenter's situation.
The world view of the people in charge isn't aligned with "normal." We'll see PR and lip-service press releases, but the steamrolling of normal people will continue.
On the contrary, don't rich people have a lot more to hide?
Just knowing that your kid is enrolled at Le Rosey signals to a criminal that she is worth kidnapping... and the last status update shows her headed to Ibiza for spring break. In contrast, nobody cares if another poor kid "likes" Justin Bieber. Over-sharing seems a lot riskier for the rich (and famous). It would be interesting to read the article you mentioned; it's hard to imagine an argument that the rich are not more concerned with privacy than regular people.
Kidnapping for ransom is rare in the United States. It's not good risk/reward. The family of the kid enrolled at Le Rosey is probably living beyond its means, and lacks sufficient credit to pay enough ransom to cover the costs of a kidnapping operation.
The other important point: nobody older than 25 exists (well, nobody older than roughly the founder's age exists – remember all that "Never hire anybody over 27!" advice?). Kids don't have major privacy concerns; therefore, everybody should have no privacy.
I would donate to your ACLU legal action against FB. I am consistently editing myself on FB, even in private groups, because of a sinking suspicion that with a flick-of-a-switch it can all be public.
I'm not sure that any laws were broken. It's not like we kept the restraining order active for 16 years, and his message wasn't really harassment – more like attempted atonement. At any rate, I do wish companies were legally held to their own site rules. If Psystar breaks Apple's TOS, they get sued out of business, but if Facebook breaks its own privacy settings, it gets a slap on the wrist from the FTC.
If you do decide to file, please let us know. The entire situation sounds like a nightmare, and I wonder how many others have faced this issue. On a more encouraging note, I recently had to report a fake FB profile that was used to harass a family member, and it was removed promptly, which surprised me.
Shit, that sucks – people are not yet aware of the damage a lack of online privacy can bring. And I fear that, because of inertia, by the time the damage becomes visible, it will be too late.
I kind of wish the similar Google Buzz incident had gotten more press than it did. A boy who wasn't even in a balloon was on the news for days, but the case of a woman who was being harassed on Buzz by her abusive ex was lost in the public mind after a few minutes. Both the Buzz and Facebook cases were decided by the FTC very recently (the Buzz case was settled in October of this year). Perhaps we've reached a turning point in online privacy?
But then, looking at things like Protect-IP and SOPA, perhaps the regulatory answer is to just do away with privacy altogether.
"- Facebook represented that third-party apps that users installed would have access only to the user information they needed to operate. In fact, the apps could access nearly all of a user's personal data – data the apps didn't need."
Privacy/ethics issues aside, from a pure developer standpoint, isn't this just a feature? Where do we draw the line between functionality and privacy?
User A allows user B to see her data via "Friends only." User B runs app X, whose functionality includes interacting with friends. Let's say it shows on a map where each of your friends lives. App X can see that data for the purpose of providing functionality.
Yes, I know that by strict definition this conflicts with "friends only." You now have "friends and the application executable code only." But how is this different from, say, Gmail auto-scanning my e-mail to show ads? Is it because I trust Google and don't trust $random_fb_app_developer?
Likely one concern is that this third-party developer can disrespect (or actually, not even know about) that "friends only" setting and inadvertently make the data visible to other parties.
(Disclaimer: Don't get me wrong, I loathe/distrust most FB apps as much as the next person. Just trying to think from an honest developer's shoes here.)
That's one of the two bullet points where I can see a reasonable case for Facebook's side. It's slightly muddied because the app is running on Facebook where they control it, but I can sort of see an argument for apps' actions conceptually being actions of the user running them.
I'm somewhat sympathetic to Facebook on the other app-related claim as well, "Facebook represented that third-party apps that users' installed would have access only to user information that they needed to operate." Yes, Facebook could've done better on that, but fine-grained security is something nearly nobody has solved.
Privacy/ethics issues aside, from a pure developer standpoint, isn't this just a feature? Where do we draw the line between functionality and privacy?
This is a reasonable question. Where I personally would draw the line is, "functionality" implies to me that the app would only access user info when it needs to do so for some FUNCTIONAL purpose. If the app does not need the data and is not doing anything legitimate with it, then obviously, the user's privacy should be respected and said info should not be accessed.
This sentence: "including giving consumers clear and prominent notice and obtaining consumers' express consent before their information is shared beyond the privacy settings they have established" was the killer to me. It forces FB to be much more transparent about what it does with data and could force a significant shift in its revenue model.
The trouble is, it only "forces" Facebook to do anything if there will be meaningful sanctions if they don't.
They've just been found, in a formal investigation, to have broken numerous fundamental privacy laws across several continents, and been punished with... absolutely nothing, as far as I can tell.
All this has done is teach them that they are above the law and should feel free to continue doing whatever they like without regard to the consequences for the hundreds of millions of real people who are counting on them to behave responsibly.
Those "audits" will be toothless, and the only effect we'll ever see from them is the occasional headline. Read up on Microsoft since the late 90s or IBM since the early 80s, among many others.
The FTC's presence in FB's business decisions will only lend an air of legitimacy to what would otherwise cause problems with their users, and transforms FB's business model from "users are dumb fucks" to "Mom said I could."
No, it just means that changes to privacy that affect existing data should be announced beforehand. However, I doubt that they'll tell you which data will be affected.
Have fun scrolling through your entire FB history to update permissions!
Not really. If they provide clear and transparent notice to users on what they are doing with their data, then what they are doing is fine -- "caveat emptor."
Whatever trouble Facebook has with the FTC, it will face roughly 300 million times that trouble in Europe, where the rules are much stricter and the data protection authorities have much greater power to act on behalf of the general public and individual interests.
That whole using-personal-information-to-burn-people thing is still (barely) in living memory in Europe. There have been riots in Germany over the government trying to take a census - with good reason.
Privacy is more important than a lot of shallow people imagine.
Well, the U.S. has its own history of people who shoot at census takers, and so on. From a legal perspective, the EU and member state implementing laws are: (1) more protective of the individual over the corporation than US laws, and (2) fairly onerous and expensive for companies to comply with. In fact, compliance is a kind of red herring, since many of the data protection rules in place are ambiguous or nonsensical. Personal privacy is basically a global policy experiment at the moment.
True, though I have to admit that Neelie Kroes seems to have plenty of both executive authority and the willingness to exercise it in a muscular fashion. It helps that the EU privacy protections are closer to the constitutional than the legislative level.