Complexity Theory: Facebook on privacy, part 1: ‘You agreed to it’

Opinion by Adrian Liu
Nov. 1, 2019, 2:00 a.m.

Part of “Complexity Theory,” a new column on the tangled questions of our technological age.

Back in 2012, during a Senate Judiciary Committee hearing on facial recognition and privacy, then-Senator Al Franken asked Facebook’s manager of privacy and public policy the following question: “Why doesn’t Facebook turn its facial recognition feature off by default and give its users the choice to turn it on?”

At the time (Facebook changed the default in 2019), the company had been automatically scanning uploaded pictures with its facial recognition software, identifying individuals and suggesting to their friends that they be tagged so others would know they were in the photograph. Franken worried that Facebook was violating users’ privacy by identifying them to other users without their affirmative consent.

Rob Sherman, the privacy and public policy manager, saw no privacy violation. To him, Facebook was giving its users complete control of their information: by signing up for Facebook and becoming friends with other people, users implicitly opted in to the facial recognition feature. “Facebook itself is an opt-in experience,” Sherman replied to Franken. “People choose to be on Facebook because they want to share with each other.”

Was Facebook violating users’ privacy? Let us not reflexively respond “yes, and why would we expect anything else from Facebook?” or “no, anyone who expects privacy online is a dunce.” Instead, let us consider: how should we think about privacy standards, and by such standards, how did Facebook fare? What we find, I argue, is that Facebook used a notion of privacy tailored to its own practices, one out of line with our considered intuitions about what it should mean to protect privacy.

Control was the focal point of Sherman and Franken’s privacy discussion, and this makes sense: when my privacy has been violated, it’s not simply because someone knows some information about me. It’s because someone knows information about me that I don’t want them to know and didn’t permit them to know. My privacy has been violated because I had no control over what was shared. Accordingly, many political thinkers have agreed that, at its most basic, the right to privacy is the right to control information about oneself.

Not only did Sherman and Franken take on board this idea of privacy, but they implicitly invoked an intuitive paradigm of privacy known as “informed consent.” Informed consent essentially treats privacy as control, but adds the requirement that a user must know what they’re getting into. So if you agree to a tedious, jargon-laden privacy policy without reading it, you technically have control, but you’ve made a decision without knowing all the relevant information. According to the informed-consent paradigm, then, transparency and choice are the two necessary ingredients of control over one’s information.

Sherman’s statements to the Judiciary Committee demonstrated a sensitivity to the informed-consent paradigm of transparency and choice. In line with Facebook’s assertion on its website that “you have control over who sees what you share on Facebook,” Sherman emphasized that Facebook users can choose whom they befriend and with whom they share their information. Tag suggestions are not made, Sherman noted, “unless both parties to the relationship had already decided to communicate with one another on Facebook.”

Thus, the facial recognition process did not expose any user’s information to parties with whom the user had not already agreed to share information. Facebook’s responsibility regarding choice, by Sherman’s testimony, was to let the user choose which parties to share information with, and not to share that information with anyone else.

But Facebook and Sherman interpreted the meaning of “choice” in a way that allowed them to make unjustified assumptions about what users agree to when they share information with others. Specifically, Facebook’s interpretation of an “opt-in friend relationship” relied on a notion of privacy under which privacy means “you control which people see your information,” but not “you control how people see your information.”

Asked during the hearing why Facebook implemented an opt-out rather than opt-in framework for facial recognition, Sherman emphasized that Facebook itself was an opt-in experience, and that friend relationships on Facebook are also opt-in. Facebook did not provide a separate opt-in choice for facial recognition, he explained, because facial recognition does not share data with any individuals other than those with whom the user has already consented to share data. That consent is given when the user opts in to using Facebook and opts in to specific friend relationships.

Scholars have noted, however, that control is not limited to deciding who is allowed to access information about you — it also involves controlling the manner of information dissemination. Andrei Marmor, for example, observes that a “violation of a right to privacy is not simply about what is known but mostly about the ways in which some information is obtained.”  

If someone sees a message stored on your phone, they have not necessarily violated your privacy; it depends on how they came to see it. If they woke your phone and read the message on your lock screen, they violated your privacy; if you took a screenshot and sent it to them, they did not. Facebook’s definition of privacy, focused solely on users’ control over who sees what they share, ignored how the information is shared. This led Facebook to treat as identical cases that our intuitions tell us to distinguish.

The problem is not unique to facial recognition in tag suggestions; it is inherent to tags themselves. Yes, tag suggestions exacerbate the problem by prompting users to tag others, and they have some edge-case issues of their own. But the basic issue is that opting into a friendship does not automatically mean that one has consented to being tagged.

For instance, suppose I have agreed that my Facebook friends can see photos that I’m in. I have thus agreed that if I am tagged in a photo and a friend goes to the uploader’s profile, they will be able to see the photo. But have I also agreed that whenever I am tagged in a photo, the photo can appear in the news feeds of all my friends, with the tagline “Adrian was tagged in Andrew’s photo”? In consenting to my friends’ seeing photos of me if they happen to find them, it hardly seems that I have consented to their being actively notified that those photos exist.

Thus, opting into friendships does not entail opting into tags or tag suggestions, because these change how information is disseminated. Sherman’s justification for not having a separate opt-in for tag suggestions makes sense only if one accepts a weirdly narrow definition of privacy on which it is sufficient that I control who sees my information, even if I cannot control how they see it.

Seven years after the hearing, in September 2019, Facebook announced a change to facial recognition: it would be off by default, making it an opt-in feature. But many other features on Facebook remain opt-out at best. In Messenger on iPhone, for example, an active status showing when users are online is on by default, and the application discourages users from turning it off. Notifications can only be turned off temporarily (unless one takes an excursion into the settings app), and there is no option to disable read receipts. As a Messenger user, I live with these features and am resigned to them. But that doesn’t mean I’ve agreed to them in any meaningful way.

It’s not enough to be able to control who can see what we share — we should also have a choice in the means by which others see it. So long as tags and tag suggestions were on by default, Facebook assumed that, by opting in to sharing certain things with our friends, we also opted in to allowing Facebook to present that information to them in any way it wished. That notion is simply too far from common sense, and too convenient for Facebook, to be overlooked.

Choice wasn’t the only word that Facebook interpreted in a conveniently convoluted manner: transparency, the other half of informed consent, also proved complicated at the hearing. That subject will be taken up in the next installment of “Complexity Theory.”

Contact Adrian Liu at adliu ‘at’ stanford.edu.

Adrian Liu '20 was Editor of Opinions in Volumes 257 and 259.
