Complexity Theory: Facebook on privacy, part 2: 50 shades of blue

Opinion by Adrian Liu
Feb. 3, 2020, 1:00 a.m.

Part of “Complexity Theory,” a new column on the tangled questions of our technological age.

I suggested in a prior piece on Facebook and privacy that Facebook’s idea of privacy demonstrated a sensitivity to the informed-consent paradigm. Under this paradigm, transparency and choice are the two necessary ingredients of having control over one’s information, and having control over one’s information is in turn the central feature of privacy. My privacy is protected if I am given a choice over how my information is shared, and if the conditions under which I make my choice are reasonably transparent (i.e., I know what I’m getting into).

I argued that Facebook’s notion of “choice” was conveniently framed to allow the company to make unjustified assumptions about how users are agreeing to share information with others. Facebook, in pointing to an “opt-in friend relationship,” relied on a notion of privacy in which privacy implied “you control which people see your information,” but not “you control how people see your information.” 

The opt-in friend relationship was not the only way Facebook defended its practices to the U.S. Senate back in 2012. Today, I consider another argument Facebook put forth: Users knew what was happening, Facebook said, and could always opt out.

Rob Sherman, then Facebook’s manager of privacy and policy, stressed that users could always opt out of facial recognition for tagging, and that Facebook was transparent both about its use of facial recognition and about users’ ability to opt out. Because users always had the ability to opt out of tag suggestions, this argument implied, continued use of tag suggestions without opting out constituted consent.

Let’s now consider that argument: that Facebook’s facial-recognition tag suggestions respected privacy rights because (1) users had the ability to opt out, and (2) Facebook’s website was transparent about its use of tag suggestions.

Sherman noted explicitly in the hearing that Facebook strives for transparency: “We do a lot to be transparent and to let people know about the [facial-recognition] feature.” In his only other mention of transparency during the hearing, he said: “I think we also work very hard to be transparent with people about how the feature works. We provide information about the [opt-out] tool on a lot of different places on the site.”  

Transparency, according to Sherman, did not necessitate notifying users about the feature through email, on-site notifications or other channels; indeed, Facebook does not proactively notify users of such features. Rather, according to Sherman’s testimony, it meant that information about the feature could be found at certain locations on the site.

Facebook’s understanding of its responsibility with respect to transparency, then, seemed to be that Facebook had a responsibility to make information about its tools available to users who cared enough to find out, but that it had no responsibility to ensure, through active notices, that users were aware of the information.

But Facebook’s idea of transparency, when paired with its use of an opt-out rather than opt-in approach, does not allow users to provide meaningful informed consent. Users may give implicit consent, but this consent is uninformed. Facebook’s implementation of privacy practices for tag suggestions is a paradigm case of what Daniel Solove, a George Washington University law professor who specializes in privacy, calls “the problem of the uninformed individual,” wherein users nominally give consent, but it is dubious that their consent is well-informed or considered.

Indeed, Facebook makes very little effort to ensure users are aware of how tag suggestions work. Even today, its information on face recognition resides in the data policy of its terms of service, its privacy basics page, its help center and its user settings pages — none of which is generally accessed in the normal course of a Facebook browsing session — and no links to any of these pages appear when tags are suggested. If someone manages to sign up for a Facebook account without reading the terms of service (not that this has ever happened), they can “opt in” to tag suggestions without ever learning how they work.

As Solove tells us, only a minuscule percentage of people read terms of service or end user license agreements, and most people do not even read privacy notices. While Facebook is transparent insofar as it makes information about tag suggestions available on its website, it makes no effort outside of the terms of service to ensure that users are aware of how the feature works.

Based on empirical evidence, then, it would be dubious for Facebook to claim that the bulk of its users are making informed decisions to allow tag suggestions. Facebook’s adoption of an opt-out system further allows users to stay in the dark about how tag suggestions work unless they expressly seek out information about the feature themselves.

If Facebook required users to opt in, users might be presented with the option of learning more about tag suggestions before agreeing. Likely, many would ignore this notice as well, but Facebook would then at least have a better claim to having explicitly presented users with information on how the system worked. 
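To make the two models concrete, here is a minimal sketch in Python. The names and the prompt text are hypothetical illustrations, not Facebook’s actual code; the point is only that under opt-out the feature is on before the user has seen any explanation, while under opt-in, enabling it is gated on an explicit prompt.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    tag_suggestions_enabled: bool

# Opt-out model: the feature is ON from the moment the account exists.
# The user may never encounter the setting or any explanation of it.
def create_account_opt_out(name: str) -> User:
    return User(name=name, tag_suggestions_enabled=True)

# Opt-in model: the feature stays OFF until the user is shown an
# explanation and explicitly agrees.
def create_account_opt_in(name: str, prompt) -> User:
    agreed = prompt(
        "Tag suggestions use face recognition to suggest your name "
        "on friends' photos. Enable? [y/n] "
    )
    return User(name=name, tag_suggestions_enabled=agreed)

if __name__ == "__main__":
    alice = create_account_opt_out("alice")
    print(alice)  # enabled, with no consent event ever recorded

    # Stand-in for a real UI prompt; a live site would record the answer.
    bob = create_account_opt_in("bob", prompt=lambda msg: False)
    print(bob)    # enabled only if Bob explicitly said yes
```

In the opt-out version, there is no moment at which the user is confronted with the feature at all, which is precisely the gap in informed consent described above.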

The problem of Facebook’s choosing an opt-out rather than an opt-in model appears in both of my criticisms of Facebook’s approach to privacy, and relates to two different ways in which Sherman characterized “opting in.” On one hand, Sherman implied that opting into a friendship on Facebook constitutes opting into facial recognition. I argued in the last article that this claim rested on an insufficiently robust notion of privacy, one that paid too little attention to the ways in which information might be shared with others.

Sherman also suggested, and this is the focus of the present article, that because Facebook was transparent about its practices and allowed users to opt out, failure to opt out implied consent. But as we have seen, Facebook was far from transparent enough, and thus failure to opt out does not imply meaningful informed consent. This conclusion reaches far beyond tag suggestions: any new feature Facebook releases could be justified by the same argument: we were transparent about it, and you can always opt out. But if the argument didn’t hold water in 2012, why should we think it does today?

Facebook is far from atypical in its transparency practices and its use of an opt-out rather than opt-in model. Consider how commonplace the phrase “continued use constitutes acceptance of the terms of service” has become; or Snapchat’s Snap Map, which, though opt-in rather than opt-out, shares location far more widely than one might expect; or the newer phenomenon of digital-assistant devices snooping on conversations (an ability which, in most cases, was noted in the relevant terms of service and could be turned off).

My criticism of these practices, then, reflects a general thesis held by privacy scholars: informed consent, the idea that privacy amounts to transparency plus choice, is not a viable privacy model for internet platforms. Facebook could improve its privacy practices by adopting a definition of privacy under which a user controls not only who can see what they share, but also how their information is disseminated. Doing so, however, would still not address the deeper problem: the informed-consent model does not, in practice, yield meaningfully informed consent to the use of users’ data.

Contact Adrian Liu at adliu ‘at’ stanford.edu

Adrian Liu '20 was Editor of Opinions in Volumes 257 and 259.
