Mistreated moderators and the pervasive violence of the internet

Opinion by Sarah Myers
March 6, 2019, 1:00 a.m.

Recently, The Verge published a look inside one of Facebook’s deals with a content moderation contractor. Facebook hires these moderators to screen posts that users report for violating its community standards; the moderators look at reported posts and decide whether to delete them or leave them up. Author Casey Newton was able to convince some former Facebook moderators, who are generally prohibited from discussing their work by NDAs, to tell her about their experiences. Their stories are deeply upsetting: they are routinely forced to witness extreme violence, constantly monitored and held to incredibly high standards for speed and accuracy. Accuracy is determined by how often moderators’ decisions agree with those of slightly more senior moderators, who are given a random sample of a regular moderator’s processed posts and asked to make their own judgments. At Cognizant, for example, moderators must be “accurate” at least 95 percent of the time. Within the Cognizant work site Newton examines, some moderators have responded to constant exposure to the worst of Facebook by buying into conspiracy theories: one person genuinely believes the Earth is flat, another has become convinced that 9/11 was not a legitimate terrorist attack and another denies that the Holocaust took place.

Reading Newton’s piece was odd to me because it was eerily similar to the experiences of censors in China, which I am currently researching for a literature review. China has made all website owners liable for the content on their sites, so the vast majority of censorship is actually performed by employees of social media companies. Website employees tasked with moderating content at Beyondsoft, a Chinese tech services company contracted by social media platforms, and at Cognizant, an American company contracted by Facebook, are required to lock their phones in small lockers while at work and to perform content moderation on computers with limited capabilities. Both companies ask workers to screen a dauntingly high number of posts per day, although Beyondsoft’s targets are higher (it’s difficult to compare exact numbers because Facebook posts may be longer than the ones Beyondsoft screens).

There are, however, some interesting differences between Facebook moderators’ work and that of Chinese social media censors. Although both companies have training programs, Beyondsoft’s must first teach employees about the very information they are hired to censor; many employees learn about the 1989 Tiananmen Square demonstrations for the first time during Beyondsoft’s training. Chinese censors are required to have in-depth, detailed knowledge of the most controversial parts of Chinese Communist Party (CCP) history, and they are expected to use that knowledge to censor social media in order to protect the CCP.

Yet the cognitive dissonance of learning suppressed history only in order to suppress it might be less overwhelming than the trauma Facebook’s moderators experience. Newton reports that many of her sources found their work depressing, anxiety-inducing and horrifying. It is apparently not uncommon for employees to use alcohol, marijuana or other drugs to get through a day of screening posts. Dark humor, including jokes about self-harm, is common at Cognizant.

Last year, for my PWR 1 class, I wrote a paper on white supremacy on 4chan. A surprising number of mass shootings are committed by individuals, usually young cisgender white men, who have spent a great deal of time on websites like The Daily Stormer or 4chan’s /pol/ board (a word of warning: both sites contain graphic and disturbing content, and I would not recommend visiting them). Dylann Roof credited online white supremacy with inspiring his actions. Perhaps foolishly, I attempted to gain some insight into why white supremacy appeals to people, and even convinces some of them to commit terrible crimes, by reading and analyzing content from 4chan’s /pol/ board.

Because this project was of my own design, and I was able to choose when and how to read the messages I collected, my experience was likely a great deal less severe than that of Cognizant employees. It was still frightening and deeply unpleasant. I learned a new vocabulary of hate, an entirely new language of slurs and insults designed to reinforce bigotry. I learned that white supremacists are at once creative in their expressions of hatred and utterly unoriginal in the content of their ideas.

I did, to some extent, accomplish my goal. I learned that these communities seem to offer users a sense of power, uniqueness and support, as long as the user is male and white. They offer a prepackaged sense of purpose (protecting the white race) and identity (as a member and protector of the white race). But I also found myself constantly sad, anxious and frustrated; finishing the paper offered an enormous sense of relief and alleviated most of my malaise, but I can’t quite leave it behind.

I haven’t visited 4chan or any of the other sites I researched in nearly a year. Nevertheless, I cannot forget that every one of the posts I read was written by a human being who can vote and buy a gun. I am living in the same America I was before this project, but now I am playing a never-ending guessing game. I look around and try to find the /pol/ users, The Daily Stormer readers, the people who spew hateful things online and then go to the grocery store as if nothing’s wrong. I can’t find them, but now I know they must be somewhere, and I can’t quite stop looking.

I don’t know how to fix our internet problem. Bigotry and violence have permeated every platform, from 4chan to Facebook, and asking people to monitor this deluge of posts means subjecting human beings to nonstop hate. It’s clear that Facebook should be paying people more, pressuring them less and providing better mental health services. But that doesn’t really fix the problem. The source of moderators’ trauma will not change, no matter how well Facebook treats them.

At the risk of sounding un-American, I might suggest learning from China. Many Chinese social media platforms preemptively prevent people from posting content that contains certain words; others automatically delete posts containing those words. Facebook and other companies could simply ban obviously offensive terms (thanks to my excursion into 4chan, I have a long list of terms that no one except white supremacists uses). Freedom of speech is important, but it only constrains the government: the government cannot censor private citizens’ speech, but private companies can decide what appears on their platforms. Private companies are not under any obligation to provide a platform for bigotry.
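To make that mechanism concrete, here is a minimal sketch in Python of the kind of keyword filter described above, written under my own assumptions; the banned terms, function names and “reject” behavior are hypothetical placeholders, not a description of any platform’s actual system.

```python
# Minimal sketch of keyword-based moderation: posts containing banned
# terms are blocked before publishing (or, in other systems, deleted
# afterward). The terms below are hypothetical placeholders.
BANNED_TERMS = {"slur_one", "slur_two"}

def contains_banned_term(post: str) -> bool:
    # Case-insensitive check of each word in the post against the list.
    return any(word in BANNED_TERMS for word in post.lower().split())

def moderate(post: str) -> str:
    # Preemptive blocking: refuse the post before it is ever published.
    if contains_banned_term(post):
        return "rejected"
    return "published"

print(moderate("an ordinary post about dinner"))  # -> published
print(moderate("a post containing slur_one"))     # -> rejected
```

Even this toy version makes the limitation obvious: filtering on exact words is easy to evade with misspellings and coded language, which is part of why platforms still lean so heavily on human moderators.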

Ultimately, though, the internet seems to be an expression of society — particularly, the parts of society that people don’t like to bring up face-to-face. If social media platforms want to prevent the worst parts of society from running rampant on their sites, they must either employ moderators, and subject those moderators to traumatizing posts, or somehow eradicate bigotry and violence in society as a whole. Looking at that choice, it’s not hard to see why Facebook chose the moderators.

 

Contact Sarah Myers at smyers3 ‘at’ stanford.edu. 

Sarah Myers '21 is pursuing a BA in International Relations while also studying Physics, Mandarin, and German. She enjoys writing about politics, ethics, and current events. She spends her free time reading and convincing herself that watching Chinese television counts as studying Mandarin.
