In the early years of digital communication, people struggled to encapsulate the full range of human emotion in their messages. Texts, emails and social media posts were often cryptic; after all, it felt unnatural to convey tone without vocal inflection. Was that text supposed to be sarcastic? Is he mad at me? What’s the difference between “hey,” “Hey!” and “heyyy”?
Quickly, though, humans adapted. Emoticons, filler words and creative punctuation led to new norms for introducing emotional meaning into plain text, 140 characters at a time. For instance, perfect spelling and grammar suggest a formal context with acquaintances but an aura of hostility with close friends. Tacking “IMHO” (in my humble/honest opinion) onto the end of a comment marks a statement as personal belief. The endlessly versatile “thinking” emoji is a personal favorite of mine, furrowing its brow to signal skepticism or doubt. These trends represent a new form of social literacy, requiring netizens to read between the lines the same way we intuit vocal tones and hand gestures when talking face to face.
Today, I rarely run into problems conveying tone online. In fact, I often feel more comfortable with this arsenal of digital signifiers than I do navigating unfamiliar communicative situations in real life. Perhaps I’m confirming social media critics’ worst fears about Gen Z, but there’s no denying the wealth of information embedded in our chat logs and Instagram captions.
Furthermore, companies now realize the value of understanding how feelings shape technology usage and, reciprocally, how tech influences our feelings. The ability to quantify and analyze these emotional data points has huge potential for media companies, marketers and government officials alike.
It’s no surprise that Facebook is at the forefront of emotional analytics research. In 2012, Facebook data scientists and researchers at Cornell University embarked on what became a notoriously controversial experiment. Over the course of a week, Facebook tweaked the News Feed algorithm to show some users posts with more positive language while others saw more negative content. The researchers then evaluated these individuals’ own posting activity, discovering that “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.”
Although legal, the experiment was met with outrage once it came to light: users felt manipulated, turned into guinea pigs without their consent. These reactions were understandable. To many, one’s social media profile is a digital manifestation of the self. And if corporations can alter users’ emotions and beliefs on a wide scale, they wield incredible power over those users’ identities.
On the other hand, when Facebook rolled out reactions in 2016, many users welcomed the ability to add nuance to their online activity — and avoid the awkwardness of hitting “Like” on “RIP Grandma.” Reactions didn’t replace the iconic Like button, but Love, Haha, Wow, Sad and Angry reacts were added to Facebook’s emotional toolkit.
These reactions (and the accompanying API) enable a whole new way to measure and evaluate user engagement. Although natural language processing isn’t mature enough to understand the emotional subtext of modern digital vernacular, counting the number of reactions on a post is easy. An “Angry” react to political news gives Facebook information about your ideological leanings. A “Haha” on a clever Wendy’s ad suggests that their posts are resonating with your demographic. When aggregated, reaction data provides companies with a vivid roadmap for improving their content and eliciting desired reactions.
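Counting really is that easy. For the curious, here’s a minimal sketch (in TypeScript) of how one might tally a post’s reactions through Facebook’s Graph API. The token, post ID and API version are placeholders, and the exact parameter names follow my reading of the documented Reactions edge; treat them as assumptions, not a verified integration.

```typescript
// Hypothetical sketch: tally per-type reaction counts for one post via a
// Graph-API-style "reactions" edge. Token, post ID and version are placeholders.
const ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"; // placeholder credential
const POST_ID = "1234567890";             // placeholder post ID

const REACTION_TYPES = ["LIKE", "LOVE", "HAHA", "WOW", "SAD", "ANGRY"] as const;

async function reactionCounts(postId: string): Promise<Record<string, number>> {
  const counts: Record<string, number> = {};
  for (const type of REACTION_TYPES) {
    const params = new URLSearchParams({
      type,                       // filter the edge to one reaction type
      summary: "total_count",     // ask for the aggregate count
      limit: "0",                 // skip individual reactors; we only want totals
      access_token: ACCESS_TOKEN,
    });
    const res = await fetch(
      `https://graph.facebook.com/v2.12/${postId}/reactions?${params}`
    );
    const data = await res.json();
    counts[type] = data.summary.total_count;
  }
  return counts;
}

// Example: logs something like { LIKE: 120, LOVE: 14, HAHA: 3, ... }
reactionCounts(POST_ID).then(console.log);
```

A dozen lines of aggregation, and a brand’s social team has a per-emotion scoreboard for every post it publishes.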
So what’s the difference? Why are we okay with emotional data being collected in some instances, but not others? It all comes down to who is being empowered to express themselves: the user or the platform.
Platform-first design prioritizes the company’s interests. It’s a numbers game aimed at increasing total time spent on the site. That often means filling our feeds with heated political debates and glamorous portraits of an ideal life; in other words, addictive content that burns away hours in anger-fueled comment wars or spirals of insecurity and FOMO. These tactics might generate more clicks, but they breed resentment toward social media in the long run.
User-first design, meanwhile, encourages meaningful interactions, community-building and, above all, browsing on our own terms. It prizes personalization and transparency, suggesting that people should be empowered to cultivate their own emotional experiences on social media. After all, using Facebook to complain is not a universal evil; in fact, it can be pretty cathartic. The problem arises when we can’t consent to having our emotions shaped by a black-box algorithm.
Fortunately, recent public pressure on social media companies seems to be working. It began with small changes, like Twitter swapping its starred tweets for a friendlier heart icon. In January of this year, Facebook announced it would revise its approach and begin foregrounding “time well spent” — even at the cost of total usage time.
It’s still unclear what “time well spent” entails, but I’m cautiously optimistic. Maybe it means letting users toggle their News Feed to show only family content or only news articles. Maybe it looks like native extensions that add a five-second posting delay (to encourage thoughtfulness) or hide the number of Likes (to avoid groupthink). Or maybe it’s a Twitter-style option to keep certain phrases or images, like depictions of police brutality, from appearing in your feed. Third-party developers and UX designers are already working on these ideas — and more! — and Facebook would do well to pay attention.
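To show how lightweight some of these ideas could be, here’s a hypothetical sketch of a browser extension’s content script (again in TypeScript) that hides Like counts. The CSS selector is an invented stand-in; a real extension would have to track Facebook’s actual, ever-shifting markup.

```typescript
// Hypothetical content script: hide Like counts to discourage groupthink.
// The selector below is an assumption for illustration, not Facebook's real markup.
const LIKE_COUNT_SELECTOR = '[data-testid="like-count"]'; // assumed selector

function hideLikeCounts(root: ParentNode = document): void {
  root.querySelectorAll<HTMLElement>(LIKE_COUNT_SELECTOR).forEach((el) => {
    el.style.visibility = "hidden"; // keep layout intact, just blank the number
  });
}

// The feed loads posts lazily, so re-run whenever the DOM changes.
new MutationObserver(() => hideLikeCounts()).observe(document.body, {
  childList: true,
  subtree: true,
});

hideLikeCounts();
```

Point the same pattern at share counts or trending widgets and you get a whole family of “browse on your own terms” tools.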
Emotions are a new form of digital currency. Likes, upvotes and retweets bestow social capital on recipients, signaling the products we prefer, the issues we care about and the communities we’re part of. The onus, therefore, is on social media companies to equip users with the power to design their own digital experience. I’ll happily give a “Love” react to that.
Contact Jasmine Sun at jasminesun ‘at’ stanford.edu.