“This is a tool, and like all tools it is a double-edged sword,” said computer science professor Fei-Fei Li of facial recognition technology (FRT) during a discussion on “Coded Bias.” “Coded Bias” premiered at the Sundance Film Festival earlier this year and follows Massachusetts Institute of Technology researcher Joy Buolamwini in her exploration of bias in FRT and other forms of artificial intelligence (AI). Packed with real-world examples, the documentary delivers a clear message: We are in trouble.
In the film, Buolamwini starts investigating bias in AI after noticing that a facial recognition program can’t identify her face. Because she is a woman of color, the technology registers her face only when she puts on a white mask. After further investigation, she detects a pattern: FRT and other AI technologies tend to be biased against historically marginalized people.
While the bias Buolamwini discovers in the lab is frustrating, the film makes it clear that the real concern lies in the day-to-day applications of these technologies. In Britain, we meet Big Brother Watch, a non-profit organization concerned with government surveillance. They explain how police forces in the United Kingdom are starting to use FRT, despite concerns that it can be biased, inaccurate and an invasion of privacy. We then see a member of Big Brother Watch intervene when a young Black boy is wrongfully stopped by the police due to FRT.
These concerns are just as pressing in the United States, according to the documentary. We hear stories about biased AI hiring systems and Facebook manipulating elections. We hear from frightened residents of an apartment building as they struggle to prevent the installation of FRT cameras. By grounding the abstract technologies of AI and FRT in demonstrations of real-world consequences, the film makes difficult material understandable to all.
“For most of us, this technology has absolute authority,” said filmmaker Shalini Kantayya. Kantayya and Li spoke at a panel discussion on “Coded Bias” hosted by the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and Stanford Arts.
In her remarks and throughout the film, Kantayya demonstrated that AI can be powerful and frightening, and needs to be regulated.
As Li explained, AI can contribute positively to society. For example, she discussed how FRT is being developed to help diagnose medical conditions in patients. However, she noted that a lot of AI is “new technology that is not even well benchmarked.”
Kantayya shared her solution to this problem: “It’s my belief that we actually need laws in place.”
“Coded Bias” is certainly a call to action. Kantayya has created a stirring documentary that is sure to make viewers question the role AI plays in their lives. Through her film, she highlights how little much of the public knows about these technologies, and how dangerous that ignorance can be. As algorithms increasingly regulate our lives, the film notes, not understanding how they work leaves the populace at their mercy.
“You all have a power in the knowledge you have on AI,” Kantayya said at the panel discussion.
Contact Kirsten Mettler at kmettler ‘at’ stanford.edu.