Timnit Gebru ’08 M.A. ’10 Ph.D. ’17, a leader of the movement for diversity in tech and artificial intelligence, spoke about the dangers of AI at a Symbolic Systems department-sponsored event on Wednesday night. Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR) and co-founder of Black in AI, a nonprofit that promotes the visibility and inclusion of Black people in the field of artificial intelligence.
In December 2020, Gebru was fired from her position as co-lead of Google’s Ethical AI research team after refusing to withdraw a then-unpublished paper about the dangers of large language models. These models, which Gebru argued pose risks of financial harm and bias, are trained on massive datasets to translate and generate text.
In 2021, she went on to found the DAIR Institute, which aims to mitigate the harms of AI. According to its website, DAIR is “rooted in the belief that AI is not inevitable, its harms are preventable and when its production and deployment include diverse perspectives and deliberate processes, it can be beneficial.”
Gebru’s talk centered on the risks of Artificial General Intelligence (AGI). AGI is intended to complete any task asked of it, whereas conventional AI systems are designed for specific tasks. “Why attempt to build some undefined system that kind of sounds like a god?” she asked the audience of around 150 Stanford students and affiliates. “[Big tech] builds systems like they have one model for everything. They can’t do that, and even if they could, is that really what we want?”
Gebru has published articles about the exploitation of labor behind AI, as well as the potential for abuse of the burgeoning technology. She referenced text-to-image models being used to execute harassment campaigns, create “deepfakes” and hypersexualize women and girls.
“My question is, utopia for whom? Who is getting this utopian life that [big tech] is promising [will come from AI]?” Gebru said.
Gebru drew parallels between AI, eugenics and transhumanism, a movement that seeks to enhance human longevity and cognition. “[AGI] has roots in first wave eugenics […] Secondly, both [AGI and eugenics] talk about utopia and apocalypse,” she said. Gebru expressed concern that AGI is promoted by “paradise engineers” as a means to abolish all of humanity’s suffering forever, while others fear that humans will lose control over AGI and find themselves in an apocalyptic scenario. She argued that transhumanism is inherently discriminatory because it defines what an enhanced human looks like, creating a hierarchy that mimics first-wave eugenics.
According to Gebru, AI development should focus on “well-scoped, well-defined systems” rather than AGI. “To me, trying to build AGI is an inherently unsafe practice […] We build what we want to build, and we need to remember that,” Gebru said.
Audience members, including Carolyn Qu ’24, commented on Gebru’s influence and insight in the fields of diversity in tech and AI. “As a Symbolic Systems student, it’s really valuable to have her here because I feel like there’s this very unique intersection of tech and humanism which […] she [has] experience [with],” Qu said.
Tiffany Liu ’23 and a team of Symbolic Systems advising fellows had been organizing the event since last school year. “We really felt that a lot of the work she’s doing, especially at DAIR, aligns with our vision for what Symbolic Systems students could be a part of,” Liu said.
Gebru emphasized the importance of targeting unethical practices in Big Tech and AI, including worker exploitation. “In terms of these tools, if they have to be ethical, many of these organizations will decide it’s not worth it […] I think the first step is to appropriately compensate everybody,” Gebru said. “The thing we should be fighting is creating unsafe products, worker exploitation and centralization of power.”