OpenAI CEO, Algorithmic Justice League founder call for responsible and equitable AI

Nov. 28, 2023, 1:21 a.m.

Sam Altman, the recently restored CEO of OpenAI, and Joy Buolamwini, founder of the Algorithmic Justice League, warned of the potential risks of emerging artificial intelligence (AI) technologies at a Nov. 7 event.

The “Joy Buolamwini and Sam Altman: Unmasking the Future of AI” event, moderated by Wall Street Journal technology reporter Deepa Seetharaman at the Commonwealth Club of San Francisco, attracted over 75 attendees.

As large language models like ChatGPT and text-to-image generators like DALL·E 3 evolve alongside advances in voice cloning technologies, AI is exerting a growing influence on societal norms, privacy and democracy.

The discussion, part of Buolamwini’s “Unmasking AI” book tour, grappled with this issue through a conversation around responsible AI, government regulation and equitable access, among other subjects. 

A central topic was the impact of AI on the upcoming 2024 presidential election. 

“I am definitely worried about the impact that’s gonna have on the election,” Altman said, specifically expressing concerns about the “sort of customized one-on-one persuasion ability of these new models.” 

Buolamwini said she was also worried about the impact of synthetic media and deepfakes on elections, citing misinformation about the Israel-Gaza conflict as an example of the problems that emerge when AI tools are readily available.

Another focal point was how companies could ensure that the voices of marginalized communities inform AI systems.

“The responsibility will be on companies like us to make sure that we are doing everything we can to get truly global input from different countries, different communities, entire socio-economic stratum, and to proactively collect and do it in a fair and just and equitable way,” Altman said. 

Buolamwini said that governments were also integral to this effort. “Companies have a role to play, but this is where I see governments needing to step in because their interest should be the public interest,” she said. 

“I do think there would be a more cautious approach if it costs you something, for example, to translate somebody who is posting about their faith and then label them as a terrorist,” Buolamwini said. 

“We [at OpenAI] have been calling for government regulation here, I think the first and loudest out of any company,” Altman said. “We absolutely need the government to play a role here.”

Altman compared new AI technologies to the launch of Google search at the turn of the century, predicting that AI will similarly expand human capabilities. Buolamwini added that the emerging technology could exacerbate inequities in fields like education by advantaging students with access to more resources.

Computer science assistant professor Ehsan Adeli wrote that he sees societal benefits in AI, particularly in medicine.

“Recent advances in AI, particularly foundation and large-scale models, have tremendous potential to transform medicine and healthcare,” Adeli wrote in a statement to The Daily. “AI could also be the solution to inequality in access to healthcare, by lowering the costs of care and extending its reach to rural populations and remote areas.” 

He also echoed the speakers on the potential harms of AI: “Advancements are heavily reliant on data, and in healthcare, this data originates from individuals in society. Consequently, societal biases could be embedded into the data and subsequently transferred into AI systems.” 

Some audience members were critical of the speakers’ comments, particularly those made by Altman.

“I thought that some of Sam’s takes were a bit idealized and not really … based on what his product [ChatGPT] has put out,” said Emma Charity ’25, a member of the Stanford Public Interest Technology Lab.

Echoing Charity, Emily Tianshi ’25, who studies data science and social systems, said, “Sam [Altman] was talking as if he didn’t have direct control over those harms he is worried about.”

Tianshi agreed with Buolamwini’s view that large language models will disproportionately benefit people with resources, especially compared to marginalized communities. 

“At the end of the day, the purpose of all of this is to help people … that’s a good reminder to route whatever we do to the stories of people who are experiencing them and to always pull perspectives from all around the world and really listen to them,” she said. 

Shreyas is a writer for The Daily. Contact him at shreyas.kar@stanford.edu.
