Human-Centered Artificial Intelligence initiative talks AI, humanities and the arts

Feb. 15, 2019, 12:02 a.m.

At “AI, Humanities & the Arts,” a full-day collaborative workshop hosted by the Stanford Humanities Center, speakers stressed artificial intelligence’s (AI) potential to supplement human capabilities rather than replace them.

The workshop gathered speakers from Stanford’s new Human-Centered Artificial Intelligence (HAI) initiative and across diverse University departments to discuss the implications of artificial intelligence for fields such as legal practice, healthcare, art, design and more.

“What sets Stanford’s HAI Institute apart from the others around the world is its embrace of the humanities and the arts,” said history of science professor Londa Schiebinger.

The workshop’s keynote speaker was Stanford Law School professor David Engstrom, who discussed legal and regulatory approaches to artificial intelligence.

“The current wave of tech is different from past waves of tech, because this wave of tech is transforming the legal system and the legal profession,” Engstrom said, pointing to predictive analytics in the criminal justice system and the development of legal technological tools as examples of changes resulting from the rise of AI technology.

Computer science professor and HAI Institute co-director Fei-Fei Li also discussed the three core intellectual principles that structure the Institute: AI technology should be inspired by human intelligence, guided by the study of its human impact and applied in ways that enhance rather than replace human capacities.

“We want to be advancing humanities research, education, policy and practice in service to humanity,” Li said. “To achieve that mission, we want to be catalyzing interdisciplinary research … We want to be fostering robust ecosystems of scholars, visitors, artists, writers, technologists, journalists [and] policy makers, and we want to be promoting real-world actions.”

She cited a study by the McKinsey Global Institute that found that 50 percent of current work activities are now technically automatable.

“[AI] is going to impact so many different kinds of human behaviors and human activities,” Li said. “But already there are studies going on around campus that show that there’s so much opportunity for AI to actually complement and lift human capabilities.”

The workshop additionally featured seven half-hour sessions in which speakers from a range of Stanford departments discussed the broad intersections between artificial intelligence and the sciences, humanities and arts.

The workshop concluded with a panel comprising assistant professor of digital humanities Mark Algee-Hewitt, professor of political science Rob Reich, English professor Elaine Treharne and Schiebinger. They discussed major social challenges for the HAI initiative, including addressing the biases built into algorithms, considering the ways that AI affects modern weaponry and war, encouraging the promotion of social equality through addressing gender norms and recognizing the value of diverse human expertise.

“At the heart of the Human-Centered AI Institute is how it is that by having computer scientists work alongside humanists, AI can be developed in such a way that it advances and supports, rather than undermines and subverts, human agency itself,” Reich said. “And that’s a task that I think humanists can play a really important role, in asking and answering in a variety of ways.”


Contact Michelle Leung at mleung2 ‘at’ stanford.edu.

Michelle Leung '22 is a writer for the Academics beat. She comes from Princeton, NJ and enjoys taking photos, making paper circuit cards and drinking tea with friends. Contact her at mleung2 'at' stanford.edu.
