Humans as the keystone: An emerging approach to artificial intelligence

Nov. 16, 2022, 11:50 p.m.

Artificial intelligence researchers and industry leaders explored what it means to center individuals, communities and society in areas like healthcare and hospitality during a Stanford Institute for Human-Centered Artificial Intelligence (HAI) conference on Tuesday. 

Unlike autonomous or semi-autonomous AI, the human-in-the-loop model incorporates human feedback and decision-making at several stages of a system’s operation. HAI aims to center people beyond just keeping them “in the loop.”
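To make the distinction concrete, a minimal sketch of such a loop might look like the following. Every name and threshold here is a hypothetical illustration, not drawn from any system discussed at the conference: the idea is simply that uncertain predictions are routed to a person instead of being acted on autonomously.

```python
# A minimal, hypothetical sketch of a human-in-the-loop decision flow.
# All names and thresholds below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Prediction:
    label: str
    confidence: float


def model_predict(case: str) -> Prediction:
    """Stand-in for a real model; here it returns a low-confidence guess."""
    return Prediction(label="flag", confidence=0.55)


def human_review(case: str, pred: Prediction) -> str:
    """Stand-in for a reviewer interface: a person confirms or overrides."""
    answer = input(
        f"Model labels {case!r} as {pred.label} "
        f"({pred.confidence:.0%}). Accept? [y/n] "
    )
    return pred.label if answer.strip().lower() == "y" else "needs-human-label"


def decide(case: str, threshold: float = 0.9) -> str:
    """Route uncertain predictions to a human rather than acting autonomously."""
    pred = model_predict(case)
    if pred.confidence < threshold:
        return human_review(case, pred)
    return pred.label


if __name__ == "__main__":
    print(decide("loan application #1234"))
```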

Computer science professor and faculty director of research at HAI James Landay emphasized in the introduction that noticing and critiquing “the potential and real harms of AI” is vital to improving AI but “recognizing the negative impacts of AI is not enough.”

Some AI technologists have leveraged their technical expertise to tackle problems in high-impact areas. However, as Landay explained, gaps and failures in AI carry widespread consequences, from negative social impacts to problems left unsolved. 

In 2020, as COVID-19 overwhelmed healthcare systems worldwide, AI experts created hundreds of predictive models to diagnose COVID-19 and predict patient risk, Landay said. 

A 2021 scientific review revealed these models often missed the mark. A research team at Maastricht University in the Netherlands, led by epidemiologist Laure Wynants, assessed 232 algorithms and found none viable for clinical use; only two showed potential for future development. 

Negative social impacts are often tied to algorithmic bias, such as when datasets reproduce systemic biases against women and gender marginalized people. 

This bias can lead to harmful outcomes: in a study of home mortgage applications in 2019, lenders were 70% more likely to deny Native American applicants and 80% more likely to deny Black applicants than similar White applicants.

“We’re in the early days of finding the right design processes and tools to practice truly human-centered AI,” Landay said. As a guide for AI research, design and product development, Landay posed the question: “How do we proactively design to avoid those issues?”

Echoing Landay, computer science professor and associate DEI dean at the Carnegie Mellon Human-Computer Interaction Institute Jodi Forlizzi spoke about design within socially responsible AI. The rise of the service economy and the proliferation of AI technologies have caused Forlizzi to pivot in her work.  

“Designing with AI technologies is different, and traditional design processes may not apply,” Forlizzi said. But within the past five years, recommendations for change within systemic design have emerged. Forlizzi said the industry needs to bring artificial intelligence and machine learning innovators and technicians with diverse and compatible skill sets into the design process early. 

Forlizzi’s research at Carnegie Mellon focuses on worker-oriented AI interventions, including processes to create technologies that better serve hospitality workers and employers. 

Forlizzi said she found that many existing AI tools for hospitality, ranging from robots that prepare beverages to inventory management tools, reflected “hastily considered design” and “created more labor and reduced [the employee’s] ability to perform the social and emotional labor that they desire.” 

Her research team has partnered with UNITE HERE, the largest hospitality union in the United States, so that “worker satisfaction, voice, safety, ownership and agency” are centered in their interventions. 

Forlizzi also highlighted the need for more regulatory action like the 2022 CHIPS and Science Act and the key role of research in creating effective public policy. Because research into AI design is a relatively new and still-limited field, such work is all the more critical, Forlizzi said.  

Within healthcare, bioengineering professor and associate director of HAI Russ Altman Ph.D. ’89 M.D. ’90 described a regulatory agency like the FDA as “being very careful and being very aware that it’s not clear what to do, that they could squash innovation if they’re not careful.” 

Altman agreed with Forlizzi that academia, like the research at HAI, is important in “creating the evidence base so that policymakers can have something to actually go on.”

Many of the speakers expressed optimism and envisioned a bright future in technology with human-centered AI as the keystone. The first panel of speakers elaborated on design principles that create positive user interactions and technologies that offer true value. 

Ben Shneiderman, computer science professor and founding director of the Human-Computer Interaction Laboratory at the University of Maryland, College Park, said supertools like an iPhone or a pacemaker “must be reliable, safe and trustworthy” so that AI can fulfill its purpose “to amplify, augment, empower and enhance people.”

In an afternoon panel on emerging technologies in health and accessibility, Carnegie Mellon computer science professor Jeff Bigham discussed VizWiz, a mobile application similar to Quora that assists blind people in their everyday lives. Bigham said data generated from the platform fed into a later project, VizLens, which enabled users to create their own tactile interfaces, an assistive technology that helps vision-impaired people use different appliances. 

Tanzeem Choudhury, integrated health and technology professor at Cornell Tech and co-founder of HealthRhythms Inc., shared another AI-based healthcare intervention, drawing on her work in “unifying the competing visions of machine learning and knowledge of the disease” to build digital tools and sensors that help inform the treatment of psychiatric conditions like depression. 

When discussing the future of AI in healthcare, Choudhury cited a recent article by Andreessen Horowitz partners Daisy Wolf and Vijay Pande that echoes the need to integrate “human- and software-driven diagnostics, therapeutics and medication delivery.” 

Choudhury said the benefit of AI technologies within patient care cannot be actualized without an integrated behavioral health model that “maintains and protects the therapeutic alliance” between the patient and their care team. 

Choudhury concurred with other speakers that from the first design ideation session to the final deployment, the value of human engagement and collaboration in AI technology cannot be overstated. 

Esaite Lakew is a writer for The Daily. Contact them at news ‘at’ stanforddaily.com.
