“How is it going to change the way we as humans talk to each other?” communications professor Jeff Hancock asked of artificial intelligence (AI)-generated messages on services like Facebook’s Messenger and LinkedIn.
Similar big-picture questions about AI have been raised recently as high-profile data breaches, the technology’s role in generating fake news and biases in face recognition and prison-sentencing algorithms have come to light, leaving some to wonder whether AI will ultimately contribute to human flourishing or detract from it.
In recognition of its role in the development of AI, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) has awarded 30 seed grants to winning proposals that its website states “present new, ambitious and speculative research that will help guide the future of AI.”
The projects address a number of issues at the intersection of AI and broader human life, ranging from the economic impacts of the technology to decision support in cancer treatment to “personality design” for artificial agents and technical options for opting out of data use. The Daily took a look at the work planned for a few of this year’s HAI seed grant projects.
Music therapy
Irán Román, a Ph.D. student in the Center for Computer Research in Music and Acoustics (CCRMA), is working with Takako Fujioka, assistant professor in the CCRMA, and psychology professor Jay McClelland to explore human physiological responses to music and ways AI could use these responses to improve physical therapy.
“How can we make music technology be human-centered in a way that benefits not just musicians and not just amateurs?” Román asked.
Music therapy with drums has been an experimental treatment in stroke rehabilitation for the past decade and has been found to benefit stroke victims. Severely impaired patients, however, currently have limited access to this treatment option, leading to the hope driving the project.
“Music therapy turns out to be a motivating complement to existing stroke therapies that has been shown to improve mood, general well-being and also engagement with the task,” Román added.
Prior work used deep neural networks to decode, from electroencephalography (EEG) recordings, the musical beats listeners perceive or imagine perceiving. Computer models can identify rhythms as binary (1-2, 1-2) or ternary (1-2-3, 1-2-3) and determine whether they are speeding up or slowing down.
Román’s project seeks to combine these capabilities to trigger a drum set in line with the beats decoded from patients’ imaginations.
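In the abstract, that decoding step can be pictured as a small classification network over short windows of EEG data. The sketch below is purely illustrative and is not the CCRMA team’s actual model; the architecture, channel count and window length are assumptions. It shows how a toy convolutional classifier might map an EEG window to a binary-versus-ternary meter label and a speeding-up-versus-slowing-down label, predictions that could then be used to trigger a drum sound.

```python
# Illustrative sketch only: a toy convolutional classifier over short EEG windows.
# The architecture, channel count and labels are assumptions, not the CCRMA model.
import torch
import torch.nn as nn

class BeatDecoder(nn.Module):
    def __init__(self, n_channels: int = 64, n_samples: int = 256):
        super().__init__()
        # 1-D convolutions over time, treating EEG electrodes as input channels
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        # Two heads: binary vs. ternary meter, and speeding up vs. slowing down
        self.meter_head = nn.Linear(64, 2)
        self.tempo_head = nn.Linear(64, 2)

    def forward(self, eeg_window: torch.Tensor):
        h = self.features(eeg_window)  # shape: (batch, 64)
        return self.meter_head(h), self.tempo_head(h)

decoder = BeatDecoder()
fake_window = torch.randn(1, 64, 256)  # one short window of 64-channel EEG
meter_logits, tempo_logits = decoder(fake_window)
# The predicted beat could then trigger a drum hit, e.g. via a MIDI note-on message.
print(meter_logits.argmax(dim=1), tempo_logits.argmax(dim=1))
```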
“You can really make people who have no motor abilities engage in a more meaningful task than just having their limbs be moved by a robot or by a therapist,” Román said. “And that could potentially be the imagination of music that gets translated by this drum machine. And then they can engage and play music with family members or other patients in a meaningful way.”
‘Folk theories’
Another project funded by an HAI seed grant addresses questions of how users of AI systems view these developing technologies. In his work on developing interpretable AI, or AI whose decisions can be explained to humans, Hancock describes his fascination with our “folk theories” of AI.
“We have folk theories about everything that’s complicated, from zippers to toilets to helicopters,” he said. “We feel like we know these systems because other humans know about them.”
Folk theories of complex systems often involve not just personal conceptions of how something can be used, but also approximate or intuitive understandings of how it works. While we may not be able to draw detailed engineering drafts of the internal workings of a toilet, a vague sense of water flowing through pipes provides as much of an understanding as most of us would care to grasp.
In a recent pilot study, Hancock’s group surveyed online participants on their metaphors for the AI-powered virtual assistants Siri and Alexa. Common responses included notions of “servant” and “assistant,” in line with how the companies building these systems often represent them. Other metaphors included “little animals or kids,” reflecting the slow, simple ways people talk to these systems; “drunk uncle,” in reference to the systems’ often incoherent behavior; and the more disturbing response of “slave.”
Hancock’s work focuses on understanding these metaphors as part of a broader goal of developing AI technologies that people will find intuitive and comprehensible.
“We need to be doing research on how people are understanding these systems,” he said, “in order to make systems that are good for them, that people can trust and feel values that they care about are being represented.”
Palliative care and ethics
HAI also looks to examine ethical concerns of using AI throughout human society. In their seed project on the ethics of applying machine learning to palliative care, Danton Char, Henry Greely and Nigam Shah plan to explore significant ethical concerns in an area that has until now been largely theoretical.
In recent work with Andrew Ng, Shah helped develop a machine learning-based algorithm that uses a patient’s medical records to predict the probability that they will die within the next three to 12 months. Their algorithm was designed to alert palliative-care specialists to patients who might otherwise undergo aggressive treatment during their last days, without having a conversation about their goals of care and the trade-offs of spending the end of their life in a hospital, rather than with friends and family at home.
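At its core, such a system can be framed as a binary classifier over features derived from the electronic health record. The sketch below is a generic illustration of that framing with synthetic data and made-up features; the feature names, model choice and risk threshold are assumptions, not the Stanford team’s implementation.

```python
# Illustrative sketch only: framing 3-to-12-month mortality prediction as a
# binary classification problem over EHR-derived features. Feature names,
# model choice and threshold are assumptions, not the Stanford team's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in features one might derive from a patient's records,
# e.g. age, recent admissions, counts of diagnosis codes.
X = rng.normal(size=(500, 3))
y = rng.integers(0, 2, size=500)  # 1 = died within 3-12 months (synthetic labels)

model = LogisticRegression().fit(X, y)

# For a new patient, the predicted probability could flag them for a
# palliative-care consultation when it exceeds a chosen threshold.
new_patient = rng.normal(size=(1, 3))
risk = model.predict_proba(new_patient)[0, 1]
if risk > 0.9:
    print(f"Flag for palliative-care review (predicted risk {risk:.2f})")
```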
The use of such an algorithm, they note, comes with significant ethical challenges.
“Who do you give the mortality information to?” Char asked. “Patients? How do you give it to them without being devastating?”
“What degree of transparency, explainability or auditability is needed to engender trust in medical technologies in clinicians?” he continued. “Neural networks can’t really be explained by anyone, and can fail catastrophically, which is fine (or more acceptable) for Google searches, but unacceptable in medicine.”
Despite its clear advantages, such a mortality-prediction algorithm could create conflicts of interest, potentially influencing decisions to transfer patients out of hospitals in order to boost performance metrics.
The mortality-prediction algorithm is planned for experimental deployment in the Stanford hospital system in the coming months, as the role of AI continues to expand into new parts of society.
Contact Riley DeHaan at rdehaan ‘at’ stanford.edu.