UC Berkeley computer science Ph.D. student Deb Raji warned of imperfections in everyday technological algorithms and proposed third-party auditor access as a method of improving artificial intelligence (AI) evaluation and accountability during a Wednesday speaker panel.
The event was part of Stanford’s Human-Centered Artificial Intelligence (HAI) Fall 2021 Conference, which was held across two days and featured four speaker panels focusing on solutions to issues in technology and artificial intelligence.
Algorithms do not always work perfectly, Raji said. When they fail, the impact is often devastating: one man was forced to live in a motel room for almost a year after the tenant-screening company RealPage falsely matched his name to arrest records; a Michigan public-benefits algorithm once wrongly accused 20,000 people of fraudulently seeking unemployment payments; and Stanford’s vaccine rollout algorithm allocated only seven of Stanford’s first 5,000 available vaccines to medical residents, Raji said.
Raji’s proposal includes three components: an audit oversight board, a national incident reporting system and post-audit interventions.
The audit oversight board would grant protected access to accredited third-party AI auditors, Raji said. She added that one major challenge for third-party AI auditors is their current lack of legal protection, recognition and data access. Given that regulators within federal agencies like the Federal Trade Commission (FTC) and Food and Drug Administration (FDA) already have “inspection powers but lack capacity, expertise and awareness,” Raji proposed that the government extend these powers to other qualified third-party auditors.
Raji’s proposal also calls for a national reporting system because “those most aware of algorithmic harm have the least capacity to take action on it,” she said. A federal reporting database could collect and organize complaints, allowing repeated offenders to be fined and flagged for future investigation, according to Raji.
The post-audit intervention portion of Raji’s proposal includes direct communication between auditors and enforcement agencies, “standard setting” for AI development and deployment, and a failure reporting system that is accessible to the general public.
But Cathy O’Neil, a New York Times best-selling author and founder of Mathbabe.org who works as an algorithmic auditor, raised concerns about Raji’s proposal, especially regarding some of the challenges third-party auditors might face.
“There are algorithms that are so internal and so inaccessible,” she said.
External audits of such algorithms are simply impossible, according to O’Neil. She cited an example in which Facebook disabled accounts involved in a New York University research team’s investigation of political targeting algorithms. It is unclear whether the situation would change if an audit oversight board were established, O’Neil said.
“How do we accomplish a goal while ensuring that we don’t have a suffocating regulatory burden?” said DJ Patil, head of technology at Devoted Health and former chief data scientist at the United States Office of Science and Technology Policy.
Patil said that he has been working in highly regulated fields like healthcare for a long time, where “regulatory burden really prevents innovation.”
“To drive better thinking and openness, it’s going to require a large collective of action to let it happen, not only academics, but also technologists and even activists,” Patil said.