Almost half of the most “important” American government agencies have experimented with or are currently using artificial intelligence (AI)-based tools, researchers said at a campus discussion on Monday.
Stanford law professor David Freeman Engstrom J.D. ’02 and professor of law and political science Daniel Ho presented findings from their forthcoming report on the use of AI by government agencies, to be released in the coming weeks.
Engstrom said his research team canvassed 120 of the most important federal departments to look for AI tool usage. Of these agencies, “45% of them have either experimented with or are actually using AI governance tools of one sort or another,” which are used in areas spanning “law enforcement to education and everything in between,” Engstrom said.
In most cases, he added, the technologies “were developed in house by agency technologists, not by profit-oriented contractors.”
He described the overarching purpose of this technology as working to shrink “the pool of potential violators of the law and therefore be better able to allocate scarce agency resources,” “make enforcement fairer” and “ensure more consistent decisions about who has to face the power of the state.”
Ho emphasized his belief in the utility of these technologies in government, pointing to their ability to flag human error in judicial decisions.
“One of the biggest challenges in this mass adjudicatory system has been how to ensure the accuracy and consistency of these kinds of decisions,” Ho said.
However, he conveyed his concern that “sophisticated parties” might be able to manipulate the same technology and “erode trust in the system.”
Engstrom echoed this worry, adding that the law is “built on transparency and reason-giving.” Sophisticated AI tools, he said, are “by their structure, not fully explainable.”
“We have this basic collision between the legal requirement of reason-giving on the one hand and the black-box nature of some of the tools,” he said.
A member of the audience asked the two professors if there was anything that Stanford could do to get its technical students to consider careers in public service and social responsibility.
“The short answer is yes,” Ho said.
Because Stanford has no official public policy school, Ho said, he and his team recruit students through impact labs and policy practices.
“We are very much in the process of trying to figure out how to build the right institutional vehicles to empower Stanford faculty and students to actually do that kind of impact-oriented work,” he said.
Engstrom added that he sees a “real appetite for thinking about how to make the world a better place.”
Ho and Engstrom began their research last year, collaborating with California Supreme Court Justice Mariano-Florentino Cuellar Ph.D. ’00 and New York University law professor Catherine Sharkey. Their team consists of lawyers, engineers, graduate students and one undergraduate student.
Engstrom is the Bernard D. Bergreen Faculty Scholar and an associate dean at Stanford Law School, as well as a faculty affiliate at the Stanford Institute for Human-Centered AI (HAI); CodeX: The Stanford Center for Legal Informatics; and the Regulation, Evaluation, and Governance Lab (RegLab).
Ho is the William Benjamin Scott and Luna M. Scott Professor of Law, a professor of political science, and senior fellow at the Stanford Institute for Economic Policy Research at Stanford University. He is RegLab’s director, as well as a faculty fellow at the Center for Advanced Study in the Behavioral Sciences and an associate director of HAI.
Contact Emma Talley at emmat332 ‘at’ stanford.edu.