Researchers found that industry-leading AI models have low transparency ratings, according to a report released earlier this month by the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the Stanford Center for Research on Foundation Models (CRFM).
The report showed significant room for improvement: models earned a mean score of 37 out of 100 on the Foundation Model Transparency Index (FMTI), a set of 100 transparency indicators released with the findings.
Although artificial intelligence has ballooned into the quintessential Silicon Valley buzzword, companies have grown more secretive about their products, shielding their AI practices from consumers and even developers. The index is the first of its kind to contextualize where companies stand, and it holds benefits for stakeholders, developers and consumers alike.
According to Percy Liang, an associate professor of computer science and the principal investigator of the study, the transparency index measures three main categories for each company: development, creation and public consumption. “The exact indicators are based on various basic principles, but also where policymakers and academics have advocated for transparency along some of the dimensions,” Liang said.
The researchers looked at 10 major AI companies, including Meta (Llama 2), OpenAI (GPT-4), Stability AI (Stable Diffusion 2), Google (PaLM 2), Anthropic (Claude 2) and Amazon (Titan Text).
When the team scored these companies using their 100-point index, they found plenty of room for improvement: Meta ranked highest in transparency at 54%, while Amazon ranked lowest at 12%.
Rishi Bommasani, society lead at the CRFM and lead author of the FMTI report, said that transparency has been an overarching goal of the initiative since its inception two years ago.
“Our broad belief is that transparency is just one thing that we are trying to improve in the ecosystem, but it tends to be a precondition for many more substantive things,” Bommasani said.
Earlier in the year, Bommasani and his team built ecosystem graphs to track the supply chain of companies’ products and tried to document different parts of it. “We realized that in spite of our efforts, transparency was declining,” he said.
Kevin Klyman, co-author of the index and a J.D.-M.A. candidate at Harvard Law School and Stanford’s Freeman Spogli Institute, noted that the lack of transparency at OpenAI has contributed to a major shift in company practices surrounding transparency. According to Klyman, “In the 2010s, companies such as Google gave out more public information.” A decade later, with competition of the utmost importance, those same companies now prioritize secrecy over consumer and developer trust and transparency.
However, the Stanford index’s findings faced pushback from companies fearing lawsuits and the loss of secrecy.
“What you want is that transparency is sort of seen as a kind of capability rather than a kind of compliance process,” said Shakir Mohamed, a senior researcher at Google’s AI lab DeepMind. “That creates a kind of research process which looks very different from the way we used to do research, where we wouldn’t have considered those kinds of things.”
Yet, Bommasani says that this index is “asking for fairly basic information.”
“And the fact that even basic information is not public is a pretty clear indication of how opaque things are,” he said.
He added that because the “bar of transparency is so low, it reduces the extent to which [competition and transparency] are in contention.”
Others agreed with Bommasani: Graduate School of Business lecturer David F. Demarest, who teaches business strategy and was unaffiliated with the study, said that transparency can actually uplift businesses.
“Trust is built through transparency,” Demarest said. “Trust involves a rationale that is built over time and based on a track record, which is where the ‘rigid’ index comes into play. If you can quantify what builds trust it can be helpful for companies to understand where they are. It should give them tools to improve.”
Demarest acknowledged that major companies may feel victimized by the ratings, but said that if he were in a leadership position at one of them, he would think about how the score could help the company become more transparent and gain trust.
“This foundation model [index] holds a very objective perception of transparency — you either get a point or not,” Demarest said. “That objectiveness is beneficial.”