Gov. Newsom vetoes AI safety bill

Oct. 20, 2024, 11:17 p.m.

California Gov. Gavin Newsom vetoed SB 1047, a landmark artificial intelligence (AI) safety bill that would have imposed some of the first AI safety regulations in the United States, in late September. The bill drew opposition from giants in the state's homegrown tech industry.

“The outcome is not what we’d hoped for,” said Sneha Revanur ’26, founder and president of Encode Justice, a global youth movement for AI safety. Revanur has been advocating for the bill since it was first introduced by California state Sen. Scott Wiener.

After tech companies like OpenAI expressed concern that the bill would hinder innovation, Newsom announced the state would instead partner with several AI experts, including computer science professor Fei-Fei Li, widely considered the “godmother of AI,” to build AI “guardrails.”

Li did not respond to a request for comment. 

Revanur said that despite the outcome, it was exciting “to see how far we came, how many allies we activated in the process and just how much we moved the needle when it comes to this broader conversation about AI.”

If passed, the bill would have required tech companies to perform comprehensive safety testing of large AI models, holding them legally accountable for harm caused by their algorithms. It also would have required tech powerhouses to implement a “kill switch” in case their AI technology was misused for biowarfare or resulted in mass casualties.

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom wrote in a statement.

Earlier this year, Wiener, the bill’s co-author, invited Encode Justice to help garner enough support for the bill to reach the governor’s desk. Encode Justice helped coordinate much of Hollywood’s support, resulting in 120 celebrities, including Mark Ruffalo and Shonda Rhimes, signing a letter to Newsom about the importance of the bill.

Encode Justice was “essentially sending a message to Gov. Newsom that in order to avoid repeating the mistakes that we made on those other issues, and in order for Gov. Newsom to carry forward his legacy as a progressive leader on a range of issues, he would have had to sign this bill into law,” Revanur said. 

While some technology industry leaders like Elon Musk voiced support for the bill, other AI experts like Li remained apprehensive.

“Take computer science students, who study open-weight AI models. How will we train the next generation of AI leaders if our institutions don’t have access to the proper models and data?” Li said in an interview with Fortune in August. “A kill switch would even further dampen the efforts of these students and researchers, already at such a data and computation disadvantage compared to Big Tech.”

Russell Wald, executive director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), said he believes regulation should not govern AI technology itself but should instead focus on the technology’s impact.

Wald said much of the fear around AI is not grounded in fact, and he worries that acting on that fear prematurely could stifle academic innovation. Much of AI safety research is conducted at universities like Stanford, he said, so regulations like those in SB 1047 could hinder experimentation.

“Are we going to put laws on companies for something that’s speculative, and then through that process, prevent students from being able to do the good work and safety research?” Wald said. 

One aspect of AI safety he believes should be considered is mandatory incident reporting. Much as the aviation industry follows specific safety-reporting requirements, integrating incident reports into the AI industry would allow consumers to notify companies of concerns and companies to track incidents independently, Wald said.

Stanford’s teaching staff helps students develop critical thinking skills by assigning coursework that uses emerging tools like AI, said University spokesperson Luisa Rapport. 

“The Board on Judicial Affairs has been monitoring these emerging tools and will be discussing how they may relate to the guidelines of our Honor Code,” Rapport said. 

Still, Revanur believes there needs to be a balance. She said that AI innovation at Stanford primarily revolves around pure technological advancements, with only a fraction of students focused on the policy and governance around AI risk management.  

“People who are excited about building [and] people who are excited about inventing should not view regulation as an attempt to clobber that at all,” Revanur said. “I think the two things can coexist, and that is one thing that we’re trying to prove to the world.”


