From the Community | SB 1047 and the enduring tech-policy divide

May 12, 2025, 4:50 p.m.

“Senator, we run ads,” Mark Zuckerberg responded with a hint of incredulity. In 2018, the Senate summoned Facebook’s chief executive to a hearing on the company’s data privacy practices. Despite chairing the Republican High-Tech Task Force, Senator Orrin Hatch needed Zuckerberg to explain the basics of Facebook’s ad revenue business model during questioning.

Senator Hatch’s ignorance may seem like an isolated incident, but it highlights an ongoing disconnect between a rapidly accelerating tech industry and the slow-moving policymakers who govern it. The government’s inability to keep pace with the ever-changing landscape of artificial intelligence (AI) is a clear example of this divide: proposed state bills like California’s Senate Bill (SB) 1047 ignite controversy over AI regulation, a State Capitol fight that continues to reach Stanford as the gap widens between those who create technologies and those who regulate them.

Public debates over the bill’s vague language and convoluted protocols fuel this controversy, yet these are just symptoms of an overarching issue: policymakers don’t understand tech. The lack of technical expertise among legislators has steered SB 1047 away from addressing demonstrable AI risks such as algorithmic bias, data privacy breaches or artificially generated disinformation. Instead, the proposal leans toward restricting AI technology itself out of fear of imagined future catastrophes, a point countered time and again by prominent Stanford AI researchers whose pushback cements the university’s critical role in advancing and shaping policy discourse around AI. If we fail to promote AI literacy and education within governing bodies, we risk enacting misguided policies like SB 1047 that needlessly stifle Stanford’s technological innovation, ultimately impeding student learning opportunities and narrowing faculty research.

The California bill’s inception, however, stemmed from the good intentions of state legislators and AI safety advocates. State Senator Scott Wiener, who had previously championed state AI safety legislation, introduced SB 1047 in February 2024. The bill was heavily inspired by a draft from Dan Hendrycks, director of the Center for AI Safety (CAIS). SB 1047 establishes strict protocols for large “covered models” during pre-training and post-training, focusing on the threat of catastrophic risks rather than immediate issues. Covered models include those whose training costs more than $100 million or consumes an inordinate amount of computing power, along with their fine-tuned derivatives.

Before training, developers are required to take “reasonable” precautions, including safeguards and a full-shutdown “kill switch,” to ensure their covered model cannot cause mass casualties or more than $500,000,000 in damages, which the bill defines as “critical harm.” After training, risk assessments and annual third-party audits evaluate the model’s compliance. Reports are sent to the Attorney General, and penalty fines range from $50,000 to $10 million per infraction, incidental or not, potentially swallowing up large portions of funding for research universities like Stanford. The proposal also establishes a “Board of Frontier Models” (BFM) within the California Government Operations Agency, which would oversee and define state regulatory standards.

Despite its ambitious goals, SB 1047 imposes vague compliance measures and overregulation. The cumbersome provisions are born out of fear of an existential threat by policymakers who lack AI literacy. The bill’s language is filled with ambiguities as it naively plans for future catastrophes involving biological, chemical and even nuclear warfare. Researchers and developers at Stanford’s cutting-edge computational research labs would be given considerable discretion over what counts as a reasonable safeguard for compliance. However, they would face harsh penalties if their interpretation fails to satisfy third-party auditors or the bill’s proposed oversight board.

The bill’s imprecise language will create apprehension around deploying cutting-edge models, as evaluating risks before training is notoriously difficult, let alone the risks posed by “derivative” models, which others may fine-tune or jailbreak beyond the original developer’s control. In the face of convoluted regulation and harsh penalties, the most rational course for Stanford researchers who wish to release potentially groundbreaking findings may be to release nothing at all. Despite its goal of increasing AI safety, the bill instead impairs AI progress and the technology that empowers Silicon Valley’s key players, Stanford among them, on the global stage.

Dr. Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and a computer science professor, says it is impossible to train new AI leaders without access to proper models and data. Li has been instrumental in the last decade of AI advancements in academia and industry and is actively engaged with national policymakers to promote ethical progress in the field. She believes a kill switch would “further dampen the efforts of these students and researchers, already at such a data and computation disadvantage compared to Big Tech.”

Admittedly, it is not necessarily a policymaker’s responsibility to be tech-literate, even if such literacy would remedy the bill’s failures. Measures like mandatory AI training sessions may distract lawmakers from immediate legislative concerns, especially when they can consult AI experts independently. However, policymakers commit themselves to representing the public’s concerns, and those concerns span a vast array of domains, including the widespread adoption of AI technology. As AI continues to embed itself in our everyday lives, addressing its regulation is a legislator’s duty. Policymakers must stay educated on AI, through tech literacy coursework and engagement with experts, to govern its risks effectively. Such education could better address AI’s immediate threats to society, like algorithmic bias or misinformation.

The failures of SB 1047 exemplify an urgent need for AI literacy among government officials. Legislation mandating AI educational programs, assessments and technical consultations for lawmakers would build that literacy and contribute to more effective AI policymaking.

I believe our campus, which holds a unique position as a leader in AI innovation and policy, is where meaningful change starts. CS students cannot afford to neglect the societal impact of their technical work, just as students in policy and governance must not disregard the technical complexity of the legislation they study (or later enact). Bridging this gap requires mutual interdisciplinary engagement: immersing technical students in ethics, governance and advocacy while making policy-focused students comfortable with tech literacy. It is imperative that we deeply understand the systems we seek to shape and advance.

Stanford, I urge you to engage with our local representatives on the AI matters that affect you. Speak up about AI policy by any means: through academics, advocacy or even direct civic action. Hold your officials accountable for learning about the role AI plays in your lives and theirs. AI is moving at breakneck speed; we must ensure those in power keep up.

Andrew Bempong ’25 is a coterminal master’s student at Stanford University studying computer science.

The Daily is committed to publishing a diversity of op-eds and letters to the editor. We’d love to hear your thoughts. Email letters to the editor to eic ‘at’ stanforddaily.com and op-ed submissions to opinions ‘at’ stanforddaily.com.
