Committee proposes ‘golden rule’ in AI use

Jan. 23, 2025, 7:14 p.m.

The AI at Stanford Advisory Committee released a report on Jan. 9 outlining how to strike a delicate balance between fostering innovation and maintaining ethical standards in artificial intelligence (AI) adoption. Students expressed a range of opinions about the report’s recommendations.

The report was drafted following Provost Jenny Martinez’s appointment of the committee in March 2024 to evaluate AI’s role at Stanford and identify potential policy gaps. Composed of 10 faculty and staff members from various campus units, the committee was tasked with assessing AI use in administration, education and research. The report’s recommendations are not binding, but they will be considered by various University offices in shaping future AI policies at Stanford.

A key recommendation in the report is the “AI golden rule,” which encourages users to employ AI in ways they would want others to use it with them. The rule is intended to promote ethical AI use across education, research and administration.

The report also emphasizes the importance of human responsibility for any AI systems created or used at Stanford and advises establishing a clear, streamlined protocol for system procurement and maintenance. It specifically warns against providing confidential information to generative AI tools, which may save, reuse or even share that information.

The report acknowledges the widespread adoption of AI models such as ChatGPT among students and suggests potential revisions to the Honor Code and classroom policies. The committee recommends developing frameworks that can be tailored to individual courses, providing clarity on permissible AI use.

The report encourages faculty members, who may be less familiar with AI technologies, to explore AI’s potential in teaching and research. The Graduate School of Education has already established the AI Tinkery, a collaborative space for educators to experiment with AI applications.

The Daily has reached out to the University and faculty in the computer science and linguistics departments for comment. 

In one course on energy market design, economics professor Rimvydas Baltaduonis asked students to experiment with generative AI: using tools like ChatGPT and the Stanford AI Tinkery, students wrote comments extending classmates’ discussion posts and included a brief description of how they used the tool.

Students’ opinions about the report were mixed. 

“The report could establish a clearer bright line about when using AI for inspiration blurs into plagiarism,” Kathy Shao ’28 told The Daily. “I’m curious to see how Honor Code policies will be edited,” she added.

In an interview with The Daily, Tiffany Saade ’25 M.S. ’25, who studies cyber policy, agreed with Shao’s sentiment. “This may be purposeful, but there is a lack of specificity,” she said. “There’s an underlying assumption of knowledge, but I think for some students that are not 100% proficient with how AI works, it would be helpful to have more concrete examples or guidelines to clarify what constitutes ethical AI use.”

Saade named the principles of ethical AI as “fairness, transparency, explainability, accountability, privacy and, of course, security.” She also expressed concerns about how “AI systems can reflect, enhance and amplify biases that are present in their training data” and data privacy issues. 

Saade acknowledged the report’s value in opening the conversation and encouraging students and faculty at Stanford to consider AI’s impacts on others.

“I think that we need as many voices as possible on the AI governance and AI development table, so people from all different types of backgrounds, whether it’s political science or computer science and also, of course, from different parts of the world,” she said. “I think this diversity is what makes the AI field promising and rich.”

Xavier Millan ’26 is a voting member of the Academic Integrity Working Group (AIWG), which aims to understand and address the causes of academic dishonesty. He told The Daily, “I think it’s unrealistic for professors to assume that students won’t use AI when it becomes as ubiquitous as Google search.”

Millan added that ethical AI should be embraced in education. “The reality is, if you teach students [in] a world without AI, there’s this lag period where education isn’t moving as fast as technology,” he said. “So the education that students are receiving at Stanford [isn’t] preparing them for the world where AI exists.”


