Stanford faculty weigh in on the use of AI tools in research

Published Jan. 8, 2026, 10:02 p.m., last updated Jan. 8, 2026, 10:02 p.m.

As artificial intelligence (AI) tools become more widely used in academic research at Stanford, faculty across disciplines are debating how to adopt the technology without compromising scholarly rigor, ethics and human judgment. 

Kathryn Olivarius, an associate professor of history, approaches AI with caution. Her research on the 19th-century United States relies on archival materials, close readings and original interpretations; many of those materials sit in physical archives and remain undigitized.

“ChatGPT or generative AI is not the archive plugged in,” Olivarius said, noting that many students assume historical sources are fully accessible online.

Olivarius said she has used generative AI and found it to be a “very good copy editor,” though she emphasized that she would not use it to generate drafts. “For most academics, your thinking and good ideas all come through this slog of writing,” Olivarius said. “I don’t see myself ever really outsourcing that part of any research I do.”

Olivarius also raised ethical concerns surrounding AI use in historical research, describing the current moment as “the Wild West.”

“There is no consensus about the ethics of this yet,” she explained. “If it’s not your idea, it’s not your idea. I think you can plausibly call it [AI] plagiarism.”

Her concerns extend into the classroom. After testing AI by having it generate an essay in her field, Olivarius said it made significant interpretive errors. “It got things wrong, major interpretive things wrong, and only I would know that,” she explained. “The problem is, when you are not an expert, you won’t be able to catch the things that it gets wrong.”

As a result, some faculty members are focused on helping researchers use these tools more effectively. Jooyeon Hahm, the head of Data Science Training and Consultation at the Center for Interdisciplinary Digital Research, helps Stanford researchers create quantitative, computational and algorithmic analyses of data. 

“Consultation requests have shifted from general AI awareness to highly specific tool evaluations and technical applications,” Hahm said, highlighting a growing interest in cost-efficient application programming interface (API) usage strategies, which help organizations get the most out of AI tools while controlling costs. “This shift reflects a maturing relationship with AI technology,” Hahm said, one moving “from initial experimentation toward informed, critical and ethically grounded integration into research workflows.”

According to Hahm, researchers are also seeking deeper explanations of how AI systems work, including “transformer architecture [the model design that helps AI understand and generate language efficiently] and the underlying mechanisms that power these tools, alongside increased attention to ethics and best practices.”

In qualitative research, Hahm said scholars are experimenting with large language models (LLMs) for tasks such as coding and data extraction — while remaining aware of “the danger of hallucinations and inaccurate outputs that could compromise research validity.”

“I don’t think AI is fundamentally reshaping the skill set researchers need,” Hahm shared. “If anything, the traditional skills of critical reading, writing and thinking have become more important, not less.”

Jef Caers, a professor of earth and planetary sciences, works in a field where AI-driven analysis is already deeply embedded. His research focuses on decision-making under uncertainty in mineral exploration and geothermal energy.

Caers said AI allows geoscientists to analyze complex datasets that humans cannot process alone, helping inform decisions across the mining value chain. He emphasized that the value of AI extends beyond efficiency. 

“When people do mineral exploration, they don’t worry about the environmental impact,” Caers explained, arguing that incorporating sustainability and community considerations earlier ultimately saves time and money.

In large-scale mining projects, Caers said AI can help integrate environmental, social and economic factors, reducing waste and improving long-term outcomes. “Operations get optimized, not for productivity but for including other factors, such as sustainability,” he said.

Despite the attention surrounding generative AI, Caers agreed that its role in research is often overstated. “AI is not going to do that,” he said. “It won’t understand the complexity of the systems.”

As Stanford continues to develop norms and guidance around AI use, faculty say the challenge is not adopting the technology itself but ensuring it strengthens scholarship without eroding the intellectual rigor, accountability and collaboration that define academic research.


