Stanford Libraries’ Office of Scholarly Communications recently implemented iThenticate, a service that checks research papers for plagiarism and artificial intelligence (AI)-generated text, to help faculty review their work before publication and ensure it meets publishers’ guidelines.
iThenticate compares scholarly writings such as manuscripts, grant proposals and papers against a database of published content to check their originality and to detect AI-generated text. The program, launched in 2002 by Turnitin as a plagiarism detection tool, may be used by Stanford faculty to review their own work before publication.
iThenticate’s implementation comes several months after Stanford communications professor and social media researcher Jeff Hancock admitted to using AI in drafting a court statement. Hancock’s statement, written as expert testimony to support a Minnesota law banning AI deepfakes in elections, was found to contain several AI “hallucinations” in the form of fabricated citations.
After work is submitted to the iThenticate web interface, the program generates a similarity report, highlighting text in the work that matches text in iThenticate’s database. It also detects word patterns characteristic of AI-generated text and flags content that may have been created by a chatbot.
Support for the tool was coordinated by Stanford’s Office of Scholarly Communications, a unit of the library that offers “guidance on the complexities of academic publishing,” per the office’s website.
“I think that we’re in a period of what I would call the Wild West, where people are doing a bunch of stuff [with AI] and instructors are not being clear and can’t even decide what their policies [on AI] are,” said Russ Altman, M.D. ’90 Ph.D. ’89, a bioengineering professor who heads Stanford’s AI advisory committee.
“iThenticate came to Stanford now because all these issues around research integrity and publication integrity have been so high-profile over the past few years,” Rochelle Lundy, director of the Office of Scholarly Communications, told The Daily. Lundy added that plagiarism accusations and retractions of published research have increased in frequency and received greater national attention in the past few years.
Stanford’s former president Marc Tessier-Lavigne resigned from his post in August 2023 and retracted or corrected five scientific papers following a University investigation that concluded research he oversaw contained manipulated data.
Former Harvard president Claudine Gay also resigned in December 2023 amid investigations into accusations of plagiarism and inadequate citations in her research and dissertation.
“We know that our researchers are interested in protecting their reputation and making sure that any work they put out there is in line with what they want to produce,” Lundy said regarding the University’s reasons for implementing the tool.
Stanford’s libraries and the Office of the Vice Provost and Dean of Research had received multiple inquiries about a tool that would help researchers check their work before distributing it outside the University, Lundy added.
According to Lundy, Stanford opted in to the generative AI (gen-AI) detection tool due to increasing restrictions imposed by publishers on the use of gen-AI in scholarly writing.
Altman, however, urged people to experiment with AI and learn how it works, so that “they become familiar with what it’s good at… and when it’s appropriate to use.”
Although iThenticate cannot currently be used to detect AI-generated text in student coursework, Altman said he would support such use in the future.
French professor Dan Edelstein, who sits on the Faculty Senate, believes that students should be able to exercise discretion regarding the use of AI in their own coursework.
“Students are really aware of the threat that [generative AI] poses to their own education and to the development of critical thinking skills and communication skills,” Edelstein said.
Edelstein said he could not gauge whether faculty are using gen-AI. Within humanities departments, “it’s not something that people see as really all that helpful, at least for research,” he said, expressing greater concern over student use of AI.
“It feels like gen-AI has now gotten to the point where it’s such a temptation; it’s at your fingertips. In light of these changes, it would be worth having a broader conversation about [AI and] the honor code,” he said.