Stanford students and professors alike are grappling with the rise of ChatGPT, a chatbot powered by artificial intelligence, and the technology’s implications for education.
Some professors have already overhauled their courses in anticipation of how students might use the chatbot to complete assignments and exams. And according to an informal poll conducted by The Daily, a notable share of students have already used ChatGPT on their final exams.
Whether the new technology will necessitate a revision of the Honor Code, the University’s standards for academic integrity, remains to be seen: A University spokesperson confirmed that the Board on Judicial Affairs is aware of and monitoring these emerging tools.
“Students are expected to complete coursework without unpermitted aid,” wrote spokesperson Dee Mostofi. “In most courses, unpermitted aid includes AI tools like ChatGPT.”
The chatbot was created by San Francisco-based artificial intelligence company OpenAI and launched in November. Stanford dropout Sam Altman is among the company’s co-founders.
When OpenAI released the tool, it quickly took off, attracting over a million users in under a week and drawing widespread attention for its ability to generate human-like text on almost any prompt, using technology from a field of machine learning called natural language processing. It not only acts as a robotic study buddy, but can also write lengthy essays on almost any subject and generate ideas and outlines for projects.
For example, when asked for a “witty opening paragraph for a Stanford Daily article about ChatGPT and the Stanford Honor Code,” the chatbot produced the following:
“ChatGPT, a popular artificial intelligence tool, has recently sparked a debate on campus about its role in academic integrity. Some argue that using ChatGPT to complete assignments is a violation of the Honor Code, while others claim it’s simply a tool for idea generation. The question remains: is ChatGPT a helpful helper or a dishonest cheat? One thing is for certain: the stakes are high, and the future of AI at Stanford hangs in the balance.”
The power of the technology to instantaneously generate swaths of human-like text has frightened some educators across the country.
Among other districts that have cracked down on its use, New York City’s education department has blocked the site on its networks, citing “concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content,” according to education department spokesperson Jenna Lyle in a statement to Chalkbeat New York.
Awareness of the technology has also reached Stanford’s faculty, confirmed Mostofi, the University spokesperson. Mostofi, citing a recent story in the Stanford Report featuring faculty weighing in on ChatGPT, wrote, “Many Stanford faculty are highly engaged in researching new large language models and implications of AI in the learning environment.”
Mostofi said student assignments will continue to be designed to “support students in developing linked thinking and writing skills,” including the drafting and revising processes, as well as citing sources.
Some colleges and universities have already incorporated the new technology into their academic integrity policies. Washington University in St. Louis and the University of Vermont are among the institutions that have amended their academic integrity policies to address the use of AI tools like ChatGPT.
Mostofi wrote that at Stanford, conversations will soon be underway about ChatGPT and the honor code.
“The Board on Judicial Affairs (BJA) has been monitoring these emerging tools and will be discussing how they may relate to the guidelines of our Honor Code,” Mostofi wrote.
But while the University plans to discuss ChatGPT, some students have already used the tool to complete their finals, according to an anonymous poll conducted by The Daily on the social media app Fizz, which requires a stanford.edu email to join.
According to the poll, which had 4,497 respondents (though the number may be inflated) and was open from Jan. 9 to Jan. 15, around 17% of Stanford student respondents reported using ChatGPT to assist with their fall quarter assignments and exams.
Of those 17%, a majority reported using the AI only for brainstorming and outlining. Only about 5% reported having submitted written material directly from ChatGPT with little to no edits, according to the poll.
According to another informal poll conducted by The Daily on the same app, a majority of student respondents believe that using ChatGPT to assist with assignments is, or should be, a violation of the Honor Code. However, students differ on which uses should count as a violation.
The news that some students are already using ChatGPT on assignments has spread to professors, some of whom are revamping their courses as a result.
In a message to the public computer science (CS) department Slack channel, computer science associate professor Michael Bernstein ’06 asked if any other professors had encountered homework that was generated by ChatGPT.
“In this case,” wrote Bernstein, “it was easy to tell because part of the submission included: ‘As a large language model trained by OpenAI…’”
Computer science lecturer Julie Stanford BA ’98 MA ’98 added in the Slack that the student’s submission was “like robbing a bank and caring so little about being caught that you try to take a selfie with the security camera on the way out.”
Some professors on campus have recently added course policies to their syllabi cautioning against the use of ChatGPT, arguing that it is a form of plagiarism, while others have switched to more traditional methods, attempting to eliminate technology from the picture.
One class, COMM 108: “Media Processes and Effects,” has dedicated a whole section of its syllabus to the usage of AI tools. “Using Artificial Intelligence (AI) Agents (e.g., ChatGPT, StableDiffusion) to generate assignments or parts of assignments is generally discouraged. If you choose to use an AI agent for generating portions or aspects of an assignment, you must disclose this use and cite it in the same manner as you would cite any external source,” reads the syllabus.
In the computer science Slack, senior lecturer Keith Schwarz MS ’11 said that he has “switched back to pencil-and-paper exams,” citing concerns related to open computers that could be operating ChatGPT, even suggesting that he might consider “requiring students to leave all their backpacks and electronics at the front of the exam room.”
Stanford AI Alignment (SAIA) student leaders Gabriel Mukobi ’23 and Michael Byun ’24 urged students to be wary of using ChatGPT for academic work, as the tool has not been fine-tuned for use in academic settings.
“AI tools like ChatGPT are clearly here to stay,” the two wrote.
A previous version of this article incorrectly stated that Elon Musk dropped out of Stanford. That portion of the article has since been removed, and The Daily regrets this error.