Instructors generate different approaches to AI

April 4, 2024, 10:19 p.m.

As generative artificial intelligence (AI) technologies like ChatGPT change academia and raise concerns about their implications for education, instructors across the University have responded in varying ways through their course policies, from embracing generative AI to banning it outright.

Per the University’s generative AI policy guidance, “instructors are free to set their own policies regulating the use of generative AI tools in their courses, including allowing or disallowing some or all uses of such tools.” Some have exercised this prerogative by banning the technology.

Senior lecturer Nick Parlante, for example, prohibited the use of generative AI in the widely popular class CS106A: “Programming Methodologies.” Students are not allowed to use AI as an aid for their work, just as they are not allowed to copy and paste information from the internet.

To monitor this, the course uses a system called Moss (Measure Of Software Similarity), which flags collaboration between students and use of AI by checking students’ assignments for similar passages of code.
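The sketch below is only a minimal illustration of the underlying idea, assuming a simple k-gram hashing scheme rather than Moss’s actual winnowing algorithm; the file names and flagging threshold are invented for the example.

```python
# Simplified, hypothetical illustration of similarity detection.
# Moss's real approach (winnowing over k-gram hashes) is more
# sophisticated; this only shows the basic fingerprinting idea.

def fingerprints(text: str, k: int = 5) -> set[int]:
    """Hash every k-character window of the normalized text."""
    normalized = "".join(text.split()).lower()  # ignore whitespace and case
    return {hash(normalized[i:i + k]) for i in range(len(normalized) - k + 1)}

def similarity(a: str, b: str, k: int = 5) -> float:
    """Jaccard similarity of the two fingerprint sets (0.0 to 1.0)."""
    fa, fb = fingerprints(a, k), fingerprints(b, k)
    if not fa or not fb:
        return 0.0
    return len(fa & fb) / len(fa | fb)

# Submissions sharing many windows score high and get flagged for review.
if similarity(open("sub1.py").read(), open("sub2.py").read()) > 0.8:
    print("flagged for review")
```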

“No one thinks that we’re 100% going to be able to catch stuff,” Parlante said. “But we should still have the policies and systems that we feel absolutely set out the way we want the class to run.”

In contrast, writing and rhetoric studies lecturer Ruth Starkman allows students to use AI to a certain extent. Starkman, who first incorporated generative AI into PWR 2STA: “Ethics and AI” in 2022, said that bringing the technology into a classroom setting opens space for discussion of both its possibilities and its limitations.

“My colleagues across PWR have a whole range of responses, but I believe most of us are in the position that this machine is here, the cat’s out of the bag,” Starkman said. “Rather than policing our students for use, let’s equip them to critique it.”

Writing and rhetoric studies lecturer Harriett Virginia-Ann Jernigan also uses generative AI in her two classes, PWR 1HT: “The Rhetorics of Ethnic and Racial Identity” and PWR 2HT: “The Rhetoric of Satirical Protest.” Jernigan allows her students to use the technology for brainstorming and sorting information, as well as for examining cultural appropriation and implicit bias in its output.

Jernigan said that examining the output of generative AI through a cultural lens sheds light on both the technology’s capabilities and its limitations. She said she sees both the benefits and drawbacks of the increasingly prevalent use of AI in education, and considers AI to be useful as an aid rather than an answer.

“We must see AI as supplemental rather than superior to human thought, ingenuity and creativity,” Jernigan said. “It is a tool, and it must be treated as such.”

While no University- or department-wide standards on the use of AI currently exist, AI’s role in education remains a pressing point of discussion on campus. The Stanford Institute for Human-Centered Artificial Intelligence (HAI), for example, has aimed to advance interdisciplinary AI research, education, policy and practice since its founding in 2019.

As instructors grapple with questions of responsible and productive AI use, part of the Institute’s work engages with similar issues, including the Ethics and Society Review (ESR), a faculty board that evaluates the ethical considerations and societal risks of research projects seeking funding from HAI.

“As we know with social media, there are a lot of consequences that we didn’t intend to happen,” HAI Director of Research Programs Vanessa Parli said. “With AI, we should be thinking about what those consequences might be before releasing it to everybody.”

Rebecca Louie is a writer for The Stanford Daily.
