A New Wave of Education: From undergraduates to Ph.D.s, how is AI shifting classroom policies?

Published March 7, 2026, 2:08 p.m., last updated March 7, 2026, 2:08 p.m.

As artificial intelligence (AI) technologies rapidly evolve, professors face the challenge of creating classroom policies and curricula that teach students to walk the line between beneficial and unethical uses of AI, giving students the tools to learn while also preparing them for a world that incorporates AI.

A recent survey from Copyleaks found that nearly 90% of university students across the world use AI to help with their education, with roughly a third using AI tools on a daily basis. This adoption doesn’t look to be slowing down either, with almost 75% noting their AI usage has increased since 2024.

“The computer science department is very strongly looking at modifying our curriculum as a whole because of the way the world’s changing with AI,” computer science professor Chris Gregg said. 

Instead of trying to improve AI detection technology, professors are changing policies and syllabi to promote hands-on learning. CS 106B has begun introducing in-person assessments, which allow teaching assistants to meet individually with students and assess their comprehension of course material in real time, according to Gregg.

While this development is still in its early stages, it has been largely welcomed by professors and students alike, and Gregg noted that there are plans to expand it to CS 106A and other courses.

Additionally, he explained how more weight is now being placed on in-person midterms and finals instead of take-home assignments. Gregg noted how data across CS 106B showed that students who used AI during assignments didn’t perform as well on tests as students who refrained from using AI.

For Gregg, the CS department is uniquely tasked with finding ways to familiarize students with AI innovations while also teaching them the fundamental skills necessary for graduates.

“The programmers using AI daily absolutely have those basic skills. Nobody is getting a job at Google, Meta, Apple or wherever if they don’t know those basic skills,” Gregg said. “So where the AI fits in is important, but that doesn’t diminish the fact there’s all these very core skills [students] need to know.”

To navigate this line, Gregg explained how professors are emphasizing the importance of limiting AI usage within introductory courses such as CS 106A and CS 106B, which they hope the in-person assessments help achieve. This allows students to overcome struggles on their own, which he notes is fundamental for learning coding essentials.

“That struggle is the part where the learning happens,” Gregg said. 

However, AI is also making it harder for students to adopt this mentality. Gregg explained how the department has seen a drop in attendance at LaIR helper hours, a set of office hours that run Sunday through Thursday for students in CS 106A and CS 106B, noting this could be due in part to increased AI usage.

“I hate to say this, but it’s actually true. I can’t trust anything that happens outside my eyeballs,” Gregg said. “A student leaves the room and does whatever assignment. They could be using AI, and the AI is probably going to do a very good job with it.”

Meanwhile, in humanities departments, heightened AI policies are being put in place to ensure human-driven work.

“We, in the Program in Writing and Rhetoric (PWR), aim to demonstrate to students that their distinctive abilities as language-users, including as readers, writers, and revisers, cannot be replaced by technology, and that turning to AI as a kind of ghost reader, writer, and researcher severely limits students’ growth and development in those areas,” Marvin Diogenes, Associate Vice Provost for Undergraduate Education and Director of PWR, wrote in an email to The Daily.

Diogenes noted that PWR works to help students draw from personal experience and knowledge to analyze and develop research assignments. However, if not applied properly, AI tools take away from this ability. He explained how the overuse of AI replaces key experiences of learning through writing.

The rise of AI has led to the creation of AI Meets Education at Stanford (AIMES), an initiative led by the Vice Provost of Undergraduate Education that offers teaching strategies and resources for professors and students alike, as they navigate proper usage of AI in the classroom.

The Office of Community Standards (OCS) is also developing learning suggestions that address AI concerns. According to Lawrence Marshall, Interim Director of OCS, the center is working with the Academic Integrity Workshop to identify areas of academic dishonesty and its causes to recommend policy changes.

Marshall noted how students should use AI with caution, as it can take away from the education opportunities at Stanford. “It’s like paying for a gym membership but only pretending to lift the weights,” he wrote.

Marshall also emphasized how impermissible usage of AI isn’t a “perfect crime,” and that OCS is taking action against students who use AI in violation of policy. “OCS has a steady stream of disciplinary actions involving students impermissibly using AI,” he wrote. “And it is safe to say that none of these students believed they might be caught and none of them look back and say that violating the Honor Code was worth the high costs.”

While early undergraduate courses have adopted strict AI policies, for more advanced coursework, students have more free rein. Gregg noted how capstone courses like CS 194 and CS 210 are in fact explicit in allowing AI usage.

“It would almost be wrong to say you can’t use these [tools], because what’s the point?” Gregg said. “The projects end up better in a lot of cases, the outputs are better, and the students are still demonstrating that they can put a big project together.”

This approach, found in some upper-level courses, has also been adopted in graduate education.

Kenneth Goodson, Vice Provost for Graduate Education and former Chair and Vice Chair for Mechanical Engineering, said that most graduate classes and research are more lenient on AI policies.

“Students at the graduate level are just a little bit further along, and they know that they need to partner with faculty in using AI in a way that will help them, but not replace the thinking,” he said. 

Goodson said he believes that a one-size-fits-all AI approach isn’t as effective, advocating instead for faculty to be given “more autonomy” to decide where AI fits best. This is because graduate courses tend to be more specialized to the professor’s expertise.

Across all disciplines, graduate students are integrating AI into their research projects. 

“What we’re suddenly seeing is that if you go to thesis defenses across the university, increasing fractions will have AI in the title,” Goodson said. “Students have a chance to push the knowledge base in these areas.”

The expansion of AI has opened doors for new learning opportunities, but educators are still exploring the unknowns to better understand how all industries are responding to AI. Goodson emphasized how this shift is leading to more research and education, but is also creating more questions of how to use AI correctly.

“You realize it’s important to become the architect of your own learning and to look after how you’re doing it,” Goodson said.

Sterling Davies ’28 is a Vol. 269 News Managing Editor. He was previously a Vol. 268 Local Desk Editor and a Vol. 267 Public Safety Beat Reporter. Contact Sterling at sdavies ‘at’ stanforddaily.com.
