As a Stanford student juggling two incredibly different majors, political science and computer science, I’ve witnessed firsthand the impact of generative AI on my learning and coursework. As students, it’s like we’ve been handed a life calculator – powerful, potentially game-changing, but also fraught with ethical landmines. Students and faculty alike are struggling to figure out what policies are fair or reasonable to both preserve learning and prepare students for a workforce that will inevitably use tools like generative AI. I myself am unsure how to find a balance. But my hypothesis is that students’ mindsets towards generative AI matter a lot more than any one policy in the long run, and a lack of action in this space may leave this generation in a critical thinking crisis.
Let’s be real: when Claude AI can solve my entire CS111 systems homework in 30 seconds, the temptation to cut corners is strong. But after experimenting, failing, and learning, I’ve started to develop a nuanced approach to AI that maximizes productivity without compromising my own intellectual growth.
First, let’s talk about research. Perplexity has become my go-to source finder. Unlike the rabbit hole of infinite Google searches, it provides curated, credible sources with concise summaries. Research that used to take hours now takes minutes – and crucially, the sources are verifiable and trustworthy.
When diving into scholarly articles, I’ve found NotebookLM to be a game-changer. It’s not about replacing reading, but rather enhancing comprehension. After reading a complex text, I use it to generate a study guide, highlighting key takeaways and ensuring I’ve truly understood the material.
Grammar and punctuation editing? Absolutely fair game. I use ChatGPT as my final polish, catching those pesky capitalization errors and cleaning up my prose. These are mechanical tasks that don’t impact my core thinking – they’re just about presentation.
But here’s my cardinal rule: never, ever ask AI to generate your ideas. The moment you do, you’re outsourcing your most valuable asset – your unique perspective. Trust me, I’ve tried. Facing writer’s block on a political science paper about authoritarian regimes, I asked a generative AI tool to draft a thesis, only to receive a wordy sentence full of nothing. I’ve become certain that my best writing comes from wrestling with concepts, not having them pre-packaged by an algorithm (though perhaps edited by one). I fear for my generation of students and for the ones to come – if AI-generated ideas replace our original thoughts, we lose the powerful, deeply personal intellectual process it takes to reach a conclusion.
The most critical skill for students in this AI era is discernment. We must ruthlessly eliminate tasks that don’t contribute to critical thinking while embracing tools that free up mental bandwidth for deeper learning. To our faculty – that also means allowing students to use generative AI in the classroom as a tool they learn to manage, not treating it as an unfair advantage. If a student can get an A in your class with a generative AI essay… that seems like a different problem to me. And if ChatGPT is starting to meet class standards, perhaps the expectations for students should rise.
Different departments are crafting AI policies faster than we can keep up with them – and faster than the AI itself updates. In my experience, the most effective approach isn’t blanket prohibition, but teaching responsible use. AI is here to stay – our job is to harness it intelligently, whether that means transitioning to oral examinations or debugging GPT-generated code in class.
To my fellow students: Nothing can replace your brain, not even AI. AI is a supplement, a tool, a catalyst for more efficient learning. Use it wisely, use it sparingly, and never let it replace the most important muscle – your own thinking.
The future belongs to those who can work alongside AI, not those who are replaced by it.