Burzlaff | Learning with Machines: Four Principles for Using AI at Cornell University

AI Basics


Every semester now begins with the same quiet contradiction. One syllabus declares: "ChatGPT is not allowed." Another promotes it as a research aid. And somewhere in between, most of us use it anyway. In a recent national survey, more than half of undergraduates reported using artificial intelligence in the past week – often to summarize readings in the middle of the night. We're slow to admit it, but instructors use it too: to sketch lectures, check quotes, and polish emails before hitting "send." The technology hums in the background of university life – fast, fluent, available, and quietly essential. Yet despite its ubiquity, few of us feel comfortable with it. We worry that it is doing our thinking for us, or replacing something we can't quite name. Like calculators in the 1980s and Wikipedia in the 2000s, AI has gone from novelty to necessity before we have decided what it means to us. Urgent questions are emerging on our campuses, among them: How can we use AI wisely without hollowing out the very work that makes learning human? At issue is the difference between explanation and engagement, between knowledge delivered and knowledge created.

Nowhere has that question felt more pressing to me than in the classroom itself. That is what I explored in my course, "The Past and Future of Holocaust Survivor Testimony," and what I plan to explore again next spring. There, we place AI in the most ethically demanding situation imaginable: we ask what happens when machines try to interpret the fear, silence, and moral ambiguity in the testimonies of Holocaust survivors. The first answers, and the many failures along the way, have something to teach every student and every instructor on campus.

At first glance, AI's biggest temptation is its smoothness. It produces polished sentences, arguments that fall neatly into place, and summaries that look as if they have already been edited twice. For students juggling five classes and endless deadlines, this fluency can feel like mercy. But the very sophistication that makes ChatGPT and its ilk so attractive also makes them dangerous. Their answers are almost never wrong, but they are almost never alive. When we first asked ChatGPT to summarize textbook chapters and survivor testimonies, the results were spotless and hollow. Every point was correct, but the heartbeat was gone: no conflict, no doubt, no sense of discovery. AI did not create our obsession with grinding. It simply completed it. For decades, higher education has rewarded fluency over friction, performance over reflection. We admire smooth discussion, clean prose, and active participation – exactly the characteristics that machines can automate. As a teacher, I cannot blame students for outsourcing what AI can now do perfectly. When learning starts to look too perfect, AI simply holds up a mirror to us.

Teaching with and against AI has crystallized what many of us already feel instinctively: AI can mimic understanding, but it cannot replace the act of thinking. Part of its charm is that it never hesitates. It never loses its train of thought or misreads a sentence. But that is also its flaw – and, let's be honest, ours too. Some colleagues worry that AI will make students lazy. I worry that it will make them fluent before they have really thought about anything. The challenge ahead is not prohibition but purpose. AI is not a scandal; it is a design challenge. As my colleague Laurent Dubreuil argues in his excellent new book on AI and the humanities, AI can generate content without limit. Only humans can create meaning. We cannot outwit automation by walling it off, and we cannot ban our way past it. We can only outgrow it by teaching both its strengths and its limits. Learning is inefficient by design: it is demanding, uncertain, and sometimes slow. What we need now on campus is not stricter rules but a renewed curiosity about what counts as an idea. In other words, we need to teach discernment, not resistance – how to think with AI rather than through it.

If AI has shown us what learning looks like when it is too easy, our job is to reframe learning around friction. It begins with a new kind of literacy, one rooted in interpretation rather than code and compliance. I have come to see AI literacy as an ethical and intellectual habit: reading machine outputs the way we read texts – asking what is missing, what is assumed, and what is quietly distorted. In my classes, students use AI reflectively – not covertly but openly. It becomes a collective investigation: we find out what is wrong and why it matters. Over time, I have distilled four simple habits that can guide both students and faculty.

1. Curiosity: Start with questions that actually matter to your course and your goals, rather than questions that merely fill out a prompt. AI can be a useful shortcut for summarizing readings, organizing notes, and brainstorming ideas, but it cannot do the whole task.

2. Transparency: Recognize what AI makes more visible and what it makes less visible. Keep track of what went right and what went wrong. The practice begins before class and continues long after it is over.

3. Interpretation: Treat every answer as a beginning, not a conclusion. Learning is full of hesitation, and small disruptions push us toward deeper understanding.

4. Dialogue: Rather than outsourcing your thinking, use AI to sharpen it. Interact with the machine – and, ideally, with other people – when using it.

What we need now is not more vigilance or regulation, but a common language for thinking with machines. These habits are not advanced technical skills. They are humane ones, and they call not only for individual change but also for institutional support: faculty workshops and cross-disciplinary conversations about AI. Together, they can transform AI from oracle into companion, a tool that reflects rather than replaces. They remind us that technology is no threat to learning, as long as we remember that our thoughts are, and always will be, our own.

The more time I spend teaching with AI, the more convinced I become that learning depends on imperfection. Perfect sentences, polished essays, and well-organized answers are not signs of intelligence but of finish. The real work happens in the mess itself – when ideas collide, sentences fall apart, and there are long stretches of silence before something new emerges. Last semester, I asked students to take an AI-generated argument and critique it as a group. Slowly the writing began to come to life – hesitant, personal, and alive. Each version carried traces of struggle and discovery, of thought made visible. AI can reproduce the forms of intelligence, but it cannot feel the discomfort of being seen, of being wrong, of correcting itself, of thinking again. That feeling – the discomfort of being wrong together, in a community – is what makes learning human. AI can help us start, but it cannot finish for us. The goal is not mastery but mindfulness: learning how to use these tools without letting them use you. For education, even at its best, does not aim at perfection. It is about care, reflection, and the courage to remain unfinished.

The Cornell Daily Sun is interested in publishing a broad and diverse set of content from Cornell and the greater Ithaca community. We'd love to hear your thoughts on this topic and our work. Here are some guidelines about how to submit, and our email is associate-editor@cornellsun.com.


Jan Burzlaff

Jan Burzlaff is an opinion columnist and a postdoctoral fellow in the Jewish Studies Program. His column, Office Hours (Open-Door Version), is a weekly dispatch to the Cornell community: a professor's reflections on teaching, learning, and the small moments that humanize campus. Readers can submit comments and questions anonymously through the tip sheet here, or reach him at profjburzlaff@cornellsun.com.

