This academic year, the Student Arts and Sciences Advisory Committee (ASACS) met regularly and hosted conversations between students, faculty, and Dean Harris about issues related to AI. We spoke with two ASACS members, Lucy Nowicki and Priya Kaila, to discuss the work ASACS has been doing and their own thoughts on AI.
Please tell us about yourself.

Lucy: So, I’m Lucy. I’m a senior majoring in psychology and philosophy. Outside of the Student Arts and Sciences Advisory Committee (ASACS), I play piccolo in the marching band and in a concert band, I belong to TRIO Student Support Services as a first-generation college student, and I do research on childhood development.
Priya: I’m Priya. I’m a senior. Outside of ASACS, I am a research associate at the Jackson School, studying health data security in a variety of settings. Currently, my research focuses on physical health wearables such as watches and smart rings. I am also an undergraduate instructor for the FIG program, so I teach during the fall semester.
How did ASACS decide to focus its meetings on AI this fall?
Lucy: The first ASACS meeting of the year was held in early October 2025. The goal was to meet each other and Dean Harris and decide what topics we would take up this year. AI was the obvious choice; everyone is thinking about it. Since then, four faculty members from the College of Arts and Sciences have presented on how their departments are handling AI. In 2026, we plan to invite more faculty to speak on marketing and communications. We want to bring in people at a higher level in the college who can help with a top-down communications plan, one that brings clarity to individual departments and promotes a larger, unified message about AI.

How are you using AI in your professional life?
Lucy: I use AI all the time, ChatGPT especially. Where it helps me most is in writing emails, particularly outreach emails. It helps me strike a professional, formal tone when contacting potential ASACS speakers and guests.
Priya: AI makes me nervous. I’m very cautious about using it because I work in privacy, which is why I try to stick to ChatGPT only; the more platforms you use, the more places your data is stored. I will occasionally use it to assist with research. For example, if I’m researching a topic, I might ask ChatGPT what related papers it can find. I don’t usually like to use it for writing, because I feel like it takes away innate skills.
What are the biggest issues with AI in the classroom?
Lucy: The idea that students are losing academic skills because of AI comes up constantly. That loss of skill shows up in student work and even in class conversations.
Priya: I think students are leaning on AI more and more. Some students think that if they use ChatGPT, they don’t need to study. In the FIG class I teach, we had a project where students made posters, and one of my students asked me if they could generate a poster template with Canva’s AI. It was just a matter of picking a combination of colors; I was sure they could do it themselves. Relying on AI for really small things can lead us to distrust ourselves and our abilities. The more you offload to AI, the more skills you lose, because you use them less.
What do you want instructors to know about student perspectives on AI?
Lucy: Some faculty members appreciate aspects of AI and allow it in the classroom in ways that don’t constitute academic dishonesty. Others, though, have complete zero-tolerance policies. That worries me, because AI is relevant, here to stay, and can be used in genuinely good ways. So I’d like to see faculty with zero-tolerance policies take a more thoughtful approach and be open to conversation. Honestly, I think that openness would actually help discourage inappropriate use of AI in the classroom.
Priya: I’d add that it also depends on the type of work you’re assigned. The idea of “busy work” came up at a recent ASACS meeting. Some of the homework assigned in class is really repetitive and unhelpful, and when you come across a task like that, it’s tempting to hand it off to AI. But I agree with Lucy: we shouldn’t take such a rigid approach. There should be more trust between students and instructors. Right now, AI is the elephant in the room. It needs to be addressed and integrated in a way that ensures students are still learning.
What do you think about current AI policies in the classroom?
Lucy: That’s one of the goals of the ASACS meetings: to bring professors from different departments together and ask them directly what kind of guidance they’ve received on AI. Some received guidance; others heard nothing. So it’s inconsistent. At the same time, I don’t think there can be a single campus-wide policy, because each department is so different.
Priya: I agree that it’s very decentralized, and it varies from class to class. At one ASACS meeting, we heard from a sociology professor who spoke directly to his class about the nuances of using AI. I wish we could have more conversations like that in the classroom. That’s probably the first step toward better policy.
Any final thoughts?
Lucy: We actually created a survey asking students what they think about AI, how they use it, and to what extent. We’ve only received 80 responses so far, so we haven’t shared the results yet. But at the end of the survey there was an open-ended question inviting respondents to share anything else on their minds, and more than a third of the 80 said they really dislike AI and think it’s terrible. I was really surprised that more than a third of these students felt passionately enough about hating AI to have more to say. That’s why we need to involve students in conversations about AI, especially when policies are being created.
Priya: I don’t think there’s any escaping AI now; it’s basically the next wave of everything. But I think it’s really important to understand what we stand to lose by using more and more of these services, as well as the impact AI has on the environment. I’m still very wary of it, and I try not to use it unless it’s actually necessary. If we can talk about it openly as a tool rather than a replacement, I think we’ll be headed in a better direction.
