Faculty Senate discusses student use of AI

The Faculty Senate discussed the role of generative artificial intelligence (AI) in education at its Thursday meeting, attempting to strike a balance between encouraging independent critical thinking and recognizing the usefulness of AI.

Jay Hamilton, vice provost for undergraduate education, briefed the Senate on how students are currently using AI and how its use is impacting learning.

Hamilton said students primarily use AI to understand difficult concepts. As a result of this usage, office-hours attendance and in-class test scores have declined while scores on take-home problem sets have skyrocketed, Hamilton said. He cited the Daily Editorial Board’s article “Stanford University Students Would Rather Not Think” as evidence of the problem.

Hamilton shared several solutions devised by students enrolled in a class that examines AI policy and regulation, “CS 283: Governance of Artificial Intelligence: Laws, Policies, and Institutions.” He read four papers from the class that proposed explicit AI policies in course syllabi. Hamilton also proposed adopting a “barbell” model for syllabi, in which the use of AI is either fully allowed or fully banned.

Hamilton warned of the risks of policies that permit limited use of AI between these extremes. “Some kind of use becomes full use,” he said.

Hamilton argued that an ideal AI policy would teach students both “augmented writing and coding and unassisted writing and coding” and help them achieve the goals of a liberal arts education, such as critical thinking skills and networking.

Faculty expressed uncertainty about the effectiveness of more specific AI policies in syllabi. Finance professor Jonathan Burke said Hamilton had neglected to discuss concerns about “externalities”: the idea that if students sense their classmates are using AI, they will use it themselves to “catch up.”

Hamilton and other faculty members agreed that such externalities play an important role in the use of AI in education.

Comparative literature professor David Palumbo-Liu took a critical stance on the use of AI in the classroom.

“Stanford University believes that using this [AI] causes very bad mental health problems,” Palumbo-Liu said.

To reduce the use of AI in the humanities, Palumbo-Liu suggested that Stanford expand its offerings of seminars rather than lectures, or even introduce “tutorials like the Oxford model.” Palumbo-Liu said this small setting could allow professors to change the structure of their assignments, making it harder to use AI. He pointed to the final exam for the comparative literature class he teaches, where students meet with him individually to discuss their writing choices.

Several faculty members supported this proposal and discussed how to expand seminar and tutorial offerings. Proposals included capping majors such as computer science and increasing the number of faculty members.

The Senate also debated how to restore public support for higher education. SLAC Vice President Kam Mueller and political science professor Brandis Keynes-Roan spoke on the topic, sharing plans to conduct an internal investigation and study of the decline in public trust in universities. Mueller and Keynes-Roan also advocated for faculty outreach and messaging to counter the decline.


