Colleges and universities that fail to create institution-wide policies defining acceptable and unacceptable uses of AI put students at risk on two fronts: students can be accused of academic dishonesty, and they may be left unprepared for the workplace.
A recent survey led by Hui Wang of the University of Arizona found that of the top 100 universities in the US, more than a third had unclear or undecided policies regarding AI use, and more than half left decisions about AI to individual instructors.
Leaving the matter to faculty makes some sense, especially as it may be required by academic freedom, which the American Association of University Professors (AAUP) defines as including the right to select course materials, determine the approach to the subject, design assignments, and assess students' academic performance in teaching activities for which faculty members are individually responsible.
However, faculty are deeply divided over whether AI use constitutes academic dishonesty. Lance Eaton, director of faculty development at College Unbound, has collected AI policy statements from 164 faculty members at institutions around the world. Eaton's corpus shows widespread disagreement about whether AI tools should be banned, permitted, or encouraged. Some faculty, especially in STEM and business, allow AI unconditionally, while others permit it only for certain tasks, such as research and editing. Faculty in the humanities tend to ban AI-assisted writing entirely, viewing it as unethical and contrary to academic integrity policies.
Students facing multiple, often conflicting guidelines may be unsure when and how they can incorporate AI tools into their coursework. Likewise, faculty may be unsure when and how AI can be integrated into their curricula or used for grading, teaching, or scholarship.
Nor do the guidelines created by the American Psychological Association (APA) and the Modern Language Association (MLA) for citing AI use provide much help. They do not account for the scope of the work or the extent of AI assistance involved, making accurate attribution and assessment of a work difficult.
Furthermore, by asking writers to cite every phrase generated by AI, both sets of guidelines fail to address the reality of how writers use AI today. A single sentence's metaphor, hypothesis, or argumentative point could emerge from interactions with multiple AI research tools such as Elicit, Consensus, Perplexity, Scite, or Litmaps. Writers might develop the sentence further by generating a podcast overview of their sources with NotebookLM. It is not practical to expect writers to cite what could be a dozen AI tools. And given the MLA's additional requirement to list the prompts used to generate text, a two-page essay could be trailed by a 20-page appendix of prompts.
In a recently published position statement, "Building a Culture for Generative AI Literacy in College Language, Literature, and Writing," the joint task force of the MLA and the Conference on College Composition and Communication (CCCC) argued that first-year writing courses "have a special responsibility to teach students how to use AI critically and effectively" across their academic and literate lives. However, by assigning this responsibility primarily to first-year writing courses, the statement misplaces AI literacy, which must be developed across the curriculum rather than confined to a single course.
These failures by universities and professional associations undermine a core mission of the modern research university: preparing students with the literacies they need to thrive in a workplace being transformed by AI. According to Microsoft's 2024 Work Trend Index, based on a survey of 31,000 workers in 31 countries, 75% of knowledge workers now use AI at work, nearly double the share of six months earlier.
Moreover, students themselves have quickly embraced AI. A 2024 survey by the Digital Education Council of approximately 4,000 students from 16 countries found that 86% reported using AI for academic purposes. Yet a striking 80% felt that their university's integration of AI into the curriculum fell short of their expectations, and 72% felt that their university should provide more AI training.
AI Guidance and Assessment in Higher Education
I share my colleagues' concerns about AI. It seems unethical that OpenAI and other companies have ingested huge amounts of internet content, including copyrighted material, to train AI models without permission or compensation for the original creators. As the author of articles at Writing Commons, an open educational project and encyclopedia for writers, I am angry that my work was scraped without my consent. It took me decades to write those articles. Similarly, I don't think it's ethical for an academic publisher like Taylor & Francis to sell faculty scholarship without the authors' permission.
I worry about the environmental impact of AI systems, particularly their contribution to global warming and their water consumption. I'm troubled by the nuclear power plants that Google, Amazon, and other tech giants are investing in to run their mammoth data centers.
I'm worried that AI will diminish human agency. Most technology experts surveyed by researchers at Elon University fear that AI will erode critical thinking, reading, and decision-making abilities, and that it will undermine healthy, face-to-face connection, leading to more mental health problems.
Still, here we are.
GPT-4 scores in the 93rd percentile on the SAT evidence-based reading and writing test and can write about as well as a smart high school student. More recently, OpenAI's o1, a new AI model the company says can reason its way through complex tasks, scored 124 on the Norwegian Mensa IQ test.
These dramatic changes to meaning-making and literacy practices cannot be ignored. Teaching AI literacy today is akin to teaching reading and writing in the years following the invention of the printing press.
Still, it is understandable that teachers worry AI-assisted writing may undermine students' writing ability and critical thinking. Indeed, these outcomes seem likely when students merely interact with AI systems as passive consumers, submitting regurgitated content for assignments without real engagement.
To address these concerns, university-wide AI policies must acknowledge the need to create an environment in which students and faculty engage critically with AI tools in ways that nurture human agency. These policies should affirm that writers develop their reasoning and improve their communication by engaging in an internal dialogue about what they want to say and how they need to say it. From this perspective, AI is a tool, not a replacement for human thinking and writing.
Can AI-Written Content Be Detected?
To preserve academic freedom, university AI policies should allow faculty members to refuse AI-assisted writing. Just as some photographers still prefer analog film to digital files, some teachers may not want to engage with AI-assisted writing. But while universities should not require faculty to teach critical AI literacy, they should encourage faculty and students to experiment with and study how AI tools can be used to promote critical thinking, effective composition, and human agency.
To maintain academic integrity and accurately assess student effort, university AI policies should require students to attach a footnote to coursework explaining how AI was used: for example, as a research assistant to gather and synthesize sources; as a composing assistant for prewriting, drafting, and organizing; or as an editor to polish prose, conform to Standard Written English, and ensure appropriate citations.
Additionally, university AI policies should require students to archive chat logs related to their coursework. Teachers who wish to can then review these logs to assess whether students engaged critically and thoughtfully with their AI tools. Credit for AI-assisted submissions should be awarded only when the AI-use footnote or chat logs show that the student thoroughly reviewed and refined the AI-generated content and engaged meaningfully with the tools. Strict penalties, including course failure, should be imposed on submissions that show no evidence of human involvement, such as uncritically accepted hallucinated references and formulaic prose.
The bottom line: if we continue to act as if outdated notions of authorship, composition, and academic integrity still apply, we risk surrendering human agency and creativity to the machines. It's time to look up and fight for both. Writing has changed, and so must we.
Joseph M. Moxley is a professor of English at the University of South Florida and an expert in rhetoric and technology.