Across university campuses, professors are grappling with a new kind of plagiarism panic: the fear that students are letting ChatGPT and other generative AI tools do their thinking for them.
But one education researcher says the real crisis isn't cheating. It's that higher education keeps assessing the kinds of tasks AI performs best while neglecting the skills it can't replicate.
In an essay published Sunday in The Conversation, Anitia Lubbe, an associate professor at North-West University in South Africa, argued that universities are “focusing solely on policing” AI use instead of asking a more basic question: whether students are actually learning.
She writes that most assessments still reward memorization and rote learning, “the tasks AI performs best.”
Lubbe warned that unless universities rethink how they teach and assess students, they risk producing graduates who can use AI but can't critically evaluate its output.
“This should include the ability to evaluate and analyze texts created by AI,” she wrote. “It's an essential skill for critical thinking.”
Instead of banning AI, Lubbe said universities should use it to teach the things machines can't do, such as reflection, judgment, and ethical reasoning.
She suggested five ways educators can respond.
1. Teach AI output evaluation as a skill
She said professors should have students interrogate the output of generative AI tools, asking them to identify where AI-generated answers are inaccurate, biased, or shallow before using them in their work.
It's a way for students to learn to think critically about information, rather than simply consuming it.
2. Scaffold assignments across multiple levels of thinking
Rather than letting AI handle every step of a project, she encouraged teachers to design tasks that guide students through progressively deeper levels of thinking, moving from basic understanding to analysis and, ultimately, to original creation, so the whole process can't simply be delegated to a machine.
3. Promote the ethical and transparent use of AI
Students must understand that responsible use begins with disclosure, she said.
She said that openness not only builds integrity but also reframes AI as a learning partner rather than a secret weapon.
4. Encourage peer review of AI-assisted work
When students critique each other's AI-assisted drafts, she said, they learn to evaluate both the technology and the human thinking behind the work.
The process, in her view, restores a sense of dialogue and collaboration that pure automation erases.
5. Reward reflection, not just outcomes
She said students should be assessed on how they used AI: whether they documented their process, justified their choices, or demonstrated learning by comparing their own thinking with the machine's output.
“But focusing solely on policing misses the bigger question: whether students are actually learning,” Lubbe wrote.
Wider academic alarm
Lubbe's warning reflects a wider unease among educators that students are quietly outsourcing their thinking to AI.
Last week, Kimberly Hardcastle, a business professor at Northumbria University, wrote that AI allows students to “produce sophisticated outputs without the cognitive journey traditionally necessary to create them,” calling it an “intellectual revolution.”
While Hardcastle fears AI is eroding critical thinking, Ted Dintersmith, a former venture capitalist turned education advocate, warned that schools are already training students to think like machines.
Last week, he told BI that schools are “training kids to follow in the footsteps of AI” instead of teaching creativity, curiosity, and collaboration, skills machines can't replicate.