Since ChatGPT came into being almost three years ago, the impact of artificial intelligence (AI) technology on learning has been widely debated. Is it a useful tool for personalised education, or a gateway to academic dishonesty?
Most importantly, there have been concerns that using AI will lead to a widespread "dumbing down", or a decline in the ability to think critically. If students use AI tools too early, the argument goes, they may fail to develop basic critical thinking and problem-solving skills.
Is that really the case? According to a recent study by scientists from MIT, it appears so. Using ChatGPT to help write essays, the researchers say, can lead to "cognitive debt" and a "likely decrease in learning skills".
So, what did this study find?
Using AI versus the brain alone
Over the course of four months, the MIT team asked 54 adults to write a series of three essays using either AI (ChatGPT), a search engine, or their own brains (the "brain-only" group). The team measured cognitive engagement by examining electrical activity in the brain and through linguistic analysis of the essays.
Cognitive engagement among those who used AI was significantly lower than in the other two groups. This group also struggled to recall quotes from their essays and reported a weaker sense of ownership over their work.
Interestingly, for the fourth and final essay, participants switched roles: the brain-only group used AI and vice versa. The AI-to-brain group performed worse, showing engagement that was only slightly better than the other groups managed in their first session.
The authors argue this shows how prolonged AI use led participants to accumulate "cognitive debt". When they finally had the opportunity to use their brains, they could not match the engagement or performance of the other two groups.
To their credit, the authors point out that only 18 participants (six per condition) completed the fourth, final session. The findings are therefore preliminary and require further testing.
Does this really show AI makes us dumber?
These results do not necessarily mean that students who use AI accumulate "cognitive debt". In our view, the findings are a product of the study's specific design.
The changes in neural connectivity observed in the brain-only group over the first three sessions may simply be the result of growing familiarity with the study task, a phenomenon known as the practice effect. As participants repeat a task, they become more familiar and efficient with it, and their cognitive strategies adapt accordingly.
When the AI group finally "used their brains", they were performing the task for the first time without assistance. It is no surprise they could not match the practised efficiency of the other groups; indeed, they achieved slightly better engagement than the brain-only group had in its own first session.
To fully justify the researchers' claims, the AI-to-brain participants would need to complete three writing sessions without AI as well.
Similarly, the fact that the brain-to-AI group used ChatGPT more productively and strategically may be due to the nature of the fourth writing task, which required writing an essay on one of the three previous topics.
Because they had previously written without AI, which demanded more substantial engagement, they remembered much better what they had written in the past. As a result, they mainly used AI to search for new information and to refine what they had written before.
What does this mean for AI in assessment?
To understand the current situation with AI, we can look back at what happened when calculators first became available.
Back in the 1970s, their impact was managed by making exams harder. Instead of performing calculations by hand, students were expected to use calculators and devote their cognitive efforts to more complex tasks.
In effect, the bar was raised, which made students work just as hard (if not harder) as they did before calculators became available.
The challenge with AI is that, for the most part, educators are not raising the bar in a way that makes AI a necessary part of the process. Educators still require students to complete the same tasks and meet the same standards of work as they did five years ago.
In such situations, AI can indeed be harmful. Students can offload critical engagement with learning to AI, resulting in "metacognitive laziness".
However, just like calculators, AI can enable tasks that were previously impossible while still demanding critical engagement. For example, we might ask students to use AI to create a detailed lesson plan, which would then be evaluated for quality and pedagogical soundness in an oral exam.
In the MIT study, participants who used AI were producing the "same old" essays. They adjusted their engagement to deliver the standard of work expected of them.
The same thing happens when students are asked to perform complex calculations with or without a calculator. The group doing the calculations by hand will sweat, while those using calculators will barely blink.
Learning how to use AI
Current and future generations need to be able to think critically and creatively and to solve problems. However, AI is changing what these things mean.
Producing an essay with pen and paper is no longer a demonstration of critical thinking ability, just as doing long division is no longer a demonstration of numeracy.
Knowing when, where and how to use AI is the key to long-term success and skill development. Prioritising which tasks can be offloaded to AI to reduce cognitive debt is just as important as understanding which tasks require genuine creativity and critical thinking.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
