Frequent AI chatbot use linked to lower programming course scores

Applications of AI


A new study from the University of Tartu suggests that computer science students who used AI chatbots most frequently in a core programming course tended to earn lower grades. While many praise these tools for providing rapid support, the study points to a subtle risk: frequent reliance may hinder skill development.

How the research was set up

The study involved 231 students enrolled in an object-oriented programming course taught in Java. The class followed a flipped-classroom model: each week included lecture videos, online quizzes, homework tasks, and in-person seminars. Two major tests and a final exam, along with ongoing coursework, set the performance benchmarks.

In week 8, students were invited to complete a detailed survey about their experiences. Approximately 72% of the class responded, allowing the researchers to link survey answers directly to grades. Of the respondents, 68% were male and 32% female, a gender distribution typical of the discipline.

Who used chatbots, and who didn't

Almost 80% of respondents said they had tried AI assistants at least once during the course. Most used the tools only occasionally, though about half of the users engaged with them more regularly. Only 3.9% of the entire class reported weekly use, suggesting that heavy dependence was rare.

The roughly 20% who avoided chatbots gave a variety of reasons. Some pointed to clear course instructions and said no additional help was needed. Others preferred traditional approaches such as peer support and official documentation. One student explained that “Googling problems often gives clearer and more accurate answers,” while another admitted to simply enjoying “figuring things out with my own head as much as possible.”

How students used AI tools

Among users, the most common applications were debugging, understanding code examples, and checking assignment solutions. Students also turned to chatbots for more unusual tasks, such as translating working Python code into Java, rephrasing unclear instructions written in a second language, and generating data for group projects. A small group used the tools like private tutors, discussing concepts step by step before starting their own work.

Speed and constant availability were the strongest attractions. One student described the assistant as “like a private teacher who answers quickly,” while another praised the freedom to “ask stupid questions without being embarrassed.” Students said these features helped them resolve errors faster than searching online or waiting for staff guidance.

But frustration was equally common. Many said that assistants sometimes “made something up instead of admitting they didn't know,” while others were annoyed when suggested solutions involved advanced topics not taught in the course. Some complained that the tools often rewrote code unnecessarily rather than simply pointing out the problem.

Performance links and statistical findings

The most striking result was the relationship between chatbot usage and exam performance. Spearman's rank correlation analysis showed a moderate negative link with the first programming test (r = –0.315) and weak negative links with the second test, the final exam, and overall course points. In contrast, there was no measurable connection between grades and how helpful students found the tools.

This suggests that the most frequent users were often struggling students. As the authors point out, the pattern could mean that weaker students turn to AI more often, or that the reliance itself limits learning.
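For readers unfamiliar with the statistic the study relied on, Spearman's rank correlation is simply the Pearson correlation computed on the ranks of the data, which makes it sensitive to monotonic rather than strictly linear relationships. The sketch below is illustrative only (it is not the study's code, and the usage/score numbers are hypothetical):

```python
# Illustrative sketch of Spearman's rank correlation, pure stdlib.
# Data below is hypothetical, not from the study.

def rank(values):
    """Assign 1-based ranks, averaging ranks across tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a block of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the rank-transformed data."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical pairing of weekly chatbot queries with exam scores:
usage = [0, 1, 1, 2, 3, 5, 5, 8]
scores = [92, 85, 88, 80, 75, 70, 72, 60]
print(round(spearman(usage, scores), 3))  # strongly negative here
```

A value near –1 indicates that higher usage consistently accompanies lower scores in the sample; the study's reported r = –0.315 is a much weaker, "moderate" version of that pattern.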

Changes in study habits

Frequent users reported mixed effects on their learning behavior. Many felt less stuck on homework and more motivated to attempt additional tasks. However, they also admitted to exploring fewer alternative solution paths and seeking less help from teaching assistants. As one student observed, “The more I use AI, the less I think for myself.”

Notably, most students rejected the idea that chatbots kept them from engaging with the course materials, though some acknowledged the risk of overreliance.

Broader lessons

This study highlights both the promise and the pitfalls of AI support in education. Students valued the chatbots as fast, non-judgmental helpers, especially for debugging. However, overuse appears to be linked to weaker performance, raising questions about when and how these tools should be integrated.

The authors cautioned that the findings reflect a single course at one institution and rely on self-reported data. They suggest that future work should span multiple universities and combine surveys with direct usage logs to build a richer picture.

For educators, the results point to a need for structure. Chatbots can enhance learning when used as a supplement, but unchecked reliance can limit the development of problem-solving skills. Deliberately integrating AI into course design, rather than letting students navigate the tools on their own, may offer a path that balances efficiency with deeper learning.

