New research shows that generative AI is causing "anxiety" and "discomfort" at third level and in further education institutions.
Research published by Quality and Qualifications Ireland (QQI), the body responsible for standards in education services, shows there is "limited awareness" among both students and staff of existing institutional guidance and policies on the effective use of artificial intelligence.
It points to an "urgent" need for generative AI literacy among both educators and learners, so that the disruptive technology can be used "ethically and effectively".
Both staff and students expressed anxiety and discomfort that the use of generative AI for assessment purposes could lead to issues around fairness and transparency in the grading process.
The study was conducted in two parts between December 2024 and February 2025, one for educators and one for students. Many of those surveyed believe it is "appropriate" for AI to be used in assessment.
Many respondents said they believe generative AI is very likely to change the way students' work is assessed over the next five years.
The study also revealed a distinct gap between what further education and third-level educators believe students use AI for and what students actually use it for.
Fewer than one in ten students surveyed said they now use AI almost every day, while 30% of those asked said they had never used generative AI in academic work in the past 12 months.
The potential use of generative AI tools such as ChatGPT and Gemini by students has already posed significant issues at second level, with teachers and unions expressing discomfort at the Department of Education sanctioning the use of AI for project work, provided it is properly referenced.
This approach, coupled with a redeveloped Leaving Cert in which project work and continuous assessment now count for 40% of the final grade in some subjects, has led members of the ASTI union to vote for potential industrial action in protest at what they see as inadequate official guidance on the use of AI.
