The majority of academics do not want artificial intelligence to be used to assess research in the next Research Excellence Framework (REF), according to a new report.
Ahead of guidance changes for REF 2029 due to be published this month, the survey also found that senior university staff are generally more supportive of the use of AI and are less likely to succumb to “moral panic” around its use.
The report, led by the University of Bristol and funded by Research England, found that some universities are already using generative AI to assess the quality of research.
However, it showed wide variation in how AI is used: some universities are using AI tools to gather evidence of real-world impact, while others are building new tools to streamline the REF process or evaluate research.
In a survey of around 400 academics and professional services staff conducted as part of the study, the majority of respondents strongly opposed all aspects of the use of AI in the REF.
Two-thirds strongly opposed the idea that universities should use it to support internal evaluations of REF research outputs, and three-quarters strongly opposed REF panels using it when assessing outputs.
A further 86% disagreed with AI being used to support the REF committee’s assessment of impact case studies.
The cost of REF 2029 is likely to be even higher than the £471 million spent on the 2021 exercise, and lead author Richard Watermeyer, professor of higher education at the University of Bristol, said AI had the potential to ease some of the burden.
“GenAI has the potential to revolutionize research assessment at the national level, helping create a more efficient and level playing field.”
Some respondents to the report highlighted the benefits of using AI tools to handle some “difficult aspects” of REF preparation and reduce the significant burden placed on academics when reviewing outputs for their institution’s REF selection.
But Watermeyer said GenAI does not provide a complete solution and acknowledged the “vocal opposition” to GenAI’s inclusion in the REF that the survey revealed.
“It may also create new bureaucratic challenges of its own, such as establishing new requirements and procedures for proper use.”
The report found mixed opinions among the 16 pro vice-chancellors interviewed, with some urging caution in the midst of an “AI bubble” until the limits of the technology are clearer, and others questioning the extent to which AI can be trusted.
But another said: “I think just sitting back and saying it’s not going to happen, that we’re not monitoring it, is very limiting of what the future holds…I think there’s quite a bit of moral panic going on.”
Watermeyer said that while opposition to AI is concentrated in specific academic fields, particularly the arts, humanities and social sciences, professional services staff tend to be far more enthusiastic about AI’s potential.
Stephen Hill, research director at Research England, said the findings provided “both a warning and a call to action”.
“It equally warns against haste and complacency, while urging us to lead the field with principles, collaboration and informed critique. With the right safeguards, the integration of GenAI will help maintain excellence, fairness and trust in UK research assessment.”
The authors recommended that all universities establish and publish policies on the use of GenAI for research purposes, and that staff receive adequate training on the responsible and effective use of AI tools, alongside strong national oversight.
The majority of interviewees warned that without standardized tools across the sector, using GenAI in REF preparation would “deepen structural inequalities in resource-poor institutions.” The report therefore also calls for the development of a high-quality, shared AI platform for the REF, which should be accessible to all institutions.
patrick.jack@timeshighereducation.com
