Experts warn funders should rethink research funding before system becomes ‘broken’
The rapid rise in the use of AI is "overloading" the funding system as the volume of grant applications increases, experts have warned, and the grant system could "collapse" within 18 months if funders don't tackle the problem.
An analysis by James Wilsdon (pictured), executive director of the Research on Research Institute, and Geraint Rees, a professor at University College London, argues that funders are now under pressure to address a number of new issues posed by the increased use of AI.
Writing in Nature, the pair said that since ChatGPT, one of the most popular large language model (LLM) AI tools, was launched in 2022, there has been a 57 per cent increase in research grant applications across 12 funding bodies in the UK and Europe.
Marie Skłodowska-Curie Actions in the EU has seen a 142 per cent increase in applications for fellowships since the launch of ChatGPT, and the Wellcome Trust in the UK has seen a 100 per cent increase.
"There is growing evidence that the use of generative AI is proliferating across science," Wilsdon and Rees write. They cited an Elsevier survey that found that 58 per cent of the 3,324 researchers who responded had used AI tools in their research, and of those, 41 per cent had used AI tools to draft grant proposals.
"If we don't change our approach, it will probably take 18 months before the sheer volume of applications completely collapses the system," Wilsdon told Research Professional News.
It's not just ChatGPT that could cause problems, Wilsdon said, adding that other AI software, such as Claude, is also likely to be used increasingly in researchers' proposals.
As a result, grant reviewers may be flooded with polished, apparently high-quality applications, Wilsdon and Rees said in the article, which may force funders to make more arbitrary choices about what and whom to fund.
"Funding committees have always faced difficult choices, but at least they could argue that they were distinguishing between genuinely good ideas and merely good-sounding ones," Rees said.
“Agentic AI is making that claim increasingly hollow. Funders do not face a remote threat; data suggests the system is already under strain,” he continued.
In their paper, they warned that policymakers as well as funders need to “rethink how they allocate research funding before the system breaks down.”
The European Research Council recently announced stricter measures for grant applicants in a bid to curb the onslaught of applications received in recent calls.
ERC president Maria Leptin said the measures, which include restrictions on reapplying for researchers whose previous proposals were rejected, were a last resort in response to a "dramatic" increase in the number of applications, which exceeded the institution's evaluation capacity.
Tackling the problem
Wilsdon and Rees noted that proposals to address "overloaded" funding systems, such as allocating grants by lottery, would address the issue of quantity but would be of little use if measures of quality were no longer reliable.
To address this problem, they suggest that funding agencies should focus more on research teams and principal investigators than on written proposals. The Medical Research Council announced in March that interviews would be mandatory for all shortlisted grant applicants.
Wilsdon and Rees noted that demand management may also be needed to curb the problem, as methods like the MRC's can end up being "labour-intensive".
Wilsdon told RPN that even with such methods there are still "fundamental" problems. "If the process of shortlisting for interview still works off the back of a written proposal [expanded by AI], we haven't really gotten around the problem, especially given the volume issues that the sector is currently facing," he said.
More radical approaches could include allocating funding on an individual basis or splitting the funding pot among the eligible population of researchers in each country, Wilsdon and Rees added in their article.
An AI REF?
Wilsdon told RPN that if the issue is not addressed quickly, it could also affect the next Research Excellence Framework, which is expected to report in 2029.
"By 2029, these systems will be very pervasive. I don't see why they wouldn't default to an AI REF, unless you lock the members of the REF panel in a Faraday cage and force them to read in real time," he said.
Even if panels explicitly prohibit that kind of use, he said, the rules could still be circumvented because agentic AI software and LLMs are "already embedded" in how researchers work.
“While the terms of trade have been settled for the next REF, the speed of change here will clearly create further challenges that we will need to address,” Wilsdon added.
