
Institutions spending research funds must have clear reasons for rejecting grant proposals amid a surge in applications. Credit: Matt Cardy/Getty
Last month, the European Research Council (ERC) announced policy changes for some grants, extending the period during which some unsuccessful applicants cannot reapply. The ERC, Europe’s main research funder, which will spend more than 16 billion euros (US$19 billion) between 2021 and 2027, was responding to a surge in applications that appears to be partly driven by the use of artificial intelligence tools.

But last week, the ERC adjusted the changes after researchers protested. Many said the original policy was unfair and too abrupt, would discourage bold proposals and would prevent researchers from responding to new advances. The ERC was right to reconsider, and in doing so it showed other funders how to listen to community concerns. The question of how to handle AI in grant applications remains, however, and equity must be at the core of any solution.
Writing in Nature last week, neuroscientist Geraint Rees and social scientist James Wilsdon reported that funding bodies from Australia to the United Kingdom have seen a sharp rise in the number of applications from 2022 onwards (G. Rees and J. Wilsdon Nature 652, 1119–1121; 2026). This coincides with the public release of OpenAI's ChatGPT, the first AI chatbot to be made widely available worldwide, and there is ample evidence that much of the increase is driven by AI. Researchers are using AI tools not only to scan the literature and summarize research, but also to propose project ideas, draft grant applications and revise applications by predicting the reactions of grant-review committees.
Current guidelines from some of the world's major research funders allow limited use of generative AI in grant applications. In such cases, the guidelines state, its use must be acknowledged, declared and done responsibly, in accordance with ethical and legal requirements. In contrast, those reviewing grant proposals for funders are prohibited from uploading proposals to generative-AI tools to produce reviews. This is partly for reasons of confidentiality, but also because funders want reviewers to make their own judgements rather than rely on machines.

In practice, these policies are not always followed. If anything, the research world finds itself in a situation in which AI has made grant applications easier to write and to review, while methods for verifying whether AI has been used have not kept pace.
Researchers are beginning to show how such verification might work. Pangram Labs, a company in New York City, has developed and is testing a tool to detect AI-generated text. Separately, researchers at Northwestern University in Evanston, Illinois, used a different method to look for evidence of AI use in grant applications submitted to US federal agencies. The team, led by computational social scientists Dashun Wang and Yifan Qian, accessed publicly available grant summaries from a database of US federal grants spanning 2021 to 2025 (Y. Qian et al. Preprint at arXiv https://doi.org/q435; 2026). To identify the use of AI tools, they asked an AI model to rewrite human-written summaries from 2021 (before ChatGPT's release) and compared the human and AI versions of the same text, allowing them to learn the telltale signs that distinguish the two.
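The paired-text idea can be illustrated with a deliberately simplified sketch. This is not Pangram's tool or the Northwestern team's actual method, and the example summaries are invented; it only shows the general logic of comparing human originals with AI rewrites to find over-represented "marker" words, then scoring new text against them.

```python
from collections import Counter

# Toy illustration of the paired-text idea: from pairs of human-written
# and AI-rewritten summaries, find words over-represented in the AI
# versions, then score new text by how many such words it contains.

def word_counts(texts):
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

def ai_marker_words(human_texts, ai_texts, min_ratio=2.0):
    """Words at least min_ratio times more frequent in the AI rewrites."""
    h, a = word_counts(human_texts), word_counts(ai_texts)
    total_h, total_a = sum(h.values()), sum(a.values())
    markers = set()
    for word, n in a.items():
        freq_a = n / total_a
        freq_h = h.get(word, 0.5) / total_h  # smooth words unseen in human text
        if freq_a / freq_h >= min_ratio:
            markers.add(word)
    return markers

def ai_score(text, markers):
    """Fraction of a text's words that are AI marker words."""
    words = text.lower().split()
    return sum(w in markers for w in words) / max(len(words), 1)

# Invented example summaries standing in for the 2021 human originals
# and their AI rewrites.
human = ["we measure neural activity in mice during sleep",
         "our study tests a new vaccine in older adults"]
ai = ["we delve into the intricate landscape of neural activity",
      "we delve into a comprehensive evaluation of a novel vaccine"]

markers = ai_marker_words(human, ai)
print(sorted(markers))
print(ai_score("we delve into a comprehensive analysis", markers))
```

Real detectors use far richer features and large training corpora, but the core contrast is the same: the same content rewritten by a model acquires a measurably different word distribution.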
Radical rethink
Rees and Wilsdon are among those who argue that the arrival of AI requires a fundamental rethink of the grant system. As the quality of AI-assisted applications improves over time, they argue, funders will find it increasingly difficult to distinguish which proposals to fund and which to reject. Funders' resources are always finite, so many proposals will still have to be rejected; but without a clear basis for those decisions, funders' relationships of trust with researchers will be at risk.

Various measures have been proposed. These include using lotteries to distribute grants and having grant applicants review each other's work. Such models are thought to be at least as fair as existing methods of distributing grants. Another option is to send relatively large amounts of funding to institutions as block grants, rebalancing the system so that institutions can spend according to their needs.
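To make the lottery idea concrete, here is a minimal sketch of one common variant, a partial lottery: proposals that clear a peer-review quality bar enter a random draw until the budget is spent. The proposal names, scores, costs and threshold are all invented for illustration; real schemes differ in their details.

```python
import random

def fund_by_lottery(proposals, budget, threshold, seed=0):
    """Partial funding lottery.

    proposals: list of (name, review_score, cost) tuples.
    Proposals scoring at or above `threshold` enter a random draw;
    winners are funded in drawn order while the budget allows.
    Returns the list of funded proposal names.
    """
    eligible = [p for p in proposals if p[1] >= threshold]
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    rng.shuffle(eligible)
    funded, spent = [], 0
    for name, score, cost in eligible:
        if spent + cost <= budget:
            funded.append(name)
            spent += cost
    return funded

# Hypothetical proposals: (name, review score out of 10, cost in k€).
proposals = [("A", 8.5, 40), ("B", 6.0, 30), ("C", 9.1, 50), ("D", 7.8, 20)]
print(fund_by_lottery(proposals, budget=70, threshold=7.0))
```

The design choice worth noting is that peer review still gates entry; randomness only replaces the fine-grained ranking among proposals that reviewers already judge fundable, which is where rankings are least reliable.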
At the same time, it is important to weigh the pros and cons of different responses before settling on one. For example, Rees and Wilsdon call for “shifting the focus of evaluation away from the proposal and toward the principal investigator, his or her research team, and previous and ongoing research programs.” This is likely to benefit researchers at research-intensive universities and those individuals and institutions with strong track records. As the authors themselves acknowledge, less-established researchers and laboratory groups, as well as those working in emerging fields, would be at a disadvantage. Although they propose ring-fenced funding for such cases, a substantial focus on principal investigators risks reversing the gains from increased diversity in science, and the quality of questions that diversity brings. As has been argued before, it is unwise to invest too much power in the lead researcher: in the world of team science, authority and responsibility need to be distributed more evenly.
AI is transforming science. Funding bodies, along with researchers, publishers and policymakers, need to adapt quickly. All stakeholders should consider what steps to take to ensure that AI is used responsibly and transparently. That need not mean fundamental or disruptive change, but it is what is required to ensure that the funding system continues to support the highest-quality proposals in a fair and equitable manner.
