Image source: Leo Garbutt
An international project shows the potential of the technology – if the structures and values are right, says Denis Newman-Griffis
Artificial intelligence is already reshaping how researchers carry out projects and write publications and grant applications. But where do AI and machine learning technologies fit into research funding and evaluation? And how can funders shape AI to be used responsibly in their work, in support of the best of the system they help steward?
For the past two years, the London-based Research on Research Institute (RoRI) has worked with a consortium of international funders to answer these questions.
The Grail project (short for Getting Responsible about AI and machine learning in research funding and evaluation) brought together 13 public and private funders from 11 countries across Europe, North America and Australia, building a community of practice to better understand how funders are exploring and applying AI and machine learning, and how they can do so responsibly in future.
Today we are launching Funding with Algorithms: a handbook for the responsible use of AI and machine learning by research funders. It sets out the key information funders need to know for their work, along with important lessons drawn from detailed discussions of AI in funding and evaluation.
What funders need to know
The handbook outlines core concepts that anyone using AI should understand, including that AI is an umbrella term with a long history, covering a wide range of technologies that go well beyond large language models (LLMs). It also highlights the policy context motivating funders to explore AI, and the organisational challenges involved in implementing it.
The handbook is grounded in real-world practice, explaining the steps involved in applying AI and presenting case studies from participating funders including the Swiss National Science Foundation (SNSF), Denmark's Novo Nordisk Foundation, Spain's La Caixa Foundation, the Norwegian Research Council and UK Research and Innovation.
We found that exploring AI and bringing it into responsible practice is not really a technology issue. The key lessons are about the people, processes and practices involved in using AI: about putting the right structures around it, the teams that implement and manage it, and the appropriate policies, values and culture.
Rather than looking for nails to hit with an AI hammer, the key to using the technology effectively is to start from problems and goals, and work from there to understand whether and where AI technologies can help.
With the technology changing so rapidly, it can feel as though any guidance or best practice will be outdated within six months. But we found that what matters in exploring or applying AI is not the technology itself: it is the skills and the questions asked that build real resilience to AI's shifting winds.
AI thinking
Drawing on funders' experiences, the handbook presents a model of AI thinking that describes the practical impact of the critical capabilities teams need to use these technologies responsibly. Starting from matching a real problem to the right technology in a particular context, it helps funders understand how to work with what exists now and how to respond to new technologies as they emerge.
Overall, funders are cautiously optimistic about AI. They see its potential to help them work more efficiently and effectively, to put the right information in front of the right people at the right time, and even to bring new insights into their work.
For example, one of the Grail funders described how exploring AI in peer review helped them identify unexpected patterns in how reviewers are matched to proposals and in what reviewers highlight in their reviews. At the same time, funders are acutely aware of the organisational, reputational and regulatory risks around issues such as data security, as well as AI's impact on research culture.
There is a spectrum of attitudes and approaches to AI. Some funders, such as the SNSF and La Caixa, are early adopters, using AI to match proposals with reviewers. The Norwegian Research Council has used AI to analyse the societal impact of projects in its funding portfolio.
Other funders are just beginning their journeys, exploring how to get started and which applications AI might help with. Even those not currently exploring AI in their own processes – more than a third, according to a recent Global Research Council survey – still need to understand how applicants are using it, and to set standards and guidance.
The AI landscape in funding and evaluation is evolving rapidly. In the near future, we are likely to see chatbots that help applicants prepare funding applications, AI-generated summaries of reviewer feedback, and even non-academic summaries of funded research.
These draw on many technologies, from general-purpose LLMs to bespoke machine learning. RoRI's handbook provides a strong foundation for starting to explore such future applications, and for ensuring that AI is used ethically and responsibly, in ways that are useful for applicants, funders and the research system.
Denis Newman-Griffis is a senior lecturer and theme lead in AI-enabled research at the Centre for Machine Intelligence at the University of Sheffield, and a co-lead of the Grail project. They will be speaking at the Metascience 2025 conference on 30 June.
Research Professional News is a media partner of Metascience 2025, which will be held in London from 30 June to 2 July.
