Experts speak to how AI has transformed the research industry

A European Union-funded project sought to employ artificial intelligence to enable faster, more thorough border control. When the $5.2 million project made global headlines, the public backlash was widespread and vehement.

“It was a nightmare,” said Keeley Crockett, a U.K.-based researcher involved in the project. “I had to be removed from the web everywhere because people wanted to kill me.”

The project, iBorderCtrl, was a research initiative that ran from 2016 to 2019. It brought together state-of-the-art hardware and software technologies — biometric verification, automated deception detection, document authentication and risk assessment — to reduce the cost and time spent per traveler at land border crossings.

Crockett, a professor in computational intelligence at Manchester Metropolitan University, led a team of researchers developing automated deception detection.

The idea was to adapt a system she had worked on with her former Ph.D. supervisor: the Silent Talker, a psychological profiling system that uses artificial neural networks to detect deception from nonverbal facial behavior. The Silent Talker had proven successful over 10 to 15 years of development and testing, according to Crockett.

“I was in a false sense of security at the beginning, because we were hitting the milestones,” Crockett said. “Even the European Commission did an article on their website saying, ‘We’re really happy with the progress.’”

Then came the turning point: The media caught wind of the project, and many articles were published that were “just not true,” according to Crockett.

The models used were a specialized set of neural networks. Researchers could explain the technology to a domain expert specifically trained in psychological profiling and lie detection, someone who understood facial cues. But for an everyday person affected by the technology, without that skill set, it would have been very difficult to understand.

Ultimately, the European Commission never adopted iBorderCtrl, owing to the backlash and risk. Now, six years later, explainability is no longer the primary issue.

“The issue is, should some systems never, ever be subjected to machine learning?” she said.

Crockett sees a striking parallel between her experience on the project and society's more recent reckoning with the adoption of generative AI.

“All this kind of stuff that’s coming now, it’s the same conversation,” Crockett said. “But everyone’s so driven by the hype of using it for productivity and efficiency, they’re not really thinking about the consequences of where it might be. And because I’ve lived through the nightmare, that’s what I want to try and raise (awareness about).”

The adoption of AI is particularly critical to consider in research, as the findings of scientific inquiry fundamentally shape society's future. While machine learning has been applied in science for years, rapid AI development has driven a drastic uptick in usage across the research industry.

At this early stage of AI development, researchers and journal editors are excited about its potential to accelerate scientific discovery, but wary of blind adoption amid persistent issues with accuracy and reliability. In light of the hype surrounding AI, experts underscore the need for widespread AI literacy training and guardrails to ensure responsible usage.

An abbreviated history of AI in research

The term AI encompasses a range of technologies applied across different scientific disciplines: machine learning, natural language processing, computer vision and more.

The 2010s ushered in the so-called “AI boom,” an era of high interest and funding for AI advances. Deep learning, a subset of machine learning, enabled several breakthroughs: AI systems surpassed the strongest human players in complex games such as chess and Go and expanded into computer vision and speech recognition.

The present decade has further accelerated this trend. OpenAI’s release of ChatGPT in late 2022 made a powerful, general-purpose chatbot globally accessible to the public. Within two months, it had accrued more than 100 million users, making it the then-fastest-growing app of all time. Since then, the applications of AI across different fields of science have proliferated: genomics, drug discovery, climate modeling, astrophysics and more.

Furthermore, the Trump administration issued an executive order in November 2025 launching the Genesis Mission: a national effort to “double the productivity and impact of American research and innovation within a decade,” according to the U.S. Department of Energy’s press release.

The department was tasked with creating a platform that enables technology companies to access federal scientific datasets for the purpose of developing AI agents to generate hypotheses and automate research workflows. Companies involved as collaborators include Microsoft, IBM, OpenAI, Google, Anthropic and Quantinuum.

While many liken the AI boom to past industrial revolutions, Crockett said this time is different.

“I worry that knowledge and expertise of individuals will be lost in many of the processes, because the art of doing research is to read, learn and synthesize knowledge,” she said. “We learn by doing it in our brains. … We’re going to get very lazy if we just rely on a system that’s going to do something like this.”

Efficacy of AI for research

Although AI has not been widely accepted for research analysis and synthesis, it has proven useful for literature search, as in the case of Andrea Wisenöcker, a research associate at Johannes Kepler University Linz in Austria.

It was 2022 — Wisenöcker was seated in her office, hard at work on her AI-assisted meta-analysis of student learning loss during the COVID-19 pandemic, two monitors in front of her. On one, she referenced a list of literature she knew to be relevant; on the other, she trained the AI tool ASReview by searching for the studies on the list and manually marking them as relevant.

Having started her research career in 2021, Wisenöcker said she was familiar with the applications of AI within her first year in the field.

Therefore, she and her research team decided to employ ASReview — which uses machine learning algorithms to assist in screening titles and abstracts — in their literature search. After the researchers trained the tool by marking about 300 studies as relevant or irrelevant, ASReview ranked the roughly 30,000 potentially relevant studies it was fed in order of relevance.

The benefit of using AI in the literature search, according to Wisenöcker, was the ability to quickly screen an exceptionally large body of studies, which is especially critical for meta-analyses, where researchers aim to capture all existing literature on a given topic.

“You can imagine that looking through 30,000 studies yourself would be basically impossible,” Wisenöcker said. “If we had opted to not use an AI tool, we would have had to either limit the number of databases we look for, or we would have had to make the search term narrower, with the risk of missing relevant papers.”

Wisenöcker also noted enlisting ASReview in literature search came with potential drawbacks — the main one being that it was difficult to tell whether it really understood relevance.

To counteract these potential limitations, the team built in several safeguards: conducting a small-scale literature search the traditional way as a foundation, training the AI tool on studies they had personally categorized as relevant or irrelevant, and manually screening the AI's ranked output until they found 100 consecutive irrelevant studies.
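For readers curious about the mechanics, the following is a minimal sketch of the kind of active-learning loop such screening tools run, including the stopping rule just described. It is an illustrative approximation rather than ASReview's actual code: the TF-IDF features, the logistic-regression classifier and the label_fn placeholder for the human reviewer are all assumptions.

```python
# Illustrative active-learning screening loop (not ASReview's implementation).
# Assumptions: TF-IDF features, a logistic-regression classifier and a
# label_fn callback standing in for the human reviewer's judgment.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def screen(abstracts, seed_labels, label_fn, stop_after=100):
    """Rank unlabeled abstracts by predicted relevance, have the reviewer
    label the top-ranked one, retrain, and stop once `stop_after` consecutive
    labels come back irrelevant. The seed set must contain both relevant (1)
    and irrelevant (0) examples."""
    X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
    labels = dict(seed_labels)  # index -> 1 (relevant) or 0 (irrelevant)
    consecutive_irrelevant = 0
    while consecutive_irrelevant < stop_after and len(labels) < len(abstracts):
        known = sorted(labels)
        model = LogisticRegression(max_iter=1000)
        model.fit(X[known], [labels[i] for i in known])
        unlabeled = [i for i in range(len(abstracts)) if i not in labels]
        scores = model.predict_proba(X[unlabeled])[:, 1]
        top = unlabeled[int(np.argmax(scores))]  # most-likely-relevant first
        labels[top] = label_fn(top)              # the human decision
        consecutive_irrelevant = 0 if labels[top] else consecutive_irrelevant + 1
    return labels
```

In practice, ASReview offers a choice of models and adds further refinements, but the shape of the loop is the point: rank, label, retrain, and stop once newly screened studies stop turning up relevant.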

These measures made Wisenöcker more comfortable employing AI in her research, despite her skepticism regarding reliability. Part of her worries stemmed from the fact she had not been formally trained to use AI.

As the meta-analysis marked the first time she had used AI on such a large scale, she said the biggest challenge was learning how to train the tool and recognizing when it was adequately trained. The experience made her more conscious of its limitations.

“I became a bit more critical, not in whether or not it should be used, but how it should be used,” Wisenöcker said. “These broad question marks that remained a bit throughout the study made me think, ‘OK, maybe it needs more actual research on AI-assisted research.’”

AI’s place in research writing, peer review and publishing

Then-electrical engineering master’s student Yaohui Zhang had heard a lot of complaints from other researchers about receiving poor-quality, AI-generated peer reviews after submitting their research manuscripts to journals and conferences in 2023.

“All the people are complaining about that,” Zhang said. “So we think that it might be good to do some quantification about how many people’s (reviews) or how many sentences (in the reviews) are written by the AI.”

So began Zhang's work on how AI impacts society — and academia in particular — which became the focus of his master's research at Stanford University, where he received his degree in 2025.

One of Zhang's studies aimed to quantify the usage of large language models in scientific papers through systematic analysis. His team estimated the prevalence of LLM-modified content over time by comparing the frequency of words in scientific texts published before the release of ChatGPT with that in more recent, potentially AI-modified texts.

In examining more than 1 million preprints and published papers from 2020 to 2024 on arXiv, bioRxiv and Nature Portfolio journals, Zhang found a steady increase in the usage of LLMs in scientific writing. Broken down further, computer science, with an estimated 22% of LLM-modified sentences, was identified as the field with the largest and fastest growth in LLM usage. Math and Nature Portfolio papers showed the least LLM modifications, at 8% and 9%, respectively.
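The estimation approach can be pictured as a two-component mixture: treat the corpus as a blend of human-written and LLM-modified sentences, and infer the blend weight from how often telltale words appear. Below is a deliberately simplified sketch of that idea; the per-sentence word rates, counts and marker words are invented placeholders, not figures from Zhang's study.

```python
# Simplified sketch of the mixture-model idea: the rate at which a marker
# word appears is modeled as a blend of its pre-ChatGPT (human) rate and its
# rate in known LLM-edited text. The blend weight alpha, i.e. the estimated
# fraction of LLM-modified sentences, is fit by maximum likelihood.
# All numbers are invented placeholders, not estimates from the study.
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_llm_fraction(counts, n_sentences, p_human, p_llm):
    """MLE of alpha in: P(sentence contains word w) =
    (1 - alpha) * p_human[w] + alpha * p_llm[w]."""
    def neg_log_likelihood(alpha):
        mix = (1 - alpha) * p_human + alpha * p_llm
        return -np.sum(counts * np.log(mix)
                       + (n_sentences - counts) * np.log(1 - mix))
    return minimize_scalar(neg_log_likelihood,
                           bounds=(0.0, 1.0), method="bounded").x

# Toy example with two hypothetical marker words:
p_human = np.array([0.001, 0.002])  # per-sentence rates before ChatGPT
p_llm = np.array([0.020, 0.015])    # rates observed in known LLM-edited text
counts = np.array([300, 280])       # occurrences among 100,000 recent sentences
print(f"Estimated LLM-modified fraction: "
      f"{estimate_llm_fraction(counts, 100_000, p_human, p_llm):.3f}")
```

Zhang's published analysis works over much richer word-frequency distributions than a pair of marker words, but the quantity being inferred, the share of sentences touched by an LLM, is the same.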

Given the rapidity of AI developments, research journals have been working to define their AI policies.

Springer Nature, for example, is one of the world's leading academic publishers, with a portfolio of more than 3,000 journals. According to Alice Henchley, director of communications: integrity, ethics and editorial policy at Springer Nature, the company's stance is that while AI should not replace human expertise, it can be powerful in supporting it. As such, the company has backed the use of emerging technologies for more than 10 years.

“We believe AI, when used ethically and responsibly, can improve the quality and pace of discovery and enable us to better serve our scientific communities and the wider public,” Henchley said in an email.

Nature is the leading international weekly journal publishing interdisciplinary scientific literature under Springer Nature’s Nature Portfolio. Yann Sweeney is a manuscript editor at Nature in London who handles research submissions in AI, computing and robotics.

While academics have been familiar with AI and its potential for automating parts of the science workflow for years, recent developments — primarily, the release of ChatGPT — have increased scientists’ usage of LLMs compared to previous AI tools, according to Sweeney.

“From what I gather, people are doing that quite a lot, and from my point of view, without really assessing whether they’re sufficiently skeptical of the answers they can give,” he said. “(LLMs are) not really being trained and fine-tuned to give accurate answers that much; it’s kind of trained to give possible answers.”

Amid the hype for AI, scientists fall into two categories, according to Sweeney. There are some who are very excited about the possibilities of AI and believe it will transform how science is conducted. Another subset is sick of the hype — and while they acknowledge its utility, they also recognize the marketing surrounding it.

AlphaFold 2, for example, is perhaps the most groundbreaking recent illustration of the opportunities AI holds for scientific discovery. By predicting protein structures with deep neural networks, it cracked a challenge that had stood open in biology for 50 years, and its creators were awarded the 2024 Nobel Prize in chemistry.

However, Sweeney referenced AlphaFold 2 with caution: While it is a powerful AI tool for protein structure prediction, that does not mean new drugs will immediately be developed with it. Many other steps must happen first, and with AI still in its early stages, the potential for tangible, real-world change remains unclear in most disciplines.

Hallucinations — a phenomenon in which AI presents inaccurate or misleading information and incoherent reasoning — are one of the biggest concerns of using AI for science more generally, according to Sweeney. “The promise from the tech companies is that, oh, they’ll solve this issue, they’ll work out,” Sweeney said. “It’s been three years, and they still haven’t really figured out how to do it. … It just doesn’t seem like the right technology to do a lot of the work that scientists need to do.”

Springer Nature’s AI policy requires researchers to take accountability for their work, meaning AI cannot be credited with authorship. AI can be used in research processes, but that use must be transparently disclosed, with the exception of copy editing for grammar, spelling, punctuation and tone.

Peer reviewers, however, are not allowed to use AI, according to the publisher’s policy. Sweeney said he personally is “very against” the idea, as AI tools are not yet trustworthy, and the fundamental value of peer review lies in having human experts engage with the process.

“It’s very tempting to reach to AI as a solution to this creaking system,” Sweeney said. “But once you do that, you just open the floodgates to lots of low-quality reviews, low-quality submissions. It’s going to be even worse, so I think we need to hold the line there.”

Looking ahead to AI literacy training and guardrails

Given the speed of development and adoption of AI, Crockett envisions a future in which being able to use AI will be “a core requirement” of a job, particularly for research environments. Therefore, she advocates for AI literacy.

“There has to be different levels of AI literacy programs available to different types of members of the public,” Crockett said. “I think there’s a bottom-line duty of care that all citizens should have free access to AI literacy.”

Crockett gave an example of how the U.K. has been working to address this issue: Most universities have AI workshops where every student learns how to use AI and what the caveats are.

Discussions about bringing AI literacy to younger students have already begun. The idea is that children will already be equipped with AI skills by the time they reach university and become active members of society.

“I think it’s going to be tackled top-down and bottom-up, so it’s going to come up through schools, and it’s going to come down into industries,” Crockett said. “One of the challenges is making sure those courses are available at the right level for the right person in the right context of their job.”

For research-related jobs, she said a baseline understanding of ethics and responsible research must be standardized across all disciplines. However, the languages and levels of technical understanding for each field may differ.

Moreover, guardrails for each specific type of job — be it grant reviewers, researchers or peer reviewers — must also be in place to audit where AI can and can’t be used.

AI has swiftly transformed society, and the question of how to audit it remains one of the most pressing of the near future.

“The genie is out of the bottle,” Crockett said. “Everyone’s using (AI), but how do we put guardrails around it?”


