AI in scientific research boosts speed but narrows scope

AI is turning scientists into publishing machines while quietly herding them into the same crowded corners of research.

This is the conclusion of an analysis of more than 40 million academic papers: scientists who use AI tools in their research publish more papers, accumulate more citations, and reach leadership roles faster than their peers.

But there’s a catch. As individual scholars soar through the academic ranks, the curiosity of science as a whole shrinks. AI-heavy research covers fewer topics, clusters around the same data-rich problems, and shows less sustained engagement between studies.

The finding highlights a tension between individual career advancement and collective scientific progress, which is perhaps unsurprising given that tools like ChatGPT and AlphaFold reward speed and scale.

“There’s a contradiction between individual motivation and science as a whole,” says James Evans, a sociologist at the University of Chicago who led the study.

And as more researchers follow the same scientific trends, some experts worry about a feedback loop of conformity and a decline in originality. “This is very problematic,” says Luís Nunes Amaral, a physicist who studies complex systems at Northwestern University. “We’re digging the same hole deeper and deeper.”

Evans and his colleagues published their findings Jan. 14 in the journal Nature.

A long-standing interest in how science evolves

For Evans, the tension between efficiency and exploration is familiar territory. For more than a decade, he has used massive publication and citation datasets to quantify how ideas spread, stagnate, and sometimes converge.

In 2008, he showed that the shift to online publishing and search increased scientists’ propensity to read and cite the same high-profile papers, accelerating the spread of new ideas while narrowing the range of ideas in circulation. Subsequent research detailed how career incentives quietly steer scientists toward safer, more conventional questions rather than riskier, more original ones.

Other studies have tracked how the rate of conceptual innovation tends to slow down in large fields over time, even as the volume of papers grows explosively. And more recently, Evans has begun turning the same quantitative lens on AI itself, investigating how algorithms reshape the organization of collective attention, discovery, and knowledge.

That early work often included the caveat that the same tools and incentives that make science more efficient can also compress the space of ideas that scientists collectively explore. New analysis suggests that AI may be pushing this dynamic into overdrive.

How AI affects careers and research topics

To quantify the impact, Evans and his collaborators at the Beijing National Information Science Research Center trained a natural language processing model to identify AI-enhanced research across six natural science fields.

Their dataset includes 41.3 million English-language articles published from 1980 to 2025 in the fields of biology, chemistry, physics, medicine, materials science, and geology. Fields such as computer science and mathematics, which focus on developing AI methods themselves, were excluded.

The researchers tracked the careers of individual scientists, examined how their papers gained attention, and then zoomed out to consider how entire fields became intellectually concentrated or decentralized over time. They compared about 311,000 papers that incorporated AI in some way, such as neural networks or large language models, with millions of other papers that did not.

[Figure: Bar chart comparing annual citation counts for research with and without AI tools across biology, chemistry, geology, materials, medicine, physics, and general science. Researchers using AI consistently receive more citations than those who do not. Credit: Veda C. Story]

The results revealed significant trade-offs. Scientists who embrace AI increase their productivity and visibility: on average, they publish three times as many papers, receive nearly five times as many citations, and become team leaders one to two years sooner than colleagues who do not use AI.

However, when these papers are mapped into a high-dimensional “knowledge space,” AI-heavy research occupies a smaller intellectual footprint, clustering more tightly around popular data-rich problems and forming a weaker network of sustained engagement between studies.

This pattern has held across decades of AI development, from the early days of machine learning to the rise of deep learning to the current wave of generative AI. “If anything, it’s intensified,” Evans said.

Intellectual narrowness is not the only unintended consequence. As automated tools have made it easier to mass-produce manuscripts and conference submissions, journal editors and conference organizers have witnessed a proliferation of low-quality and fraudulent papers and presentations, often produced on an industrial scale.

“We became obsessed with the number of papers [that scientists publish], and we don’t think about what we’re researching and how it contributes to health and a better understanding of the natural world,” says Nunes Amaral, who last year documented the phenomenon of AI-powered research paper mills.

Automate the most tractable problems

Apart from recent publication distortions, Evans’ analysis suggests that AI is primarily automating the most tractable parts of science, rather than expanding its scope.

Models trained on rich existing data excel at optimizing well-defined problems such as predicting protein structure, classifying images, and extracting patterns from large datasets. Some systems are beginning to suggest new hypotheses and directions for investigation. This is a glimpse of what some call “AI co-scientists.”

But unless intentionally designed and incentivized, such systems and the scientists who rely on them are unlikely to venture into poorly mapped areas where data are scarce and questions become more complex, Evans says. The danger is not that science will slow down, but that it will become more homogeneous. As individual laboratories race ahead, entire fields risk converging on the same problems, methods, and answers: a faster version of the same narrowing Evans first documented when search engines replaced library stacks.

“This is a really frightening paper in terms of how the second- and third-order effects of using AI in science will play out,” said Katherine Shea, a social psychologist who studies organizational behavior at Carnegie Mellon University’s Tepper School of Business in Pittsburgh.

“Certain types of questions are better suited to AI tools,” she points out. And in an academic environment where papers are the main currency of success, researchers naturally gravitate towards problems for which it is easiest to master these tools and turn them into publishable results. “Over time, it becomes a self-reinforcing loop,” says Shea.

Is the narrowing temporary?

Whether this trend continues will likely depend on how the next generation of AI tools are built and deployed across scientific workflows.

In a paper published last month, Bowen Zhou of China’s Shanghai Institute of Artificial Intelligence and colleagues argued that applications of AI in science remain fragmented, with data, computation, and hypothesis-generating tools often deployed in siloed, task-specific ways, limiting knowledge transfer and slowing innovative discoveries. But when these elements come together, AI systems for science can help expand scientific discovery, said Zhou, a machine learning researcher who previously served as chief scientist at IBM Watson Group.

Probably, Evans says. But he doesn’t think the problem is built into the design of AI algorithms. More important than technical integration, he argues, may be an overhaul of the reward structures that shape what scientists choose to work on in the first place.

“It’s not about the architecture itself,” Evans said. “It’s about incentives.”

Evans says the challenge now is to intentionally redirect the way AI is used and rewarded in science. “In some ways, we are fundamentally under-investing in the true value proposition of AI for science: what AI can do that we couldn’t do before.”

“I’m an AI optimist,” he added. “My hope is that this [paper] will be a challenge to use AI in different ways”: ways that expand the types of questions scientists pursue, rather than simply accelerating work on the most tractable problems. “This is a big challenge if we want to grow new areas.”
