One-third of corporate researchers are still not using artificial intelligence (AI) tools in their work, according to findings from Elsevier’s latest Researcher of the Future report.
The report is based on responses from both academic and business researchers and provides a window into how researchers view the current and emerging landscape shaped by rapid technological change, including the adoption of AI.
Time savings and possibilities
Many corporate researchers who have used AI tools report clear benefits. The impact on day-to-day work is particularly pronounced, with 63% of respondents saying AI saves them time. Just over half (54%) believe that AI will empower their professional activities, and 47% feel it will provide them with increased autonomy.
Looking to the future, 76% of business researchers surveyed expect AI to save even more time in the next two to three years. Furthermore, 49% believe that AI will facilitate the creation of new knowledge, and 44% expect AI to improve the quality of their work.
Concerns and limitations
Despite these perceived benefits, the report highlights hesitancy and concerns about the widespread use of AI in research. A key issue identified is trust in AI-generated responses: only 27% of respondents believe that AI tools are trustworthy. Opinions on the usefulness of AI answers are also divided, with 46% agreeing that AI will provide useful answers and a significant portion (29%) expecting the answers not to be useful.
There is even more hesitation about using AI for higher-value research tasks. According to the report, 44% of corporate researchers will not use AI to write or draft papers, 47% will not use it to generate hypotheses, and 49% will not use it to design experiments. For now, AI is used primarily for administrative tasks rather than core research activities.
Training and governance
Elsevier’s research also highlighted challenges around skills and organizational governance. Only 35% of corporate researchers feel they are adequately trained in the use of AI, and only 41% believe they have good AI governance in place within their organizations. In contrast, 21% disagreed that their organization maintains effective AI governance. These gaps suggest that there is scope to improve researchers’ confidence and readiness to use AI tools.
Among researchers already using AI, nearly a third (31%) rely on general-purpose AI tools rather than research-specific AI tools. The report suggests that research-specific, customized AI platforms with verifiable outputs could help build greater trust and encourage use in more substantive research functions.
What researchers are looking for
Corporate researchers identify specific features that would drive widespread adoption of AI tools in the workplace: 70% cite the need for automatic citations and transparent sources, 64% want clear factual accuracy and safety training, and 63% want confidential treatment of research inputs. These preferences underline the importance of trust, transparency, and data security in AI-supported scientific research.
Industry perspective
Commenting on the findings, Stuart Wayman, Elsevier’s President of Corporate Markets, said:

“AI has great potential to accelerate discovery, but general-purpose tools have never been built to achieve the precision and traceability required for scientific research. As this study shows, researchers need transparent AI that cites reliable sources and explains its reasoning. Above all, it must meet the same standards of evidence and reproducibility as your own work. Achieving that will depend on domain-specific data, rigorous validation, and collaboration across the research ecosystem.”
Report background
The Researcher of the Future survey examines not only the role of AI in research, but also changing attitudes towards research integrity, collaboration, and expectations for researchers to demonstrate the impact of their work. The findings are based on responses from 122 business researchers and reflect their perspectives on AI’s evolving role in research practices and innovation.
