Academic writing can now be fully automated, but what does this mean for the future of research?

Until recently, AI's role in research felt like that of a helpful assistant: summarizing papers, cleaning datasets, drafting abstracts, and so on. Researchers remained in charge of the thinking.

That changed in late 2025, when cutting-edge “frontier” AI models became able to reason and plan reliably on their own. A key feature of these models is “tool calling”: the ability not only to describe the world, but also to act on it through external tools.

This marks the rise of agentic AI: systems that not only respond to instructions but can plan, execute, and iterate independently. In science, as in other fields, chatbots have become colleagues capable of autonomously completing real-world tasks end to end.

One example is The AI Scientist from Tokyo-based Sakana AI. Announced in mid-2025 and now in its second version, it is touted by the Japanese technology company as “the first comprehensive system for fully automated scientific discovery.”

The AI Scientist scans the existing literature, generates hypotheses, writes and runs code, and analyzes results to produce complete research papers with little human involvement. Like any young scientist, it reasons, makes mistakes, and revises.

What’s the evidence? The AI Scientist’s academic paper, describing a “pipeline for automating the entire scientific process end-to-end,” was peer-reviewed and accepted by the International Conference on Learning Representations, and published in the scientific journal Nature in March 2026.

This represents something genuinely new: an autonomous AI system passing a milder version of the Turing test by demonstrating scientific quality, if not (yet) machine intelligence.

The AI Scientist’s peer-reviewed paper, explained. Video: Matthew Berman.

Other significant milestones include Singapore-based startup Analemma’s live demonstration of its fully automated research system (Fars) in February. It wrote 166 complete machine learning research papers in about 417 hours for around US$1,100 (£810). That is the equivalent of one academic paper every 2.5 hours, or the cost of employing a research assistant for a few weeks.

Google Cloud AI Research also recently announced PaperOrchestra, which takes researchers’ raw experiment logs and rough notes and turns them into submittable manuscripts with figures and verified citations. In a blind evaluation by 11 AI researchers, it easily outperformed existing autonomous systems in this area.

After spending 20 years researching disruptive innovations, I believe we have crossed a critical threshold. Although AI systems still have a long way to go to match the best achievements of humans, the era of fully automated research has arrived.

Impact on academia

The advent of autonomous research systems is impacting academic systems that are under severe strain in many countries. Over the past decade, the number of articles submitted to academic journals has grown far faster than the number of qualified reviewers, leading to suggestions that the scientific publishing system may be “overwhelmed.”

If a system like Fars can produce thousands of papers per year, scientific publishing infrastructure will face volumes it was never designed to handle. Peer reviewers have already identified AI-generated content in submissions. As submission numbers continue to grow, the role of published papers as definitive signals of the quality and skill of human researchers may change.

The optimistic view is that AI could push academia away from its heavy reliance on volume-based metrics and toward a focus on the impact and originality of publications, a reform that critics of the current system have been demanding for years.

Less optimistically, as AI-driven research expands, an academic system that rewards consistent, methodologically defensible contributions may produce a growing proportion of incremental rather than fundamentally novel work. Both the quality and the originality of research could suffer.

Science has always needed heretics to advance. The Italian astronomer Galileo, known as the “father of modern science,” was forced to recant his support for heliocentrism before the Catholic Church’s Inquisition. The Hungarian doctor Ignaz Semmelweis died in a psychiatric hospital after failing to convince his colleagues that handwashing could save lives.

Historically, though, the willingness of scientific institutions to accommodate radical approaches has also been a mainstay of scientific progress. To preserve this, AI systems must be trained to maximize novelty and transformative potential rather than relevance and incremental progress.

How AI will impact creative industries

The transformative impact of this new kind of AI extends far beyond scientific research. A notable example is The Epstein Files, a fully AI-generated podcast that topped the UK Apple Podcasts and Spotify charts in early 2026, collecting 700,000 downloads in its first week.

In music, the disruption has gone further and generated more conflict. By mid-2025, The Velvet Sundown, a completely AI-generated band, had over 1 million monthly Spotify listeners. In 2026, AI tracks began displacing human music in popular playlists, forcing the platform to introduce artist-protection features, while Deezer, facing around 50,000 AI-generated uploads each day, began removing them from its curated lists.

Ownership remains the elephant in the room. A US court has ruled that AI-generated works cannot be copyrighted, because human authorship remains a legal requirement. AI output can be produced at industrial scale, but the product cannot be legally owned.

This is important far beyond intellectual property law. In the creative industries, it threatens the royalty streams, licensing agreements, and catalog valuations on which artists, labels, and publishers have built their entire business models for generations.

Meanwhile, in science, the entire incentive structure rests on the fundamental assumption that knowledge is generated and owned by humans, and that assumption has been destabilized. When it collapses, so do many of the institutional logics that define how expertise is produced, rewarded, and trusted.

Across all these fields, the question is no longer whether AI can produce the work. It is whether we have thought hard enough about what we gain, and what we lose, when it does.


