One person claims to have written 113 academic papers on artificial intelligence this year, 89 of which will be presented this week at the world's leading conference on AI and machine learning, raising questions among computer scientists about the state of AI research.
The author, Kevin Zhu, recently earned a bachelor's degree in computer science from the University of California, Berkeley, and runs Algoverse, an AI research and instruction company for high school students, many of whom are co-authors on the papers. Zhu himself graduated from high school in 2018.
His papers over the past two years have covered topics such as using AI to locate nomads in sub-Saharan Africa, assess skin lesions and translate Indonesian dialects. On LinkedIn, he touts having published “over 100 top conference papers in the past year” that have been “cited by OpenAI, Microsoft, Google, Stanford, MIT, Oxford, and more.”
Hany Farid, a computer science professor at Berkeley, said in an interview that Zhu's papers were a “disaster.” “I’m pretty sure it’s all just vibe coding from top to bottom,” he said, referring to the practice of using AI to write software.
In a recent LinkedIn post, Farid called attention to Zhu's prolific output, prompting AI researchers to point to other similar cases and to argue that the newly popular field faces a flood of low-quality research papers, fueled by academic pressure and, in some cases, AI tools.
In response to questions from the Guardian, Zhu said he oversaw 131 papers and described them as a “team effort” run by his company, Algoverse. The company charges high school and college students $3,325 for an optional 12-week online mentorship program that includes assistance with submitting work to conferences.
“At a minimum, I help review the proposal's methodology and experimental design, and I read and comment on the full draft of the paper before submission,” he said, adding that projects on topics such as linguistics, medicine, and education involve “principal investigators and mentors with relevant expertise.”
Asked whether the papers were written with AI, he said the team used “standard productivity tools such as reference management and spellchecking, and in some cases language models for copy editing and clarity.”
Overwhelmed reviewers
Review standards for AI research differ from those in most other scientific fields. Most AI and machine learning research does not go through the rigorous peer-review process of fields like chemistry and biology; it is instead often presented less formally at major conferences such as NeurIPS, the world's leading machine learning and AI gathering, where Zhu's work will be presented.
Farid said Zhu's case illustrates a larger problem in AI research. Conferences, including NeurIPS, have been overwhelmed by the increase in submissions. NeurIPS received 21,575 paper submissions this year, up from fewer than 10,000 in 2020. Another top AI conference, the International Conference on Learning Representations (ICLR), reported a roughly 70% increase in submissions for its 2026 conference, with nearly 20,000 papers submitted, up from just over 11,000 for the 2025 conference.
“Reviewers are complaining about the poor quality of papers, and even suspect that some were generated by AI. Why has this academic feast lost its flavor?” the Chinese technology blog 36Kr asked in a November post about ICLR, noting that the average score reviewers awarded to papers had fallen year on year.
Meanwhile, students and academics face increasing pressure to accumulate publications and keep up with their peers. Academics say it is unusual to produce even double-digit numbers of high-quality computer science papers in a year, let alone triple digits. Farid said students sometimes churn out “vibe-coded” papers to inflate their publication counts.
“So many young people want to get into AI. There's a frenzy right now,” Farid said.
NeurIPS reviews submitted papers, but the process is much faster and less thorough than standard scientific peer review, said Jeffrey Walling, an associate professor at Virginia Tech. This year's conference relied on a large number of doctoral students to vet papers, which one NeurIPS area chair said jeopardized the process.
“The reality is that conference referees often have to review dozens of papers in a short period of time, usually with little or no revision,” Walling said.
Walling agreed with Farid that too many papers are being published, and said he has come across other authors publishing more than 100 a year. “Academics are rewarded for the quantity of their publications rather than their quality … Everyone loves the myth of hyperproductivity,” he said.
Zhu's Algoverse FAQ page addresses how the company's programs can benefit applicants' future college and career prospects, stating: “The skills, accomplishments, and publications you achieve here are highly regarded in academia and can genuinely strengthen your university application or resume. This is especially true if your research gets accepted to a prestigious conference, an impressive feat even for professional researchers.”
Farid said that while he remains “enthusiastic” about AI itself, he advises students not to go into AI research because of the amount of low-quality work being produced by people hoping to improve their career prospects.
“It's just a mess. You can't keep up, you can't publish, you can't do good work, you can't be thoughtful,” he said.
A flood of slop
The process still produces plenty of great work. Famously, Google's paper on transformers, “Attention Is All You Need” (the foundation for the AI advances that led to ChatGPT), was published at NeurIPS in 2017.
NeurIPS organizers agree that the conference is under pressure. In comments to the Guardian, a spokesperson said that the growth of AI as a field has resulted in “a significant increase in the number of paper submissions and an increased value placed on peer-reviewed acceptance at NeurIPS”, placing “a significant strain on our review system”.
NeurIPS organizers said Zhu's submissions were primarily to workshops within NeurIPS, which have a different selection process from the main conference and are often where early-career work is presented. Farid said this doesn't really explain why one person would have their name on more than 100 papers.
“I don't think there's a convincing argument for putting your name on 100 papers to which you made no meaningful contribution,” Farid said.
The problem is bigger than the flood of papers at NeurIPS. According to a recent article in Nature, a large number of ICLR submissions were reviewed with the help of AI, producing apparently hallucinated citations and feedback that was “very redundant with many bullet points”.
The sense of decline is so widespread that finding a solution to the crisis has become the subject of papers in its own right. A May 2025 position paper (an academic, evidence-based version of a newspaper editorial) written by three South Korean computer scientists proposed solutions to the “unprecedented challenges of a rapid increase in paper submissions, with growing concerns about peer review quality and reviewer responsibility”, and won an outstanding paper award at the 2025 International Conference on Machine Learning.
Meanwhile, Farid said, big tech companies and small AI safety organizations alike now dump their research on arXiv, a site once reserved for preprints of little-read math and physics papers, and the internet is full of research presented as science but not held to any review standard.
The result, Farid said, is that it is nearly impossible for journalists, the public, or even experts in the field to know what is actually going on with AI. “The average reader trying to understand what's going on in the scientific literature has no chance at all. The signal-to-noise ratio is basically 1. I go to these conferences and can barely understand what the heck is going on.”
“What I tell students is that if you're trying to optimize for publications, it's honestly not that difficult. Just do really crappy, low-quality work and flood the conferences with it. But if you want to do really thoughtful, careful work, you're at a disadvantage, because you've effectively unilaterally disarmed,” he said.
