Summary: A new AI system developed by computer scientists automatically screens open-access journals to identify potentially predatory publications. These journals often charge high fees to publish papers without providing proper peer review, undermining scientific reliability.
The AI analyzed nearly 15,200 journals and flagged more than 1,000 as suspicious, giving researchers a scalable way to spot risky outlets. The system is not perfect, but it serves as an important first filter, with human experts making the final call.
Key facts
- Predatory publishing: These journals exploit researchers by charging fees without providing quality peer review.
- AI screening: The system flagged more than 1,000 suspicious journals out of the nearly 15,200 analyzed.
- A firewall for science: Screening out bad data helps maintain confidence in research.
Source: University of Colorado
A team of computer scientists led by researchers at the University of Colorado Boulder has developed a new artificial intelligence platform that automatically flags “questionable” scientific journals.
The study, published August 27 in the journal Science Advances, tackles an alarming trend in the world of research.
Daniel Acuña, the study's lead author and an associate professor in the Department of Computer Science, is reminded of that trend several times a week in his email inbox, where spam messages arrive from people claiming to be editors of scientific journals.
Such publications are sometimes referred to as “predatory” journals. They target scientists and convince them to pay hundreds or even thousands of dollars to publish their research without proper review.
“There is a growing effort among scientists and organizations to vet these journals,” Acuña said. “But it's like whack-a-mole. You catch one, and another pops up, usually from the same company. They just create a new website and come up with a new name.”
His group's new AI tool automatically screens scientific journals, evaluating their websites and other online data against specific criteria: Does the journal have an editorial board featuring established researchers? Does its website contain many grammatical errors?
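To make the idea concrete, here is a minimal sketch of what such website checks might look like in code. This is an illustration only: the function names, the two signals shown (an editorial-board mention and a crude writing-quality proxy), and the sentence heuristic are assumptions, not the study's actual features.

```python
import re

def has_editorial_board(html: str) -> bool:
    """Check whether the journal's page mentions an editorial board at all."""
    return bool(re.search(r"editorial\s+board", html, re.IGNORECASE))

def rough_error_rate(text: str) -> float:
    """Crude proxy for sloppy writing: fraction of sentences starting lowercase."""
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    if not sentences:
        return 0.0
    bad = sum(1 for s in sentences if s[0].islower())
    return bad / len(sentences)

def quick_signals(html: str, visible_text: str) -> dict:
    """Bundle the toy signals into a feature dictionary for later classification."""
    return {
        "has_editorial_board": has_editorial_board(html),
        "lowercase_sentence_rate": rough_error_rate(visible_text),
    }
```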
Acuña emphasizes that the tool is not perfect. Ultimately, he believes, human experts, not machines, should make the final call on whether a journal is reputable.
But in an age when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever.
“In science, you don't start from scratch. You build on top of the research of others,” Acuña said. “So if the foundation of that tower crumbles, the whole thing collapses.”
Shakedown
When a scientist submits new research to a reputable publication, it usually undergoes a practice called peer review: outside experts read the study and evaluate its quality. At least, that's the goal.
A growing number of companies, however, sidestep that process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the term “predatory” journals to describe these publications.
In many cases, they target researchers in countries such as China, India, and Iran, where scientific institutions may be younger than those in the United States and Europe.
“They'll say, 'If you pay $500 or $1,000, we'll review your paper,'” Acuña said. “The truth is, they don't provide a service. They just take the PDF and post it on their website.”
Several groups have tried to curb the practice. Among them is a nonprofit organization known as the Directory of Open Access Journals (DOAJ).
Since 2003, DOAJ volunteers have flagged thousands of journals as suspicious based on six criteria. (For example, reputable publications tend to have a detailed description of their peer review policy on their website.)
But keeping pace with the spread of these publications has proved difficult for humans alone.
To speed up the process, Acuña and his colleagues turned to AI. The team trained its system on DOAJ data, then asked the AI to sift through a list of nearly 15,200 open-access journals on the internet.
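The article does not detail the model itself, but the overall workflow it describes, training on DOAJ-labeled examples and then scoring unlabeled journals, can be sketched as a standard supervised classifier. The feature columns, toy numbers, and choice of logistic regression below are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy feature matrix, one row per journal:
# [has_editorial_board, lowercase_sentence_rate, articles_per_year, self_citation_rate]
X_train = np.array([
    [1, 0.02, 120, 0.05],   # reputable-looking journal
    [0, 0.30, 900, 0.40],   # suspicious-looking journal
    [1, 0.05, 200, 0.08],
    [0, 0.25, 1500, 0.35],
])
y_train = np.array([0, 1, 0, 1])  # DOAJ-style labels: 1 = flagged as suspicious

model = LogisticRegression()
model.fit(X_train, y_train)

# Score an unlabeled journal; flag it for human review above a chosen threshold.
candidate = np.array([[0, 0.20, 1100, 0.30]])
risk = model.predict_proba(candidate)[0, 1]
if risk > 0.5:
    print(f"suspicion score {risk:.2f}: flag for human review")
```

Raising or lowering that 0.5 threshold trades broad screening against low-noise identification, the same trade-off the paper's abstract describes.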
Among these journals, the AI initially flagged more than 1,400 as potentially problematic.
Acuña and his colleagues then asked human experts to review a subset of the flagged journals. The AI did make mistakes: by the reviewers' estimate, roughly 350 of the flagged publications were likely legitimate. That still left more than 1,000 journals that the researchers identified as suspicious.
“I think this should be used as a helper to pre-screen large numbers of journals,” he said. “But human experts should do the final analysis.”
Firewall for Science
Acuña added that the researchers didn't want their system to be a “black box” like some other AI platforms.
“When you use ChatGPT, for example, you often don't understand why it's suggesting something,” Acuña said. “We tried to make ours as interpretable as possible.”
The team found, for example, that suspicious journals tended to publish unusually large numbers of articles. Their authors also listed more affiliations than authors in legitimate journals, and cited their own research at unusually high rates rather than the work of other scientists.
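One of those indicators, the self-citation rate, is straightforward to approximate. The sketch below assumes a hypothetical record layout in which each article lists its authors and the authors of the works it cites; the real study's metadata and definition may differ.

```python
def self_citation_rate(articles: list[dict]) -> float:
    """Fraction of outgoing citations whose cited authors overlap the citing authors."""
    total, self_cites = 0, 0
    for article in articles:
        authors = set(article["authors"])
        for cited in article["cited_works"]:
            total += 1
            if authors & set(cited["authors"]):
                self_cites += 1
    return self_cites / total if total else 0.0

# Tiny worked example with made-up records.
journal_articles = [
    {"authors": ["A. Smith"], "cited_works": [
        {"authors": ["A. Smith"]},   # a self-citation
        {"authors": ["B. Jones"]},
    ]},
]
print(self_citation_rate(journal_articles))  # 0.5
```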
The new AI system is not yet publicly available, but the researchers hope to make it accessible to universities and publishers soon. Acuña sees the tool as one way researchers can protect their fields from bad data, calling it a “firewall for science.”
“As a computer scientist, I often give the example of a new smartphone when it comes out,” he said.
“We know the phone's software will have flaws, and we expect bug fixes to come in the future. Perhaps the same should happen in science.”
About this AI and scientific research news
Author: Daniel Strain
Source: University of Colorado
Contact: Daniel Strain – University of Colorado
Image: The image is credited to Neuroscience News
Original research: Open access.
“Estimating the predictability of questionable open-access journals” by Daniel Acuña et al. Science Advances
Abstract
Estimating the predictability of questionable open-access journals
Questionable journals threaten the integrity of global research, yet manual screening of them is slow and difficult to scale.
Here we explore whether artificial intelligence (AI) can systematically identify such venues by analyzing website design, content, and publication metadata.
Assessed against an extensive human-labeled dataset, the method achieves practical accuracy and reveals previously overlooked indicators of journal legitimacy.
By adjusting decision thresholds, the method can prioritize either comprehensive screening or precise, low-noise identification.
At a balanced threshold, it flags more than 1,000 suspect journals, which collectively published hundreds of thousands of articles, received millions of citations, acknowledged funding from major institutions, and attracted authors from developing countries.
Error analysis revealed challenges, including discontinued titles, book series misclassified as journals, and outlets from small scholarly societies with limited online presence.
Our findings demonstrate the potential of AI for scalable integrity checks, while also highlighting the need to pair automated triage with expert review.
