In 2023, the science fiction literary magazine Clarkesworld stopped accepting new submissions because it was receiving too many works generated by artificial intelligence. As far as the editors could tell, many submitters had simply pasted the journal's detailed submission guidelines into an AI tool and sent in the results. And they weren't alone: other fiction magazines have also reported large numbers of AI-generated submissions.
This is just one example of a broader trend. Institutions have long relied on the effort required to write, and the effort required to evaluate writing, as a natural limit on volume. Generative AI removes that limit, and the humans on the receiving end cannot keep up.
This is happening everywhere. Newspapers, like academic journals, are inundated with AI-generated letters to the editor. Lawmakers are receiving floods of AI-generated comments from constituents. Courts around the world are being swamped with AI-generated filings, especially from self-represented litigants. AI conferences are filled with AI-generated research papers. Social media is full of AI-generated posts. The same is true of music, open source software, education, investigative reporting, and hiring.
Like Clarkesworld in its initial response, some of these institutions have suspended their submission processes. Others have mounted defenses against the influx of AI-generated input, often by using AI themselves. Academic reviewers increasingly use AI to evaluate papers that may have been generated by AI. Social media platforms are turning to AI moderators. Court systems use AI to triage and process large volumes of cases. Employers use AI tools to screen candidate applications. Educators use AI not only to grade papers and administer exams, but also as a feedback tool for students.
All of this amounts to an arms race: adversaries rapidly iterating on the same technology for opposing ends. Many of these arms races have clearly detrimental effects. If courtrooms are filled with frivolous AI-generated lawsuits, society suffers. Harm also follows when established measures of academic achievement, such as publications and citations, accrue to researchers who fraudulently submit AI-written letters and papers rather than to those with the most influential ideas. The worry is that AI-enabled fraud will ultimately undermine the systems and institutions on which society depends.
AI benefits
But some of these AI arms races have surprising hidden benefits, and the hope is that at least some organizations will be able to change in ways that strengthen them.
Science is likely to emerge stronger thanks to AI, but it faces real problems when AI errors slip through. Consider the example of AI-generated nonsense phrases finding their way into published scientific papers.
Using AI to help scientists write academic papers can be a good thing, if it is used carefully and with disclosure. AI has become a key tool in scientific research, including literature review, programming, and data coding and analysis. For many, it has also become an important aid to expression and scientific communication. Before AI, well-funded researchers could hire humans to help write their papers, and for many authors whose primary language is not English, hiring that kind of assistance was an expensive necessity. AI provides it to everyone.
In fiction, fraudulently submitted AI-generated works harm both human authors, who face increased competition, and readers, who feel cheated when they unknowingly read machine-written work. However, some outlets may welcome AI-assisted submissions under proper disclosure and clear guidelines, and may use AI to evaluate them against criteria such as originality, suitability, and quality.
Other outlets may reject AI-generated work entirely, but this comes at a cost. It is unlikely that human editors, or any technology, will retain the ability to reliably distinguish human from machine writing. Instead, outlets that want to publish only human work should limit submissions to a set of authors they trust not to use AI. When these policies are transparent, readers can choose the kind of writing they prefer and read from either or both types of publication.
Nor is it a problem when job seekers use AI to polish their resumes or write better cover letters; the wealthy and privileged have long had access to human help with these tasks. But when AI is used to lie about identity or experience, or to cheat in a job interview, a line is crossed.
Similarly, democracy requires that citizens be able to express their opinions to their representatives and to one another through media such as newspapers. The rich and powerful have long been able to hire writers to turn their ideas into compelling prose, and it is a good thing that AI is extending that assistance to more people. But this is also where AI's mistakes and biases can do harm. Citizens may use AI as more than a time-saving shortcut: it extends their knowledge and abilities, and it may generate claims about historical, legal, or policy matters that they cannot reasonably be expected to verify on their own.
Scam booster
What we don't want is lobbyists using AI in astroturf campaigns, generating stacks of letters and passing them off as the personal opinions of constituents. This is another old problem that AI is making worse.
What separates the beneficial uses from the harmful ones here is power dynamics, not anything specific to the technology. The same tool that reduces the effort required for citizens to share their lived experiences with legislators also enables corporate interests to misrepresent public opinion at scale. The former is a power-equalizing application of AI that enhances participatory democracy; the latter is a power-concentrating application that threatens it.
In general, we believe that the writing and cognitive supports that have long been available to the rich and powerful should be available to everyone. The problem arises when AI makes it easier to commit fraud. Any response must balance embracing the new democratization of access with preventing fraud.
There is no way to turn this technology off. High-performance AI is widely available and can be run on a laptop. Ethical guidelines and clear professional norms help those who act in good faith, but there will never be a way to completely prevent academic authors, job seekers, or the public from using these tools, whether as legitimate aids or as instruments of fraud. That means more comments, more letters, more applications, more submissions.
The problem is that the institutions on the receiving end of this AI-powered deluge cannot cope with the increase in volume. What can help is developing assistive AI tools that benefit institutions and society while limiting fraud. That may mean embracing AI assistance within these adversarial systems, even if defensive AI never gains a lasting advantage.
Balancing harm and benefit
The science fiction community has been wrestling with AI since 2023. Clarkesworld eventually resumed accepting submissions, saying it had found a workable way to distinguish stories written by humans from those written by AI. No one knows how long, or how well, that will continue to work.
The arms race continues. There is no easy way to determine whether the potential benefits of AI outweigh its harms, now or in the future. But as a society, we can influence the balance between the harms it causes and the opportunities it offers as we navigate a changing technological landscape.
