
The explosion of generative artificial intelligence (AI) tools has sparked both excitement and anxiety about the potential benefits and harms of this technology. In developed countries, people are almost equally concerned and optimistic about it.
This is hardly surprising. AI consumes vast amounts of natural resources, yet promises to help save the planet. It increases human efficiency and productivity, yet threatens to put millions of people out of work.
For many white-collar workers, using AI no longer appears optional. The message is clear: get on board or be left behind.
Amid this uncertainty and the technology’s rapid adoption, a concerned public has continued to resist AI. One form of resistance, aimed at disrupting the functioning of large language models, is data poisoning. But how accessible is it to the everyday person? And is there anything wrong with using it?
What is AI resistance?
Acts of resistance to AI range from social sanctions and boycotts to strikes, public protests and lawsuits. These practices are driven by the recognition that AI threatens jobs, ethics, safety, democracy and sovereignty, and the environment.
AI is also said to pose an existential risk to creative industries such as music, publishing, and film. In the UK, generative AI has been characterized as “industrial-scale theft” that threatens the £124.6 billion (A$237 billion) creative sector and more than 2.4 million jobs.
People have long used civil disobedience to address social injustice. Famously, Rosa Parks’ refusal to give up her bus seat in Montgomery, Alabama, sparked a 13-month bus boycott by tens of thousands of Black residents. The boycott ended only when racial segregation on public buses was ruled unconstitutional in the United States.
Disruption has also long been central to collective action against injustice. In the fight for workers’ rights, workers have employed a variety of tactics to reduce efficiency and productivity, from hotel workers putting salt in sugar shakers to farm workers breaking machinery.
Data poisoning can be considered a modern version of these historical practices.
How does data poisoning occur?
Data poisoning refers to intentionally injecting misleading, biased, or meaningless content into the data an AI model learns from, degrading its output. Research suggests that as few as 250 tainted documents in a training dataset can compromise the output of an AI model, regardless of its size.
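To make the mechanism concrete, here is a deliberately toy sketch in Python. Everything in it is hypothetical: the data, the labels and the tiny keyword-counting “model” stand in for the vastly more complex training pipelines of real AI systems. It illustrates only the core idea that a handful of mislabelled documents slipped into training data can flip a model’s behaviour on a targeted input.

```python
# A toy illustration of label-flipping data poisoning.
# All data and numbers here are hypothetical.
from collections import Counter

def train(corpus):
    """Count how often each word appears under each label."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in corpus:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Label a text by which class its words were seen with more often."""
    words = text.lower().split()
    pos = sum(counts["pos"][w] for w in words)
    neg = sum(counts["neg"][w] for w in words)
    return "pos" if pos >= neg else "neg"

# Clean training data: "delightful" reliably signals a positive review.
clean = [
    ("a delightful film", "pos"),
    ("delightful acting", "pos"),
    ("a dull film", "neg"),
    ("dreadful acting", "neg"),
]

# The poison: five deliberately mislabelled documents targeting one word.
poison = [("delightful", "neg")] * 5

model = train(clean + poison)
print(predict(model, "a delightful film"))  # now misclassified as "neg"
```

Trained on the clean data alone, the model labels the review “pos”; the five poisoned lines are enough to flip it. The same principle, at far greater scale, is why a few hundred tainted documents can matter even to a large model.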
There are many ways to poison data. Some require advanced technical skills, while others are accessible to anyone with an internet connection, so long as the target model trains on text or images.
Researchers have developed several data poisoning tools that exploit vulnerabilities in AI models. Glaze and Nightshade allow artists to alter their images so they become useless, or actively misleading, as training data. The CoProtector tool protects open source code repositories such as GitHub from misuse as training data. Monash University and the Australian Federal Police created Silverer to help social media users alter their personal images so they cannot be used to make deepfakes.
However, you don’t need tools or advanced skills to influence AI. Simply creating a website full of false information, posting a joke on Reddit, feeding a model its own output, or editing Wikipedia can contaminate training data.
Data poisoning is commonly described as a dangerous act carried out by “cyber criminals” or “malicious actors.” But what if it is used to protect human rights?
Is data poisoning legal? Is it ethical?
Legal obligations related to data poisoning are mostly directed at AI developers and organizations. The EU’s Artificial Intelligence Act requires appropriate measures to prevent and detect data poisoning.
The legal status of data poisoning by individual users is less clear. Criminal penalties may apply under the US Computer Fraud and Abuse Act or the UK Computer Misuse Act. Interfering with AI models can also violate an AI company’s terms of service.
Even if AI data poisoning is illegal, questions may remain about its ethical status. Philosophers have long recognized that civil disobedience can be justified in situations where legally sanctioned practices create grave injustice.
Data poisoning can constitute ethical civil disobedience if an AI company, operating with state approval, undermines citizens’ rights to privacy, copyright, safe and secure work, quality education, and social and sexual safety.
For philosopher John Rawls, civil disobedience, though illegal by definition, can act as a stabilizing device within a constitutional system.
If the goal is to prevent mass unemployment, maintain electoral integrity, and protect against social harms (suicide, child abuse, increased human isolation, loss of human creativity, environmental destruction), data poisoning may be consistent with the principles of justice that underpin democratic social institutions.
A critical problem with data poisoning is that users tend to keep trusting AI systems even when a model has been compromised and its output has become contradictory, misleading, or meaningless. Whatever its motives, data poisoning can amplify inaccuracies in systems humans increasingly rely on, and so contribute to the very harms it seeks to resist.
Data poisoning is more than just an immoral cybercrime. Addressing social injustice may require ethically complex strategies. The development of AI must serve the collective interest and be consistent with public values. If employees at AI companies are asking “are we the bad guys?”, history may show that, in some cases, data poisoners are on the side of good.
The authors do not work for, consult for, own shares in, or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.
