Why using AI responsibly also means knowing when not to use it

Professor Sam Illingworth of Edinburgh Napier University argues that AI literacy is a core pillar of ethical AI use.

Most AI training teaches you how to get the output. Create better prompts. Refine your query. Generate content faster. This approach treats AI as a productivity tool and measures success by speed. That misses the point entirely.

Critical AI literacy asks different questions. Not “How do I use this?” but “Should I use this at all?” Not “How can I make this faster?” but “What do I lose by doing so?”

AI systems have biases that most users will never see. Consider research on British newspaper archives: fewer than 20 per cent of the newspapers actually printed in the Victorian era have been digitized, and the sample skews toward overtly political publications and away from independent voices.

Anyone who draws conclusions about Victorian society from this data risks reproducing distortions baked into the archive. The same principles apply to the datasets that power today’s AI tools. We cannot interrogate what we cannot see.

Literary scholars have long understood that writing does not simply reflect reality, but helps construct it. Newspaper articles from 1870 are not windows into the past, but curated representations shaped by editors, advertisers, and owners.

AI outputs work similarly. They synthesize patterns from training data that reflect particular worldviews and commercial interests. The humanities teach us to ask whose voices are present and whose are absent.

Research published in The Lancet Global Health in 2023 exemplifies this. Researchers attempted to invert stereotypical global health imagery using AI image generation, prompting the system to create an image of a Black African doctor providing care to white children.

Despite generating over 300 images, the AI was unable to produce this inversion. The person receiving care was always depicted as Black. The system had absorbed the prevailing imagery so thoroughly that it could not imagine an alternative.

This isn’t just about “AI slop”: articles sprinkled with “delve” and telltale punctuation. Those are merely stylistic tells. The real problem is output that perpetuates bias without scrutiny.

Think about friendship. Philosophers Mika Lott and William Hasselberger argue that AI cannot be your friend, because friendship requires caring about another’s interests for their own sake. AI tools have no interests of their own. They exist to serve users.

When companies sell AI as a companion, they offer pseudo-empathy without the friction of human interaction. AI cannot reject you or pursue its own interests. The relationship remains one-sided: a business transaction disguised as connection.

AI and professional responsibility

Educators need to distinguish between when AI supports learning and when it replaces the cognitive work that creates understanding. Journalists need criteria for evaluating AI-generated content. Healthcare professionals need protocols to integrate AI recommendations without relinquishing clinical judgment.

This is the work I pursue with Slow AI, a community exploring ways to engage with AI effectively and ethically. The current trajectory of AI development assumes we will act faster, think less, and accept synthetic output as the default. Critical AI literacy resists that momentum.

None of this requires rejecting technology. The Luddites, textile workers who organized against factory owners across the English Midlands in the early 19th century, did not oppose progress. They were skilled craftsmen protesting the social costs of an automation that threatened their livelihoods.

When Lord Byron gave his first speech in the House of Lords in 1812, he opposed the Frame Breaking Bill (which made the destruction of frames punishable by death), arguing that the Luddites were not ignorant vandals but people driven by circumstances of unprecedented suffering.

The Luddites clearly understood what the machines meant: the erasure of craft and the reduction of human skill to mechanical repetition. They weren’t against technology; they rejected its uncritical adoption. Critical AI literacy asks us to recover that insight and move beyond “how to use” to “how to think.”

The stakes are not hypothetical. AI-assisted decisions are already shaping employment, healthcare, education, and justice. Without a framework for critically evaluating these systems, we end up deferring decisions to algorithms whose limits we cannot see.

At the end of the day, critical AI literacy isn’t about mastering prompts or optimizing workflows. It’s about knowing when to use AI and when to leave it alone.

This article is republished from The Conversation.
Written by Sam Illingworth

Sam Illingworth is Professor of Creative Pedagogy at Edinburgh Napier University and an internationally recognized expert in interdisciplinary research and science communication. His research uses poetry and games to create meaningful dialogue between scientists and society. He holds a master’s degree in higher education, is a Principal Fellow of the Higher Education Academy, and has authored over 100 publications. He is editor-in-chief of Geoscience Communication and founder of the science and poetry journal Consilience.

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of the science and technology news you need to know.
