Knowing when not to use AI is also a skill

Most AI training teaches you how to get the output. Create better prompts. Refine your query. Generate content faster.

This approach treats AI as a productivity tool and measures success by speed. That misses the point entirely.

Critical AI literacy asks different questions. Not “How do I use this?” but “Should I use this at all?” Not “How can I make this faster?” but “What do I lose by speeding it up?”

AI systems have biases that most users will never see. Analyzing British newspaper archives in 2025, researchers found that less than 20 per cent of the Victorian newspapers actually printed have ever been digitized. The sample skews toward overtly political publications and away from independent voices.

Anyone who draws conclusions about Victorian society from this data risks reproducing distortions baked into the archive. The same principles apply to the datasets that power today’s AI tools. We cannot interrogate what we cannot see.
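To make the sampling problem concrete, here is a toy simulation in Python. Every number in it is invented for illustration and is not drawn from the study; it simply shows how a digitization process that favors one kind of publication produces an archive that misrepresents what was actually in print.

```python
import random

random.seed(42)

# Hypothetical population (invented numbers): 10,000 Victorian newspapers,
# 30% of them overtly political, 70% independent.
papers = ["political"] * 3000 + ["independent"] * 7000

# Suppose digitization favored political titles: 40% of political papers
# were scanned versus 10% of independent ones (again, made-up rates).
digitized = [p for p in papers
             if random.random() < (0.40 if p == "political" else 0.10)]

share_in_print = papers.count("political") / len(papers)
share_in_archive = digitized.count("political") / len(digitized)

print(f"Digitized: {len(digitized)} of {len(papers)} papers "
      f"({len(digitized) / len(papers):.0%})")
print(f"Political share in print:   {share_in_print:.0%}")   # ~30%
print(f"Political share in archive: {share_in_archive:.0%}") # ~60%+
```

In this toy archive, under 20 per cent of the papers get digitized, and political titles become the majority of the archive despite being a minority in print. Any model trained only on the digitized subset inherits that skew.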

Literary scholars have long understood that writing does not simply reflect reality, but helps construct it. Newspaper articles from 1870 are not windows into the past, but curated representations shaped by editors, advertisers, and owners.

AI outputs work similarly. They synthesize patterns from training data that reflect particular worldviews and commercial interests. The humanities teach us to ask whose voices are present and whose voices are absent.

A study published in The Lancet Global Health in 2023 illustrates this. Researchers tried to invert stereotypical global health imagery using AI image generation, prompting the system to depict a Black African doctor providing care to white children.

Across more than 300 generated images, the AI proved unable to produce this inversion: the people receiving care were almost always rendered as Black. The system had absorbed the existing imagery so thoroughly that it could not picture an alternative.

AI slop isn’t just an article with “delve” and em dashes sprinkled through it. Those are merely stylistic tells. The real problem is output that perpetuates bias without scrutiny.

Think about friendship. Philosophers Micah Lott and William Hasselberger argue that an AI cannot be a friend because friendship requires caring about another person’s interests for their own sake. AI tools have no interests of their own. They exist to serve users.

When companies sell AI as a companion, they offer pseudo-empathy without the friction of real human interaction. An AI cannot reject you or pursue its own interests. The relationship remains one-sided: a business transaction disguised as connection.

AI and professional responsibility

Educators need to distinguish between when AI supports learning and when it replaces the cognitive work that creates understanding. Journalists need criteria for evaluating AI-generated content. Healthcare professionals need protocols to integrate AI recommendations without relinquishing clinical judgment.

This is the work I pursue through Slow AI, a community exploring ways to engage with AI effectively and ethically. The current trajectory of AI development assumes we will act faster, think less, and accept synthetic output as the default. Critical AI literacy resists that momentum.

None of this requires rejecting technology. The Luddites (textile workers who organized against factory owners across the English Midlands in the early 19th century) who smashed textile machinery did not oppose progress. They were skilled craftsmen protesting the social costs of an automation that threatened their livelihoods.

When Lord Byron rose in the House of Lords in 1812 to give his maiden speech against the Frame Breaking Bill (which made destroying the mechanized knitting frames used in the textile trade a capital offence), he argued that the Luddites were not ignorant vandals but people driven by circumstances of unprecedented distress.

The Luddites understood clearly what the machines meant: the erasure of craft and the reduction of human skill to mechanical repetition. They were not against technology; they rejected its uncritical adoption. Critical AI literacy asks us to recover that insight, moving beyond “how to use” toward “how to think.”

The stakes are not hypothetical. AI-assisted decisions are already shaping employment, healthcare, education, and justice. Without a framework for critically evaluating these systems, we end up delegating decisions to algorithms whose limits we cannot see.

At the end of the day, critical AI literacy isn’t about mastering prompts or optimizing workflows. It’s about knowing when to use AI and when to leave it alone.


This article is republished from The Conversation under a Creative Commons license. Read the original article.
