
Attempts to communicate what generative artificial intelligence (AI) is and what it does have produced a variety of metaphors and analogies.
author
- Eldin Milak
Lecturer, Media School, Creative Arts and Social Research, Curtin University
From "black boxes" to "autocomplete on steroids", "parrots" and even "sneakers", the goal of these comparisons is to help you understand a complex technology by grounding it in everyday experience.
One increasingly popular analogy describes generative AI as a "word calculator". The calculator comparison, popularized in part by OpenAI CEO Sam Altman, suggests generative AI tools can help crunch large amounts of linguistic data, just as the familiar plastic objects from maths class help crunch numbers.
The calculator analogy has rightly been criticized for obscuring the more troubling aspects of generative AI. Unlike chatbots, calculators have no built-in biases, do not make mistakes, and do not pose fundamental ethical dilemmas.
However, there is also a risk in dismissing the analogy entirely, given that, at their core, generative AI tools are calculators of words.
It is not the object itself that matters, though, but the practice of calculation. Generative AI tools are designed to mimic the calculations that underpin everyday human language use.
Language has hidden statistics
Most language users implicitly recognize that, to some extent, their interactions are the product of statistical calculations.
Think, for example, of the discomfort of hearing someone say "pepper and salt" rather than "salt and pepper". Or the strange look you would get ordering a "powerful tea" at a cafe instead of a "strong tea".
The rules governing how such words are selected and ordered, along with many other sequences in language, come from the frequency of our social encounters with them. The more you hear something said in a certain way, the more natural it sounds; other, equally valid sequences come to seem implausible.
In linguistics, the vast field dedicated to the study of language, these sequences are known as "collocations". They are just one of many phenomena showing how humans pattern multi-word sequences based on whether they "feel right".
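The frequency effect behind collocations can be made concrete in a few lines of code. The sketch below is a minimal illustration using an invented toy corpus (real collocation studies draw on corpora of billions of words): it counts adjacent word pairs, showing why "salt and pepper" feels right while "pepper and salt" does not.

```python
from collections import Counter

# Invented toy corpus standing in for the vast text data behind real
# collocation statistics (an assumption purely for illustration).
corpus = (
    "pass the salt and pepper please . "
    "she ordered a strong tea . "
    "salt and pepper sit on every table . "
    "he likes strong tea in the morning . "
    "add salt and pepper to taste ."
).split()

# Count how often each pair of adjacent words (a bigram) occurs.
bigrams = Counter(zip(corpus, corpus[1:]))

print(bigrams[("salt", "and")])    # the familiar ordering appears often
print(bigrams[("pepper", "and")])  # the ordering that "feels wrong" does not
```

The same counting logic, scaled up enormously, is what makes one word sequence "feel" more natural than another.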
Why chatbot output "feels right"
One of the central achievements of large language models (LLMs), and hence of chatbots, is that they formalize this "feels right" factor in a way that successfully fools human intuition.
In fact, they are some of the most powerful collocation systems in the world.
By calculating the statistical dependencies between tokens (such as words, symbols or pixels) within an abstract space that maps their meanings and relationships, generative AI produces sequences that can not only pass as human on the Turing test, but, perhaps more unsettlingly, make users fall in love with them.
A key reason these developments were possible relates to the linguistic roots of generative AI, which are often buried in stories of the technology's development. Generative AI tools are as much a product of linguistics as they are of computer science.
The ancestors of modern LLMs such as GPT-5 and Gemini were Cold War-era machine translation tools designed to convert Russian into English. With the development of linguistics under figures such as Noam Chomsky, however, the goal of such machines shifted from simple translation to deciphering the principles of natural (that is, human) language processing.
LLM development then proceeded in stages: from attempts to mechanize the "rules" of language (such as grammar), to statistical approaches measuring the frequency of word sequences in limited datasets, to current models that use neural networks to generate fluent language.
The underlying practice of calculating probabilities, however, remains the same. Although the scale and form have changed immeasurably, modern AI tools are still statistical pattern-recognition systems.
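This statistical pattern recognition can be illustrated, in vastly simplified form, by the word-frequency models of the pre-neural era. The sketch below is a toy illustration with invented training text, not how modern LLMs work (they use neural networks over learned token representations): it predicts the most probable next word purely from counts.

```python
from collections import Counter, defaultdict

# Invented toy training text (real models train on billions of tokens).
text = "i love you . you love me . i love tea . we love you .".split()

# Tally which word follows which: the simplest statistical language model.
following = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word and its probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict("love"))  # "you" is the most frequent continuation here
```

The underlying practice, calculating which continuation is most probable, is the same in modern systems; only the machinery for estimating the probabilities has changed beyond recognition.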
They are designed to calculate how we "language" about phenomena such as knowledge, behaviour and emotion, without having direct access to any of these. It is easy enough to prompt a chatbot such as ChatGPT into "revealing" this fact.
AI is always just calculating
So why is this not easier to recognize?
One main reason concerns how companies describe and name what generative AI tools do. Instead of "calculating", generative AI tools are said to be "thinking", "reasoning", "searching" and even "dreaming".
The implication is that, in cracking the equations of how humans pattern language, generative AI has also gained access to the meanings we convey through language.
But, at least for now, it has not.
A chatbot can calculate that "I" and "you" are most likely to collocate with "love". But it is not an "I" (a person), it does not "love", and "you" are just that: the one writing the prompts.
Generative AI is always just calculating. And we should not mistake it for anything more.
Eldin Milak does not work for, consult to, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.
/Courtesy of The Conversation. This material from the originating organization/author(s) is point-in-time in nature and may be edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions and conclusions expressed herein are solely those of the author(s).
