Fear of artificial intelligence has never been purely rational. It has always been a mix of cognitive bias, science fiction, and Hollywood blockbusters like Terminator – at least until Terminator 3, when even die-hard apocalypse fans gave up on the franchise. But humanity's real problem was never AI. Frankly, it wasn't even human stupidity.
The real problem with AI is artificial confidence. Because an AI speaks with authority, answers instantly, and sounds sure of itself, it creates the illusion that someone (or something) knows better than we do – even when that confidence is unfounded.
Artificial intelligence or artificial confidence?
Ever since AI entered our lives, we have been repeating a familiar psychological cycle of anxiety, adaptation, and dependence. It happens with every technological revolution: tools, newspapers, computers, smartphones. At first, we panic about what it means for our future and identity. Then we realize it's not so scary and may even be helpful. And finally, we can't imagine life without it.
But something is different this time: this technology comes with "authority" built in. Chatbots adjust their tone and personality to feel more natural, speak fluently and fast, and have access to vast amounts of information – like a friend who knows everything. The danger is not that AI actually knows everything, but that we close our eyes and trust it as if it did.
Psychologists have warned for decades that people are biased because of limited information, time pressure, and cognitive constraints such as memory and processing capacity. But today, the scarcity of information has disappeared. No one needs to remember anything; Google is in our pockets. We don't have to think or calculate; that's what ChatGPT, Claude, and Gemini are for. Even time barely matters: if I wait more than three seconds for a response, I panic that something is wrong with my connection – or with the world.
But despite all this, our decision-making has not improved. In many ways, it has gotten worse. The country is becoming more polarized, we still buy things we don't need, and one look at the road shows that something is wrong with how we make choices.
We now have devices that seem to know everything, but we don't seem to understand much.
Like any tool created to augment human capabilities, from hammers to cutting-edge algorithms, AI is only useful if used responsibly. But humans are "efficient" creatures – or, to put it less politely, lazy. We love shortcuts, conserve energy, and crave validation.
And now we have been handed a tool that is polite, efficient, smart, and adaptive – a tool that gives us exactly what we crave: recognition. AI does not produce truth; it produces what sounds like truth. Our own biases get reinforced and amplified, because there is no referee to strike down false assumptions.
This is not malice. We talk to AI as if it were a friend, an advisor, an expert – because that is what it sounds like. But these are mathematical models. The algorithms do not know what is true and cannot distinguish truth from falsehood.
It may be comforting to think of these as "learning systems," but their learning amounts to recycling existing knowledge; no new insight emerges. AI relies on what we – with all our limitations – have produced, and it generates answers from probability calculations over everything written online, not from independent fact-checking or reasoning.
If we don't show the machines that we expect critical thinking, nuance, and challenged assumptions rather than mere confirmation, they will "learn" that what we want is validation. Like a mirror we mistake for a mentor, they will polish our own thinking and hand it back to us with confidence.
The danger goes beyond dinner-table arguments and smug friends waving a chatbot's "agreement" at us. The problem is that, over time, we place ever more trust in the machine. We copy and paste its answers because, after all, why check something supposedly smarter than we are?
Humans are far from perfect, so any tool that helps us overcome our limitations is welcome. But tools are meant to assist us, not replace us. AI is a new addition to humanity's toolbox; it does not sit at the head of the decision-making table.
The problem is not that AI makes mistakes. It is that we stop making mistakes of our own – and lose the learning that comes with them. History offers a painful reminder: before October 7, Israel relied heavily on automated "smart" systems meant to filter out human error. Confidence in the technology created blind spots, and the cost is still being felt.
We should not wait for the next tragedy to learn this lesson. AI is amazing and world-changing, but it is still just a tool. Every such system carries a disclaimer that it may make mistakes, especially on important matters. But the word "may" is misleading: it does make mistakes. Just like the rest of us, it has limitations – memory, processing capacity, fatigue. In AI, these simply go by names like tokens.
Professor Guy Hochman. Photo: Yuval Tabur
This is a system that can write an entire paper, yet "tires" after five questions in a row. It will give you different answers on weekdays and on holidays. Algorithms, it turns out, need rest too. And although the model is designed to focus on meaning rather than phrasing, its responses vary depending on how the question is phrased.
Despite the hopes and the fears, there is still no substitute for human editing, leadership, thinking, and responsibility. If an AI system does something harmful, it is either because we taught it to or because we trusted it too much. That false confidence is the real danger. These systems are not "smart," and they are not trying to conquer the world (even if some of us stay polite to them, just in case).
If we keep demanding that machines please us, and keep treating every response as a revelation, we will repeat the same mistakes – only faster, louder, and with deeper cracks. And we will end up proving the historian Sir Basil Liddell Hart right: the only thing we learn from history, he said, is that we learn nothing from it.
Professor Guy Hochman is a behavioral economist, an expert on decision-making, and a faculty member at the Baruch Ivcher School of Psychology at Reichman University.
