Too many employers are putting their businesses at strategic risk by cutting jobs before ensuring that AI can perform the required tasks, an AI expert has warned.
Shomron Jacob is Head of Applied Machine Learning and Platforms at iterate.ai, an enterprise AI application platform provider. He believes the big question organizations need to ask themselves today is: “Are we going to deal with attrition strategically or reactively?” He explains:
The pattern is real: As companies restructure their workflows around AI capabilities, staffing requirements will inevitably change as well. The key question is not whether this is happening, but whether companies are doing it strategically or reactively. From what I’ve seen evaluating enterprise AI strategies, most organizations make these decisions without doing a proper readiness assessment. They are reducing roles before validating that AI can actually perform those functions reliably.
However, Jacob warns that this approach inevitably introduces “strategic risks” to the business.
I foresee a wave of regret [due to premature redundancies], similar to what Orgvue research suggests. Companies that eliminate expertise before building proven AI capabilities can end up with skills gaps, lost organizational knowledge, and failed automation efforts that cost far more than the headcount savings.
Matthew Baden, managing director of technology at technology recruitment consultancy The Search Experience, agrees:
Many companies are rushing to replace humans with AI for quick wins, but they find that current models still produce fairly generic outputs that require significant human oversight. When you rapidly reduce experienced talent, you lose years of knowledge and the ability to deal with edge cases. This is exactly the area where AI is struggling to catch up. Already we are seeing quiet regret and selective rehiring. AI is most effective when it augments, rather than completely replaces, strong talent.
As a result, he believes most job roles are likely to be redefined rather than disappear entirely, especially those that combine technical work with judgment, context, and customer insight.
Tackling underappreciated risks
The advantage of this situation, Baden says, is that companies can increase output per person and run leaner teams. The downside is that if reductions occur too quickly, organizational knowledge will be lost and employees will be left dealing with a large amount of AI-generated output that still needs to be corrected.
Jacob points out that this loss of organizational knowledge, and the skills gap it creates, is actually the “most underestimated risk” as employers undertake AI-based restructuring.
Removing experienced staff doesn’t just mean losing their task output. You lose the ability to recognize patterns, understand edge cases, and detect when something is wrong. AI systems don’t develop the intuition an experienced human has that “this answer seems off.” The companies experiencing regret are primarily those that treated AI implementation as a headcount reduction exercise rather than a capability transformation project. They optimized for short-term cost savings rather than long-term system reliability and performance.
In his experience, this pattern of regret follows a predictable sequence:
- Initial excitement about cost savings
- Deployment without proper piloting
- Growing recognition that the quality of AI output is inconsistent
- Discovery of critical errors that a human would have caught
- Skills gaps becoming apparent when trying to fix the problems
- The realization that the institutional knowledge is gone.
Speculation disguised as transformation
As for which types of jobs are likely to be most affected, Jacob suggests it’s not as simple as saying “automatable tasks will be eliminated.” Instead, he points to three key categories of roles that employers need to consider.
- Most vulnerable to replacement: jobs that involve repetitive, low-judgment information processing, such as data entry and basic content moderation. These roles “are rapidly becoming automated, but often poorly automated,” he says, and the systems that replace them frequently produce so-called “AI slop.”
- Likely to be redefined: knowledge work that combines pattern recognition and judgment, such as software development and financial analysis. Here, AI augments rather than replaces humans, but the role fundamentally changes in nature. For example, financial analysts become “AI-assisted analysts” who validate and refine the machine’s output rather than building models from scratch.
- Least vulnerable to replacement: roles that require complex human judgment, creative strategy, relationship building, or the ability to handle novel situations. Ironically, jobs like customer success are harder to automate than certain “highly skilled” analytical roles because they require contextual human judgment, Jacob says.
But he points out that the big danger today is that employers will replace roles that should actually be redefined. He explains:
You can’t just get rid of analysts and let AI do the work. You need a different kind of analyst: one who can evaluate AI output, catch hallucinations, and maintain organizational knowledge. Companies that miss this distinction will regret it [having made staff redundant].
As a result, Jacob believes we are seeing a “permanent transition to AI-enhanced work,” but also predicts:
There is significant volatility in the short term as companies learn the hard way which roles AI can actually handle and which require human expertise… From the company assessments I have conducted, I estimate that fewer than 20% of companies making AI-driven headcount decisions actually validate that their AI systems can perform at the level of reliability and safety they require. That’s not an AI-first strategy; that’s speculation disguised as transformation.
As a result, in his view, the winners will be the organizations that focus on transforming their workforce, supported by appropriate investments in reskilling, rather than chasing replacement-driven cost savings.
Take a strategic approach to AI adoption
Meanwhile, taking a strategic rather than reactive approach to AI adoption requires employers to be thoughtful and deliberate about how they approach change, including layoffs. As Baden puts it:
The key is to treat this as a proper team redesign, not just cost-cutting disguised as an AI strategy.
In other words:
- Separate repetitive tasks from tasks that require human judgment
- Reskill and redeploy talent early instead of cutting jobs first
- Emphasize strong fundamentals and adaptability, not just “AI experience”
- Test tools in real-world workflows before making important decisions.
He also cautions employers not to:
- Cut headcount too deeply before AI tools are actually ready to take over the tasks
- Focus too heavily on candidates with specific AI experience rather than on high-performing talent who can learn quickly and handle ambiguity
- Treat AI purely as a cost-cutting tool.
For his part, the most successful approach Jacob has seen to date is to “pilot before you cut, validate before you scale, and reskill while migrating.” In other words:
- Identify the roles AI can genuinely augment or replace, but validate the results through a pilot project with measurable performance metrics rather than a vendor demo. Most demos showcase best-case scenarios rather than testing the worst cases and edge cases, which are infinitely more valuable.
- Create an evaluation framework before restructuring, and answer the following questions before letting go of the people who currently ensure quality control: What is an acceptable error rate? How do you detect when an AI system fails? What kind of human oversight is required? (A minimal sketch of such a framework follows this list.)
- Treat the effort as a workforce transformation, not just a downsizing exercise. Redeploy talent into AI assessment, governance, and monitoring roles. The irony, Jacob says, is that the more capable an AI system becomes, the more skilled workers are needed to validate its output, not fewer. The required skill set simply shifts from performing a task to evaluating whether the AI performed the task correctly.
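To make the evaluation-framework step concrete, here is a minimal, illustrative sketch in Python. It is not from the article; the names, thresholds, and tiny sample are all hypothetical. The idea is simply to turn Jacob’s three questions into a measurable gate: compute the pilot’s error rate against an error budget, and estimate how much output would still need human review.

```python
# Illustrative evaluation gate for an AI pilot (all names hypothetical).
# The goal: decide "scale / don't scale" from a measured error rate and
# a confidence threshold for routing outputs to human review.
from dataclasses import dataclass

@dataclass
class PilotResult:
    output: str          # what the AI produced
    confidence: float    # model's self-reported confidence, 0..1
    human_verdict: bool  # did a human reviewer judge the output correct?

def evaluate_pilot(results: list[PilotResult],
                   max_error_rate: float = 0.02,
                   review_threshold: float = 0.80) -> dict:
    """Answer the three questions above: acceptable error rate,
    failure detection, and how much human oversight is required."""
    errors = sum(1 for r in results if not r.human_verdict)
    error_rate = errors / len(results)
    # Outputs below the confidence threshold would go to a human queue.
    needs_review = sum(1 for r in results if r.confidence < review_threshold)
    return {
        "error_rate": error_rate,
        "meets_error_budget": error_rate <= max_error_rate,
        "human_review_share": needs_review / len(results),
    }

# Example: a 3-sample pilot (real pilots need far larger samples).
pilot = [
    PilotResult("invoice approved", 0.95, True),
    PilotResult("invoice approved", 0.55, True),
    PilotResult("invoice rejected", 0.90, False),
]
print(evaluate_pilot(pilot))
# -> 33% error rate, which fails a 2% error budget: do not scale yet.
```

In practice, the verdicts would come from domain experts reviewing a statistically meaningful sample, and both thresholds would be set per use case rather than hard-coded.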
How to survive the transition period
Jacob believes other important considerations that are often forgotten here are change management, governance, and the use of sound evaluation systems. Most organizations, he says, deploy AI systems without proper guardrails, such as rules defining which decisions can be made autonomously and which require human approval. Such guardrails tend to emerge only after an incident occurs.
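As a rough illustration of the kind of guardrail Jacob describes (a sketch under assumed action types and limits, none of which come from the article), the routing rule can be as simple as an allowlist plus an escalation default:

```python
# Hypothetical guardrail: decide whether an AI-proposed action may run
# autonomously or must wait for human approval. Action names and the
# refund limit are illustrative, not from the article.
AUTONOMOUS = {"categorize_ticket", "draft_reply"}  # low-stakes actions

def route(action: str, amount: float = 0.0, limit: float = 50.0) -> str:
    """Return where a proposed AI action should go before execution."""
    if action in AUTONOMOUS:
        return "execute_autonomously"
    if action == "issue_refund" and amount <= limit:
        return "execute_autonomously"   # small refunds allowed
    return "human_approval_queue"       # everything else is escalated

print(route("draft_reply"))               # execute_autonomously
print(route("issue_refund", amount=500))  # human_approval_queue
```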
Another common challenge is skills mismatch: a company lays off the employees who were doing a job, only to find that it now needs people who can evaluate whether the AI is performing that job correctly.
A third prevalent problem is that many organizations cannot reliably measure the performance of their AI systems: they have no framework for measuring an AI’s hallucination rate or the quality of a tool’s decision-making.
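A hedged sketch of where such a framework could start (the article names no specific tooling, and the graded sample here is invented): estimate the hallucination rate by checking model answers against human verdicts on a labeled sample.

```python
# Illustrative hallucination-rate estimate over a human-graded sample.
# "graded" pairs a model answer with a human verdict on factual accuracy;
# in practice the verdicts come from expert reviewers, not this script.
def hallucination_rate(graded: list[tuple[str, bool]]) -> float:
    """Fraction of answers a human grader marked as unsupported or false."""
    if not graded:
        raise ValueError("need at least one graded answer")
    return sum(1 for _, accurate in graded if not accurate) / len(graded)

sample = [
    ("Q1 answer cites a real policy document", True),
    ("Q2 answer invents a nonexistent clause", False),
    ("Q3 answer is correct but incomplete", True),
]
print(f"hallucination rate: {hallucination_rate(sample):.0%}")  # 33%
```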
However, Jacob points out:
The keys to success are everyday disciplines, not exceptional ones: running a good pilot, building an evaluation framework, establishing governance before deployment, and investing in reskilling. Companies that skip these steps in order to move quickly end up paying the price later in deployment failures, quality issues, and the cost of rebuilding organizational knowledge that was prematurely eliminated.
As for what the next 12 to 18 months will hold, Jacob predicts “a reckoning between AI hype and the reality of AI in production.” There is, he says, a “significant” gap between what AI systems can do in controlled demos and what they can do in messy production environments, and many companies are about to “discover this the hard way.”
He also expects some companies to start “quietly rehiring” for roles they previously cut too aggressively, especially where AI systems perform below expectations or have quality issues. This won’t be framed as “we were wrong about AI,” he says; instead, it will be pitched as an “evolution of AI strategy” or a shift to a “hybrid human-AI model.”
But Jacob concludes:
The long-term trend is toward AI-enhanced work rather than wholesale replacement, but there will be significant volatility in the short term as the market differentiates between hype and capability. The companies that successfully navigate this transition will be those that treat AI adoption not just as a cost-cutting initiative, but as a strategic capability transformation that requires readiness assessment, governance, and change management.
It’s been said before (repeatedly), but I’ll say it again. Organizations focused on reducing costs and seeing AI as an easy way to reduce headcount may end up regretting their hasty decisions. As Jacob points out, the key to real success going forward lies in investing in reskilling to support broader workforce transformation.
