Humans need to be able to stop AI
How much power will organizations and their leaders hand over to machines? Quite a bit, it turns out, but no organization is ready to give artificial intelligence the keys to its entire operation. Questions remain about what data will drive AI decisions and actions, along with concerns about erroneous “black box” decisions and bias. The mantra for business leaders is “trust, but verify.”
A recent survey found that 61% of executives say they have “complete confidence” in the reliability and validity of AI output, even as 40% believe their company's data isn't yet ready to deliver accurate results. The survey, conducted by NewtonX for Teradata this spring, found that the most important factors driving trust in AI are reliable, validated results (52%), consistency or repeatability of results (45%), and the brand of the company that built the AI (35%).
So how much are executives willing to leave to AI? Industry leaders say the technology isn't ready for full automation yet. “We believe that in general, humans will need to be involved to review and improve the recommendations that AI generates, whether it's about predictive maintenance, logistics optimization, supply chain optimization, production optimization, fraud detection, or whatever,” says Tom Siebel, CEO of C3 AI and founder of Siebel Systems, now a division of Oracle. “We believe that responsible adoption of AI in the enterprise will require human oversight, now and in the future.”
Despite the immense pressure to unlock the potential for increased productivity and efficiency, “AI solutions need to be approached with caution,” advises Binny Gill, founder and CEO of Kognitos. While there are well-documented cases of hallucinations caused by generative AI, “there is also a deep-rooted trust issue with AI overall.”
Currently, trust in AI output, whether operational or generative, is “limited,” agrees Junaid Saiyed, CTO at Alation. “In the case of AI-suggested output, there is potential for contextual misinterpretation, biased results, and hallucinations.” Building greater trust will depend on “the governance of the data used and the level of risk associated with it.”
While errors in email campaigns may be negligible, “in a high-risk industry like insurance, human oversight is crucial, as algorithmic bias or personal information leaks can have serious repercussions,” Saiyed adds.
How can AI advocates begin to build the trust needed for more autonomous, yet human-guided, systems?
To address the trust issues around AI, Siebel says companies need to be clear, open and thoughtful about their use of AI in decision-making. “Any recommended action generated by AI should include a complete evidence package explaining the rationale for the recommendation and the underlying factors contributing to the recommendation,” he stresses. “The workflow should require explicit human approval to put the recommendation into action.”
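In practice, that pairing of an evidence package with an approval gate can be expressed quite simply. The following Python sketch is illustrative only; the `Recommendation` structure, its field names, and the `apply_action` function are assumptions made for this article, not any vendor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """An AI-generated action bundled with its evidence package."""
    action: str             # what the AI proposes to do
    rationale: str          # why it recommends the action
    contributing_factors: dict = field(default_factory=dict)  # underlying signals
    approved: bool = False  # stays False until a human signs off

def approve(rec: Recommendation, reviewer: str) -> None:
    """Record explicit human sign-off on a recommendation."""
    print(f"{reviewer} approved: {rec.action}")
    rec.approved = True

def apply_action(rec: Recommendation) -> None:
    """Execute only recommendations that carry explicit human approval."""
    if not rec.approved:
        raise PermissionError("No explicit human approval; refusing to act.")
    print(f"Executing: {rec.action}")

# Hypothetical predictive-maintenance example
rec = Recommendation(
    action="Schedule maintenance on pump P-114",
    rationale="Vibration trend exceeds the failure-prediction threshold",
    contributing_factors={"vibration_rms": 7.2, "threshold": 5.0},
)
approve(rec, reviewer="plant.engineer@example.com")
apply_action(rec)
```

The design choice worth copying is that execution refuses to proceed unless a human has explicitly signed off; approval is a recorded step in the workflow, not an implicit default.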
As an example of the need for human-guided AI interaction, think of the self-driving car experience, Gill says: “Drivers may trust the AI in their car, but they need the ability to take over steering when the machine falters or the driver is unsure.” Similarly, Gill continues, “businesses will need a better steering wheel. Business users need to be able to see how AI will affect the world around them, including financial books, emails and business processes, before any action occurs.”
The best way to achieve this, Gill adds, is for the AI engine to propose a plan in natural language that a human can review before handing it off to the AI system for execution.
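A minimal sketch of that propose-then-review loop appears below; `generate_plan` is a hypothetical stand-in for whatever model drafts the natural-language plan, and the sample plan text is invented.

```python
def generate_plan(request: str) -> str:
    """Hypothetical stand-in for an AI planner that drafts a plan in plain English."""
    return (
        "1. Draft refund emails for the 12 affected customers.\n"
        "2. Post a $1,450 credit adjustment to the ledger.\n"
        "3. Disable the pricing rule that caused the overcharge."
    )

def review_and_execute(request: str) -> None:
    plan = generate_plan(request)
    print(f"Proposed plan for: {request}\n{plan}")
    # The reviewer sees the plan's effect on books, emails, and processes first.
    if input("Approve this plan? [y/N] ").strip().lower() == "y":
        print("Plan handed off to the execution engine.")  # execution stub
    else:
        print("Plan rejected; nothing was changed.")

review_and_execute("Resolve last week's overcharge incident")
```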
This propose-and-review approach is scalable and sustainable, Siebel says, noting that across C3 AI deployments, “every day, hundreds of AI-recommended actions are reviewed and rejected by human intervention, and thousands, perhaps tens of thousands, of actions are reviewed and implemented after management review.”
The guiding principle for AI-augmented processes should be “machine proposes, human validates,” says Saiyed, which “requires a clear human oversight role over AI models and enough transparency to make them easily interpretable. Regular audits are essential to correct errors and biases, and robust feedback mechanisms drive continuous improvement.”
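The audit half of that principle can start small. Here is a rough sketch of a “machine proposes, human validates” log with a summary an auditor could review on a regular cadence; the in-memory list and field names are assumptions, and a production system would use durable, append-only storage.

```python
from collections import Counter

audit_log: list[dict] = []  # assumption: stands in for durable audit storage

def record_decision(proposal: str, verdict: str, reviewer: str) -> None:
    """Log one machine-proposes / human-validates event for later audit."""
    audit_log.append({"proposal": proposal, "verdict": verdict, "reviewer": reviewer})

def audit_summary() -> Counter:
    """Regular audits can start from simple counts: how often do humans override?"""
    return Counter(entry["verdict"] for entry in audit_log)

record_decision("Approve claim #8841", "accepted", "adjuster-07")
record_decision("Deny claim #8842", "overridden", "adjuster-07")
print(audit_summary())  # Counter({'accepted': 1, 'overridden': 1})
```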
Tracking data lineage is also important, especially for complying with regulations like GDPR, CCPA, HIPAA, and emerging AI rules, Saiyed adds. “Organizations need to unify their data and trust its quality, focusing on metrics like freshness, accuracy, and completeness. This foundation is essential to building trustworthy AI models, mitigating risk, and enabling users to effectively leverage trusted data.”
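As a rough illustration of gating a model on those metrics, the sketch below checks completeness and freshness before clearing a dataset for use; the 0.95 threshold, field names, and sample records are invented for the example.

```python
from datetime import datetime, timedelta, timezone

def completeness(records: list[dict], required: list[str]) -> float:
    """Share of required fields that are actually populated across all records."""
    filled = sum(1 for r in records for f in required if r.get(f) is not None)
    return filled / (len(records) * len(required))

def is_fresh(last_updated: datetime, max_age: timedelta) -> bool:
    """Freshness: was the dataset refreshed recently enough to trust?"""
    return datetime.now(timezone.utc) - last_updated <= max_age

records = [
    {"policy_id": "A1", "premium": 1200, "region": "EU"},
    {"policy_id": "A2", "premium": None, "region": "EU"},  # missing value
]
ok = (
    completeness(records, ["policy_id", "premium", "region"]) >= 0.95  # invented threshold
    and is_fresh(datetime.now(timezone.utc) - timedelta(hours=2), timedelta(days=1))
)
print("Data cleared for model use" if ok else "Data quarantined pending review")
```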
Finally, the humans closest to the AI process should have direct authority to correct or stop AI transactions. “Our AI tools can be continuously reviewed and overridden by humans through our GenAI application and AI-driven search engine that suggests responses,” Saiyed says. “This continuous human oversight ensures that the AI is a supporting tool, not an absolute authority.”
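That standing override authority can be wired in as a stop switch the automation checks before every action. Below is a minimal sketch assuming a single worker loop; it is a pattern illustration, not any product's actual mechanism.

```python
import threading

halt = threading.Event()  # the human operator's stop switch

def ai_worker(tasks: list[str]) -> None:
    """Run AI-driven tasks, checking the human stop switch before each one."""
    for task in tasks:
        if halt.is_set():
            print(f"Stopped by human operator before: {task}")
            return
        print(f"AI processed: {task}")
        if task == "classify ticket 2":
            halt.set()  # simulate an operator hitting stop mid-run

ai_worker(["classify ticket 1", "classify ticket 2", "classify ticket 3"])
```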
No matter the process or task, “there's always a human being responsible,” Gill says. “In an airplane flying on autopilot, that's the human pilot. In a car, it's the person sitting in the driver's seat. In a factory with a large assembly line, it's the worker on the floor monitoring the quality of the parts. In a business process, it's a subject matter expert, a person tasked with handling the cases when a trusted AI can't make the right decision and needs help.”
