Why trust is key to driving business results with AI

AI For Business


The use of AI is rapidly spreading across the global economy, but new research suggests that issues around trust are hindering its success.

Almost all companies are already using AI or plan to adopt it within the next 12 months. However, according to SAS's Data and AI Impact Report, 46% of organizations' AI efforts are affected by the "trust dilemma": the gap between perceived trust in AI systems and their actual trustworthiness.

This disconnect leads to two opposing risks, each of which prevents companies from maximizing their AI return on investment (ROI).

If trust in AI is too low, employees will not take full advantage of the technology. But when employees place too much faith in an untested system, they become overly dependent on it.

To maximize the value of their AI investments, organizations need to strike the right balance between the two.

The risks of trusting AI too much, or too little

Despite the relatively nascent state of AI tools, the SAS report found that while 78% of respondents have "complete trust" in the technology, only 40% of systems demonstrate a correspondingly high level of trustworthiness.

Additionally, respondents with low AI trustworthiness scores actually trust generative AI 200% more than traditional machine learning tools. Kimberly Nevala, strategic advisor at SAS, attributed this to the conversational nature of the technology: users can prompt it, read its responses, and redirect it as needed.

“The way the system works gives you the sense that you have a higher degree of ownership and control over the process than you actually do,” Nevala said during a recent CIO webcast. “These systems are always designed to answer, always confident collaborators. It’s subtle and captivating.”

The more users trust AI tools, Nevala continued, the more they will utilize them.

“This is a problem because if we trust it too much, we can become overly dependent on it,” she said. “That not only introduces potentially large errors, but also increases risk to the organization.

“On the other hand, if employees don’t trust AI enough, they’re likely to rely less on the technology, so it becomes a question of value and whether it will actually deliver lasting results. To address this trust dilemma, it’s really important to balance [trust and trustworthiness].”

How to achieve trustworthy AI

Maximizing AI ROI is only possible when organizations have high confidence that their tools work as intended. To get there, organizations must put guardrails on AI-driven processes and train their teams to recognize when to use AI systems and when to avoid them.

Gretchen Stewart, AI Solutions Architect at Intel, emphasized the importance of communication around AI projects. By sharing information on areas such as risk mitigation and consequences, organizations help people recognize that “integrity is built into the system,” she added.

“Developing trustworthy AI systems and fostering trust in AI is a process,” Nevala added. “It’s done through a series of decisions from the beginning of the AI lifecycle, through development and deployment, and beyond.”

As AI initiatives evolve, such decisions include establishing business boundaries, defining security and privacy requirements, determining which models and tools are allowed or prohibited, and choosing which processes require human involvement.

Building trustworthy AI is an ongoing effort. The organizations that get it right will realize the highest ROI from AI.

Watch the webcast to learn more about how to solve your AI trust dilemma and unlock your AI ROI.


