What CIOs need to know about risk and trust

Managing AI trust and risk is essential to realizing business value from AI. When asked what organizations must do to reap the benefits of AI while minimizing its downsides, Sibelco Group CIO Pedro Martinez Puig emphasized discipline and strategic focus.

“Unlocking the value of AI while minimizing risk starts with discipline,” said Puig. “CIOs and their organizations need a clear strategy that connects AI efforts to business outcomes, not just technology experiments. This means defining success criteria upfront, setting ethical and compliance guardrails, and avoiding the trap of endless pilots without a plan for scale.”

Puig's work begins with creating strong use cases and a rigorous foundation. “CIOs need to focus on use cases that are robust enough to have a measurable impact. For mining and materials, this includes ensuring data integrity from the factory floor to enterprise systems, incorporating cybersecurity into AI workflows, and monitoring risks such as bias and model drift.”

Puig added that trust is just as important as technology. “Transparency, governance, and training will help people understand how AI decisions are made and where human judgment still matters. The goal is not to chase every shiny use case; it is to create a framework in which AI delivers value securely and sustainably.”

Nicole Coughlin, CIO of Cary, North Carolina, echoes this view. “It requires governance, collaboration, and inclusion,” she says. “Organizations that succeed with AI will be those that bring together people from policy, legal, communications, operations, and IT to collaboratively build guardrails. Minimizing risk doesn't mean slowing innovation; it's about alignment and shared purpose.”

Key risks of AI

According to the authors of Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI, “Risk and trust have always been part of AI, but in today's landscape, risks are heightened. The transformation of AI brings to the fore a whole new set of complex, interconnected risks. … AI innovation is occurring in an environment of increased regulatory oversight, and consumers, regulators, and business leaders are increasingly concerned about cybersecurity, data privacy, and vulnerabilities across AI systems.”

Against this backdrop, they suggest that organizations need to prioritize “digital trust.” This includes:

  • Protect consumer data and maintain strong cybersecurity.

  • Provide reliable AI-powered products and services.

  • Be transparent about how data and AI models are used.

Building this trust requires prioritizing risks, implementing risk policies across the organization, and raising awareness so employees understand their role in responsible AI.

Dresner Advisory Services' 2025 study examined additional risks specific to generative and agentic AI. These risks, ranging from use case definition to security and privacy, clearly hinder the production deployment of GenAI solutions. Many of the same concerns apply to agentic AI, which is built on similar underlying technologies.

Data security and privacy emerged as the top concern, cited by 42% of survey respondents. Other concerns, such as response quality and accuracy, implementation costs, talent shortages, and regulatory compliance, rank lower individually but collectively represent significant barriers.

Aggregating issues related to data security, privacy, legal and regulatory compliance, ethics, and bias creates a vast cluster of risk factors. This clearly shows that trust and governance are top priorities for scaling AI adoption.

AI governance that creates trust

At the core of governance is ensuring that data is safe for decision-making and autonomous agents. In Competing in the Age of AI, Marco Iansiti and Karim Lakhani explain that AI allows organizations to reimagine the traditional enterprise by powering “AI factories”: scalable decision-making engines that replace manual processes with data-driven algorithms. But to realize an AI factory, organizations need effective data pipelines that collect, clean, integrate, and secure data in a systematic, sustainable, and scalable manner.
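
Iansiti and Lakhani describe the pattern, not an implementation, but the shape of such a pipeline can be sketched. The Python fragment below is a minimal illustration under assumed names; run_stage, clean_record, and REQUIRED_FIELDS are hypothetical placeholders, not references to any real product or to the authors' own tooling:

```python
# Minimal, hypothetical sketch of one pipeline stage: collect, clean,
# validate, and quarantine records before they feed downstream models.
# Every name and field here is illustrative.
from dataclasses import dataclass, field

REQUIRED_FIELDS = {"sensor_id", "timestamp", "value"}  # assumed schema

@dataclass
class StageResult:
    accepted: list = field(default_factory=list)
    quarantined: list = field(default_factory=list)  # kept for audit, never silently dropped

def clean_record(raw: dict) -> dict:
    """Normalize keys and trim string values; real pipelines do far more."""
    return {k.strip().lower(): (v.strip() if isinstance(v, str) else v)
            for k, v in raw.items()}

def run_stage(raw_records: list[dict]) -> StageResult:
    result = StageResult()
    for raw in raw_records:
        record = clean_record(raw)
        # Validation gate: incomplete records are quarantined for review,
        # protecting the integrity of downstream "sources of truth."
        if REQUIRED_FIELDS <= record.keys():
            result.accepted.append(record)
        else:
            result.quarantined.append(record)
    return result

if __name__ == "__main__":
    batch = [
        {"Sensor_ID": "A1", "timestamp": "2025-01-01T00:00Z", "value": 3.2},
        {"sensor_id": "A2"},  # missing fields -> quarantined, not dropped
    ]
    out = run_stage(batch)
    print(len(out.accepted), "accepted,", len(out.quarantined), "quarantined")
```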

One measure of this kind of data industrialization is the success of BI implementations. In Dresner's 2025 study, 32% of organizations surveyed said their BI implementation was completely successful. In a discussion, MIT CISR's Stephanie Woerner suggested that her latest research numbers are comparable. Taken together, these findings show that the majority of companies (approximately 68%) have yet to establish a truly effective data pipeline.

To close this gap, organizations must initiate and own a data governance program. CIOs have historically been reluctant to do this, but that has to change in the age of AI. The basics, illustrated in the sketch after this list, include:

  • Data integrity and quality: Verify that your sources of truth are accurate.

  • Ownership: Define who is responsible for specific datasets.

  • Fairness: Ensure data is not improperly exposed, is used only for legitimate purposes, and is actively monitored for bias.
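
These three basics lend themselves to being expressed as machine-checkable policy rather than a document on a shelf. The sketch below is a hypothetical illustration only; the DatasetPolicy type, the field names, and the 0.98 completeness threshold are all invented for the example:

```python
# Hypothetical sketch: integrity, ownership, and fairness captured as a
# machine-readable dataset policy plus a simple quality gate.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class DatasetPolicy:
    name: str
    owner: str                                   # ownership: a named, accountable team
    allowed_purposes: tuple[str, ...]            # fairness: legitimate uses only
    quality_check: Callable[[list[dict]], bool]  # integrity: verify the source of truth

def min_completeness(threshold: float) -> Callable[[list[dict]], bool]:
    """Pass only if at least `threshold` of rows have no missing values."""
    def check(rows: list[dict]) -> bool:
        complete = sum(1 for r in rows if all(v is not None for v in r.values()))
        return bool(rows) and complete / len(rows) >= threshold
    return check

plant_telemetry = DatasetPolicy(
    name="plant_telemetry",
    owner="ops-data-team@example.com",
    allowed_purposes=("maintenance_forecasting", "throughput_reporting"),
    quality_check=min_completeness(0.98),
)

def authorize(policy: DatasetPolicy, purpose: str, rows: list[dict]) -> bool:
    # Deny unlisted purposes and block low-quality data at the gate.
    return purpose in policy.allowed_purposes and policy.quality_check(rows)
```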

Chris Child, vice president of product and data engineering at Snowflake, warns that “efficiency without governance will cost the company in the long run.” Agentic AI adds complexity because these autonomous systems act directly on data, Child said. “The way forward is to unify data, AI and governance into a single secure architecture.”

Meanwhile, Professor Pedro Amorim of the University of Porto recommends a “venture-style” approach: “Put more money into small, time-boxed bets, learn quickly, and double down on your winners with a clear path to industrialization.”

AI governance to ensure data security

Risk governance focuses on securing access to data. Data governance expert Bob Seiner points out the importance of formalizing responsibilities and educating people to build disciplined, well-governed data habits. Effective security means ensuring the legitimate processing of personal information while preventing unauthorized access, loss of integrity, and theft.

Iansiti and Lakhani argue that trustworthy AI requires “careful data security and governance, defining appropriate checks and balances on access and use, inventorying assets, and centralizing systems to provide the necessary protections for all parties.” Because LLMs ingest large amounts of data, including PII, that data must be protected against the unique ways in which LLMs store and retrieve information.
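
The authors do not prescribe a specific mechanism, but one common control consistent with this advice is redacting PII before a prompt ever leaves the governed boundary. The Python sketch below is a deliberately simplistic, hypothetical illustration; production systems rely on far more robust detection (named-entity recognition, allow-lists, audit logging):

```python
# Hypothetical sketch of one narrow control: masking obvious PII before a
# prompt is sent to an LLM. The regex patterns are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known PII pattern with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com, phone 555-123-4567."
print(redact(prompt))
# -> Summarize the complaint from [EMAIL], phone [PHONE].
```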

Amorim suggests putting these guardrails in place early (a sketch of how they might be encoded follows the list):

  • Data classification and privacy/IP rules.

  • Human-in-the-loop review for sensitive decisions.

  • Explicit prohibited-use criteria and evaluation benchmarks.
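
Amorim does not tie these guardrails to any particular format, but they are often encoded declaratively so that they can be enforced in code rather than left in a policy document. A hypothetical sketch; every label, category, and threshold below is invented for illustration:

```python
# Hypothetical declarative encoding of the three guardrails above:
# classification/privacy rules, human-in-the-loop triggers, and explicit
# prohibitions with an evaluation benchmark.
GUARDRAILS = {
    "classification_levels": ["public", "internal", "confidential", "restricted"],
    "privacy_ip_rules": {
        "restricted": {"llm_use": False},                       # never leaves the boundary
        "confidential": {"llm_use": True, "redact_pii": True},  # allowed, with masking
    },
    "human_in_the_loop": {
        # Sensitive decision categories that always require human sign-off.
        "required_for": ["credit_decision", "hr_action", "safety_override"],
    },
    "prohibited_uses": ["covert_profiling", "automated_dismissal"],
    "evaluation": {"benchmark": "internal-eval-v1", "min_accuracy": 0.90},
}

def requires_human_review(decision_category: str) -> bool:
    return decision_category in GUARDRAILS["human_in_the_loop"]["required_for"]

def allowed(use_case: str, data_classification: str) -> bool:
    """Check a proposed use against prohibitions and classification rules."""
    if use_case in GUARDRAILS["prohibited_uses"]:
        return False
    rule = GUARDRAILS["privacy_ip_rules"].get(data_classification, {"llm_use": True})
    return rule.get("llm_use", True)
```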

He also recommends setting aside budget for a wide top of the funnel so the organization isn't forced into one or two big bets.

Jared Coyle, Chief AI Officer at SAP, recommends a governance framework based on three pillars:

  1. Relevant: AI must be designed to work within specific business processes rather than as standalone “AI for AI’s sake.”

  2. Reliable: The system must deliver consistent and accurate data output.

  3. Responsible: Processes must be certified, follow strict ethical guidelines, and inherit the existing security infrastructure.

Parting words

Achieving value with AI requires industrialized data and processes and strong governance.

The starting point is simple. CIOs must ensure that their AI efforts are directly tied to business outcomes, establish clear success criteria, and incorporate ethics and compliance guardrails early to avoid the trap of endless pilots that never scale.

Equally important is business trust in AI. CIOs need transparent AI workflows, a strong data foundation, cross-functional collaboration, and training to help employees understand how AI decisions are made and where humans can maintain control.

Risk remains the biggest barrier for GenAI and agentic AI. Data security and privacy top the list, followed by accuracy, regulatory compliance, bias, and ethics. Together, these interrelated risks slow production deployments.

Effective governance is the only way to provide the industrialized data pipeline needed for trust. This requires formalizing responsibilities, centralizing data platforms, implementing access controls, and establishing guardrails such as data classification, privacy protection, and human oversight early on to ensure that AI is relevant, trustworthy, and accountable.




