As generative AI (GenAI) adoption increases across the Asia-Pacific region, some organizations, especially those in unregulated industries, may be hesitant to implement governance frameworks for fear that red tape will stifle innovation.
But Grant Case, chief data officer for Asia Pacific and Japan at Dataiku, argues that the opposite is true and that clear boundaries are essential to speed.
In a recent interview with Computer Weekly, Case said that while the region is leading the way in the use of GenAI, AI efforts could be derailed by eroding user trust in AI output. He argued that establishing governance guardrails will help close the trust gap.
However, Case often encounters a misconception that governance slows down AI efforts. “We think exactly the opposite,” he said. “The organizations moving fastest in the region are those that have already established a strong AI governance stance.”
He likened AI governance to a highway safety barrier. Just as barriers allow vehicles to travel safely at high speeds, governance provides the confidence needed for rapid development.
“To move faster, you need to understand where the boundaries are,” Case explained. “Organizations that set these parameters early eliminate hesitation to innovate because teams know exactly what is acceptable.”
Avoiding shadow AI
One of the drivers for AI governance is the rise of shadow AI, where employees use unapproved AI tools in their work.
Case pointed to recent research showing that 77% of security professionals observe employees exposing corporate data to large language models (LLMs). The behavior is rarely malicious; rather, it stems from a lack of proper internal tools.
According to Case, employees often turn to external AI tools because they don’t have smooth access to in-house alternatives. He pointed out that the solution is not simply to ban external tools, but to provide internal options that are integrated with the necessary governance protocols.
“We want the managed path to be the fast path, and we want it to be the right path,” Case said, referring to a philosophy shared by banking customers. “If you set up the right infrastructure for your end users, they won’t feel the need to leave your organization.”
AI governance discussions often start with the chief data officer (CDO), but security risks, along with the rising costs of unregulated AI experimentation, are pushing the conversation all the way up to corporate boards.
Case shared the example of a client that incurred large, unexpected costs in one of its business units. Its CDO had launched a $3 million AI project two years ago, but the board recently questioned the project’s $47,000 monthly running costs without clear evidence of a return on investment. As a result, the company’s finance and internal audit teams became more involved in AI governance to address both technical and financial risks.
Some companies are starting to build their own LLMs for reasons such as governance and greater control over localization and the lifecycle of their AI models. Case cautioned against this approach, noting that the rapid pace of technological change often makes internal projects obsolete before they are completed.
He gave the example of a senior analyst who spent about six months building an in-house LLM. Several months after the model was completed, updates to OpenAI’s commercial offerings had made the proprietary work less effective and more expensive to run.
Instead, Case advocated a platform approach that builds governance requirements, such as those mandated by the EU AI Act, into the infrastructure itself. This allows businesses to connect to the latest models while remaining compliant.
“The value of a platform like Dataiku is the ability to integrate the latest technology,” Case said. “This allows teams to use the best tools available right now, instead of trying to build something that might be outdated in six months.”
According to Gartner, global spending on AI is expected to reach $2.52 trillion in 2026, an increase of 44% year over year. Investment in AI platforms for data science and machine learning, a category that includes Dataiku, is also expected to grow from $21.9 billion in 2025 to $44.5 billion in 2027.
“AI adoption fundamentally depends on the readiness of both human capital and organizational processes, not just financial investment,” said John-David Lovelock, Distinguished Vice President Analyst at Gartner. “Organizations with greater experiential maturity and greater self-awareness are more likely to prioritize proven outcomes over speculative possibilities.”
