

DBS has spent nearly a decade building the data, systems, and governance structures that now support our AI efforts. According to Group CIO Eugene Huang, much of that effort went into preparing the organization to use AI sustainably and at scale, long before generative models emerged. In this interview, he explains the decisions behind that foundation and how DBS approaches generative and agentic AI.
What has DBS learned from scaling AI, and what has turned out to be more difficult than expected?
DBS began experimenting with AI in 2014, starting with a pilot using IBM Watson in wealth management. Those early efforts taught us that data is at the heart of all AI work, and we spent several years putting the necessary architecture in place to support it.
We worked to build automation into our processes to make our technology stack more scalable and stable. This includes migrating from legacy systems to open source technology and investing in hybrid multicloud infrastructure to increase available computing resources.
At the same time, we developed two internal capabilities. ADA is a self-service platform that serves as a single source of truth for data governance, discoverability, quality, and security. ALAN is an AI protocol and knowledge repository that provides a standardized and repeatable approach to deploying AI models into use cases.
Currently, DBS has more than 1,500 AI models deployed across more than 370 use cases throughout the bank. Building this foundation was not easy, but it positioned us to expand our generative AI use cases and prepare for agentic AI.
In 2021, we began quantifying and disclosing the economic impact of our AI efforts. We expect that amount to exceed S$1 billion this year.
We built processes and technology to support our AI strategy, but the people aspect was just as important. We focused on ensuring employees were included in the transformation.
How are banks currently using generative AI, and which use cases are having the most impact?
Having built a foundation of AI work over several years, we were able to deploy generative AI use cases in 2023 as the technology matured. These use cases now support a variety of customer and employee workflows across sales, advisory, service, processing, software development, and other areas.
For corporate customers, our virtual assistant DBS Joy uses generative models to answer queries, handle common requests, and support day-to-day servicing. When a customer needs more complex assistance, the system routes them to a service expert, who is supported by an in-house copilot in providing a more complete response.
We also use generative models to support our employees. Customer service representatives use CSO Assistant for transcription, call summaries, and post-call documentation. This reduced call handling time by up to 20%. More than 90% of our employees have access to DBS-GPT, an internal platform used for writing, brainstorming, summarizing, translating, and retrieving information from the bank’s knowledge base.
Generative AI is also used throughout the software lifecycle. One example is JIRA Assist, which reduces the time developers and business analysts spend refining code, writing documentation, and fixing bugs.
These tools are not just about automation; they free employees to focus on tasks that require more judgment and direct customer interaction.
How are AI and large language models deployed across your systems?
One of our approaches is a modular AI architecture. We extended ADA to support generative AI use cases by building a generative AI marketplace that delivers LLMs as a service to in-bank applications under defined controls and governance. This approach means we are not locked into any particular LLM or technology provider, whether on-premises or in the cloud.
Instead, our architecture allows LLM integration and replacement with minimal effort. This marketplace includes safety guardrails, audit controls, cost controls, and pre-approved patterns, along with reusable APIs to support AI deployment.
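The provider-agnostic pattern described above can be sketched in miniature. This is an illustrative design, not DBS's actual implementation: the class names, the blocked-terms guardrail, and the audit log are all assumptions standing in for the marketplace's real controls.

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Uniform interface so any model vendor can be swapped in."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class EchoProvider(LLMProvider):
    """Stand-in provider for illustration; a real one would call a vendor API."""

    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"


class LLMGateway:
    """Routes every request through guardrails and audit logging,
    independent of which provider backs it."""

    def __init__(self, provider: LLMProvider, blocked_terms: set[str]):
        self.provider = provider
        self.blocked_terms = blocked_terms
        self.audit_log: list[dict] = []

    def swap_provider(self, provider: LLMProvider) -> None:
        # Replacing the backing model requires no change to callers.
        self.provider = provider

    def complete(self, prompt: str) -> str:
        if any(term in prompt.lower() for term in self.blocked_terms):
            self.audit_log.append({"prompt": prompt, "allowed": False})
            raise ValueError("prompt rejected by guardrail")
        response = self.provider.complete(prompt)
        self.audit_log.append({"prompt": prompt, "allowed": True})
        return response


gateway = LLMGateway(EchoProvider(), blocked_terms={"account number"})
print(gateway.complete("Summarize today's FX commentary"))
```

Because applications depend only on the gateway interface, swapping one LLM for another touches a single line of wiring, which is the property that makes "LLM integration and replacement with minimal effort" possible.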
This initiative reduced the time-to-value of AI and machine learning from 18 months to approximately 2-3 months, contributing to an economic impact of S$750 million in 2024.
How do you balance AI development with governance requirements?
For us, responsible AI means deploying AI in an ethical, transparent, and principled manner. We recognize the potential of AI in areas such as customer experience and operations, but its use must be guided by common guidelines.
All of our AI and machine learning use cases are reviewed through the PURE framework, which stipulates that data use must be Purposeful, Unsurprising, Respectful, and Explainable.
PURE is not meant to be viewed solely as a compliance checklist. It is applied at the level of each data use case and is built into the AI and machine learning processes, allowing use case owners to consider whether the use of their data is ethical, in addition to whether it is legally or technically permissible.
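One way to picture PURE being "built into the process" rather than treated as a checklist is as a review record that every use case must carry before deployment. The structure below is purely illustrative; the field names and approval gate are my assumptions, not DBS's actual tooling.

```python
from dataclasses import dataclass


@dataclass
class PureReview:
    """One review record for a data use case under the PURE principles."""

    use_case: str
    purposeful: bool    # serves a clearly stated business purpose
    unsurprising: bool  # would not surprise the customer whose data is used
    respectful: bool    # treats the data subject with respect
    explainable: bool   # the data use can be explained to the customer

    def approved(self) -> bool:
        # A use case proceeds only when all four principles hold;
        # legal or technical permissibility alone is not enough.
        return all([self.purposeful, self.unsurprising,
                    self.respectful, self.explainable])


review = PureReview("call-summary model", True, True, True, False)
print(review.approved())  # fails: the use is not explainable to the customer
```

Making the review an explicit artifact attached to each use case is what lets owners reason about whether a data use is ethical, separately from whether it is allowed.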
To support this, all new employees are introduced to PURE during orientation, and the principles are framed as part of how they should approach any use of data.
With the rise of generative AI and more agentic systems, we are expanding our Responsible Data Use Framework to include guidelines specific to these technologies. Insights from each use case review will be included in future updates.
What role can agentic AI play in regulated banking? What are its limits?
As agentic AI becomes more commonplace among consumers, people may come to rely on these systems to find information, purchase products, and manage payments. This opens up new possibilities for how certain tasks are handled, but important decisions still require human oversight.
Our goal is to provide our customers with a controlled and convenient way to conduct these transactions. We have working groups that study different use cases and consider areas such as observability, spend management, accountability, and responsibility to balance convenience with associated responsibilities.
By automating routine tasks within defined guardrails, agentic AI reduces manual work and frees employees to focus on activities that require judgment and more complex decision-making.
As AI transforms the technology stack, how is your team preparing for a different way of working?
As AI reshapes our operating models, we’ve ramped up our upskilling efforts to help our employees stay relevant. Since the beginning of this year, DBS has identified more than 12,000 employees for upskilling or reskilling, with almost all of them starting learning roadmaps in areas such as AI and data.
To support this, we are making our product and customer experience design more data-driven and creating roles dedicated to data analysis and governance. Technical employees will also be trained for redesigned roles that incorporate generative AI tools into their work.
Editor’s note: This interview first appeared in Frontier AI 2026.
