
From underwriting to patent-pending risk models, Piramal Finance, a wholly owned subsidiary of Piramal Enterprises Limited and a housing finance company, is deeply embedding artificial intelligence at the core of its lending operations. In an interview with TechCircle, Saurabh Mittal, chief technology officer (CTO), Piramal Finance, explains how NBFCs are leveraging artificial intelligence (AI), machine learning, and data science to drive faster decision-making, stronger risk discipline, and scalable growth while operating within the strict guardrails of financial services regulations. Edited excerpts.
How is Piramal Finance leveraging digital and AI-driven technology to transform lending and risk management across retail and wholesale?
At Piramal Finance, digital transformation is about embedding intelligence deep into the loan lifecycle rather than layering technology on top of it. Over the past several years, we have built strong AI, machine learning, and decision science capabilities that span onboarding, underwriting, payments, monitoring, and collections. On the front end, an AI-driven agent system triangulates information across multiple data sources (KYC records, bank statements, pay stubs, and application data), significantly reducing manual file processing. This has reduced turnaround time and operational overhead for both customers and internal teams.
In underwriting and risk, we deploy multiple supervised models across credit scoring, fraud detection, income estimation, and property valuation. Traditional machine learning models work alongside AI agents to give underwriters richer, more reliable signals at the point of decision-making. AI-powered document synthesis further helps teams interpret complex profiles and large datasets. This integrated, AI-native approach enables scalable growth with strong risk discipline, without commensurate increases in headcount or operational costs.
What role are AI, machine learning, and data analytics playing in your core operations today?
These are core functions for us. Across teams and business lines, employees interact with AI-powered systems every day. We use machine learning to differentiate risk and optimize collections, agentic AI to verify documents, advanced models to detect fraud, and computer vision to identify and verify customers.
Internally, the team relies heavily on its AI assistant, ARYA, to support daily workflows, from tracking incentives and sales performance to checking lead status, planning work, and referencing company policies. These workflows are increasingly AI-driven and integrated into the way we work. All of this runs on a modern, resilient technology stack that supports scale and low latency, where business users, engineers, and AI practitioners come together to solve real problems. Globally, most AI projects fail, but our belief system, operating model, and ecosystem keep us in the minority that delivers real impact.
Can you share an example where technology has had a measurable impact on business?
One clear example is our AI-powered underwriting framework. Instead of a one-size-fits-all model, we deploy a series of product-specific scorecards covering credit, fraud, income, eligibility, and asset quality. This significantly improves risk separation and enables more confident credit decisions. Another innovation is our patent-pending leverage risk model, which identifies customers who appear healthy at underwriting but are likely to become overleveraged after the loan is disbursed. Customers flagged by this model are at significantly higher risk of default, so it meaningfully strengthens traditional checks. We have also introduced an agent system to extract and triangulate information from documents, a task that until now was manual, time-consuming, and error-prone. AI allows teams to move faster and focus their expertise on complex cases. These results are possible because AI systems are tightly integrated into decision-making flows rather than deployed as standalone tools.
How are you leveraging generative AI in a regulated financial services environment?
We are very careful. At Piramal Finance, GenAI is used to augment human intelligence rather than replace it. Our main areas of focus are internal productivity, unifying insights, simplifying workflows, and empowering developers, all within strong governance guardrails. Most of the current use cases are inward-facing. We have built a robust monitoring, verification, and validation framework to proactively identify and mitigate risks. Outputs generated by AI are rigorously reviewed before production deployment and then continuously monitored to ensure compliance and alignment with our risk appetite. This ecosystem allows GenAI to be used responsibly in a regulated environment.
How do you balance innovation with cybersecurity, compliance, and data privacy?
For us, innovation and trust go hand in hand. We have made significant investments in governance and end-to-end model lifecycle management. A dedicated annotation and hindsight team tracks model performance, drift, and explainability over time. For example, fraud prevention uses in-house computer vision models to detect document tampering and surface anomalies that are difficult for humans to identify consistently. Importantly, AI alerts always go through human review. Cybersecurity, compliance, and data privacy are non-negotiable, and the same rigor applies to AI systems as to other critical infrastructure.
What are your top technology priorities for the next 2-3 years?
First, we want AI to become a fast lane for business execution, serving as a full-time assistant to frontline teams while driving customer acquisition, credit, and conversion. Second, we aim to extend AI productivity gains beyond engineers to the enterprise and support departments, eliminating repetitive tasks and freeing up teams for higher-value work. Third, we are embedding AI at the core of decision-making, especially underwriting. AI can transparently synthesize disparate data sources and improve outcomes. In simple cases, AI manages end-to-end workflows, freeing human experts to focus on complex scenarios. In parallel, we will continue to strengthen our data, governance, and oversight infrastructure to support scale and regulatory trust.
How are you building a technology-first, AI-native culture and leveraging India’s talent pool?
The key thing we have learned is that successful AI deployments are user-driven. Many initiatives come from business teams that see the value and want to be empowered, creating strong bottom-up momentum. We operate with multidisciplinary teams that combine engineering, agentic AI, small language models, and domain expertise. We also run training programs that enable non-programmers to build solutions on our internal platform. Being AI-native is not about a single model, but about an ecosystem where business, technology, and data work together from day one.
How important are partnerships to your innovation strategy?
They are very important. We work with hyperscalers such as AWS, Azure, and Google Cloud, as well as specialized partners in areas such as voice-based AI. But partnerships alone are not enough. Real value comes from combining external expertise with strong internal ownership to co-create and operate solutions that truly fit our customers and our business.
