Amid rising fraud and regulatory scrutiny, traditional rules-based systems are proving inadequate, prompting Indian banks to rapidly integrate machine learning models into their financial crime compliance (FCC) operations, KPMG said in a report.
The report highlighted that traditional manual and threshold-based methods are “increasingly losing their effectiveness” against sophisticated financial crimes.
This is prompting financial institutions to move toward AI-driven frameworks for anti-money laundering (AML), fraud detection, and customer risk assessment.
Notably, the KPMG report also highlighted that the transition to AI is being accelerated by regulatory expectations, including the RBI's FREE-AI framework and SEBI's guidelines calling for responsible and accountable AI systems.
The report added that financial institutions are moving from pilot deployments to “full-scale machine learning integration” across the customer lifecycle.
The report further cited the RBI Innovation Hub's MuleHunter.AI tool, noting that over 15 Indian banks currently use it, with one leading bank achieving 95% accuracy in detecting mule accounts.
Citing the World Economic Forum, the report highlighted the use of AI to combat fraud worldwide, noting that global financial services had already spent $35 billion on AI implementation by 2023, with investment projected to reach $97 billion by 2027.
The report highlighted that rules-based FCC systems generate high volumes of false positives, lack adaptability to new money-laundering typologies, and cannot scale as transaction volumes grow.
In contrast, machine learning models enable real-time monitoring, anomaly detection, behavioral analysis, and automatic creation of suspicious activity reports using natural language processing.
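The report does not go into technical detail on how such models work. Purely as an illustrative sketch, the example below uses scikit-learn's IsolationForest on synthetic transaction features to show the kind of unsupervised anomaly detection described; the feature set, figures, and contamination threshold are hypothetical assumptions, not drawn from the KPMG report or any bank's production system.

```python
# Illustrative sketch only: unsupervised anomaly detection over
# synthetic per-account transaction features. Feature names, values,
# and thresholds are hypothetical, not from the KPMG report.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-account features: daily transaction count,
# average amount, and share of transfers to newly seen beneficiaries.
normal = rng.normal(loc=[20, 5_000, 0.1], scale=[5, 1_500, 0.05], size=(1_000, 3))
mule_like = rng.normal(loc=[90, 45_000, 0.8], scale=[10, 5_000, 0.1], size=(10, 3))
X = np.vstack([normal, mule_like])

# The model learns what "typical" behaviour looks like and scores
# departures from it, rather than applying a fixed rule threshold.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 marks suspected anomalies

print(f"Flagged {np.sum(flags == -1)} of {len(X)} accounts for review")
```

Because the model scores each account against learned behaviour rather than a static cut-off, routine large transactions by established customers are less likely to trip alerts, which is the false-positive advantage the report attributes to ML-based systems.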
KPMG also noted that regulators are increasingly focused on managing model risk, highlighting the need for independent validation to address opacity, bias, data-quality issues, and vulnerability to adversarial manipulation.
The report warned that AI-driven systems can amplify systemic risks if not properly stress-tested.
