Financial institutions face an uncomfortable reality: attackers are using large language models to create synthetic identities and automate fraud at scale, and the tools institutions rely on to fight back were not built for this moment.
The arms race has turned
Fraud has always been an arms race, but AI has fundamentally shifted the balance. With just a laptop and a subscription, attackers can now generate convincing synthetic IDs, launch massive phishing campaigns, and use deepfake audio and video to impersonate account holders.
According to the Nasdaq Global Financial Crime Report, global fraud losses exceeded $485.6 billion in 2023, with payment fraud accounting for the majority. These losses reflect how far the cost of launching sophisticated attacks has fallen: fraud-as-a-service toolkits, amplified by generative AI, make it easier than ever to attack financial systems. Defenders, meanwhile, are largely running systems built for a different era.
Two approaches, two different problems
Most fraud teams rely on a combination of rule-based systems, machine learning, graph analysis, and behavioral biometrics. Each is powerful, but each carries architectural limitations that hinder operational effectiveness.
Rule-based systems rely on thresholds: a transaction is flagged if, say, a customer makes five transfers in ten minutes. Analysts can explain these rules to regulators, but fraudsters can probe the thresholds and stay just below the alert. Rule updates take weeks, and by the time they go live, attack patterns have already evolved.
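To make that limitation concrete, here is a minimal sketch of a velocity rule of the kind described above. The field names, threshold, and window are hypothetical and are not drawn from any particular vendor's rule engine.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Transfer:
    account_id: str
    amount: float
    timestamp: datetime

# Hypothetical static rule: flag an account making 5 or more
# transfers within a 10-minute window.
MAX_TRANSFERS = 5
WINDOW = timedelta(minutes=10)

def velocity_alert(history: list[Transfer], new: Transfer) -> bool:
    recent = [t for t in history
              if t.account_id == new.account_id
              and new.timestamp - t.timestamp <= WINDOW]
    return len(recent) + 1 >= MAX_TRANSFERS

# A fraudster who learns the threshold simply stays at four transfers
# per window and is never flagged; the rule cannot adapt on its own.
```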
Machine learning models can detect anomalous activity without explicit rules and can incorporate network and behavioral signals. But flagged transactions often arrive as a list of statistical scores and feature contributions that analysts struggle to interpret and act on in real time. Graph analysis can reveal proximity to fraudulent networks, yet that proximity is hard to translate into an immediate, preventive action. According to LexisNexis Risk Solutions, this gap drives false positive rates as high as 95%.
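In practice, the interpretability gap looks something like the payload below. The scores, feature names, and graph metric are invented for illustration; they stand in for the kind of output a model ensemble and an entity graph typically hand to an analyst.

```python
# Hypothetical alert payload from an anomaly model plus graph analysis.
# All values and field names are invented for illustration.
alert = {
    "transaction_id": "txn-0001",
    "model_score": 0.87,              # anomaly score, no stated reason
    "top_features": {                 # SHAP-style contributions
        "device_change_7d": 0.31,
        "amount_zscore": 0.24,
        "new_payee": 0.18,
    },
    "graph_hops_to_known_fraud": 2,   # proximity in the entity graph
}

# Nothing here tells the analyst what to do next: hold the payment,
# call the customer, or escalate. That interpretation step is manual,
# and it is where false positives pile up.
```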
The cost of getting this wrong
Modern payment rails such as FedNow in the US, SEPA Instant in Europe, Faster Payments in the UK, and NIP in Nigeria process transactions irreversibly in milliseconds. A slow or noisy detection system creates immediate financial risk and operational burden.
Regulators are watching closely. The EU AI Act requires transparency and human oversight. In the United States, Federal Reserve guidance under SR 11-7 requires models to be validated, documented, and reviewable. African regulators are updating their frameworks as well: the Central Bank of Nigeria now requires real-time automated monitoring with a clear rationale. Draft guidelines on authorised push payment (APP) fraud require systematic reimbursement backed by rigorous investigation timelines. Explainable, automated decision-making is becoming mandatory across regions.
A third architecture: the protocol approach
Earlier “third-way” solutions such as FICO Falcon, SAS Fraud Management, Feedzai, and Featurespace combine an orchestration layer with ML and rules. Yet analysts still face the same questions: why was this alert issued, and what action should be taken? Valuable signals still require interpretation before they can be put to practical use.
“Fraudulent behavior rarely makes itself known through a single, dramatic signal. It manifests itself as a bunch of things that are all slightly off, and a system that can only ask yes or no questions will always struggle with that reality,” said Solomon Ehi Olumese, global head of operations at Loci Fraud AI.
What is missing is not a better model but a structured, auditable protocol: a way to express fraud detection intent, translate it into executable logic, and keep every decision transparent.
Why a protocol rather than another platform?
Lagos-based fraud infrastructure company Loci Fraud AI has developed the Fraud Language Model (FLM), a protocol designed to bridge this gap. Unlike orchestration platforms that layer complexity, FLM unifies rules, ML output, and analyst intent into a single explainable artifact.
FLM operates across three layers:
- Domain-constrained AI – The analyst describes the scenario in plain language; an AI layer constrained to a formal fraud vocabulary translates it into a structured operational specification.
- Structured policy expression – Policies are human-readable and machine-executable, so analysts can act on them and regulators can audit them.
- Deterministic execution – Policies run reproducibly on Loci’s infrastructure. AI assists with drafting rather than decision-making, which preserves auditability and keeps sensitive data from leaking.
FLM also integrates with existing ML investments: the outputs of existing models feed in as one signal among many, and FLM supplies the explainable logic that wraps them.
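Loci has not published FLM’s syntax, so the sketch below is only a hypothetical illustration of the idea: a policy artifact that reads like the analyst’s intent, consumes existing model outputs as signals, and evaluates deterministically with a built-in explanation. The policy name, signal sources, and thresholds are all invented.

```python
# Hypothetical illustration only; this is not FLM's actual format.
policy = {
    "name": "mule_account_fast_out",
    "intent": "Hold instant payouts from newly opened accounts that "
              "score as anomalous and sit close to known fraud rings.",
    "signals": {
        "ml_score": {"source": "existing_anomaly_model", "min": 0.8},
        "graph_hops_to_fraud": {"source": "entity_graph", "max": 2},
        "account_age_days": {"source": "core_banking", "max": 7},
    },
    "action": "hold_and_route_to_analyst",
    "explanation_template": (
        "Held: anomaly score {ml_score} >= 0.8, account {account_age_days} "
        "days old, {graph_hops_to_fraud} hops from a confirmed fraud ring."
    ),
}

def evaluate(policy: dict, signals: dict) -> tuple[bool, str]:
    """Deterministically check every constraint and return a
    human-readable explanation suitable for an audit trail."""
    s = policy["signals"]
    triggered = (
        signals["ml_score"] >= s["ml_score"]["min"]
        and signals["graph_hops_to_fraud"] <= s["graph_hops_to_fraud"]["max"]
        and signals["account_age_days"] <= s["account_age_days"]["max"]
    )
    return triggered, policy["explanation_template"].format(**signals)
```

The point is not this particular structure but the property it demonstrates: the same artifact is legible to an analyst, executable by a machine, and reviewable by a regulator.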
“Most institutions don’t have a model problem; they have a coordination problem. They’ve invested in powerful machine learning tools, but those tools operate in silos. FLM provides an integration layer that makes all signals readable, composable, and accountable. That’s what modern fraud defense really needs,” Olumese said.
Built for today’s adversaries
In 2010, the challenge was building an ML model that could outperform static rules. By 2020, it was integrating multiple detection tools. In 2026, the challenge has changed again: attackers iterate in hours, payment rails are instant, and regulators demand explainability.
A more sophisticated model alone will not solve the problem. Analysts need credibility, clarity, and operational agility, and a system that takes weeks to update will lose to attackers who iterate in days. The right approach is detection infrastructure designed for capability, clarity, speed, and explainability together. FLM embodies that approach.
The fraud arms race is accelerating. Institutions that build explainability into their architecture, rather than bolting it on as an afterthought, will be better positioned. The question is a serious one: is your detection infrastructure built for today’s attackers or the attackers of five years ago? The answer affects regulatory compliance, analyst effectiveness, and how much of that $485.6 billion annual problem lands on your balance sheet.
Author: Solomon Ehi Olumese, Global Head of Operations, Loci Fraud AI
