In this Help Net Security interview, Dilek Çilingir, Global Forensic and Integrity Services Leader at EY, discusses how AI is transforming third-party assessment and due diligence. She explains how machine learning and behavioral analytics can help organizations detect risk early, improve compliance, and strengthen accountability. As regulatory scrutiny increases, Çilingir explains why human judgment remains essential in every AI-supported decision.

When a third-party breach occurs, forensic investigations often uncover weaknesses that AI could have flagged earlier. In your experience, what recurring patterns do you see in post-incident analysis that AI could realistically detect or prevent?
Across our post-incident analyses, we repeatedly see behaviors that EDR/XDR tools with machine learning could flag much earlier: user activity at atypical times, application execution that is unusual for a given user or role, connections to unfamiliar or never-before-seen external IPs, and sustained anomalous data egress (such as regular transfers to newly observed addresses).
EDR/XDR applies behavioral analysis, signal correlation, and automated response to surface these weak signals in near real time. In our experience, organizations with these controls in place detect attacks earlier; to date, we have not observed a significant incident among clients running these tools, because the chain of suspicious activity is interrupted before it reaches the impact stage.
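As an illustration, here is a minimal sketch of the kind of behavioral anomaly detection described above, using scikit-learn's IsolationForest on hypothetical endpoint telemetry. The feature names, values, and contamination setting are assumptions for illustration, not EY's implementation:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-event endpoint telemetry: hour of activity,
# outbound volume, and whether the destination was seen before.
events = pd.DataFrame({
    "hour_of_day":      [9, 10, 11, 14, 15, 3, 9, 10, 2, 16],
    "bytes_out_mb":     [5, 8, 6, 7, 4, 950, 6, 5, 800, 7],
    "dest_seen_before": [1, 1, 1, 1, 1, 0, 1, 1, 0, 1],
})

# Unsupervised anomaly detector; contamination is the assumed fraction
# of anomalous events and would be tuned per environment.
model = IsolationForest(contamination=0.2, random_state=42)
events["anomaly"] = model.fit_predict(events)  # -1 = flagged for triage

print(events[events["anomaly"] == -1])  # off-hours spikes to new destinations
```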
Can you share an example where AI helped identify potential third-party issues before they escalated, such as anomalous data transfers, financial fraud, or red flags in communications?
In one engagement, an EDR monitoring platform generated alerts on periodic spikes in outbound traffic from a single workstation to newly observed destinations, flagging them as “anomalous traffic.” Triage correlated endpoint process behavior, user context, and network telemetry to confirm that the patterns were inconsistent with the user’s past activity.
Through a combination of behavioral anomaly detection and cross-signal correlation, the threat was contained before data leakage or further escalation. The key enablers were broad endpoint coverage, real-time AI behavioral analysis, and disciplined alert review and response, showing that well-implemented AI tooling combined with knowledgeable operators often delivers the desired outcome.
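The cross-signal correlation idea can be sketched very simply: a finding is escalated only when multiple independent signals agree. The signal names and scoring thresholds below are hypothetical, not drawn from the engagement described:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    unusual_process: bool   # endpoint: process not seen for this role
    off_hours_login: bool   # identity: activity outside the user's baseline
    new_destination: bool   # network: outbound traffic to an unseen IP
    egress_spike: bool      # network: outbound volume above baseline

def triage(s: Signals) -> str:
    """Correlate weak signals; any single one alone stays low priority."""
    score = sum([s.unusual_process, s.off_hours_login,
                 s.new_destination, s.egress_spike])
    if score >= 3:
        return "contain"   # isolate the endpoint, block the destination
    if score == 2:
        return "escalate"  # route to an analyst for review
    return "monitor"

print(triage(Signals(True, True, True, False)))  # -> contain
```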
Many companies struggle with the “black box” output of AI. How can organizations maintain transparency and explainability in AI-powered third-party assessments?
Don’t rely on a single black-box “investigation agent.” Break the due diligence process into small, auditable steps. Decide clearly where AI adds value (e.g., entity resolution, anomaly scoring) and where deterministic logic is sufficient (e.g., checking sanctions lists). Build an evaluation framework (“evals”) that uses metrics from each step to continually compare expected and actual results.
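A sketch of that decomposition, with all names and data hypothetical: each step is a small, auditable function, deterministic checks stay deterministic, and a minimal eval harness compares expected and actual results per step:

```python
# Hypothetical decomposition of a due diligence run into auditable steps.
SANCTIONS_LIST = {"ACME TRADING LTD"}  # deterministic source, not AI

def check_sanctions(entity_name: str) -> bool:
    """Deterministic step: exact lookup against an approved list."""
    return entity_name.upper() in SANCTIONS_LIST

def score_anomaly(monthly_volumes: list[float]) -> float:
    """AI/statistical step: crude anomaly score (placeholder for a model)."""
    mean = sum(monthly_volumes) / len(monthly_volumes)
    return max(monthly_volumes) / mean if mean else 0.0

# Minimal eval harness: expected vs. actual per step, logged for audit.
EVALS = [
    ("check_sanctions", check_sanctions, "Acme Trading Ltd", True),
    ("check_sanctions", check_sanctions, "Safe Supplier GmbH", False),
    ("score_anomaly", score_anomaly, [10.0, 10.0, 40.0], 2.0),
]

for name, fn, case, expected in EVALS:
    actual = fn(case)
    print(f"{name}({case!r}): expected={expected} actual={actual} "
          f"{'PASS' if actual == expected else 'FAIL'}")
```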
Rather than letting a model decide what to display, preserve the lineage of results by requiring the AI to propagate source citations end-to-end. Add guardrails so the workflow progresses through well-defined, reviewable stages. For machine learning components, use established explainability techniques, such as SHapley Additive exPlanations (SHAP), to surface feature contributions and support analyst understanding and challenge.
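For the machine learning components, a minimal SHAP sketch on a hypothetical vendor risk model (the features, data, and model are illustrative placeholders):

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical vendor features: payment delays, sanction-list proximity,
# jurisdiction risk score. Labels: 1 = flagged by past due diligence.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 1] + 0.5 * X[:, 2] > 0.9).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to per-feature contributions,
# giving analysts something concrete to review and challenge.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)  # feature contributions for the first five vendors
```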
As companies begin to incorporate AI into their vendor risk processes, what new governance structures and accountability measures do you think will be needed?
Companies that incorporate AI into their vendor risk processes need governance structures that ensure transparency, accountability, and compliance. This includes maintaining an approved source catalog and requiring systems or analysts to verify findings and document the rationale behind them.
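One way to make this concrete, sketched with hypothetical names: every finding must cite a source from the approved catalog and carry a recorded rationale:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

APPROVED_SOURCES = {"EU_SANCTIONS", "OFAC_SDN", "COMPANY_REGISTRY"}

@dataclass(frozen=True)
class Finding:
    vendor: str
    source: str       # must come from the approved source catalog
    rationale: str    # analyst- or system-recorded justification
    recorded_at: str

def record_finding(vendor: str, source: str, rationale: str) -> Finding:
    if source not in APPROVED_SOURCES:
        raise ValueError(f"{source} is not an approved source")
    return Finding(vendor, source, rationale,
                   datetime.now(timezone.utc).isoformat())

f = record_finding("Example Supplier BV", "OFAC_SDN",
                   "Name match above threshold; verified by analyst.")
print(f)
```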
Data minimization must be built into the design by defining what information is always of interest, such as sanctions and embargo lists, and what is only contextually relevant. At the same time, exclude GDPR-protected or sensitive attributes and configure the AI to ignore them.
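A sketch of data minimization by design, with hypothetical attribute lists: always-relevant fields pass through, and sensitive attributes never reach the model:

```python
# Hypothetical attribute policy for a third-party screening pipeline.
ALWAYS_RELEVANT = {"legal_name", "country", "sanctions_hits", "embargo_flags"}
EXCLUDED = {"health_data", "religion", "ethnicity", "trade_union_membership"}

def minimize(record: dict) -> dict:
    """Keep only whitelisted fields; sensitive attributes are dropped."""
    return {k: v for k, v in record.items()
            if k in ALWAYS_RELEVANT and k not in EXCLUDED}

raw = {"legal_name": "Example Supplier BV", "country": "NL",
       "sanctions_hits": 0, "religion": "<redacted>"}
print(minimize(raw))  # the sensitive field never reaches the model
```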
Risk assessments should be staged, with the depth of checks adjusted to the supplier’s importance and geography: expand the scope for high-risk scenarios while avoiding unnecessary data collection for low-risk relationships. AI provides recommendations without replacing human judgment (a human-in-the-loop approach), and human accountability remains essential, with designated individuals owning due diligence decisions.
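A sketch of the staged approach, with hypothetical tiers and jurisdiction codes: the depth of screening scales with supplier criticality and geography, while sign-off stays with a named human owner:

```python
# Hypothetical tiering: checks deepen with criticality and geography.
HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder jurisdiction codes

def checks_for(criticality: str, country: str) -> list[str]:
    checks = ["sanctions_screen"]                      # every supplier
    if criticality in ("medium", "high"):
        checks += ["adverse_media", "ownership_structure"]
    if criticality == "high" or country in HIGH_RISK_COUNTRIES:
        checks += ["enhanced_due_diligence", "site_assessment"]
    return checks

# AI recommends; a designated individual signs off on the outcome.
print(checks_for("low", "DE"))   # minimal collection for low risk
print(checks_for("high", "XX"))  # expanded scope for high risk
```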
Regulators are beginning to scrutinize the use of AI in compliance and governance. What direction do you think global regulators will take regarding AI in third-party due diligence?
Regulators are likely to allow the use of AI where companies establish strong controls and demonstrate effective oversight, as required by frameworks such as the EU AI Act. Responsibility remains with the individual or organization; it does not shift to the AI itself.
While regulators may struggle to define detailed technical rules, one clear change is that “too much data to review” is no longer an acceptable defense. Expect requirements for scalable, explainable processes with audit trails. Ultimately, companies will need to provide documentation, demonstrable assessments, and human accountability, with flexibility in how they achieve these results.
