Escalating tensions involving Israel, the United States, and Iran are reinforcing a broader reality for security leaders across the Middle East: geopolitical instability not only increases the risk of external attacks, but also changes internal risk dynamics in ways that many organizations are unprepared to manage.
As companies grapple with the shift to remote work, distributed access patterns, supply chain dependencies, and greater use of artificial intelligence (AI)-powered business tools, insider risk is becoming more complex, less predictable, and more difficult to detect through traditional means. In this environment, AI is emerging not just as an enhancement to cyber security, but as a practical tool for managing uncertainty at scale.
Mazen Adnan Dohaji, Exabeam’s senior vice president and IMETA general manager, said in an interview with Computer Weekly that while conflicts don’t necessarily increase the number of malicious insiders, they create more operational noise at a time when defenders need clarity the most.
“The real challenge for defenders is not just that conflict increases cyber risk,” Dohaji said. “Conflict creates more noise, more edge cases, and more ambiguity at the very moment security teams need to make faster decisions.”
This distinction is particularly important in the Middle East, where organizations are balancing digital transformation ambitions with growing concerns about sovereignty, resilience, and cyber preparedness. During times of geopolitical tension, everyday behavior can suddenly look abnormal: users logging in from unfamiliar locations, contractors requiring temporary privileged access, and employees interacting with sanctioned and unsanctioned generative AI (GenAI) tools in ways security teams have limited visibility into.
Traditional insider threat programs built on static rules and manual investigation often break down in these situations, because the meaningful signals are patterns of behavior rather than individual alerts. “Security teams should focus less on growing watch lists and more on understanding how normal behavior changes under stress,” says Dohaji.
“Security teams don’t need one strategy for AI risk and another for insider risk; these are increasingly the same problem.”
Mazen Adnan Dohaji, Exabeam
This is where AI-driven user and entity behavior analytics (UEBA) comes into play. Machine learning can establish a baseline of normal activity across employees, contractors, service accounts, and privileged users. This helps identify subtle anomalies that may indicate abuse, coercion, compromised credentials, or data leakage.
“Machine learning can establish a baseline of human and non-human activity, identify subtle anomalies, and escalate risk as small signals accumulate across an identity or entity,” says Dohaji. “This is important because insider risk is rarely a single, dramatic event. It more often manifests as a series of individually explainable but anomalous actions that only make sense when seen together. Connecting these signals early lets teams act while exploitation, compromise, or exfiltration can still be prevented.”
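To make the pattern concrete, here is a minimal Python sketch of the baseline-and-accumulate approach Dohaji describes. The identities, baseline values, and thresholds are illustrative placeholders, not an Exabeam implementation.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical per-identity baselines: daily counts of some activity,
# such as files accessed. Values and thresholds are illustrative only.
BASELINES = {
    "alice":         [22, 25, 19, 24, 21, 23, 20],
    "svc-backup":    [100, 98, 102, 99, 101, 100, 97],
    "copilot-agent": [5, 6, 4, 5, 7, 5, 6],
}

risk_scores = defaultdict(float)

def zscore(identity: str, observed: float) -> float:
    """How far today's activity sits from this identity's own baseline."""
    history = BASELINES[identity]
    mu, sigma = mean(history), stdev(history)
    return (observed - mu) / sigma if sigma else 0.0

def ingest(identity: str, observed: float) -> None:
    """Accumulate risk as small, individually explainable anomalies stack up."""
    z = zscore(identity, observed)
    if z > 1.5:                       # mildly unusual on its own
        risk_scores[identity] += z    # but it compounds across events
    if risk_scores[identity] > 8:
        print(f"ALERT: {identity} cumulative risk {risk_scores[identity]:.1f}")

# Three modest spikes, none alarming alone, cross the threshold together.
for daily_count in (28, 31, 35):
    ingest("alice", daily_count)
```

The point is the accumulation: no single spike here would justify an alert, but together they push one identity's risk score over the line.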
Insider risk now includes machines
The rise of non-human identities is also changing the conversation. As companies deploy AI agents, co-pilots, and automated workflows that retrieve data and trigger actions, insider risk is no longer limited to employees.
“One of the biggest changes in security operations is that insider risk is no longer limited to human actors,” Dohaji explains. “AI agents and automated workflows are increasingly authenticating to systems, retrieving documents, calling APIs [application programming interfaces], and triggering actions on behalf of users.”
For organizations in the Middle East that are accelerating AI adoption, particularly in sectors such as government, financial services, and energy, this significantly expands the attack surface.
Compromised or over-privileged AI agents can create risks similar to those posed by human insiders, but at machine speed. This means organizations need to connect human and machine behavior into a unified investigation path, with visibility into agent behavior, identity changes, and privilege escalation.
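One way to picture that unified path is to resolve every agent action back to the human identity it acts on behalf of, so both land under a single actor during an investigation. The sketch below assumes a simple delegation model; the identities, scopes, and directory structure are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str
    kind: str                        # "human" or "agent"
    on_behalf_of: str | None = None  # delegation link for agents
    granted_scopes: set[str] = field(default_factory=set)

# Hypothetical directory: one human and one agent acting for her.
DIRECTORY = {
    "fatima":     Identity("fatima", "human", granted_scopes={"read:docs"}),
    "report-bot": Identity("report-bot", "agent", on_behalf_of="fatima",
                           granted_scopes={"read:docs"}),
}

def resolve_root(identity: str) -> str:
    """Walk agent -> human delegation so events group under one actor."""
    ident = DIRECTORY[identity]
    while ident.on_behalf_of:
        ident = DIRECTORY[ident.on_behalf_of]
    return ident.name

def check_scope(identity: str, scope: str) -> None:
    """Flag any use of a scope the identity was never granted."""
    ident = DIRECTORY[identity]
    if scope not in ident.granted_scopes:
        print(f"ESCALATION: {identity} (root actor: {resolve_root(identity)}) "
              f"used unauthorized scope '{scope}'")

check_scope("report-bot", "read:docs")      # expected behavior, stays silent
check_scope("report-bot", "write:finance")  # machine-speed privilege creep
```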
Dohaji argues that it is a mistake to treat AI risk and insider risk as separate domains. “Security teams don’t need one strategy for AI risk and another for insider risk,” he says. “These are increasingly the same problem.”
AI that can investigate, not just detect
AI is reshaping not only detection but also the investigation layer. “With the right tools, you can automatically collect evidence, connect related activities, build timelines, summarize incidents, and surface the entities most likely to require action,” he says. “For a stretched SOC [security operations centre], that is not a nice-to-have feature. It gives the team back analyst time.”
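As an illustration of that investigation layer, the sketch below correlates raw events by actor, orders them into a timeline, and emits a plain-text summary. The event schema and sample records are hypothetical, not a description of any vendor's product.

```python
from datetime import datetime

# Hypothetical raw events from different sources, arriving out of order.
events = [
    {"ts": "2025-06-14T02:10", "actor": "omar", "action": "vpn_login",
     "detail": "unrecognized geolocation"},
    {"ts": "2025-06-14T02:25", "actor": "omar", "action": "bulk_download",
     "detail": "4,200 files from finance share"},
    {"ts": "2025-06-13T23:50", "actor": "omar", "action": "badge_exit",
     "detail": "left main office"},
]

def build_timeline(actor: str, raw: list[dict]) -> list[dict]:
    """Collect the actor's related events and order them chronologically."""
    related = [e for e in raw if e["actor"] == actor]
    return sorted(related, key=lambda e: datetime.fromisoformat(e["ts"]))

def summarize(timeline: list[dict]) -> str:
    """Render the timeline as a plain-text incident summary."""
    lines = [f"{len(timeline)} related events for '{timeline[0]['actor']}':"]
    lines += [f"  {e['ts']}  {e['action']}: {e['detail']}" for e in timeline]
    return "\n".join(lines)

print(summarize(build_timeline("omar", events)))
```

The win for an analyst is the ordering and grouping: a badge exit, an odd VPN login, and a bulk download only tell a story once they sit on one timeline.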
True resiliency means giving defenders the ability to recognize changes in behavior early, connect human and machine activity, investigate faster, and act on anomalies before they become a breach.
Mazen Adnan Dohaji, Exabeam
This is especially valuable for regional defenders contending with uncertainty from both day-to-day threats and geopolitical events. The larger lesson, Dohaji suggests, is that resilience in today’s threat environment is increasingly context-dependent.
“The lesson from the Israel-US-Iran tensions is not that every employee becomes a threat during geopolitical turmoil,” he says. “It is that unstable operating conditions make intentions harder to read, risky behavior easier to hide, and traditional detection models less effective.”
For organizations in the Middle East, this means turning AI from an innovation narrative into an operational discipline: understanding the environments in which work is actually done, monitoring sanctioned and unsanctioned AI use, building behavioral baselines, and using automation to reduce analyst workload without eliminating human oversight.
It also means preparing for real-world scenarios, such as excessive data movement before an employee leaves the organization, unusual access outside working hours, or a sudden expansion of an AI agent’s access patterns.
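Expressed as simple detection rules, those three scenarios might look like the Python sketch below. The field names, thresholds, and multipliers are placeholders that a real program would tune against its own baselines.

```python
# Each rule maps one scenario from the article to a boolean check.
# All inputs are hypothetical dictionaries, not a real telemetry schema.

def departure_exfil(user: dict) -> bool:
    """Excessive data movement by someone with a pending departure."""
    return bool(user.get("resignation_filed")) and (
        user["mb_moved_7d"] > 10 * user["mb_moved_baseline"])

def after_hours(event: dict) -> bool:
    """Access outside the identity's normal working window."""
    return not (event["usual_start_hour"] <= event["hour"]
                < event["usual_end_hour"])

def agent_scope_creep(agent: dict) -> bool:
    """An AI agent suddenly touching far more resources than usual."""
    return len(agent["resources_today"]) > 3 * agent["resources_daily_avg"]

print(departure_exfil({"resignation_filed": True,
                       "mb_moved_7d": 5_000, "mb_moved_baseline": 120}))
print(after_hours({"hour": 2, "usual_start_hour": 8, "usual_end_hour": 19}))
print(agent_scope_creep({"resources_today": ["crm", "hr", "finance", "docs"],
                         "resources_daily_avg": 1}))
```

In practice, each rule would feed the cumulative risk score shown earlier rather than fire an alert on its own, keeping human oversight in the loop.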
Dohaji said: “True resiliency means giving defenders the ability to recognize changes in behavior early, connect human and machine activity, investigate faster, and act on anomalies before they become a breach.”