DUBAI: Deepfake technology – AI-generated videos and images that imitate real people or alter events – has proliferated in recent years, transforming the digital landscape.
Once considered a novelty, deepfakes now pose serious risks: they can spread misinformation, manipulate public opinion, and undermine trust in the media. As the technology grows more sophisticated, distinguishing fact from fiction becomes increasingly difficult, leaving society more vulnerable to deception and disruption.
This challenge is unprecedented and rapidly escalating.
In March 2022, as Russian troops closed in on Kyiv, a chilling video began circulating online. In it, a pale and exhausted Ukrainian President Volodymyr Zelenskiy appeared to urge his soldiers to surrender.

Within hours, fact-checkers revealed it to be a deepfake: an AI-generated hoax planted on hacked news sites and social media to lower morale and sow confusion at a critical moment.
Although it was quickly exposed, the damage was done. Millions of people had already watched the footage, and for a brief but unsettling period, even seasoned observers struggled to distinguish truth from digital deception. It was one of the first major wartime deployments of synthetic media, and it offered a glimpse of the new battles over authenticity that would define the Information Age.
According to identity verification company Sumsub, deepfake incidents in Saudi Arabia surged 600% year-on-year in the first quarter of 2024.
With AI platforms slow to intervene, governments are increasingly seen as a key line of defense. In Saudi Arabia, lawmakers are moving swiftly to contain the threat through a growing body of legal measures.

Legislation for safety
Anna Zeitlin, a fintech and financial services partner at law firm Addleshaw Goddard, said Saudi lawmakers have already taken decisive action.
“Saudi Arabia is leading the way in this regard, which is really great,” Zeitlin told Arab News.
“Saudi Arabia has an anti-cybercrime law, which basically means that things like spreading fake news or misinformation considered to threaten public peace and security or national interests are prohibited and constitute a crime. So I think that's the basic level, the starting point.”

She added that the framework is supported by the Saudi Data and AI Authority (SDAIA), which she described as “truly the first of its kind.”
“These days, there are a lot of data protection regulators around the world, but there aren't really many AI regulators. SDAIA covers both data and AI. Obviously, they go hand in hand.”
“They have published a couple of things worth mentioning,” she continued. “The AI Ethics Principles, published in September 2023, were followed by Generative AI Guidelines for government, intended to help officials approach the use of AI appropriately, fairly and wisely.”
“Additionally, they have produced a public consultation document specifically on deepfakes, which is very interesting. It sets out guidelines for how to spot deepfakes and how to deal with them. I have to stress that this is just a public consultation, but there will be legal weight behind it.”
Zeitlin also highlighted the role of the Saudi Media Regulatory Authority in enforcing these standards, especially for synthetic content shared online. Using deepfakes to “advertise or promote anything” can be a criminal offense, punishable by fines or imprisonment.
“This is quite serious,” she said, noting that the United Arab Emirates has similar provisions through its cybercrime and data protection laws, but “Saudi Arabia is really leading the way and moving in the right direction.”
Finding the right balance
Even as regulations advance, experts warn against going too far. Preslav Nakov, professor and chair of the Natural Language Processing department at the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), said the challenge is far-reaching and that solutions require a delicate balance.
“The proliferation of AI-driven misinformation and deepfakes poses major challenges everywhere. The instinctive reaction is often to call for stricter regulation. But the technology is evolving too quickly, and blunt regulation risks choking the very innovation that Gulf economies are trying to foster,” he told Arab News.
Nakov believes the answer lies in a “multi-pronged strategy” that combines AI-powered detection systems, digital literacy, and cross-disciplinary collaboration.

He cited recent research in Nature Machine Intelligence showing that large language models, while prone to factual errors, can assist fact-checkers in identifying claims and gathering evidence, making them “part of the problem and part of the solution.”
He noted that other research has found that fake news detectors can be biased, sometimes labeling accurate AI-generated text as false. This is a growing risk as machine-generated content proliferates.
“Deepfake technology has come a long way in recent years. Today, AI-generated text, images, and videos are convincing enough to catch people off guard. At some point, yes, certain AI-generated content may become indistinguishable from reality to the human eye alone. That's why detection can't be the only line of defense,” he said.
“That's why the answer is smart governance, a balanced approach that combines advanced detection technologies, public awareness, and evidence-based policymaking. Only by integrating these elements can we ensure we benefit from the vast opportunities AI presents, while mitigating the negative effects of AI misinformation.”
Did you know?
• The first deepfake videos appeared online in 2017. Just eight years later, this technology can imitate anyone's face or voice in minutes.
• Cybersecurity analysts estimate that global deepfake-related fraud caused losses of more than $25 billion in 2024.
• More than 90 percent of AI-generated deepfakes target individuals rather than organizations.
• Saudi Arabia’s AI Ethics Principles, published in 2023, are among the first national AI ethics frameworks in the region.
Zeitlin echoed Nakov's concerns, noting that Europe is losing AI business as a result of what many view as overregulation.
She said the fight against deepfakes and online fraud sits “between politics and regulation,” and highlighted the role of the platforms themselves, which have largely avoided strict accountability for cracking down on misinformation.
In contrast, governments in the Middle East tend to implement stricter online content regulations “to protect people in the region,” she said, while European regulators seek broader oversight, often clashing with technology companies that say it is impossible to police such large volumes of content.
“This is not a debate that will go away anytime soon,” Zeitlin said.

For Nakov, whose work at MBZUAI focuses on developing fact-checking tools such as LLM-DetectAIve, Factcheck-Bench, and OpenFactCheck, the complexity of the debate demands a rethink of how society approaches truth online.
“When we talk about misinformation and disinformation, I think it's time to move beyond simple true/false judgments. Reality is rarely such a binary. What's more important is explanation: the reasoning, context, and nuance that helps people truly understand why the claim may be misleading, partially true, or simply taken out of context,” he said.
“In fact, many fact-checking organizations are already moving in this direction. They no longer rely on simple label assignments, but instead produce detailed fact-checking articles that are essentially conversations between fact-checkers and the public. These articles unpack claims, provide evidence, and show why reality is often more complex than it first appears.”


