Olga Royek, a 21-year-old student at the University of Pennsylvania, started a YouTube channel to connect with an online audience, but soon found her likeness being copied and altered by AI to create fake versions of her on Chinese social media.
AI-generated profiles such as “Natasha” posed as Chinese-speaking Russian women, with the avatars thanking China for helping Russia and promoting Russian candy and other products.
Astonishingly, these fake accounts gained hundreds of thousands of followers in China, far more than Royek herself had.
“It's literally like my face speaking Chinese and you can see the Kremlin and Moscow in the background and I'm talking about how great Russia and China are,” the Ukrainian YouTuber told Reuters.
“It was really creepy because those were things I would never say in my life.”
The technology behind the deception
Royek's case is part of a larger trend of fake Russian women on Chinese social media expressing love for China, supporting Russia and selling Russian products. But these people are not real.
They are created with AI technology from videos of real women found online, sometimes without their permission. Experts say these fake videos target single Chinese men, capitalizing on the “no limits” partnership that Russia and China declared in 2022.
Jim Chai, CEO of XMOV, a company that specializes in advanced AI technologies, explained how easy it is to create these images. “The technology to create these images is very common in China because a lot of people use it,” Chai told Reuters.
“For example, to create my own 2D digital human, all I need to do is shoot a 30-minute video of myself, and then once that's done, I re-create the video, and it looks very realistic, and of course, if I change the language, the only thing I need to adjust is the lip sync.”
The ease of creating and sharing AI-generated content has sparked fierce debate about the ethical and legal issues that come with it. Royek's story illustrates the dangers of powerful AI tools, particularly when they are misused to spread misinformation or create content without permission.
Regulatory challenges and ethical concerns
AI technology is advancing rapidly, but the rules governing it are not, raising concerns about privacy, consent and the authenticity of online content.
China and the European Union are drafting rules on AI to address concerns, with China planning to enact more than 50 standards by 2026. Similarly, the EU's new AI law requires transparency for high-risk AI systems.
Despite these efforts, experts such as Xin Dai of the Peking University School of Law say regulation is struggling to keep pace with advances in AI.
“You can expect the tools for information creation, content creation, content distribution to become more and more powerful and available basically every minute,” Dai said. “The volume is huge, not just in China but across the internet.”
Royek's situation highlights the risks of AI, particularly in international relations and propaganda.
“I don't want anyone to think that I have said such horrible things in my life. Using a Ukrainian girl to promote Russia. It's crazy,” Royek told the BBC.
The broad impact of AI on society
The accounts using Royek's image used a variety of names, including Sophia, Natasha, April and Stacey, and communicated in Chinese, a language that Royek never learned.
The AI-generated avatars spoke about China-Russia friendship and endorsed Russian products, in line with the political positions of the two countries. “90 percent of the videos were about China and Russia, the friendship between China and Russia, how we should be strong allies, and promoting food products,” Royek told the BBC.
One popular account, “Natasha Imported Food,” had more than 300,000 followers. The account praised Russia, criticized international distancing from the country, and promoted Russian candy. The situation highlighted the problem of AI-generated misinformation and its connection to political propaganda, which infuriated Royek.
Despite China's strict AI laws, Royek's case shows that much work remains to be done to enforce them. In 2023, Chinese authorities arrested 515 people for “AI face swap” activities, but the problem persists, the BBC reported.
According to the report, HeyGen, the platform allegedly hacked to create these unauthorized deepfakes, found that Royek's image had been used in more than 4,900 videos.
Experts warn that people like Royek have little recourse, leaving them at risk of having their digital likenesses used against them. Kayla Blomquist, a researcher at Oxford University, has warned that AI-generated content can falsely tie individuals to sensitive political issues, exposing them to hasty and unjust backlash.
