The U.S. Department of Justice (DoJ) announced the seizure of two internet domains and the search of approximately 1,000 social media accounts allegedly used by Russian threat actors to covertly spread pro-Russian disinformation at scale in the United States and abroad.
“Social media bot farms used AI elements to create fictitious social media profiles, often purporting to belong to U.S. citizens, which their operators used to promote messages supporting Russian government objectives,” the Justice Department said.
The bot network, consisting of 968 accounts on X, is said to be part of an elaborate plan devised by employees of Russian state-run media outlet RT (formerly Russia Today), backed by the Kremlin and assisted by an official from Russia's Federal Security Service (FSB) who founded and led an unnamed private intelligence organization.
Development of the bot farm began in April 2022, when individuals procured online infrastructure while anonymizing their identities and locations. According to the Department of Justice, the organization's goal was to spread disinformation and advance Russian interests through fictitious online personas representing various nationalities.
The fake social media accounts were registered using a private email server that relied on two domains – mlrtr[.]com and otanmail[.]com – purchased from the domain registrar Namecheap. X has since suspended the bot accounts for violating its terms of service.
The intelligence operation, which targeted the United States, Poland, Germany, the Netherlands, Spain, Ukraine and Israel, was carried out using an AI-powered software package called Meliorator, which enabled the “mass” creation and operation of social media bot farms.
“Using this tool, RT affiliates spread disinformation in and about a number of countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine and Israel,” Canadian, Dutch and US law enforcement agencies said.
Meliorator includes an admin panel called Brigadir and a back-end tool called Taras that are used to control authentic-looking accounts whose profile pictures and biographical information are generated using an open-source program called Faker.
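Faker is best known as a Python library for producing plausible fake identity data. As a rough illustration of the kind of profile-stamping the advisory describes – the locales, field names and persona structure below are assumptions for illustration, not details from the advisory – generating a fictitious profile is a few lines of code:

```python
from faker import Faker  # pip install Faker

# Illustrative locales matching some of the nationalities the personas claimed.
LOCALES = ["en_US", "pl_PL", "de_DE", "nl_NL", "es_ES"]

def generate_persona(locale: str) -> dict:
    """Stamp out one fictitious profile: name, handle, bio, avatar, birthdate."""
    fake = Faker(locale)
    return {
        "name": fake.name(),
        "handle": fake.user_name(),
        "bio": fake.sentence(nb_words=10),
        "location": fake.city(),
        "avatar_url": fake.image_url(),  # placeholder image URL for the sketch
        "birthdate": fake.date_of_birth(minimum_age=18, maximum_age=65).isoformat(),
    }

if __name__ == "__main__":
    for loc in LOCALES:
        print(generate_persona(loc))
```

Looped over a list of locales, a generator like this can mint thousands of distinct, plausible-looking profiles in seconds, which is precisely the scale problem the advisory highlights.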
Each of these accounts was registered with a distinct identity, or “soul,” based on one of three bot archetypes: those that propagate political ideology favorable to the Russian government, those that amplify (“like”) messaging already shared by other bots, and those that perpetuate disinformation shared by both bot and non-bot accounts.
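One way to picture that taxonomy is as a persona record tagged with one of the three archetypes; the class and field names in this sketch are hypothetical, intended only to make the classification concrete:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Archetype(Enum):
    IDEOLOGUE = auto()    # propagates political ideology favorable to the Russian government
    AMPLIFIER = auto()    # "likes"/reshares messaging already posted by other bots
    PERPETUATOR = auto()  # recirculates disinformation from bot and non-bot accounts alike

@dataclass
class Soul:
    handle: str
    claimed_nationality: str
    archetype: Archetype
    bio: str

# Example: a fictitious persona posing as a U.S. citizen in the amplifier role.
persona = Soul("freedom_fan_1776", "US", Archetype.AMPLIFIER, "Patriot. Truth-seeker.")
```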
While the software package was only identified on X, further analysis revealed that the threat actors intended to extend its functionality to other social media platforms.
Additionally, the system circumvented X's safeguards for verifying users' authenticity by automatically copying one-time passcodes sent to registered email addresses and assigning proxy IP addresses to AI-generated personas based on their assumed location.
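Meliorator's source code has not been published, but the described flow maps onto ordinary mailbox automation: poll the registration inbox over IMAP, pattern-match the passcode out of the newest message, and route each persona's traffic through a proxy matching its claimed region. The sketch below assumes a six-digit code and uses hypothetical server and proxy addresses:

```python
import email
import email.message
import imaplib
import re

# Hypothetical infrastructure; the advisory does not disclose real hosts.
MAIL_SERVER = "imap.example.com"
OTP_PATTERN = re.compile(r"\b(\d{6})\b")  # assumes a 6-digit one-time passcode

# Route each persona through a proxy matching its claimed location.
PROXIES_BY_REGION = {
    "US": "http://proxy-us.example.com:8080",
    "PL": "http://proxy-pl.example.com:8080",
}

def _body_text(msg: email.message.Message) -> str:
    """Flatten a (possibly multipart) message into plain text."""
    if msg.is_multipart():
        return " ".join(_body_text(part) for part in msg.get_payload())
    payload = msg.get_payload(decode=True)
    return payload.decode(errors="ignore") if payload else ""

def fetch_latest_otp(user: str, password: str) -> str | None:
    """Poll the registration mailbox and return the newest unread passcode."""
    with imaplib.IMAP4_SSL(MAIL_SERVER) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in reversed(data[0].split()):  # newest messages come last
            _, msg_data = imap.fetch(num, "(RFC822)")
            match = OTP_PATTERN.search(_body_text(email.message_from_bytes(msg_data[0][1])))
            if match:
                return match.group(1)
    return None

def proxy_for(persona_region: str) -> str:
    """Pick the proxy that matches a persona's claimed location."""
    return PROXIES_BY_REGION[persona_region]
```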
“Bot persona accounts are an apparent attempt to avoid bans for violating terms of service and to blend into the social media environment to avoid being noticed as bots,” the agency said. “Like real accounts, these bots follow real accounts that reflect their political leanings and interests as stated in their bios.”
“Farming is a beloved pastime for millions of Russians,” RT reportedly told Bloomberg without directly refuting the allegations.
This marks the first time the United States has publicly accused a foreign government of using AI for foreign influence operations. No criminal charges have been made public in the case, but the investigation into the activity is ongoing.
The Doppelganger lives on
In recent months, Google, Meta and OpenAI have warned that Russian disinformation operations, including one orchestrated by a network known as Doppelganger, have repeatedly used their platforms to spread pro-Russian propaganda.
“The campaign remains active and the network and server infrastructure responsible for delivering the content is still functional,” Qurium and the EU DisinfoLab said in a new report published on Thursday.
“Surprisingly, Doppelganger does not operate from a secret data center in Vladivostok Fortress or some far-away military bat cave, but from a newly established Russian provider operating within Europe's largest data center. Doppelganger works in close coordination with cybercriminal operations and affiliate advertising networks.”
At the heart of the operation is a network of bulletproof hosting providers, including Aeza, Evil Empire, GIR and TNSECURITY, which also host command-and-control domains for various malware families, including Stealc, Amadey, Agent Tesla, Glupteba, Raccoon Stealer, RisePro, RedLine Stealer, RevengeRAT, Lumma, Meduza and Mystic.
Moreover, NewsGuard, which offers a range of tools to combat misinformation, recently found that a popular AI chatbot tended to repeat “fabricated stories from government sites posing as local news organizations in one-third of responses.”
Iran and China's influence operations
The Office of the Director of National Intelligence (ODNI) also said Iran was “increasingly aggressive in its foreign influence efforts, seeking to stoke discord and undermine confidence in our democratic institutions.”
The agency also noted that Iranian actors continue to hone their cyber and influence operations, including using social media platforms to issue threats and pose as activists online to incite pro-Gaza demonstrations in the United States.
Meanwhile, Google said it had blocked more than 10,000 instances of DRAGONBRIDGE (aka Spamouflage Dragon) activity on YouTube and Blogger in the first quarter of 2024. The spammy but persistent China-linked influence network pushed narratives portraying the United States in a negative light, along with content about Taiwan's elections and the Israel-Hamas war, targeting Chinese speakers.
By comparison, the tech giant blocked at least 50,000 such instances in 2022 and another 65,000 in 2023, bringing the total to more than 175,000 instances over the network's lifetime.
“Despite the volume of content DRAGONBRIDGE consistently produces and the scale of its operations, it has rarely received organic engagement from real audiences,” said Zak Butler, a researcher at Google's Threat Analysis Group (TAG). “When DRAGONBRIDGE content did receive engagement, it was almost always fake, coming from other DRAGONBRIDGE accounts rather than from real users.”