AI and network science reveal influence campaigns on X

AI News


In the era of generative AI and large language models (LLMs), large amounts of inauthentic content can be rapidly broadcast on social media platforms. As a result, malicious actors have become more sophisticated, hijacking hashtags, artificially amplifying misleading content, and resharing propaganda en masse.

These actions are often orchestrated by state-sponsored information operations (IOs) that seek to sway public opinion during major geopolitical events such as the US election or the COVID-19 pandemic.

The fight against these IOs has never been more important. Identifying influence campaigns with high-precision technology greatly reduces the misclassification of legitimate users as IO drivers, helping social media providers and regulators curb illicit activity without accidentally deactivating legitimate accounts in the process.

With this in mind, Luca Luceri, a researcher at the USC Information Sciences Institute (ISI), is leading a Defense Advanced Research Projects Agency (DARPA)-funded effort to identify and characterize influence campaigns on social media. His latest paper, “Uncovering the Web of Deception: Revealing Coordinated Activities to Expose Information Manipulation on Twitter,” was presented at The Web Conference on May 13, 2024.

“My team and I have been working on modeling and identifying IO drivers such as bots and trolls for the past five to 10 years,” Luceri said. “In this paper, we propose a set of unsupervised and supervised machine learning models that can detect organized influence campaigns from different countries on X (formerly Twitter).”

Fusion network of similar behaviors

Luceri and his team used a comprehensive dataset of 49 million tweets from verified campaigns originating from six countries: China, Cuba, Egypt, Iran, Russia, and Venezuela. They focused on five types of sharing behavior on X in which these campaigns engage.

These include co-retweets (sharing the same tweet), co-URLs (sharing the same link or URL), hashtag sequences (using the same sequence of hashtags within a tweet), fast retweets (quickly re-sharing content from the same user), and text similarity (tweets with similar text content).
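To illustrate, each of these behaviors can be encoded as a weighted similarity network between accounts. Below is a minimal sketch, not the authors' code: field names such as `user` and `retweeted_id` and the toy data are illustrative assumptions. It builds a co-retweet network, linking two accounts with a weight that counts how many distinct tweets they both retweeted:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical retweet records; real data would come from the X API.
tweets = [
    {"user": "acct_a", "retweeted_id": "t1"},
    {"user": "acct_b", "retweeted_id": "t1"},
    {"user": "acct_c", "retweeted_id": "t2"},
    {"user": "acct_a", "retweeted_id": "t2"},
    {"user": "acct_c", "retweeted_id": "t1"},
]

# Group accounts by the tweet they retweeted.
retweeters = defaultdict(set)
for t in tweets:
    retweeters[t["retweeted_id"]].add(t["user"])

# For every pair of accounts that retweeted the same tweet, increment
# the edge weight; the result is the co-retweet similarity network.
edge_weight = defaultdict(int)  # (account, account) -> co-retweet count
for accounts in retweeters.values():
    for pair in combinations(sorted(accounts), 2):
        edge_weight[pair] += 1
```

The other four behaviors would be built analogously, keying on shared URLs, hashtag sequences, retweet latency, or text similarity instead of retweeted tweet IDs.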

Previous research has focused on exploring similarities between individual users on X and building networks that map each type of behavior. However, Luceri and his team realized that these accounts often employ many strategies at the same time, meaning that monitoring a single behavioral trace is not enough.

“We found that the Cuban and Venezuelan campaigns made heavy use of co-retweets,” Luceri explained. “However, looking only at co-retweets without considering other behaviors would do a good job of identifying some campaigns, such as those originating from Cuba or Venezuela, but would not be enough for others, such as those from Russia, where co-retweets are less common.”

To capture a broader range of coordinated sharing behaviors, the researchers built an integrated similarity network called a fusion network. They then applied a machine learning algorithm that exploits the topological properties of the fusion network to classify the similarity of these accounts and predict their future participation in IOs.
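One simple way to realize such a fusion is to normalize each behavioral network's edge weights and average them, so that an account pair scores highly only if it is similar across several behaviors. This is a sketch under that assumption, not the paper's exact fusion rule, and the toy networks are hypothetical:

```python
# Three hypothetical behavioral similarity networks, each mapping an
# account pair to a raw similarity weight (e.g., co-retweet counts).
co_retweet  = {("a", "b"): 3, ("a", "c"): 1}
co_url      = {("a", "b"): 2, ("b", "c"): 4}
hashtag_seq = {("a", "c"): 5}

def normalize(net):
    """Scale a network's edge weights into [0, 1] by its maximum."""
    m = max(net.values())
    return {edge: w / m for edge, w in net.items()}

# Fuse: average each edge's normalized weight over all networks
# (edges absent from a network contribute 0 for that network).
networks = [co_retweet, co_url, hashtag_seq]
fused = {}
for net in map(normalize, networks):
    for edge, w in net.items():
        fused[edge] = fused.get(edge, 0.0) + w / len(networks)
```

Downstream classifiers can then be trained on topological features of this fused graph (for example, an account's weighted degree or centrality) rather than on any single behavior.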

Luceri and his team found that the method could be applied to campaigns around the world. Multiple X users within the same campaign showed significant collective similarities in their behavior, regardless of their origin.

“We believe our study represents a paradigm shift in research methodology and provides a new perspective on identifying influence campaigns and their drivers,” Luceri said.

Unlocking new opportunities

The unsupervised machine learning model leverages well-known but underutilized network features to detect influence campaigns with 42% higher accuracy than traditional approaches. Luceri sees this paper as a starting point that paves the way for further research.

“Models can be trained on the topological features of this similarity network and made to work in complex scenarios, such as when users from different countries interact with each other, or when there is limited information about a campaign, which is a more difficult setting,” Luceri said.

Luceri also presented another paper at The Web Conference, “Leveraging Large Language Models to Detect Influence Campaigns in Social Media,” which won the Best Paper Award at the International Workshop on Computational Methods for Online Discourse Analysis (BeyondFacts'24). This paper examines the possibility of using LLMs to recognize the signs of AI-driven influence campaigns, which is especially important in the current climate of AI-generated media.

“These coordinated activities have real-life consequences,” Luceri said. “They have the power to spread misinformation and conspiracy theories that can lead to protests and attacks on democracy, such as Russian troll interference in the 2016 U.S. election.”

Luceri and his team are committed to continuing to explore alternative strategies to identify influence campaigns and protect vulnerable users.
