How foreign operations are manipulating social media to influence your views

Foreign influence campaigns, or information operations, have been widespread in the run-up to the 2024 U.S. presidential election. Influence campaigns are large-scale efforts to shift public opinion, push false narratives or change behaviors among a target population. Russia, China, Iran, Israel and other nations have run these campaigns by exploiting social bots, influencers, media companies and generative AI.

At the Indiana University Observatory on Social Media, my colleagues and I study influence campaigns and design technical solutions – algorithms – to detect and counter them. State-of-the-art methods developed in our center use several indicators of this type of online activity, which researchers call inauthentic coordinated behavior. We identify clusters of social media accounts that post in a synchronized fashion, amplify the same groups of users, share identical sets of links, images or hashtags, or perform suspiciously similar sequences of actions.
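
To make one of these indicators concrete, here is a minimal sketch in Python – a toy illustration, not the observatory's actual detection code – that flags pairs of accounts whose hashtag sets are nearly identical. The account names, sample data and similarity threshold are all invented for the example.

```python
# Toy sketch of one coordination signal: pairs of accounts that share
# suspiciously similar sets of hashtags. Real systems combine many such
# signals (timing, shared links, action sequences) before flagging anyone.
from itertools import combinations

# Hypothetical accounts mapped to the hashtags they posted.
hashtags_by_account = {
    "account_a": {"#vote", "#crisis", "#breaking", "#truth"},
    "account_b": {"#vote", "#crisis", "#breaking", "#truth"},
    "account_c": {"#cats", "#vote"},
}

def jaccard(a: set, b: set) -> float:
    """Similarity of two hashtag sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

THRESHOLD = 0.9  # illustrative cutoff, not a published parameter

for (u, tags_u), (v, tags_v) in combinations(hashtags_by_account.items(), 2):
    score = jaccard(tags_u, tags_v)
    if score >= THRESHOLD:
        print(f"possible coordination: {u} <-> {v} (similarity {score:.2f})")
```

In practice, pairwise similarities like these feed into a clustering step, so that whole groups of accounts acting in lockstep surface together rather than one pair at a time.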

We have uncovered many examples of coordinated inauthentic behavior. For example, we found accounts that flood the network with tens or hundreds of thousands of posts in a single day. The same campaign can post a message with one account and then have other accounts that its organizers also control “like” and “unlike” it hundreds of times in a short time span. Once the campaign achieves its objective, all these messages can be deleted to evade detection. With these tricks, foreign governments and their agents can manipulate the social media algorithms that determine what is trending, what counts as engaging and, ultimately, what users see in their feeds.
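
The flooding behavior described above is the easiest of these signals to illustrate. The following toy sketch – again an illustration, not a production detector – counts posts per account per day and flags volumes no human could plausibly produce; the threshold is an assumption chosen for the example.

```python
# Toy sketch of a volume-based red flag: an account posting tens of
# thousands of times in one day is almost certainly automated.
from collections import Counter

# Hypothetical post log as (account, day) pairs.
post_log = [("suspect_1", "2024-10-01")] * 50_000 + \
           [("ordinary_user", "2024-10-01")] * 12

daily_counts = Counter(post_log)
HUMAN_PLAUSIBLE_MAX = 1_000  # illustrative threshold, not a real cutoff

for (account, day), n in daily_counts.items():
    if n > HUMAN_PLAUSIBLE_MAX:
        print(f"{account}: {n} posts on {day} - likely automated flooding")
```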

Adversaries such as Russia, China and Iran aren’t the only foreign governments manipulating social media to influence U.S. politics.

Generative AI

One increasingly common technique is creating and managing armies of fake accounts with generative artificial intelligence. We analyzed 1,420 fake Twitter – now X – accounts that used AI-generated faces for their profile pictures. These accounts were used to spread scams, disseminate spam and amplify coordinated messages, among other activities.
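
One documented cue for catching such images is that faces from StyleGAN-type generators tend to place the eyes at nearly fixed pixel coordinates. The sketch below assumes eye centers have already been extracted by some facial-landmark detector, which is outside its scope; the reference coordinates and tolerance are illustrative values, not parameters from any particular study.

```python
# Hedged sketch: flag a profile picture whose eye centers sit almost
# exactly on the template typical of StyleGAN-generated faces.
import math

REF_LEFT_EYE = (384, 478)   # illustrative template coordinates
REF_RIGHT_EYE = (640, 478)  # (assuming a 1024x1024 image)
TOLERANCE_PX = 15           # illustrative tolerance

def looks_gan_generated(left_eye, right_eye) -> bool:
    """Eye centers (x, y) would come from any facial-landmark detector."""
    return (math.dist(left_eye, REF_LEFT_EYE) <= TOLERANCE_PX
            and math.dist(right_eye, REF_RIGHT_EYE) <= TOLERANCE_PX)

# Toy usage: an on-template face versus an ordinary photo.
print(looks_gan_generated((386, 480), (638, 476)))  # True  -> suspicious
print(looks_gan_generated((300, 520), (590, 510)))  # False
```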

We estimate that at least 10,000 accounts like these were active daily on the platform, and that was before X CEO Elon Musk dramatically cut the platform’s trust and safety teams. We also identified a network of 1,140 bots that used ChatGPT to generate humanlike content to promote fake news websites and cryptocurrency scams.

In addition to posting machine-generated content, harmful comments and stolen images, these bots engaged with each other and with humans through replies and retweets. Current state-of-the-art large language model content detectors are unable to distinguish between AI-enabled social bots and human accounts in the wild.

Model misbehavior

The consequences of such operations are difficult to evaluate due to the challenges posed by collecting data and carrying out ethical experiments that would influence online communities.
