In mid-2023, around the time Elon Musk rebranded Twitter as X but before he discontinued free academic access to the platform’s data, my colleagues and I looked for signs of social bot accounts posting content generated by artificial intelligence. Social bots are social media accounts controlled by AI software that produces content and interacts with people. We uncovered a network of over a thousand bots involved in crypto scams. We dubbed this the “fox8” botnet after one of the fake news websites it was designed to amplify.
We were able to identify these accounts because the coders were a bit sloppy: They did not catch occasional posts with self-revealing text generated by ChatGPT, such as when the AI model refused to comply with prompts that violated its terms. The most common self-revealing response was “I’m sorry, but I cannot comply with this request as it violates OpenAI’s Content Policy on generating harmful or inappropriate content. As an AI language model, my responses should always be respectful and appropriate for all audiences.”
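The kind of screening the fox8 operators evidently skipped can be approximated with a simple keyword filter over posts. The sketch below is a minimal, hypothetical Python example; the phrase list and the post format are illustrative assumptions, not the actual detection pipeline from our study.

```python
# Minimal sketch: flag posts containing telltale ChatGPT refusal phrases.
# The phrase list and post format are illustrative assumptions, not the
# actual pipeline used in the fox8 study.

SELF_REVEALING_PHRASES = [
    "as an ai language model",
    "cannot comply with this request",
    "content policy on generating harmful",
]

def is_self_revealing(text: str) -> bool:
    """Return True if the post text contains a known refusal phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in SELF_REVEALING_PHRASES)

# Example: screen a small batch of posts (each a dict with 'user' and 'text').
posts = [
    {"user": "@example_bot", "text": "I'm sorry, but I cannot comply with this request ..."},
    {"user": "@example_human", "text": "Bitcoin is up 5% today."},
]

suspects = {p["user"] for p in posts if is_self_revealing(p["text"])}
print(suspects)  # accounts whose posts leaked an AI refusal message
```

A filter like this only catches careless operators; as noted below, a more careful coder can strip such phrases before posting.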
We believe fox8 was only the tip of the iceberg because better coders can filter out self-revealing posts or use open-source AI models fine-tuned to remove ethical guardrails.
The fox8 bots created fake engagement with each other and with human accounts through realistic back-and-forth discussions and retweets. In this way, they tricked X’s recommendation algorithm into amplifying exposure to their posts, accumulating substantial followings and influence.
Such a level of coordination among inauthentic online agents was unprecedented – AI models had been weaponized to give rise to a new generation of social agents, much more sophisticated than earlier social bots. Machine-learning tools to detect social bots, like our own Botometer, were unable to discriminate between these AI agents and human accounts in the wild. Even AI models trained to detect AI-generated content failed.
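For context, Botometer exposes a public API that researchers can query for a bot-likelihood score on a given account. A rough sketch of such a check with the botometer Python package is below; the credentials are placeholders, the handle is hypothetical, and whether the query still works depends on current X API access. Scores like these could not reliably separate fox8-style AI agents from humans.

```python
# Rough sketch of querying Botometer for a bot score.
# Credentials and the account handle are placeholders; availability depends
# on current Twitter/X API access.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "...",
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

result = bom.check_account("@suspected_bot")  # hypothetical handle
print(result["cap"])  # complete automation probability scores
```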
Bots in the era of generative AI
Fast-forward a few years: Today, people and organizations with malicious intent have access to more powerful AI language models – including open-source ones – while social media platforms have relaxed or eliminated moderation efforts. They even provide financial incentives for engaging content, irrespective of whether it’s real or AI-generated. This is a perfect storm for foreign and domestic influence operations targeting democratic elections. For example, an AI-controlled bot swarm could create the false impression of widespread, bipartisan opposition to a political candidate.
The current U.S. administration has dismantled federal programs that combat such hostile campaigns and defunded research efforts to study them. Researchers no longer have access to the platform data that would make it possible to detect and monitor these kinds of online manipulation.
I am part of an interdisciplinary team of computer science, AI, cybersecurity, psychology, social science, journalism…



