In a world where social media influences opinions and shapes narratives, the rise of artificial intelligence (AI) is both a boon and a challenge.
A new study titled “Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence,” published in the British Journal of Management, examines AI-powered social bots, revealing their capacity to spread disinformation and the urgent need for organizations to detect and mitigate their harmful effects.
Led by a team of researchers from the U.K. and Canada, the study applies text mining and machine learning techniques to dissect the behavior of social bots on X (formerly Twitter), one of the most prominent social media platforms. By analyzing a dataset of 30,000 English-language tweets, the researchers uncover the intricate web of interactions between human and non-human actors, shedding light on the propagation of disinformation in the digital sphere.
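To give a flavor of how text-based bot analysis can work, here is a minimal, purely illustrative sketch. It is not the authors' method: the features (duplicate-text ratio, link ratio) and the weights are hypothetical choices, chosen only to show the kind of tweet-level signals such analyses draw on.

```python
# Illustrative sketch only -- NOT the method used in the study.
# Scores an account's tweets for bot-like behavior using two toy
# features: how often the text repeats verbatim, and how often it
# contains a link. Thresholds and weights are arbitrary.
from collections import Counter

def bot_score(tweets):
    """Return a rough 0-1 bot-likeness score for one account's tweets."""
    if not tweets:
        return 0.0
    texts = [t.lower().strip() for t in tweets]
    # Bots often repost identical text: 0 = all unique, ~1 = all duplicates.
    duplicate_ratio = 1 - len(Counter(texts)) / len(texts)
    # Bots frequently push links.
    url_ratio = sum("http" in t for t in texts) / len(texts)
    # Arbitrary weighted combination, capped at 1.0.
    return min(1.0, 0.6 * duplicate_ratio + 0.4 * url_ratio)

human = ["enjoying the sun today", "great match last night!", "coffee time"]
bot = ["buy now http://x.co", "buy now http://x.co", "buy now http://x.co"]
print(bot_score(human))  # low score: varied text, no links
print(bot_score(bot))    # high score: repeated text, all links
```

Real detection systems replace such hand-tuned rules with trained classifiers over many more features (posting cadence, network structure, language models), but the underlying idea of scoring behavioral signals is the same.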
“Social bots are not just benign entities; they have the power to influence public opinion and even manipulate markets,” says Dr. Mina Tajvidi, Co-Director of the MSc Marketing Programme and Lecturer in Marketing at the School of Business and Management, Queen Mary University of London. “Our research underscores the importance of understanding their intentions and detecting their presence early on to prevent the spread of false information.”
The study draws on actor-network theory (ANT), which provides a theoretical framework for examining the dynamics between humans, bots, and the digital landscape. By integrating ANT with deep learning models, the researchers reveal the symbiotic relationship between actors and the language they use, offering new insights into the spread of disinformation.
“Our findings highlight the need for enhanced detection techniques and greater awareness of the role social bots play in shaping online discourse,” adds Dr. Mina Tajvidi. “While our research focuses on X (formerly Twitter), the implications extend to all social media platforms, where the influence of AI is increasingly prevalent.”
However, the study also acknowledges its limitations, including the lack of metadata and the focus on English-language tweets. The researchers emphasize the need for future studies to explore additional languages and communication modalities to provide a comprehensive understanding of social bot behavior.
As the digital landscape continues to evolve, the research serves as a clarion call for vigilance and proactive measures to combat the spread of disinformation. By harnessing the power of AI for good and equipping organizations with the tools to detect and mitigate harmful social bots, we can pave the way for a more informed and resilient society.
More information: Nick Hajli et al, "Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence," British Journal of Management (2021). DOI: 10.1111/1467-8551.12554

Provided by Queen Mary, University of London

Citation: Unlocking the secrets of social bots: Research sheds light on AI's role in spreading disinformation (2024, February 29)