Zohran Mamdani as a creepy trick-or-treater, Gavin Newsom body-slamming Donald Trump, and Hakeem Jeffries in a sombrero. This is not the setup to an elaborate joke. These are all examples of recent AI-generated political videos. New, easy-to-use tools, and politicians' willingness to embrace them, mean that these fake videos are quickly becoming commonplace in American politics.
Perhaps the most interesting thing about many of the videos is how clearly fake they are. Rather than trying to deceive the viewer into thinking a depicted event actually happened, the videos serve a different purpose. President Trump didn’t post a video of himself wearing a crown in a fighter jet dumping feces on a group of protesters because he wanted people to believe that the flight actually happened. He likely did it to express his feelings about the protest and to create an in-joke with his followers.
Fears about the political implications of AI-generated videos have been around since the term deepfakes was coined in 2017. Steady improvements in the technology mean that the difficulty of distinguishing real videos from fake ones could become a significant threat. But today's use of AI imagery is largely about making memes and making money – in other words, typical social media content.
Getting a rise out of people
Internet platforms use algorithms designed to keep people engaged, and that typically means promoting content that stirs emotions. AI-generated political videos often provoke an emotional response – amusement or outrage.
People are more likely to share information when it is emotionally arousing. For example, people are more likely to pass along urban legends that elicit feelings of disgust, and news articles that are emotionally charged are more likely to make the New York Times list of most emailed articles. Similar patterns occur online, where emotional content is much more likely to go viral than nonemotional content.
In addition, strong emotions can interfere with people’s ability to detect false information. People are worse at distinguishing between true and false political news headlines when they are experiencing stronger emotions – for instance, enthusiasm, excitement or fear. Thus, emotionally appealing AI-generated videos are both more likely to spread and more likely to impair people’s ability to judge whether they are real or fake.
Online politics
Creating and sharing AI videos is also a powerful way for people to signal their allegiances and express their political identities. “I am a Trump supporter, so I post AI videos of ICE detainees crying to own the libs,” or “I am a Democrat, so I share Governor Newsom’s AI video of JD Vance talking about couches to show that I’m in on the joke.”
What’s new in recent months is that campaigns and politicians themselves, not just their supporters, are using AI-created videos. An analysis from The New York Times showed that Trump commonly uses AI imagery to “attack enemies and rouse…


