‘Inoculation’ helps people spot political deepfakes, study finds

Both text-based information and interactive games that inform people about political deepfakes improve their ability to spot AI-generated video and audio that falsely depicts politicians, according to a study my colleagues and I conducted.

Although researchers have focused primarily on advancing technologies for detecting deepfakes, there is also a need for approaches that address the potential audiences for political deepfakes. Deepfakes are becoming increasingly difficult to identify, verify and combat as artificial intelligence technology improves.

Is it possible to inoculate the public against deepfakes, raising their awareness before they encounter them? My recent research with fellow media studies researchers Sang Jung Kim and Alex Scott at the Visual Media Lab at the University of Iowa found that inoculation messages can help people recognize deepfakes and even make them more willing to debunk such content.

Inoculation theory proposes that psychological inoculation – analogous to getting a medical vaccination – can immunize people against persuasive attacks. The idea is that by explaining to people how deepfakes work, they become primed to recognize them when they encounter them.

In our experiment, we exposed one-third of participants to passive inoculation: traditional text-based warning messages about the threat and the characteristics of deepfakes. We exposed another third to active inoculation: an interactive game that challenged participants to identify deepfakes. The remaining third were given no inoculation.

Participants were then randomly shown either a deepfake video featuring Joe Biden making pro-abortion rights statements or a deepfake video featuring Donald Trump making anti-abortion rights statements. We found that both types of inoculation were effective in reducing the credibility participants gave to the deepfakes, while also increasing people’s awareness and intention to learn more about them.

Why it matters

Deepfakes are a serious threat to democracy because they use AI to create very realistic fake audio and video. These deepfakes can make politicians appear to say things they never actually said, which can damage public trust and cause people to believe false information. For example, some voters in New Hampshire received a phone call that sounded like Joe Biden, telling them not to vote in the state’s primary election.

This deepfake video of President Donald Trump, from a dataset of deepfake videos collected by the MIT Media Lab, was used in this study about helping people spot such AI-generated fakes.

Because AI technology is becoming more common, it is especially important to find ways to reduce the harmful effects of deepfakes. Recent research shows that labeling deepfakes with fact-checking statements is often not very effective, especially in political contexts, because people tend to accept or reject fact-checks based on their existing political beliefs.
