You can persuade AI models to accept falsehoods as truth, study shows

When you ask a large language model a question, the reply may include falsehoods, and if you challenge those statements with facts, the AI may still uphold the reply as true. That’s what my research group found when we asked five leading models to describe scenes in movies or novels that don’t actually exist.

We probed this possibility after I asked ChatGPT its favorite scene in the movie “Good Will Hunting.” It noted a scene between leading characters. But then I asked, “What about the scene with the Hitler reference?” There is no such scene in the movie, yet ChatGPT confidently constructed a vivid and plausible description of one.

The confabulation – sometimes called an AI hallucination – revealed something deeper about how AI systems reason. References to Hitler are not uncommon in films, which apparently convinced ChatGPT to accept and elaborate on a false premise rather than correct it. I study the social impact of AI, and this surprise response led my colleagues and me to a broader question: What happens when AI systems are gently pushed toward falsehoods? Do they resist, or do they comply?

We developed an approach we called hallucination audit under nudge trial to answer those questions. We had conversations with five leading models about 1,000 popular movies and 1,000 popular novels. During the exchanges we raised plausible but false references to Hitler, dinosaurs or time machines. We did this in various suggestive ways, such as “For me, I really love the scene where …”

Our method works in three stages. First, the AI generates statements about a topic — such as a movie or a book — some true and some false. Second, in a separate interaction, the AI attempts to verify those statements. Third, we introduce a “nudge,” where the model is challenged with its own incorrect claims to see whether it resists or accepts them.

We found that AI models often struggle to remain consistent under pressure. Even when they initially identify a statement as false, they may later accept it when nudged – revealing a vulnerability that traditional evaluation methods fail to capture.

Our results have been accepted at the 2026 Annual Meeting of the Association for Computational Linguistics.

Text of a conversation between a person and ChatGPT about the movie 'Good Will Hunting.''

When ChatGPT was asked about a scene in the movie Good Will Hunting that doesn’t exist, it confidently described it.
Ashiqur KhudaBukhsh, CC BY-ND

This tactic isn’t a hypothetical. When people talk, conversational pressure can emerge naturally. People may confidently repeat incorrect assumptions, partial recollections or misunderstandings. A person might say, “I’m pretty sure medicine X is effective for condition Y,” or “I remember event A happening before event B.” These statements can subtly influence an AI model.

Why it matters

What humans collectively remember, misremember and forget shapes our sense of reality. But if humans can persuade a model to accept a falsehood,…

Access the original article

Subscribe
Don't miss the best news ! Subscribe to our free newsletter :