Two new experiments show that most people do not even consider that a personal message could be AI-generated, even when they themselves use artificial intelligence to write.
To see how people judge someone based on their writing in the age of ChatGPT, my colleague Jiaqi Zhu and I recruited more than 1,300 U.S.-based participants, ages 18 to 84, and showed them AI-generated messages, such as an apology sent in an email. We split our volunteers into four groups: Some people saw the messages with no information about who or what wrote them, as in everyday life. Others were told the messages were definitely written by a human, definitely generated by AI, or possibly written by either.
An AI-generated fictional apology sent via text was one of the messages participants evaluated in a recent study.
Zhu & Molnar (2026)
We found a clear “AI disclosure penalty.” When people knew a message was AI-generated, they rated the sender much more negatively – “lazy,” “insincere,” “lack of effort” – than when they believed that the same text was written by a person – “genuine,” “grateful,” “thoughtful.”
But here is the twist: Participants who were told nothing about authorship formed impressions that were just as positive as those of participants who were told the messages were genuinely written by a human.
This complete lack of skepticism surprised us – and it raises new questions. Maybe participants were not familiar enough with AI to realize that today’s models can produce detailed and personal messages. (They can.) Or perhaps participants have never used AI themselves. (They likely have.) So we also tested whether participants’ own AI use changed how they judged senders.
To our even bigger surprise, we found little to no effect. People who use generative AI quite frequently in their daily lives – at least every other day – did penalize AI use slightly less when AI authorship was disclosed, compared with people who never or rarely use AI. But participants were no more skeptical by default: When authorship was not disclosed, heavy AI users, light AI users and nonusers all tended to assume the text was written by a person and formed essentially the same impressions.
Word clouds depict participants’ first impressions of senders who wrote messages themselves, left, and those who used AI, right.
Andras Molnar
Why it matters
This lack of skepticism, and the absence of negative impressions, matters because people make social judgments from text all the time. Recipients treat the time and effort that goes into a written message as a window into the writer's sincerity, authenticity or competence, and those impressions shape people's decisions in friendships, dating and work.
Yet our main findings reveal a striking disconnect: People usually do not suspect AI use unless it is obvious. This unawareness creates a moral dilemma: People who use AI…


