An experimental study suggests that people are less likely to behave in a trusting and cooperative manner when interacting with AI than when interacting with other humans. Scientists use experimental games to probe how humans make social decisions requiring both rational and moral thinking.
In a study published in PNAS Nexus, Fabian Dvorak and colleagues compared how humans behave in classic two-player games when playing with another human versus with a large language model (LLM) acting on behalf of another human. Participants played the Ultimatum Game, the Binary Trust Game, the Prisoner’s Dilemma Game, the Stag Hunt Game, and the Coordination Game. The games were played online by 3,552 human participants and the LLM ChatGPT.
Overall, players exhibited less fairness, trust, trustworthiness, cooperation, and coordination when they knew they were playing with an LLM, even though any rewards would go to the real person the AI was playing for. Prior experience with ChatGPT did not mitigate the adverse reactions.
Players who could choose whether to delegate their decision-making to the AI often did, especially when the other player would not know about the delegation. When players were unsure whether they were playing with a human or an LLM, they hewed more closely to the behavior they displayed toward human players.
According to the authors, the results reflect a dislike of socially interacting with an AI, an example of the broader phenomenon of algorithm aversion.
More information: Fabian Dvorak et al, Adverse reactions to the use of large language models in social interactions, PNAS Nexus (2025). DOI: 10.1093/pnasnexus/pgaf112