AIs have ‘personalities’ – here’s how they affect you more deeply than you may realize

Many people now interact with large language models, and most would say the models have different “personalities.” Some models come across as calm and useful. Others feel eager, flattering or strangely cold. You can ask two models the same question and walk away with two very different impressions, even when the factual content they return is similar.

Artificial intelligence models do not have personalities in the human sense; they do not have childhoods, inner motives or self-awareness. But they do display patterns of behavior that people read as personality: supportive or dismissive, playful or formal, bold or cautious.

People have long related to machines in human ways. We thank voice assistants, and we get annoyed at GPS systems. But large language models introduce something more sustained: They can maintain a recognizable interaction style across conversations. As a researcher in human-AI collaboration, I study how people experience and respond to AI. Because these systems can sound coherent, emotionally responsive and tailored to the user, they create a much stronger impression of personality.

Where does AI personality come from?

What people experience as personality emerges from the way AI models are built, tuned and deployed. A useful way to think about this is to consider two facets of a model: designed personality and perceived personality.

Designed personality is what developers build into a system through training choices, instructions and safety settings. Anthropic, for example, gives Claude a set of principles, called Claude’s Constitution, that steer it toward careful, measured responses. xAI instructs Grok to be irreverent and minimally restrictive. OpenAI tunes ChatGPT to be broadly helpful and agreeable.

Beneath those explicit instructions, personality is also shaped by reinforcement learning from human feedback, a process in which human raters reward desired qualities such as warmth, directness and caution, and penalize unwanted behaviors. The raters at one company are shaping a fundamentally different character from the raters at another.

Perceived personality is what users actually experience. An AI designed to seem helpful may come across as overly flattering. A model intended to be neutral may feel cold. Designed personality and perceived personality do not always match, and the absence of a designed persona is not the absence of a perceived personality. It just means the personality arises with use.

This dynamic is especially evident in companion platforms, where the goal is to create emotional connection. In a standard chatbot, warmth sits in the background – a customer-service bot might say, “I understand your frustration,” before issuing a refund. In a companion system such as Replika or Character.ai, that same warmth is a product feature.

This becomes more serious in romantic settings, where a persona optimized for reassurance may encourage dependency. Because AI personas…
