Overcoming our psychological barriers to embracing AI

As AI transforms our lives, and is predicted to do so in increasingly profound ways, feelings about its adoption remain mixed around the globe.

But to reap the full benefits of AI, it is vitally important to understand the psychological obstacles that stand in the way of adopting it.

Led by Harvard University’s Julian De Freitas, our research team explored the ambivalent attitudes many people hold towards the vast array of AI tools, including robots, large language models like ChatGPT, self-driving cars, speech recognition programs, virtual assistants and recommendation systems.

Our study, now published in Nature Human Behaviour, offers a compelling analysis of these obstacles, an overview of the best ways of addressing them and an evaluation of the risks and benefits involved.

Below, we identify five main sources of resistance to accepting and adopting these new technologies:

Barrier #1: People see AI as opaque

Most of us have little or no idea how AI algorithms work. AI is a “black box” in which mysterious things happen, and because we see it as mysterious, we tend to distrust and fear it. These fears of the unknown are understandable, but they have a significant downside: they can lead us to rely on human intelligence even when AI tools are superior.

Overcoming this source of resistance is sometimes as simple as providing clear explanations of how the AI works. These explanations need not be highly technical or elaborate, but nor should they be too simple.

Research indicates that overly simple explanations of how AI tools operate can make people less likely to follow their recommendations. The AI must be seen as sufficiently complex to achieve its task.

Barrier #2: People see AI as unfeeling

AI tools are embodied in cold circuits rather than warm bodies. Not surprisingly, people tend to believe they are less capable than humans at tasks that seem to require emotion, like writing music or giving dating advice.

That belief is frequently wrong. AI tools can already perform an impressive range of such tasks, including detecting emotion from facial expressions, writing poetry, creating paintings that can’t be distinguished from those made by humans, and predicting the funniness of jokes.

One way to overcome this resistance to AI tools is to anthropomorphize them by giving them human-like features like faces, names and genders. Another way is to re-describe emotional tasks in more objective terms, for example, describing AI-provided dating advice as based on quantified personality test results.

However, anthropomorphizing AI may be counterproductive in potentially embarrassing contexts.

Research suggests that in contexts like seeking medication for a sexually transmitted disease, people prefer to interact with an AI tool rather than a human because it is seen as less judgmental.

Barrier #3: People see AI as rigid

AI tools are often seen as inflexible, incapable of learning and unable to tailor themselves to our unique characteristics. The view that AI is mechanical, fixed and standardized (probably based on earlier generations of non-adaptive AI tools) reduces trust in it.

However, most contemporary AI tools can learn. Indeed, many are remarkably good at finding patterns, adapting to changing circumstances and individualizing their responses. Just think of the uncanny ability of audio streaming services to quickly learn our tastes and recommend new artists.

Sometimes people’s resistance to AI can be reduced simply by demonstrating its capacity to adapt and learn. Merely using the expression “machine learning” rather than “algorithm” can increase the uptake of AI.

Barrier #4: People see AI as too autonomous

AI tools sometimes threaten our deep-seated need for control and safety because they operate independently of us, a threat amplified by movies like “I, Robot,” in which evil AI entities take over the world.

Whatever the credibility of such nightmare scenarios, this barrier to adopting beneficial AI tools can be overcome by ensuring that people retain some control of them.

The tools can also be designed in ways that reduce their real or perceived autonomy, such as making them move in predictable ways and ensuring they enable and supplement, rather than replace, human expertise (“human-in-the-loop” systems).

On the flip side, granting people too much control over AI systems can reduce the quality of the resulting decisions.

Barrier #5: People see AI as non-human

One reason we may be averse to AI tools is precisely because they do not belong to our species, Homo sapiens.

A version of the speciesism that leads us to devalue non-human animals may undermine our trust in, and willingness to engage with, AI. That resistance occurs even if the AI tool has a humanoid appearance.

This resistance is difficult to overcome; after all, AI tools are not human. However, many people spontaneously ascribe human-like consciousness to AI tools, and encouraging that perception may foster more positive engagement with AI.

There is some evidence that this may be easier for people from cultures that believe inanimate objects can have a spirit or soul.

In our region, attitudes to AI are much more positive in Asian nations than in Australia. Individuals and nations that fail to adopt beneficial AI tools risk being left behind.

Understanding the psychological roots of resistance to embracing AI, and addressing them effectively, needs to be a high priority.

More information:
Julian De Freitas et al., "Psychological factors underlying attitudes toward AI tools," Nature Human Behaviour (2023). DOI: 10.1038/s41562-023-01734-2

Provided by University of Melbourne

