Workers fear being watched – and misunderstood

Emotion artificial intelligence uses biological signals such as vocal tone and facial expressions, along with data from wearable devices, text and patterns in how people use their computers, promising to detect and predict how someone is feeling. It is used in contexts both mundane, like entertainment, and high stakes, like the workplace, hiring and health care.

A wide range of industries already uses emotion AI, including call centers, finance, banking, nursing and caregiving. Over 50% of large employers in the U.S. use emotion AI to try to infer employees’ internal states, a practice that grew during the COVID-19 pandemic. For example, call centers monitor what their operators say and their tone of voice.

Scholars have raised concerns about emotion AI’s scientific validity and its reliance on contested theories about emotion. They have also highlighted emotion AI’s potential for invading privacy and exhibiting racial, gender and disability bias.

Some employers use the technology as though it were flawless, while scholars variously seek to reduce its bias and improve its validity, to discredit it altogether or to ban emotion AI, at least until more is known about its implications.

I study the social implications of technology. I believe that it is crucial to examine emotion AI’s implications for people subjected to it, such as workers – especially those marginalized by their race, gender or disability status.


Workers’ concerns

To understand where emotion AI use in the workplace is headed, my colleague Karen Boyd and I set out to examine inventors’ conceptions of the technology by analyzing patent applications that proposed emotion AI for the workplace. Benefits claimed by the applicants included assessing and supporting employee well-being, ensuring workplace safety, increasing productivity and aiding in decision-making, such as promoting employees, firing them and assigning tasks.

We wondered what workers think about these technologies. Would they also perceive these benefits? For example, would workers find it beneficial for employers to provide well-being support to them?

My collaborators Shanley Corvite, Kat Roemmich, Tillie Ilana Rosenberg and I conducted a survey that was partly representative of the U.S. population and partly oversampled people of color, trans and nonbinary people, and people living with mental illness. These groups may be more likely to experience harm from emotion AI. Our study had 289 participants from the representative sample and 106 from the oversample. We found that 32% of respondents reported experiencing or expecting no benefit from emotion AI use in their workplace, whether current or anticipated.

While some workers noted potential benefits of emotion AI use in the workplace like increased well-being support and…
