Online images may be turning back the clock on gender bias

A picture is worth a thousand words, as the saying goes, and research has shown that the human brain does indeed retain information from images better than from text. These days, we are taking in more visual content than ever as we peruse picture-packed news sites and social media platforms.

And much of that visual content, according to new Berkeley Haas research published in the journal Nature, is reinforcing powerful gender stereotypes.

Through a series of experiments, observations, and analyses aided by large language models, professors Douglas Guilbeault and Solène Delecourt found that female and male gender associations are more extreme in Google Images than in text from Google News. What’s more, while the news text skews slightly toward men over women, that male skew is more than four times stronger in images.

“Most of the previous research about bias on the internet has been focused on text, but we now have Google Images, TikTok, YouTube, Instagram—all kinds of content based on modalities besides text,” says Delecourt. “Our research suggests that the extent of bias online is much more widespread than previously shown.”

Not only is online gender bias more prevalent in images than in text, the study revealed, but such bias is more psychologically potent in visual form. Strikingly, in one experiment, study participants who looked at gender-biased images—as opposed to those reading gender-biased text—demonstrated significantly stronger biases even three days later.

As online worlds grow more and more visual, it’s important to understand the outsized potency of images, says Guilbeault, the lead author of the paper.

“We realized that this has implications for stereotypes—and no one had demonstrated that connection before,” Guilbeault says. “Images are a particularly sticky way for stereotypes to be communicated.”

The extent of bias—and its effects

To zero in on gender bias in online images, Guilbeault and Delecourt teamed up with co-authors Tasker Hull from Psiphon, Inc., a software company that develops censorship-circumvention tools; doctoral researcher Bhargav Srinivasa Desikan of Switzerland’s École Polytechnique Fédérale de Lausanne (now at IPPR in London); Mark Chu from Columbia University; and Ethan Nadler from the University of Southern California. They designed a novel series of techniques to compare bias in images versus text and to investigate its psychological impact in both media.

First, the researchers pulled 3,495 social categories—which included occupations like “doctor” and “carpenter” as well as social roles like “friend” and “neighbor”—from WordNet, a large lexical database of related words and concepts.
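The article doesn’t detail the extraction pipeline, but a minimal sketch of the general idea, using Python’s NLTK interface to WordNet and assuming the social categories are gathered as hyponyms of the person.n.01 synset (an assumption for illustration, not the authors’ documented method), might look like this:

```python
# Minimal sketch (not the study's code): enumerate person-denoting
# categories from WordNet via NLTK. Occupations ("doctor") and social
# roles ("neighbor") both sit below person.n.01 in the hyponym tree.
import nltk

nltk.download("wordnet", quiet=True)  # fetch WordNet data if missing
from nltk.corpus import wordnet as wn


def social_categories(root="person.n.01"):
    """Collect single-word lemma names of all hyponyms under a root synset."""
    categories = set()
    # closure() walks the hyponym relation transitively from the root
    for synset in wn.synset(root).closure(lambda s: s.hyponyms()):
        for lemma in synset.lemma_names():
            if "_" not in lemma:  # skip multiword entries like "bus_driver"
                categories.add(lemma.lower())
    return sorted(categories)


cats = social_categories()
print(len(cats), cats[:5])
```

A raw sweep like this returns far more terms than the study’s 3,495 categories, so reproducing the final list would require filtering and curation steps not described in this article.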

To calculate the gender balance within each category of images, the researchers