Subtle labels that reveal how online recommendation systems choose their selections, such as book and movie choices, may influence whether people trust those systems, according to a team of researchers. The labels may also shape whom people blame when those recommendations go awry.
In a study, the researchers investigated whether people trust three of the most common filters for making movie recommendations: content-based, collaborative and demographic filtering. Collaborative filtering, which was the most trusted of the three, provides recommendations based on choices made by other users with similar tastes. Content-based filtering offers recommendations that are similar to content the user previously selected, whereas demographic filtering presents choices based on demographic information the user provides, such as age and gender.
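For readers unfamiliar with how these three filters differ in practice, here is a minimal, self-contained sketch of each approach. The movie titles, ratings, genres and scoring logic are hypothetical simplifications for illustration only, not the systems the study participants saw.

```python
import numpy as np

# Hypothetical user-item rating matrix: rows are users, columns are movies.
# A zero means the user has not rated that movie.
ratings = np.array([
    [5, 4, 0, 1],   # user 0
    [4, 5, 1, 0],   # user 1 (tastes similar to user 0)
    [1, 0, 5, 4],   # user 2
])
movies = ["Drama A", "Drama B", "Action A", "Action B"]
genres = ["drama", "drama", "action", "action"]
popular_by_age = {"18-34": "Action A", "35+": "Drama A"}  # made-up popularity data

def collaborative(user, ratings):
    """Recommend the unseen movie rated highest by the most similar other user."""
    sims = [np.dot(ratings[user], r) /
            (np.linalg.norm(ratings[user]) * np.linalg.norm(r) or 1)
            for r in ratings]                # cosine similarity to every user
    sims[user] = -1                          # exclude the user themselves
    neighbor = int(np.argmax(sims))          # most similar other user
    unseen = np.where(ratings[user] == 0)[0]
    return movies[unseen[np.argmax(ratings[neighbor][unseen])]]

def content_based(user, ratings, genres):
    """Recommend an unseen movie in the same genre as the user's top-rated movie
    (falling back to any unseen movie if no genre match exists)."""
    favorite = int(np.argmax(ratings[user]))
    unseen = np.where(ratings[user] == 0)[0]
    same_genre = [i for i in unseen if genres[i] == genres[favorite]]
    return movies[(same_genre or list(unseen))[0]]

def demographic(age, popular_by_age):
    """Recommend whatever is popular in the user's age bracket."""
    bracket = "18-34" if age < 35 else "35+"
    return popular_by_age[bracket]

print(collaborative(0, ratings))          # based on similar users' ratings
print(content_based(0, ratings, genres))  # based on the user's own history
print(demographic(28, popular_by_age))    # based on demographic data alone
```

Note that collaborative filtering never inspects the movies themselves, only other users' behavior, which is why its recommendations can feel like social endorsements.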
Gaining users’ trust in these systems could help organizations deliver content and services more effectively, according to the researchers, who report their findings today (May 2) in the proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’22), the premier venue for research on human-computer interaction.
“These are the kinds of recommendation systems that we use on a daily basis—on platforms like Netflix and Amazon—that recommend content and products for us,” said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State. “What we found is that when these systems tell us a little bit about why they recommend certain content and how they came up with that particular recommendation, this information is important for building trust with the user.”
The participants were significantly more likely to trust collaborative filtering even when it did not deliver the best recommendations, said Sundar, who is also an affiliate of Penn State’s Institute for Computational and Data Sciences (ICDS). Participants who believed the system used collaborative filtering felt it made higher-quality recommendations than did participants who were told the system used content-based filtering, he said, adding that this suggests users care more about what others recommend, highlighting the powerful influence of social endorsements.
Of the other two systems, study participants trusted content-based filtering more than demographic filtering because they saw its movie suggestions as reflecting their personal identity.
Could it be that users preferred different recommender systems because one type actually makes better recommendations? It’s possible, but not this time, according to Joseph Walther, a co-author of the study, who is the Bertelsen Presidential Chair and director of the Center for Information Technology and Society at the University of California, Santa Barbara.
“We took that into account by making sure it only appeared that one or another different recommender system was operating. The actual recommendations didn’t differ—only the apparent source of the recommendations did,” Walther added.
The team also found that when a collaborative or content-based system suggested good movies, the users of the system typically credited themselves for the good performance. When the recommendations yielded bad movies, however, they blamed the system for the wrong choices.
“We have this bias that when we get good recommendations, we’re more likely to think it is because of us,” said Mengqi Liao, a doctoral student in mass communication at Penn State and first author of the paper. “We are, after all, inputting our information, so we are helping the system to make the choices. But, on the other hand, we’re more likely to think it is the system’s fault if it gives us bad recommendations.”
This “self-serving bias” was less pronounced when participants used the demographic filtering system, however, according to Liao, who added that users hold different expectations for demographic recommender systems.
“This could be due to the human tendency to trust things that we are responsible for, as opposed to the things that machines are responsible for, in order to maintain our self-esteem and self-control, which is crucial in human-computer interaction,” said Liao.
The researchers said that designers and developers might want to keep the user in mind when labeling their recommendation systems. Interface cues that highlight the user’s contribution can trigger the “ownness heuristic,” which can add to the credibility of the system.
“One obvious design implication of this finding is that the user’s contribution to the outcome of the system should be explicit at every stage of the interaction,” said Liao. “The interface should emphasize collaboration, with the user in the driver’s seat, making the key decisions that are related to the outcome.”
According to the researchers, rather than just displaying recommendations, designers and developers should tell users the logic behind those recommendations.
“It’s important to include a message on the interface that tells users how those products were recommended,” said Liao. “That will have huge persuasive appeal. This psychological effect might even be independent of the performance of the system because we showed that collaborative filtering is trusted more regardless of whether the system offers good or bad recommendations.”
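As an illustration of the kind of interface message Liao describes, a system might attach a short provenance label to each recommendation explaining which filtering method produced it. The wording below is a hypothetical example, not taken from the study’s materials:

```python
# Hypothetical provenance labels a recommendation interface might display,
# keyed by the filtering method that produced the suggestion.
EXPLANATIONS = {
    "collaborative": "Recommended because viewers with tastes like yours rated it highly.",
    "content": "Recommended because it is similar to movies you watched recently.",
    "demographic": "Recommended because it is popular with viewers in your age group.",
}

def labeled_recommendation(title: str, method: str) -> str:
    """Pair a recommended title with a message explaining how it was chosen."""
    return f"{title}: {EXPLANATIONS[method]}"

print(labeled_recommendation("Drama B", "collaborative"))
```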
The researchers recruited 226 participants from an online crowdsourced research platform. The participants were randomly assigned to one of six conditions, crossing two outcomes (good or bad recommendations) with three filtering systems (content-based, collaborative or demographic).
After filling out a demographic survey or a questionnaire assessing their movie tastes, the participants were asked to interact with the systems. The recommendations themselves did not differ; the only difference between the systems was the labeling and the descriptions of how they worked.