How do people like to interact with robots when navigating a crowded environment? And what algorithms should roboticists use to program robots to interact with humans?
These are the questions that a team of mechanical engineers and computer scientists at the University of California San Diego sought to answer in a study presented recently at the International Conference on Robotics and Automation (ICRA) 2024 in Japan.
“To our knowledge, this is the first study investigating robots that infer human perception of risk for intelligent decision-making in everyday settings,” said Aamodh Suresh, first author of the study, who earned his Ph.D. in the research group of Professor Sonia Martinez Diaz in the UC San Diego Department of Mechanical and Aerospace Engineering. He is now a postdoctoral researcher for the U.S. Army Research Lab.
“We wanted to create a framework that would help us understand how risk-averse humans are, or not, when interacting with robots,” said Angelique Taylor, second author of the study, who earned her Ph.D. in the Department of Computer Science and Engineering at UC San Diego in the research group of Professor Laurel Riek. Taylor is now on faculty at Cornell Tech in New York.
The team turned to models from behavioral economics, but first needed to determine which ones to use. Because the study took place during the pandemic, the researchers designed an online experiment to find out.
Subjects, largely STEM undergraduate and graduate students, played a game in which they acted as Instacart shoppers. They had a choice between three different paths to reach the milk aisle in a grocery store. Each path could take anywhere from five to 20 minutes. Some paths would take them near people with COVID, including one with a severe case.
The paths also had different risk levels for getting coughed on by someone with COVID. The shortest path put subjects in contact with the most sick people. But the shoppers were rewarded for reaching their goal quickly.
The researchers were surprised to find that subjects' survey answers consistently understated the risks they were actually willing to take in the game to be in close proximity to shoppers infected with COVID-19. “If there is a reward in it, people don’t mind taking risks,” said Suresh.
As a result, the researchers decided to rely on prospect theory to program robots to interact with humans. Prospect theory is a behavioral economics model developed by Daniel Kahneman, who won the 2002 Nobel Prize in economics for this work. The theory holds that people weigh losses and gains relative to a point of reference.
In this framework, people feel losses more strongly than equivalent gains. For example, people will choose a guaranteed $450 over a bet with a 50% chance of winning $1,100, even though the bet has the higher expected value ($550). Likewise, subjects in the study focused on the certain reward for completing the task quickly instead of weighing the potential risk of contracting COVID.
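The gains side of prospect theory can be sketched in a few lines of Python. This is a generic illustration using the standard parameter estimates from Tversky and Kahneman's 1992 paper, not the model the UC San Diego team fitted to their data:

```python
# Sketch of prospect theory for gains: a concave value function plus a
# probability-weighting function. Parameters are the classic Tversky &
# Kahneman (1992) estimates, assumed here purely for illustration.
ALPHA = 0.88  # curvature of the value function for gains
GAMMA = 0.61  # curvature of the probability-weighting function

def value(x: float) -> float:
    """Subjective value of a monetary gain x >= 0 (diminishing sensitivity)."""
    return x ** ALPHA

def weight(p: float) -> float:
    """Subjective decision weight for a probability p of a gain."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

# Option A: a certain $450 (probability 1, so no weighting needed).
v_certain = value(450)

# Option B: a 50% chance of winning $1,100 (expected value $550).
v_gamble = weight(0.5) * value(1100)

print(f"certain $450   -> subjective value {v_certain:.1f}")
print(f"50% of $1,100  -> subjective value {v_gamble:.1f}")
```

With these parameters the certain $450 scores roughly 216 against roughly 200 for the gamble, so the model prefers the sure thing despite its lower expected value, mirroring the risk aversion over gains described above.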
Researchers also asked people how they would like robots to communicate their intentions. The responses included speech, gestures, and touch screens.
Next, researchers hope to conduct an in-person study with a more diverse group of subjects.
The findings are published on the arXiv preprint server.
More information: Aamodh Suresh et al, "Robot Navigation in Risky, Crowded Environments: Understanding Human Preferences," arXiv (2023). DOI: 10.48550/arXiv.2303.08284
Provided by University of California – San Diego
Citation: How risk-averse are humans when interacting with robots? (2024, July 11)