As robots are introduced in a growing number of real-world settings, it is important for them to be able to cooperate effectively with human users. In addition to communicating with humans and assisting them in everyday tasks, it would thus be useful for robots to autonomously determine whether or not their help is needed.
Researchers at Franklin & Marshall College have recently been developing computational tools that could enhance the performance of socially assistive robots by allowing them to process social cues given by humans and respond accordingly. In a paper pre-published on arXiv and presented at the AI-HRI 2021 symposium last week, they introduced a new technique that allows robots to autonomously detect when it is appropriate for them to step in and help users.
“I am interested in designing robots that help people with everyday tasks, such as cooking dinner, learning math, or assembling Ikea furniture,” Jason R. Wilson, one of the researchers who carried out the study, told TechXplore. “I’m not looking to replace people that help with these tasks. Instead, I want robots to be able to supplement human assistance, especially in cases where we do not have enough people to help.”
Wilson believes that when a robot helps humans to complete a given task, it should do so in a ‘dignified’ way. In other words, he thinks that robots should ideally be sensitive to their users’ humanity, respecting their dignity and autonomy.
There are several ways in which roboticists can consider the dignity and autonomy of users in their designs. In their recent work, Wilson and his students Phyo Thuta Aung and Isabelle Boucher specifically focused on preserving a user’s autonomy.
“One way for a robot to support autonomy is to ensure that the robot finds a balance between helping too much and too little,” Wilson explained. “My prior work has looked at algorithms for adjusting the robot’s amount of assistance based on how much help the user needs. Our recent study focused on estimating how much help the user needs.”
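The idea of scaling assistance to estimated need can be illustrated with a short sketch. The tiers, thresholds, and names below are purely hypothetical assumptions for illustration, not taken from Wilson's algorithms; they only show how an estimated need-for-help score might be mapped to increasingly involved interventions.

```python
# Illustrative only: hypothetical assistance tiers keyed to an estimated
# need-for-help score in [0, 1]. Thresholds and tier names are assumptions.
def choose_assistance_level(need_for_help: float) -> str:
    if need_for_help < 0.25:
        return "observe"      # stay quiet, keep monitoring
    elif need_for_help < 0.5:
        return "prompt"       # ask whether the user would like help
    elif need_for_help < 0.75:
        return "hint"         # offer a verbal hint about the next step
    else:
        return "demonstrate"  # intervene directly and show the step

print(choose_assistance_level(0.6))  # "hint"
```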
When humans need help with a given task, they can explicitly ask for assistance or convey that they are struggling in more implicit ways. For example, they might make comments such as “hmm, I am not sure,” or express their frustration through facial expressions or body language. Eye gaze is another implicit channel through which humans communicate that they need help.
“For example, a person may look at the task they are working on, then look at a person that can help them and then look back at the task,” Wilson said. “This gaze pattern, called confirmatory gaze, is used to request that the other person look at what they are looking at, perhaps because they are unsure if it is correct.”
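A confirmatory gaze can be thought of as a short sequence of gaze targets: task, then helper, then task again. The sketch below shows one plausible way to spot that pattern in a stream of labeled gaze targets; the labels, the three-second window, and the detection logic are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

# Illustrative sketch: detect a task -> helper -> task gaze sequence
# ("confirmatory gaze") within a short time window. The labels and the
# 3-second window are assumptions made for this example.
WINDOW_SECONDS = 3.0

def is_confirmatory_gaze(gaze_events):
    """gaze_events: list of (timestamp, target), target in {"task", "helper", "other"}."""
    recent = deque()
    for t, target in gaze_events:
        recent.append((t, target))
        # Drop events that fall outside the time window.
        while recent and t - recent[0][0] > WINDOW_SECONDS:
            recent.popleft()
        targets = [tgt for _, tgt in recent if tgt != "other"]
        # Look for the task -> helper -> task subsequence.
        if _contains_subsequence(targets, ["task", "helper", "task"]):
            return True
    return False

def _contains_subsequence(seq, pattern):
    it = iter(seq)
    return all(any(x == p for x in it) for p in pattern)

# Example usage with a hypothetical gaze trace:
trace = [(0.0, "task"), (1.2, "helper"), (2.0, "task")]
print(is_confirmatory_gaze(trace))  # True
```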
The key objective of the recent study carried out by Wilson, Aung and Boucher was to allow robots to automatically process eye-gaze-related cues in useful ways. The technique they created can analyze different types of cues, including a user’s speech and eye gaze patterns.
“The architecture we are developing automatically recognizes the user’s speech and analyzes it to determine if they are expressing that they want or need help,” Wilson explained. “At the same time, the system also detects users’ eye gaze patterns, determining if they are exhibiting a gaze pattern associated with needing help.”
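At a high level, such an architecture runs a speech channel and a gaze channel in parallel and fuses their outputs into a single estimate of whether the user needs help. The sketch below mirrors that description only in spirit: the keyword list, scoring, weights, and threshold are illustrative assumptions, not the system the researchers built.

```python
# Illustrative fusion of two cue channels. All keywords, scores, and
# weights are assumptions made for the example, not the published system.
HELP_PHRASES = ["i'm not sure", "i am not sure", "can you help", "hmm", "i don't know"]

def speech_help_score(transcript: str) -> float:
    """Crude keyword-based score in [0, 1] for help-seeking speech."""
    text = transcript.lower()
    hits = sum(phrase in text for phrase in HELP_PHRASES)
    return min(1.0, hits / 2)

def gaze_help_score(confirmatory_gaze_detected: bool) -> float:
    """Map a detected confirmatory gaze pattern to a score."""
    return 1.0 if confirmatory_gaze_detected else 0.0

def needs_help(transcript: str, confirmatory_gaze_detected: bool,
               w_speech: float = 0.6, w_gaze: float = 0.4,
               threshold: float = 0.5) -> bool:
    """Weighted combination of the two channels; weights and threshold are assumptions."""
    score = (w_speech * speech_help_score(transcript)
             + w_gaze * gaze_help_score(confirmatory_gaze_detected))
    return score >= threshold

# Example: hesitant speech plus a confirmatory gaze.
print(needs_help("Hmm, I am not sure this is right", True))  # True
```

Notably, nothing in this kind of pipeline depends on what the task actually is, which is consistent with the task-agnostic property described next.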
In contrast with other techniques for enhancing human-robot interaction, this approach does not require information about the specific task that users are completing. This means it could be readily applied to robots operating in a variety of real-world contexts and trained to tackle different tasks.
While the model created by Wilson and his colleagues can enhance user experiences without the need for task-specific details, developers can still provide these details to enhance its accuracy and performance. In initial tests, the framework achieved highly promising results, so it could soon be used to improve the performance of both existing and newly developed social robots.
“We are now continuing to explore what social cues would best allow a robot to determine when a user needs help and how much help they want,” Wilson said. “One important form of nonverbal communication that we are not using yet is emotional expression. More specifically, we are looking at analyzing facial expressions to see when a user feels frustrated, bored, engaged or challenged.”
More information: Jason R. Wilson, Phyo Thuta Aung, Isabelle Boucher, Enabling a social robot to process social cues to detect when to help a user, arXiv:2110.11075 [cs.RO], arxiv.org/abs/2110.11075
© 2021 Science X Network
Citation: A technique that allows robots to detect when humans need help (2021, November 10), retrieved 14 November 2021 from https://techxplore.com/news/2021-11-technique-robots-humans.html