As human interaction with robots and artificial intelligence expands rapidly in areas like healthcare, manufacturing, transportation, space exploration, and defense, information about how humans and autonomous systems work together within teams remains scarce.
Recent findings from human systems engineering research demonstrate that human-autonomy teaming comes with interaction limitations that can leave these teams less efficient than all-human teams.
Existing knowledge about teamwork is based primarily on human-to-human or human-to-automation interaction, the latter of which positions humans as supervisors of automated partners.
But as autonomous systems develop decision-making capabilities based on on-the-fly situation assessments, they can become teammates rather than servants. These shared-decision interactions are known as human-autonomy teaming, or HAT.
Nancy Cooke is a cognitive psychologist and professor of human systems engineering at the Polytechnic School at Arizona State University (ASU). In her discussion at the annual meeting of the American Association for the Advancement of Science (AAAS), she explores how an artificial intelligence agent can contribute to team communication failures, and how to improve those interactions.
As the director of ASU’s Center for Human, Artificial Intelligence and Robot Teaming (CHART), a unit of the Global Security Initiative, Cooke applies her expertise in human teamwork and decision making to human-technology teams.
“One of the key aspects of being on a team is interacting with team members, and a lot of that on human teams happens by communicating in natural language, which is a bit of a sticking point for AI and robots,” Cooke said.
Her discussion addresses a study in which teams of two humans and an AI, or “synthetic teammate,” flew an unmanned aerial vehicle (UAV). The AI served as the pilot, while the humans served as sensor operator and navigator.
The AI, developed by the Air Force Research Laboratory, communicated with the people via text chat.
“The team could function pretty well with the agent as long as nothing went wrong. As soon as things get tough or the team has to be a little adaptive, things start falling apart, because the agent isn’t a very good team member,” Cooke said.
The AI was unable to anticipate its teammates’ needs the way humans do. As a result, it didn’t provide critical information until asked; it never gave a “heads up.”
“The whole team kind of fell apart,” Cooke said. “The humans would say, ‘OK, you aren’t going to give me any information proactively, I’m not going to give you any either.’ It’s everybody for themselves.”
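Cooke’s account points to a concrete design gap: the synthetic teammate only answered direct queries instead of pushing information its teammates would need. The minimal Python sketch below illustrates that pull-only versus proactive distinction. All names, events, and behaviors here are hypothetical illustrations, not code or details from the AFRL agent or the study.

```python
# Hypothetical sketch: a "pull-only" agent that shares state only when
# asked, versus a proactive teammate that volunteers a heads-up when it
# detects information a teammate will need. Illustrative only.

from dataclasses import dataclass, field


@dataclass
class PilotAgent:
    """Pull-only synthetic pilot: shares state only on request."""
    altitude: int = 3000
    log: list = field(default_factory=list)

    def ask(self, question: str) -> str:
        # Responds only when queried.
        if question == "altitude?":
            return f"altitude {self.altitude} ft"
        return "unknown request"

    def on_waypoint_change(self, required_altitude: int) -> None:
        # A pull-only agent updates its own state silently.
        self.altitude = required_altitude


@dataclass
class ProactivePilotAgent(PilotAgent):
    """Adds the 'heads up': pushes info teammates will need."""

    def on_waypoint_change(self, required_altitude: int) -> None:
        if required_altitude != self.altitude:
            # Anticipate the sensor operator's need and volunteer it.
            self.log.append(
                f"heads up: changing altitude to {required_altitude} ft "
                "for the next target"
            )
        self.altitude = required_altitude


if __name__ == "__main__":
    silent, chatty = PilotAgent(), ProactivePilotAgent()
    silent.on_waypoint_change(1500)   # teammates learn nothing
    chatty.on_waypoint_change(1500)   # teammates get a heads-up
    print(silent.log)   # []
    print(chatty.log)   # ['heads up: changing altitude to 1500 ft ...']
```

The point is not the specific code but the interaction pattern: the proactive “heads up,” trivial to add in this toy, is exactly the teaming behavior Cooke’s findings suggest the synthetic teammate lacked.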
Cooke’s presentation also addresses the importance of developing effective synthetic teammates and improving HAT interactions as these teams become more common and begin to operate in complex, dynamic environments beyond the parameters of the UAV study.
More information:
Nathan J. McNeese et al., “Teaming With a Synthetic Teammate: Insights into Human-Autonomy Teaming,” Human Factors: The Journal of the Human Factors and Ergonomics Society (2017). DOI: 10.1177/0018720817743223
Provided by Arizona State University
Citation: How humans can build better teamwork with robots (2021, February 8), retrieved 8 February 2021 from https://techxplore.com/news/2021-02-humans-teamwork-robots.html