Abstract

There is considerable interest in multirobot systems capable of performing spatially distributed, hazardous, and complex tasks as a team, leveraging the unique abilities of humans and automated machines working alongside each other. The limitations of human perception and cognition affect operators’ ability to integrate information from multiple mobile robots, switch between their spatial frames of reference, and divide attention among many sensory inputs and command outputs. Automation is necessary to help the operator manage increasing demands as the number of robots (and humans) scales up. However, more automation does not necessarily equate to better performance. A generalized robot confidence model was developed that transforms key operator attention indicators into a confidence value for each robot, enabling adaptive robot behaviors. The model was implemented in a multirobot test platform in which the operator commanded robot trajectories with a computer mouse while an eye tracker provided gaze data used to estimate dynamic operator attention. The human-attention-based robot confidence model dynamically adapted the behavior of individual robots in response to operator attention. Evaluation of the model revealed evidence linking average robot confidence to multirobot search task performance and efficiency. The contributions of this work provide essential steps toward effective human operation of multiple unmanned vehicles performing spatially distributed and hazardous tasks in complex environments for space exploration, defense, homeland security, search and rescue, and other real-world applications.
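To make the idea concrete, the minimal sketch below shows one way gaze-derived attention indicators could be folded into a per-robot confidence value. It is an illustration only, assuming a simple leaky-integrator update rule; the parameters `gain`, `decay`, and `attention_radius_px` are hypothetical and do not reflect the model actually developed in this work.

```python
import math
from dataclasses import dataclass

# Illustrative sketch only: a leaky integrator that raises a robot's
# confidence while the operator's gaze dwells near it on the display and
# decays it otherwise. The update rule and all parameters here are
# assumptions for illustration, not the published model.

@dataclass
class RobotConfidence:
    position_px: tuple[float, float]    # robot's location on the operator's display
    confidence: float = 0.0             # kept within [0, 1]
    gain: float = 1.5                   # growth rate while attended (assumed)
    decay: float = 0.2                  # decay rate while unattended (assumed)
    attention_radius_px: float = 120.0  # gaze-to-robot distance threshold (assumed)

    def update(self, gaze_px: tuple[float, float], dt: float) -> float:
        """Fold one gaze sample into this robot's confidence value."""
        attended = math.dist(gaze_px, self.position_px) <= self.attention_radius_px
        rate = self.gain if attended else -self.decay
        self.confidence = min(1.0, max(0.0, self.confidence + rate * dt))
        return self.confidence

# A robot might then scale its autonomous behavior by this value, e.g.,
# pausing or slowing when operator attention has lapsed.
robot = RobotConfidence(position_px=(400.0, 300.0))
for gaze in [(410.0, 305.0), (900.0, 100.0), (395.0, 290.0)]:  # synthetic samples
    print(f"confidence = {robot.update(gaze, dt=0.1):.3f}")
```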

Highlights

  • Concepts related to confidence are often linked to human trust in autonomy and to allocation of control, that is, how a human operator uses available autonomy levels

  • Research related to robot confidence is typically aimed at altering human trust in autonomy or allocating control authority

  • The size of the study was influenced in part by the two-factor counterbalancing scheme used to balance both robot behavior and target set (see the sketch below), as well as by preliminary data collection and prior studies conducted by our group

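The following minimal sketch illustrates what such a two-factor counterbalancing scheme could look like: the two factors are fully crossed and the presentation order is rotated across participants. The factor levels used here (`adaptive`/`non_adaptive` behaviors, `set_A`/`set_B` target sets) are placeholders, not the study's actual conditions.

```python
from itertools import product

# Hypothetical two-factor counterbalancing: cross a robot-behavior factor
# with a target-set factor, then rotate the presentation order across
# participants so every condition appears in every serial position
# (a simple cyclic Latin-square-style rotation).

behaviors = ["adaptive", "non_adaptive"]
target_sets = ["set_A", "set_B"]
conditions = list(product(behaviors, target_sets))  # 4 combinations

def order_for(participant_id: int) -> list:
    """Rotate the condition sequence so the starting condition varies by participant."""
    k = participant_id % len(conditions)
    return conditions[k:] + conditions[:k]

for pid in range(4):
    print(pid, order_for(pid))
```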

Introduction

Researchers have long sought to enable multiple robots working together as a team [1,2,3,4,5,6,7,8,9,10] to perform distributed tasks such as area exploration, search, and surveillance [11,12,13,14,15,16,17,18,19] and complex tasks in hostile conditions, such as the assembly of structures in orbital, lunar, and planetary environments [20,21,22,23,24,25,26]. Concepts related to confidence are often linked to human trust in autonomy and to allocation of control, that is, how a human operator uses available autonomy levels. Operator confidence typically refers to a human’s self-assurance in their own ability to perform a task or their trust in a robot’s ability to function autonomously. Research related to robot confidence is typically aimed at altering human trust in autonomy or at allocating control authority. Other research includes a robot expressing its certainty in performing a policy learned from a human teacher [40,41,42] and modeling a robot’s confidence in a human co-worker [43] or in its ability to predict human actions in a shared environment [44]. A similar concept is algorithm self-confidence, applied, for example, to a visual classification algorithm [45].
