Abstract

The U.S. Air Force envisions future applications in which a single human operator manages multiple heterogeneous unmanned vehicles (UVs). To support this vision, a range of play-based interfaces was designed by which an operator can team with autonomy (consisting of several intelligent agents/services) to manage twelve air, ground, and sea surface UVs performing security defense tasks for a simulated military base. To support flexible delegation control, the interfaces allowed the operator to use one or more of three control modalities when calling and editing plays that define UV actions. Specifically, each step defining a play could be completed (1) manually, via mouse/click inputs; (2) by touching a touchscreen monitor; or (3) via speech commands. This paper reports results relevant to input modality from two experiments in which operators were free to choose which modality to employ. Operators overwhelmingly chose the mouse over the touchscreen and speech modalities, and they were faster and more accurate with the mouse. Subjective data also favored the mouse modality, with operators commenting that it was more intuitive to use with the play-calling interfaces. Results are discussed and recommendations for further multimodal research are provided.
