Abstract

Advances in unmanned aircraft systems (UAS) have paved the way for progressively higher levels of intelligence and autonomy, supporting new modes of operation such as the one-to-many (OTM) concept, where a single human operator is responsible for monitoring and coordinating the tasks of multiple unmanned aerial vehicles (UAVs). This paper presents the development and evaluation of cognitive human-machine interfaces and interactions (CHMI2) supporting adaptive automation in OTM applications. A CHMI2 system comprises a network of neurophysiological sensors and machine-learning-based models for inferring user cognitive states, as well as an adaptation engine containing a set of transition logics for control/display functions and discrete autonomy levels. Models of the user's cognitive states are trained on past performance and neurophysiological data during an offline calibration phase and subsequently used in the online adaptation phase for real-time inference of these states. To investigate adaptive automation in OTM applications, a bushfire detection scenario was developed in which a single human operator tasks multiple UAV platforms to search for and localize bushfires over a wide area. We present the architecture and design of the UAS simulation environment, together with various human-machine interface (HMI) formats and functions, developed to evaluate the feasibility of the CHMI2 system through human-in-the-loop (HITL) experiments. The CHMI2 module was subsequently integrated into the simulation environment, providing the sensing, inference, and adaptation capabilities needed to realise adaptive automation. HITL experiments were performed to verify the CHMI2 module's functionalities in the offline calibration and online adaptation phases. In particular, results from the online adaptation phase showed that the system was able to support real-time inference and human-machine interface and interaction (HMI2) adaptation. However, the accuracy of the inferred workload varied across participants (with a root mean squared error (RMSE) ranging from 0.2 to 0.6), partly due to the reduced number of neurophysiological features available as real-time inputs and partly due to the limited training stages in the offline calibration phase. To improve the performance of the system, future work will investigate alternative machine learning techniques, additional neurophysiological input features, and a more extensive training stage.
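The two-phase structure described in the abstract (offline calibration followed by online inference driving adaptation) can be sketched in miniature. The following is an illustrative sketch only, not the paper's implementation: it assumes a single neurophysiological feature mapped to workload by a simple linear model, and the calibration pairs, workload threshold, and autonomy labels are all hypothetical.

```python
import math

# Hypothetical calibration data: (neurophysiological feature, reported workload in [0, 1]).
# These values are illustrative, not taken from the paper's experiments.
calibration = [(0.1, 0.15), (0.3, 0.30), (0.5, 0.55), (0.7, 0.70), (0.9, 0.85)]

def fit_linear(pairs):
    """Offline calibration phase: least-squares fit of workload = a * feature + b."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def rmse(pred, actual):
    """Root mean squared error between inferred and reported workload."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, actual)) / len(pred))

a, b = fit_linear(calibration)

# Online adaptation phase: infer workload from live feature values and raise the
# autonomy level when inferred workload crosses a (hypothetical) threshold.
live_features = [0.2, 0.6, 0.95]
reported = [0.25, 0.60, 0.90]   # ground-truth ratings, used here only for evaluation
inferred = [a * x + b for x in live_features]
print("RMSE:", round(rmse(inferred, reported), 3))
for w in inferred:
    level = "HIGH_AUTONOMY" if w > 0.7 else "NOMINAL"
    print(f"inferred workload {w:.2f} -> {level}")
```

A per-participant evaluation of this kind is what the reported RMSE range of 0.2 to 0.6 summarizes; the sketch simply makes the calibrate-then-infer loop concrete.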

Highlights

  • In recent decades, advances in remote sensing as well as developments in system automation and human-machine interfaces (HMI) have been supporting new unmanned aircraft systems (UAS) operational concepts

  • The assessment of the human operator’s cognitive state through the real-time measurement of neurophysiological parameters holds promise to support new forms of adaptive automation, such as intelligent agents that can sense, predict, and provide adaptive decision support to the user during periods of sustained workload; or systems that can dynamically allocate tasks to teams of individuals based on their cognitive profiles

  • As relatively high levels of automation and autonomy are required to support multi-unmanned aerial vehicle (UAV) operations, adaptive automation is an important enabler to achieve an optimal distribution of task load among agents in the human-machine team
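The task-load distribution idea in the last highlight can be illustrated with a minimal greedy allocator. This is a hedged sketch under assumed semantics, not the paper's adaptation engine: agent names, workload values, and task demands are hypothetical, and each task simply goes to the agent with the lowest current inferred workload.

```python
# Illustrative sketch: workload-driven task allocation in a human-machine team.
# All names and numbers below are hypothetical, not from the paper.

def allocate(tasks, agents):
    """Assign each (task, demand) pair to the agent with the lowest current
    workload, accumulating demand onto that agent's load as tasks are assigned."""
    assignment = {}
    load = dict(agents)  # agent name -> current inferred workload
    for task, demand in tasks:
        agent = min(load, key=load.get)
        assignment[task] = agent
        load[agent] += demand
    return assignment

agents = {"operator": 0.6, "autonomy": 0.2}
tasks = [("monitor_uav3", 0.2), ("classify_hotspot", 0.3), ("replan_route", 0.3)]
print(allocate(tasks, agents))
```

With these numbers, the first two tasks fall to the automation (whose load is lower) and the third returns to the operator, illustrating how inferred workload can steer the human-machine task split.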


Introduction

Advances in remote sensing as well as developments in system automation and human-machine interfaces (HMI) have been supporting new unmanned aircraft systems (UAS) operational concepts. A growing interest in multi-UAV research has led to the development of several notable multi-UAV simulator testbeds over the last decade [9,10,11,12,13]. While neuroergonomics has been studied in the context of air traffic control (ATC) [20,21] and piloting [22,23,24] tasks, there has been limited research in the UAV or multi-UAV domains. A common purpose underlies the study of neuroergonomics in these domains: to support the real-time inference of the human operator's cognitive state, in turn driving the development of the adaptive automation needed for more autonomous operations.

