Abstract

Levels of Automation (LOA) provide a method for describing the authority granted to automated system elements to make individual decisions. However, these levels are technology-centric and provide little insight into overall system operation. The current research discusses an alternative classification scheme, referred to as the Level of Human Control Abstraction (LHCA). LHCA is an operator-centric framework that classifies a system’s state based on the required operator inputs. The framework consists of five levels, each requiring less granularity of human control: Direct, Augmented, Parametric, Goal-Oriented, and Mission-Capable. An analysis of several existing systems illustrates the presence of each of these levels of control and shows that many existing systems support system states that facilitate multiple LHCAs. It is suggested that as the granularity of human control is reduced, the level of required human attention and cognitive resources decreases. Thus, designing systems that permit the user to select among LHCAs during system control may facilitate human-machine teaming and improve the flexibility of the system.
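
To make the framework concrete, the following is a minimal, hypothetical sketch of how a system might encode the five LHCAs and let an operator switch among whichever levels the current system state supports. The LHCA enumeration, the LHCAController interface, and the per-level comments are illustrative assumptions, not the authors' implementation.

    from enum import IntEnum

    class LHCA(IntEnum):
        """Hypothetical encoding of the five levels, ordered from the
        finest to the coarsest granularity of required human control."""
        DIRECT = 1           # operator commands individual effectors directly
        AUGMENTED = 2        # automation filters/stabilizes the operator's inputs
        PARAMETRIC = 3       # operator supplies parameters (e.g., heading, speed)
        GOAL_ORIENTED = 4    # operator states goals; the system plans the actions
        MISSION_CAPABLE = 5  # operator assigns a mission; the system manages goals

    class LHCAController:
        """Sketch of a system that exposes a subset of levels and lets the
        operator select among them during control."""

        def __init__(self, supported: set[LHCA]):
            self.supported = supported   # many systems support only some levels
            self.level = min(supported)  # default to the finest supported level

        def select_level(self, requested: LHCA) -> LHCA:
            # Grant the request only if the system supports that level;
            # otherwise remain at the current level.
            if requested in self.supported:
                self.level = requested
            return self.level

    # Example: an autopilot-like system spanning three of the five levels.
    autopilot = LHCAController({LHCA.DIRECT, LHCA.AUGMENTED, LHCA.PARAMETRIC})
    autopilot.select_level(LHCA.PARAMETRIC)  # coarsen control to parameter inputs

Modeling the levels as an ordered enumeration reflects the paper's claim that each successive level requires less granularity of human control, and restricting selection to a supported subset reflects the observation that not every system facilitates every LHCA.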

Highlights

  • The vision of humans working effectively in a team with Artificial Intelligent Agents (AIAs) was clearly stated over 60 years ago [1]

  • We explore the concept of human–agent teaming through this hierarchical control structure

  • The results indicate that some systems can be controlled at only a single Level of Human Control Abstraction (LHCA), while others permit the system to operate at multiple LHCAs

Introduction

The vision of humans working effectively in a team with Artificial Intelligent Agents (AIAs) was clearly stated over 60 years ago [1]. Artificial intelligence technologies and automation have been incorporated to help us control systems including nuclear power plants, aircraft, and, more recently, automobiles. These systems often fall short of creating an interactive human–AIA team, typically placing the human operator into a supervisory role. In this role, the operator is required to recognize anomalies, assume control under time pressure, and apply their skills and knowledge to save the system in the direst of circumstances [5]. One may cite many reasons why we continue to implement systems that place human operators in a supervisory role; one potential reason is that the commonly applied design frameworks fail to lead the designer to fully appreciate the complexity of human–AIA interaction.
