Abstract

Shared control, where the machine and the human share tasks and control the situation together, and its extension, cooperative automation, are promising approaches to overcoming automation-induced problems such as loss of situation awareness and skill degradation. However, shared controllers and cooperative human-machine systems must be designed with great care. A major issue is conflict between the human and the machine: how can such conflicts be detected, and how should they be resolved, if necessary? A complicating factor is that when the human is right, conflicts are undesirable (causing nuisance, degraded performance, etc.), but when the machine is right, conflicts are desirable (warning the operator, or providing proper assistance or overruling). Prior research has identified several types and causes of conflicts, but offers no coherent framework for deriving design and evaluation guidelines. In this paper, we propose such a theoretical framework to structure and relate the different types of conflicts. The framework is inspired by hierarchical task analysis and identifies five possible sources of conflict: intent, information gathering, information processing, decision-making, and action implementation. Examples of conflicts in application domains such as automotive systems, telerobotics, and surgery are discussed to illustrate the applicability of the framework.
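The five-stage taxonomy lends itself to a simple data model. Below is a minimal, hypothetical Python sketch (not taken from the paper; all names are illustrative assumptions) that encodes the five conflict sources and the abstract's observation that a conflict's desirability depends on which agent is right.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ConflictSource(Enum):
    """The five conflict sources identified by the framework."""
    INTENT = auto()
    INFORMATION_GATHERING = auto()
    INFORMATION_PROCESSING = auto()
    DECISION_MAKING = auto()
    ACTION_IMPLEMENTATION = auto()


@dataclass
class Conflict:
    source: ConflictSource   # stage of the task hierarchy where the disagreement arises
    human_is_right: bool     # ground truth; in practice often unknown at run time
    description: str = ""

    def is_desirable(self) -> bool:
        """Per the abstract: a conflict is desirable only when the machine
        is right (it warns, assists, or overrules); when the human is right,
        the conflict is a nuisance that degrades performance."""
        return not self.human_is_right


# Hypothetical usage: a lane-keeping assist disagrees with the driver.
conflict = Conflict(
    source=ConflictSource.INTENT,
    human_is_right=False,
    description="Driver drifts toward the lane edge; the assist steers back",
)
print(conflict.source.name, "desirable:", conflict.is_desirable())
```

Note that `human_is_right` is rarely observable online; the sketch only illustrates how classifying a conflict by its source stage separates detection (which stage disagrees) from resolution (who should yield).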
