Abstract
Automation is replacing, or working alongside, human operators in a growing range of settings. Its widespread implementation, however, poses potential risks for those operators, one of which is referred to as "automation complacency," or simply "complacency." The concept originated in aviation crash investigations and has since been developed and researched within human-automation interaction (HAI). It holds professional operators of safety-critical systems accountable for system failures. The increasing prevalence of vehicle automation has brought renewed attention to the issue, as complacency has been identified as a probable cause of recent traffic crashes involving partial and conditional automation; it has even played a significant role in judicial decision-making. In this position paper, we examine seven key questions surrounding the concept, including its conceptualization, operationalization, detectability in advance, and prevention. We also consider its causal influence on traffic crashes and the possibility that it amounts to nothing more than a blame game. Subjecting these questions to critical scrutiny, we argue that while complacency is an important phenomenon in HAI, there are valid concerns about its use in crash investigations and liability litigation. Applying it uncritically in these contexts may raise moral and ethical issues and result in new consumer harm.
International Journal of Human–Computer Interaction