Abstract
Warnings about the risks of literal-minded automation (a system that cannot tell whether its model of the world matches the world it is actually in) have been sounded for over 70 years. The risk is that a system will do the "right" thing: its actions are appropriate given its model of the world, but it is actually in a different world, producing unexpected and unintended behavior and potentially harmful effects. This risk of wrong, strong, and silent automation looms larger today as our ability to deploy increasingly autonomous systems, and to delegate greater authority to such systems, expands. It already produces incidents, outages of valued services, financial losses, and fatal accidents across different settings. This paper explores this general risk by examining a pair of fatal aviation accidents that revolved around wrong, strong, and silent automation.
Journal of Cognitive Engineering and Decision Making