Abstract
Automation surprises occur when an automated system behaves differently than its operator expects. If the actual system behavior and the operator's ‘mental model’ are both described as finite state transition systems, then mechanized techniques known as ‘model checking’ can be used to discover automatically any scenarios that cause the behaviors of the two descriptions to diverge from one another. These scenarios identify potential surprises and pinpoint areas where design changes, or revisions to training materials or procedures, should be considered. The mental models can be suggested by human factors experts, can be derived from training materials, or can express simple requirements for ‘consistent’ behavior. The approach is demonstrated by applying the Murφ state exploration system to a ‘kill-the-capture’ surprise in the MD-88 autopilot. This approach does not supplant the contributions of those working in human factors and aviation psychology, but rather provides them with a tool to examine properties of their models using mechanized calculation. These calculations can be used to explore the consequences of alternative designs and cues, and of systematic operator error, and to assess the cognitive complexity of designs. The description of model checking is tutorial, and it is hoped that it will be accessible to those in the human factors community for whom this technology may be new.
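The core idea can be sketched in a few lines of code. The following is not the paper's Murφ model; it is a deliberately toy illustration, with hypothetical event names and mode logic, of running an ‘actual system’ automaton and a ‘mental model’ automaton in lockstep and searching their joint reachable states for a visible divergence:

```python
# Sketch of the divergence-checking idea (illustrative only, not the
# paper's Murphi model): the actual system and the operator's mental
# model are run in lockstep over all event sequences; any reachable
# joint state where the two disagree is a potential automation surprise.
from collections import deque

# Hypothetical, greatly simplified mode logic. In the actual system,
# pressing "alt" arms altitude capture, but pulling the vertical-speed
# knob ("vs") while armed silently kills the capture.
def actual(state, event):
    if event == "alt":
        return "armed"
    if event == "vs":
        return "vs"          # capture killed, even if it was armed
    return state

# The operator's mental model: arming is believed sticky until capture.
def mental(state, event):
    if event == "alt":
        return "armed"
    if event == "vs" and state != "armed":
        return "vs"
    return state             # believes "armed" survives a vs change

def find_surprise(events=("alt", "vs"), init="idle"):
    """Breadth-first search over joint states for a divergence scenario."""
    seen = {(init, init)}
    queue = deque([((init, init), [])])
    while queue:
        (a, m), trace = queue.popleft()
        if a != m:                       # the behaviors have diverged
            return trace                 # event sequence causing the surprise
        for e in events:
            nxt = (actual(a, e), mental(m, e))
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [e]))
    return None                          # no divergence: model is adequate

print(find_surprise())  # → ['alt', 'vs']: arm capture, then change vs
```

A real model checker performs essentially this reachability search, but over far larger state spaces and with symbolic or explicit-state optimizations; the returned event sequence plays the role of the counterexample trace that pinpoints the surprise scenario.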