Abstract

The article “Situation Awareness Misconceptions and Misunderstandings” (Endsley, 2015) discusses common fallacies in the literature's interpretation of Endsley’s 1995 model of situation awareness (SA; Endsley, 1995). The clarifications presented in the article provide a more complete and comprehensible explanation of the model. As in the complex domains we study, our attention is limited, and we simplify each other’s work in our eager and well-intentioned quest to attack new and exciting problems. Dr. Endsley is to be thanked for her thoughtful article, which gives us pause to rebuild our own SA of SA. SA has, for many years, been a powerful and influential construct. In our own work in cognitive work analysis (CWA), we have viewed SA as a complementary framework that challenges and drives CWA. Without doubt, the output of a CWA-based design process should be a system that promotes better SA and performance (Burns et al., 2008). CWA and goal-directed task analysis may organize the world along slightly different dimensions, but the overall intent is the same: to create systems that support human decision making as well as we can. Indeed, it is this common intent that unites us in our field. To advance cognitive engineering, there are times when we must challenge each other, critique each other’s models, hunt for flaws, and identify promising new directions. Assuredly, this helps us progress, strengthen our methods, and deepen our understanding. Endsley's article clearly engages in such activity and responds to it. We are all better for this exercise, as it challenges SA and all of our approaches to grow and deepen. Acknowledging this, we would like to broaden our perspective and discuss challenges facing cognitive engineering as a whole, challenges to which our existing methods, SA, CWA, and other approaches must adapt and grow. Advances in intelligent systems and automation have increased the amount of data produced by information systems and placed the human in new roles. In many cases, these roles are partially in and partially out of the loop; they may involve supervisory control, or the human may work in systems that afford very little supervisory control at all because the automation is largely nontransparent. We outline three core areas of challenge: self-awareness and self-regulation, memory failures or performance with incorrect SA, and design for unstructured environments.
