Abstract

Our technology-laden world continues to push the limits of human cognitive performance. Human performers are increasingly expected to act as passive monitors rather than active engagers of technology systems [1]. Active, physical tasks have rapidly given way to more sedentary tasks that impose significant cognitive workload. Consequently, researchers struggle to balance effective user interface, usability, and ergonomic designs that allow the performer to complete their tasks successfully while sustaining attention in these complex environments. It is no surprise that human error is at the root of tragic vigilance-related mishaps across a wide range of applications and operational environments [2–4]. Research on vigilance is not new [5–7]. In fact, vigilance has been studied in laboratory settings for nearly seventy years across many conditions and tasks [8]. Traditional laboratory tasks involve static displays with simple image targets presented to individuals over prolonged periods of time. Participants must detect rare, temporally spaced targets among abundant “noise” images while sustaining their attention. Studies using these vigilance tasks have found evidence of vigilance decrements, increased stress [7], and high cognitive demand [9]. The question of whether the skill of sustaining attention can be trained has also been addressed [10, 11]. Findings from traditional research show that the most effective way to improve vigilance performance is to provide feedback in the form of knowledge of results [12]. Although these contrived, laboratory-based vigilance tasks can produce and mitigate the vigilance decrement, tasks that directly relate to complex operational environments remain severely underrepresented in research. Only a few researchers have used dynamic environments in vigilance research. For example, Szalma et al. [13] developed a video game-based training platform to extend the traditional vigilance training paradigm to complex, dynamic, and virtual environments that are more representative of visual detection tasks in the real world. Our current research focuses on extending the vigilance training paradigm to operationally relevant areas through the development of a game-based system for training operator attention within unmanned aerial systems (UAS). UAS are an integral part of mission operations within many branches of our military. New developments and improved technology allow extended UAS mission operations of up to, and exceeding, 12 h. However, while many UAS mishaps result from mechanical failures, an alarming 60.2% have been attributed to operator error [2]. This finding is not surprising, as UAS operations are highly cognitively demanding. Prolonged shiftwork and surveillance missions require sustained attention toward tracking or identifying rare targets, often in visually degraded conditions. This paper discusses current efforts to take the vigilance training paradigm out of the laboratory and into operational environments, including our current work on game-based vigilance training for UAS operators. We describe the challenges associated with defining and standardizing targets, developing scenarios, and assessing performance.
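As a rough illustration of the traditional vigilance paradigm described in the abstract, the sketch below simulates a simple detection block: rare, temporally spaced targets are embedded in a long stream of noise stimuli, responses are scored as hits, misses, false alarms, or correct rejections, and knowledge-of-results feedback can optionally be given after each response. All parameter values (event count, target probability, the stand-in observer's response rates) are illustrative assumptions, not figures taken from the studies cited above.

import random

def run_vigilance_block(n_events=400, p_target=0.05, respond=None,
                        give_feedback=True, seed=0):
    """Simulate one block of a classic vigilance task (illustrative sketch).

    Each event is either a rare 'target' or a common 'noise' stimulus.
    `respond` is a callable returning True if the observer reports a target
    for the given stimulus; by default a simple guessing strategy stands in
    for a human observer. Parameter values are assumptions for illustration.
    """
    rng = random.Random(seed)
    hits = misses = false_alarms = correct_rejections = 0

    for event in range(n_events):
        is_target = rng.random() < p_target  # rare, temporally spaced targets
        if respond is not None:
            reported = respond(is_target)
        else:
            # Stand-in observer: mostly detects targets, rarely false-alarms.
            reported = rng.random() < (0.8 if is_target else 0.1)

        if is_target and reported:
            hits += 1
            outcome = "hit"
        elif is_target and not reported:
            misses += 1
            outcome = "miss"
        elif not is_target and reported:
            false_alarms += 1
            outcome = "false alarm"
        else:
            correct_rejections += 1
            outcome = "correct rejection"

        if give_feedback and (is_target or reported):
            # Knowledge-of-results feedback: tell the observer how they did.
            print(f"event {event}: {outcome}")

    return {"hits": hits, "misses": misses,
            "false_alarms": false_alarms,
            "correct_rejections": correct_rejections}

if __name__ == "__main__":
    print(run_vigilance_block())

In this sketch, toggling give_feedback corresponds to giving or withholding knowledge of results, the manipulation the abstract identifies as the most effective means of improving vigilance performance.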
