Most professional visual searchers (e.g., radiologists, baggage screeners) face an interesting conundrum: they must be highly accurate while also performing quickly. Airport security personnel, for example, are tasked with preventing any dangerous items from getting aboard a plane, but they must also be speedy to keep passengers flowing through the checkpoint. It is not easy to simultaneously prioritize two primary job requirements (accuracy and speed) that are in direct conflict with one another. While a certain level of error is inevitable in almost any cognitive task, many professional search environments may be especially vulnerable to error given the contradictory goals imposed upon the searchers. As such, it is critical to explore every possible means of minimizing mistakes. One critical question when exploring ways to improve search performance in professional settings is how professional searchers develop the ability to search for, and steadily learn to reliably detect, targets. How do searchers improve their search efficacy over the course of repeatedly discovering an item (or receiving feedback when missing it)? This process of iterative learning across exposures to targets is referred to here as “long-term visual search” (LTVS). To investigate LTVS, the current study used “big data” from the mobile app Airport Scanner (Kedlin Co.; see Mitroff et al., 2015) to assess improvements in search ability. Airport Scanner is a publicly available mobile app in which users serve as airport security officers looking for prohibited items in simulated X-ray baggage images. Over 10 million users have downloaded the app, creating over 2.6 billion trials of data (see Mitroff et al., 2015). Airport Scanner contains hundreds of different targets, making it possible to examine how search performance develops, both generally and item-by-item, across a large number of target types and with immense statistical power.
To effectively measure search improvement, only Airport Scanner users with a minimum of 250 target-present trials were included in this study. The first analysis collapsed performance across 26 distinct targets that varied in salience, frequency, and when they were introduced into gameplay. Despite this variability, uniform patterns of overall search improvement emerged: both detection rate and response speed showed steep learning curves followed by a plateau in performance. Second, performance was assessed individually for each of the 26 target items. Specifically, accuracy and response time values were standardized (z-scored) to place items on a level playing field despite differences in target characteristics (e.g., salience, frequency). Across targets, there was variability in improvement and in peak performance for search accuracy, but very little variability in response time. While targets varied widely in the number of observations required to reach mean accuracy (i.e., to reach plateau), response time was generally uniform, with most items requiring approximately 14 target-present trials to reach mean proficiency in search speed. Understanding the development of LTVS is critical for reducing errors in professional visual search, and the current study demonstrated the iterative nature of this learning, providing potential insights for improving training procedures.