Abstract
Vigilance refers to the ability of an observer to detect signals over a prolonged period of time. An important component of vigilance is the performance decrement, in which a decline in the correct detection of critical signals occurs as a function of time on task (e.g., Becker, Warm, Dember, & Howe, 1994). Typically, this decline in performance is accompanied by high perceived workload and stress (Warm, Parasuraman, & Matthews, 2008). One problem with traditional measures of mental workload, however, is that these measures do not always converge on a single factor of workload. Instead, analyses indicate that workload is most likely multi-faceted (Matthews, Reinerman-Jones, Wohleber, Lin, Mercado, & Abich, 2015). The present research compared two measures of mental workload, the NASA-TLX and the Multiple Resource Questionnaire (MRQ), in terms of their respective abilities to measure mental workload in two different types of vigilance tasks (cognitive and sensory). We examined the factor analytic structure of both measures, as well as the intercorrelations of each measure's scales, and how the validity and reliability of each measure changed with task type. Exploratory factor analyses (EFA) revealed that the factor structure of each mental workload measure varied depending on task type. The scales of the NASA-TLX combined into one factor for the cognitive task, whereas for the sensory task these same scales split between task-related and operator-related sources of workload. EFA for the MRQ scales revealed an emphasis on spatial resources in the sensory condition, whereas the cognitive condition evoked several factors involving the senses (auditory, location, visual). Reliability scores, measured using Cronbach's α, were high for the MRQ in both the cognitive and sensory tasks (α = .840 and α = .866, respectively).
Reliability for the NASA-TLX, however, differed markedly between the two tasks, with α = .790 in the cognitive task but α = .439 in the sensory task. Finally, the cognitive task produced both higher intercorrelations within each measure's scales and higher correlations between the two measures than the sensory task did. Taken together, our results indicate that the NASA-TLX and MRQ measure different constructs depending on the task. Our work extends the results of Matthews et al. (2015) by showing that task parameters should be considered when choosing how to evaluate mental workload. It appears that some measures are reliable only in specific contexts.
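The reliability statistic reported above, Cronbach's α, is computed from the item (scale) variances and the variance of the total score. As an illustration only (the abstract does not provide the underlying data), a minimal sketch of the standard formula, α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ), could look like this:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (n_respondents, n_items) matrix of scale scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                       # number of items/scales
    item_vars = items.var(axis=0, ddof=1)    # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: four respondents rating three perfectly consistent
# scales yields alpha = 1.0; real questionnaire data would fall below that.
scores = np.array([[1, 1, 1],
                   [2, 2, 2],
                   [3, 3, 3],
                   [4, 4, 4]], dtype=float)
print(round(cronbach_alpha(scores), 3))
```

Values such as the α = .439 reported for the NASA-TLX in the sensory task would indicate that the scales do not vary together consistently in that condition.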
Proceedings of the Human Factors and Ergonomics Society Annual Meeting