Abstract

There are a number of key data-centric questions that must be answered when developing classifiers for operator functional states. “Should a supervised or unsupervised learning approach be used? What degree of labeling and transformation must be performed on the data? What are the trade-offs between algorithm flexibility and model interpretability, as these features are generally at odds?” Here, we focus exclusively on the labeling of cognitive load data for supervised learning. We explored three methods of labeling cognitive states for three-state classification. The first method labels states derived from a tertiary split of trial difficulty during a spatial memory task. The second method was more adaptive; it employed a mixed-effects stress–strain curve and estimated an individual’s performance asymptotes with respect to the same spatial memory task. The final method was similar to the second approach; however, it employed a mixed-effects Rasch model to estimate individual capacity limits within the context of item response theory for the spatial memory task. To assess the strength of each of these labeling approaches, we compared the area under the curve (AUC) for receiver operating characteristic (ROC) curves from elastic net and random forest classifiers. We chose these classifiers based on a combination of interpretability, flexibility, and past modeling success. We applied these techniques across two groups of individuals and two tasks to test the effects of different labeling techniques on cross-person and cross-task transfer. Overall, we observed that the Rasch model labeling paired with a random forest classifier led to the best model fits and showed evidence of both cross-person and cross-task transfer.
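The Rasch model underlying the third labeling approach relates a person's latent ability to an item's difficulty through a logistic item response function. As a minimal sketch of how such estimates could drive three-state labels, the snippet below implements the standard Rasch probability and a hypothetical thresholding rule; the threshold values and the `label_state` helper are illustrative assumptions, not the labeling schema used in the paper.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Rasch item response function: probability that a person with
    ability theta responds correctly to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def label_state(theta: float, difficulty: float,
                low_cut: float = 0.75, high_cut: float = 0.25) -> str:
    """Hypothetical three-state labeling: trials whose predicted success
    probability is high map to 'low' load, intermediate to 'moderate',
    and low to 'high'. Cutoffs here are illustrative only."""
    p = rasch_probability(theta, difficulty)
    if p >= low_cut:
        return "low"
    if p <= high_cut:
        return "high"
    return "moderate"
```

In practice the person abilities and item difficulties would be estimated jointly (e.g., via a mixed-effects logistic model, as the abstract describes) rather than supplied by hand.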

Highlights

  • People can have “off days” where even the simplest tasks seem difficult, or days where they are “in the zone” and tasks that would normally take hours are quick and easy

  • We tested three hypotheses for the effects of incorporating individual differences into class labeling: (1) this would lead to better initial training and cross-validation of supervised machine learning algorithms, (2) this would lead to better performance of an algorithm trained on person “A” and used to predict the states of person “B,” and (3) this would lead to better performance of an algorithm trained on task “A” and used to predict the states of persons performing task “B.” Note that hypothesis 3 assumes that both task “A” and task “B” tax the same cognitive construct, in our case mental workload caused by memory load

  • The aim of the current study was to assess the importance of incorporating individual differences into the labeling schema for supervised machine learning to predict mental workload states from neurophysiological data


Introduction

People can have “off days” where even the simplest tasks seem difficult, or days where they are “in the zone” and tasks that would normally take hours are quick and easy. Being “off” or “in the zone” are poorly defined common terms used to express a person’s current state of mind. We are able to use these vague terms to express our state of mind to each other. As automation and advanced intelligent systems become commonplace, there is a growing need to be able to precisely communicate a person’s state of mind to these systems. An interesting construct of state of mind is mental workload. The field of human factors commonly discusses three mental

