Abstract

To further extend the applicability of wearable sensors, methods for accurately extracting subtle psychological information from the sensor data are required. However, assessing subjective states such as cognitive load in everyday life remains challenging. To build consensus on methods for cognitive load monitoring, a machine learning challenge was organized. Participants developed machine learning methods for cognitive load classification from wrist-worn physiological sensor data, namely heart rate, R-R intervals, skin conductance, and skin temperature. Data from subjects solving cognitive tasks of varying difficulty were used for the challenge. This article presents a systematic comparison and multi-strategy performance evaluation of the thirteen methods submitted to the challenge, covering preprocessing techniques, classification algorithms, and implementation choices. Performance variations across task difficulty levels, subjects, and experiment periods are evaluated. The results indicate that the most robust methods used multimodal sensor data, classical classifiers such as decision trees and support vector machines or their ensembles, and Bayesian optimization for hyperparameter tuning. The most accurate models used handcrafted features that were further selected using sequential backward floating search and evaluated with a stratified, person-aware cross-validation strategy. Moreover, the results indicated better classification performance for specific test subjects, for the tasks with the highest difficulty, and, in some cases, depending on the time elapsed since the start of the experiment. This dependency is likely due to model overfitting or to the subjective nature of the psychophysiological process. The intersubject variability in responses is difficult to capture through objective binary labels for cognitive load, warranting more sophisticated annotation approaches.
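As a rough illustration of the evaluation strategy named above, the sketch below shows stratified, person-aware cross-validation in which each test fold contains only subjects unseen during training. This is a hypothetical minimal example, not the participants' code: the feature matrix, labels, subject IDs, number of subjects, and SVM hyperparameters are synthetic placeholders.

    # Minimal sketch (illustrative only): person-aware cross-validation for
    # binary cognitive-load classification from handcrafted physiological features.
    import numpy as np
    from sklearn.model_selection import StratifiedGroupKFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n_windows, n_features = 600, 20               # e.g., HR / R-R / EDA / temperature statistics per window
    X = rng.normal(size=(n_windows, n_features))  # placeholder feature matrix
    y = rng.integers(0, 2, size=n_windows)        # binary cognitive-load label (rest vs. task)
    subjects = rng.integers(0, 23, size=n_windows)  # placeholder subject ID for each window

    cv = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in cv.split(X, y, groups=subjects):
        # Each test fold contains only subjects unseen during training,
        # which is what "person-aware" evaluation is meant to guarantee.
        model = make_pipeline(StandardScaler(), SVC(C=1.0, kernel="rbf"))
        model.fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))

    print(f"Mean person-aware CV accuracy: {np.mean(scores):.3f}")

In a complete pipeline, feature selection (e.g., sequential backward floating search) and Bayesian hyperparameter optimization would be nested inside each training fold so that no information about the held-out subjects leaks into model selection.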

Highlights

  • The availability of small, wearable, and low-cost sensors combined with advanced signal processing and information extraction capabilities is driving the revolution in mobile behavior monitoring for applications such as sports analytics, ambient-assisted living, and lifestyle monitoring [1]

  • Our meta-analysis presented in the previous section reveals the superiority of certain data processing techniques when it comes to the use of wrist-worn device-originated physiological signals for cognitive load inference

  • To move beyond head-on pitting of different methods, and to guide future efforts in automated cognitive load inference, certain peculiarities of sensor data elicited during human cognitive engagement are listed below


Introduction

The availability of small, wearable, and low-cost sensors combined with advanced signal processing and information extraction capabilities is driving a revolution in mobile behavior monitoring for applications such as sports analytics, ambient-assisted living, and lifestyle monitoring [1]. The applicability of wearable sensors is enhanced by the extraction of subtle physiological information that can serve as the basis of psychological monitoring. Assessing psychophysiological information in everyday life remains challenging [2], since the association of wearable sensor data with human psychophysiological states is not as explicit as it is for physical states. Smartphones can count steps and distinguish human physical activities (e.g., running vs. walking), but cannot recognize emotions and other affective states (e.g., cognitive load). The inability of humans to recognize their own psychophysiological states in a …
