Identifying the cognitive workload of operators is crucial in complex human-automation collaboration systems. An excessive workload can lead to fatigue or accidents, while an insufficient workload may diminish situational awareness and efficiency. However, existing supervised learning-based methods for workload recognition are ineffective when dealing with imperfect input data, such as missing or noisy data, which frequently arise in real applications. This study introduces a robust Electroencephalogram (EEG)-enabled cognitive workload recognition model using self-supervised learning. The proposed method, DMAEEG, combines the training strategies of denoising autoencoders and masked autoencoders, demonstrating strong robustness against noisy and incomplete data. More specifically, we adopt a temporal convolutional network and multi-head self-attention mechanisms as the backbone, effectively capturing both the temporal and spatial features of EEG. Extensive experiments are conducted to verify the effectiveness and robustness of the proposed method on an open dataset and a self-collected dataset. The results indicate that DMAEEG outperforms other state-of-the-art methods across various evaluation metrics. Moreover, DMAEEG maintains high accuracy in workload inference even when EEG signals are corrupted with a high masking ratio or strong noise. This signifies its superiority in capturing robust intrinsic patterns from imperfect EEG data. The proposed method significantly contributes to decoding EEG signals for workload recognition in real-world applications, thereby enhancing the safety and reliability of human-automation interactions.
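The combined denoising + masked autoencoder pretext described above can be illustrated with a minimal corruption-and-loss sketch. This is a hypothetical illustration, not the paper's implementation: the function names (`corrupt_eeg`, `masked_mse`) and the specific masking ratio and noise level are assumptions for demonstration.

```python
import numpy as np

def corrupt_eeg(x, mask_ratio=0.5, noise_std=0.1, rng=None):
    """Corrupt a (channels, time) EEG segment for self-supervised pretraining:
    add Gaussian noise (denoising-autoencoder style) and zero out a random
    fraction of time points (masked-autoencoder style). Hypothetical sketch."""
    rng = np.random.default_rng(rng)
    _, t = x.shape
    n_mask = int(round(mask_ratio * t))
    masked_idx = rng.choice(t, size=n_mask, replace=False)
    keep = np.ones(t, dtype=bool)
    keep[masked_idx] = False                 # False = masked-out time points
    x_corrupt = x + rng.normal(0.0, noise_std, size=x.shape)  # additive noise
    x_corrupt[:, ~keep] = 0.0                # zero out masked time points
    return x_corrupt, keep

def masked_mse(x, x_hat, keep):
    """MAE-style reconstruction loss computed only on masked positions."""
    return float(np.mean((x[:, ~keep] - x_hat[:, ~keep]) ** 2))
```

In a full pipeline, `x_corrupt` would be fed to the encoder (e.g., the TCN + self-attention backbone), and the decoder would be trained to reconstruct the clean signal `x` at the masked positions via `masked_mse`.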