Oscillatory components are discriminative features for mental workload monitoring from electrodermal activity (EDA) signals. Currently, most feature extraction techniques focus on time-domain or linear methods and rarely account for the non-stationary nature of the signal, individual differences, and recording noise. This study proposes a new time–frequency feature extraction method based on the Tunable Q-factor Wavelet Transform, which provides an effective quantification of the oscillatory behavior of a single transient. The extracted features are classified using a recurrent neural network, which offers the advantage of a dynamic mapping procedure. The proposed method has been evaluated on EDA data acquired during an arithmetic task with five workload levels exhibiting both subtle and distinct differences. In a comparative study, the effects of several decomposition parameters and cognitive factors have been explored. Experimental results show that the method achieves an average accuracy of 98.52% in discriminating three workload levels while reducing the number of features from 15 (extracted from five frequency sub-bands) to 3 (extracted from a low-frequency sub-band), i.e., an 80% feature reduction. The lowest-frequency sub-bands were also found to make the greatest contribution. The method outperforms existing approaches in separating three to five workload levels with both distinct and subtle differences by extracting effective oscillatory characteristics. Given the merits of a simple feature extraction procedure, the cost-effectiveness of a single-lead EDA signal, and high discriminability between several workload levels, it offers a favorable trade-off between performance and computational complexity, demonstrating its potential for online human-error prevention systems.
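As a rough illustration of the pipeline summarized above, the minimal Python sketch below decomposes an EDA epoch into frequency sub-bands, extracts three features per sub-band (15 in total), and then keeps only the three features from the lowest-frequency sub-band. It rests on several assumptions: PyWavelets' dyadic DWT stands in for the paper's Tunable Q-factor Wavelet Transform, and the specific features (energy, mean, standard deviation), the wavelet, the sampling rate, and the epoch length are hypothetical choices not taken from the paper.

```python
import numpy as np
import pywt  # assumption: PyWavelets DWT as a stand-in for the TQWT used in the paper


def subband_features(eda, wavelet="db4", levels=5):
    """Decompose a single-lead EDA epoch into frequency sub-bands and
    extract three illustrative features (energy, mean, std) per sub-band."""
    coeffs = pywt.wavedec(eda, wavelet, level=levels)
    # coeffs[0] is the lowest-frequency (approximation) sub-band;
    # coeffs[1:] are detail sub-bands ordered from low to high frequency.
    feats = []
    for c in coeffs[:levels]:           # five sub-bands -> 5 x 3 = 15 features
        feats.extend([np.sum(c ** 2),   # energy
                      np.mean(c),       # mean level
                      np.std(c)])       # spread
    return np.asarray(feats)


def reduced_features(eda):
    """Keep only the three features from the lowest-frequency sub-band,
    mirroring the 15 -> 3 reduction reported in the abstract."""
    return subband_features(eda)[:3]


# Example: one synthetic 60 s EDA epoch sampled at 4 Hz (hypothetical settings)
epoch = np.random.randn(240)
print(subband_features(epoch).shape)  # (15,)
print(reduced_features(epoch).shape)  # (3,)
```

The reduced feature vector would then be fed, per epoch, to a recurrent classifier; the recurrent network architecture itself is not specified in the abstract and is therefore not sketched here.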