Accurate detection of emotions from multi-modal physiological signals provides useful information for a range of applications. Numerous computational approaches have been proposed for the precise analysis of emotion types, but problems such as degraded signal quality, long processing times, and large storage requirements reduce classification accuracy. Hence, this research classifies multi-modal physiological signals using machine learning and deep learning (DL) models. The proposed work applies the Hierarchical Extreme Puzzle Learning Machine (HEPLM) approach to classify the embedded emotions. It comprises four steps: pre-processing, signal-to-image conversion, feature extraction, and classification. Pre-processing uses the Savitzky-Golay smoothing filter (SGF) to remove noise and improve signal quality. A hybrid of wavelet scattering and the synchrosqueezed wavelet transform converts each signal into an image. In the feature extraction step, salient features are extracted with ResNet-152 and the Inception v3 model and combined through an ensemble approach. HEPLM performs the final classification, combining the Puzzle Optimization Algorithm (POA) with a Hierarchical Extreme Learning Machine (HELM) to reduce feature dimensionality and improve classification accuracy. The proposed work uses the Wearable Stress and Affect Detection (WESAD) dataset of multi-modal physiological signals. Performance is evaluated with metrics such as accuracy, recall, precision, F1 score, and kappa. The proposed method outperforms existing methods for emotion classification, achieving 96.29% accuracy.
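To illustrate the pre-processing step, the minimal sketch below applies Savitzky-Golay smoothing to a synthetic noisy signal with SciPy. The 700 Hz sampling rate matches WESAD's chest-worn sensor, but the synthetic signal, window length, and polynomial order are illustrative assumptions; the abstract does not report the paper's actual filter settings.

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 700                        # sampling rate of WESAD's chest-worn sensor, Hz
t = np.arange(0, 5, 1 / fs)     # 5 seconds of samples

# Stand-in for one noisy physiological channel (not real WESAD data).
raw = np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.random.randn(t.size)

# Savitzky-Golay smoothing: fit a low-order polynomial over a sliding window.
# window_length=31 and polyorder=3 are assumed values for illustration only.
smoothed = savgol_filter(raw, window_length=31, polyorder=3)
```

Because it fits a local polynomial rather than simply averaging, Savitzky-Golay filtering suppresses high-frequency noise while preserving peak shape and width better than a plain moving average, which is why it is a common pre-processing choice for physiological signals.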