Abstract

Stress is subjective and manifests differently from one person to another; as a result, generic models that classify stress status perform poorly. Building a person-specific model yields reliable classification, but it requires collecting new data to train a new model for every individual and needs periodic retraining because stress is dynamic. In this paper, a new binary classification approach (stressed vs. non-stressed) is proposed for a subject’s stress state, in which the inter-beat intervals (IBIs) extracted from a photoplethysmogram (PPG) are transformed into spatial images, and then into frequency domain images, according to the number of consecutive IBIs. A convolutional neural network (CNN) was then trained and validated to classify the person’s stress state. Three types of classification models were built: person-specific models, generic classification models, and calibrated generic classification models. The person-specific models achieved average training, validation, and test accuracies of 99.9%, 100%, and 99.8% with spatial images, and 99.68%, 98.97%, and 96.4% with frequency domain images. By adding 20% of the samples collected from each test subject to the training data, the calibrated generic models outperformed the generic models across both the spatial and frequency domain images, achieving average training, validation, and test accuracies of 99.6%, 99.9%, and 88.1% with the IBI spatial images, and 99.2%, 97.4%, and 87.6% with the frequency domain images. The main contribution of this study is the use of frequency domain images, generated from the spatial domain images of the IBIs extracted from the PPG signal, to classify an individual’s stress state by building person-specific and calibrated generic models.
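As a rough illustration of the image-generation step, the sketch below reshapes a window of consecutive IBIs into a square grayscale image and derives a frequency domain image from its 2-D FFT magnitude. The 256-IBI window, 16×16 image size, min-max scaling, and log-magnitude spectrum are illustrative assumptions, not the paper's published parameters.

    import numpy as np

    def ibi_to_spatial_image(ibi_window, side=16):
        # Reshape side*side consecutive IBIs into a 2-D grayscale
        # image, min-max scaled to the 0-255 range.
        img = np.asarray(ibi_window, dtype=float).reshape(side, side)
        img = (img - img.min()) / (img.max() - img.min() + 1e-12)
        return (img * 255).astype(np.uint8)

    def spatial_to_frequency_image(spatial_img):
        # 2-D FFT magnitude spectrum of the spatial image, shifted so
        # low frequencies sit at the center, log-scaled for contrast.
        spectrum = np.fft.fftshift(np.fft.fft2(spatial_img))
        mag = np.log1p(np.abs(spectrum))
        mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)
        return (mag * 255).astype(np.uint8)

    # Example: 256 consecutive IBIs (ms) -> one spatial/frequency image pair
    rng = np.random.default_rng(0)
    ibi_ms = rng.normal(800.0, 50.0, size=256)  # placeholder IBI series
    spatial = ibi_to_spatial_image(ibi_ms, side=16)
    freq = spatial_to_frequency_image(spatial)

Image pairs generated this way, labeled stressed or non-stressed, would then be fed to the CNN as ordinary single-channel inputs.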

Highlights

  • Stress is a mental, emotional, and physical reaction experienced when a person perceives demands that exceed their ability to cope

  • The calibrated generic classification method achieved average training, validation, and test accuracies of 99.6%, 99.9%, and 88.1% with the inter-beat interval (IBI) spatial images, and 99.2%, 97.4%, and 87.6% with the frequency domain images (a sketch of the calibration split follows this list)

  • The main contribution of this study is the use of frequency domain images, generated from the spatial domain images of the IBIs extracted from the PPG signal, to classify an individual’s stress state by building person-specific and calibrated generic models
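A minimal sketch of the calibration split referenced above, assuming NumPy arrays and a random per-subject split; the function name, the 20% default, and the seeding are illustrative, not the authors' exact procedure:

    import numpy as np

    def calibrate_generic_split(X_subj, y_subj, X_train, y_train,
                                calib_fraction=0.2, seed=0):
        # Move a random fraction of the held-out subject's samples into
        # the generic training set; the rest stays as that subject's test set.
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(X_subj))
        n_calib = int(calib_fraction * len(X_subj))
        calib, test = idx[:n_calib], idx[n_calib:]
        X_aug = np.concatenate([X_train, X_subj[calib]])
        y_aug = np.concatenate([y_train, y_subj[calib]])
        return X_aug, y_aug, X_subj[test], y_subj[test]

The augmented set retrains (or fine-tunes) the generic CNN, and the reduced test set measures the calibrated model's accuracy for that subject.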

Summary

Introduction

Stress is a mental, emotional, and physical reaction experienced when a person perceives demands that exceed their ability to cope. Kizito et al. proposed a hybrid stress prediction method in which adding 100 person-specific samples to the generic model's training data raised its accuracy from 42.5% to 95.2%. They tested the approach on two different datasets and found that the calibrated stress detection model outperformed the generic one. Jing et al. proposed a classification model for a driver’s stress level using IBI images derived from the ECG signal and a CNN. They compared the accuracy of this approach with an ANN trained on time-domain features: mean IBI, root mean square of successive differences between adjacent IBIs (RMSSD), and standard deviation of IBIs (SDNN). In this paper, a new stress classification approach is proposed that classifies an individual's stress state as stressed or non-stressed by converting spatial images of the inter-beat intervals of a PPG signal into frequency domain images; these images are then used to train several CNN models.
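For reference, the time-domain features named above are standard HRV measures; a minimal computation (function name and millisecond units assumed for illustration):

    import numpy as np

    def time_domain_hrv(ibi_ms):
        # Mean IBI, SDNN (standard deviation of IBIs), and RMSSD
        # (root mean square of successive IBI differences), all in ms.
        ibi = np.asarray(ibi_ms, dtype=float)
        diffs = np.diff(ibi)
        return {
            "mean_ibi": ibi.mean(),
            "sdnn": ibi.std(ddof=1),
            "rmssd": np.sqrt(np.mean(diffs ** 2)),
        }

In the cited comparison, features like these feed a conventional ANN, whereas the image-based approach lets the CNN learn its own representation from the IBI images.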

Materials and Data
Proposed Stress Image-Based Detection Model
Spatial Domain Image Generation
Frequency Domain Image Generation
Deep Learning-Based Classification
Results
Discussion