Abstract
Stress is a complex issue with wide-ranging physical and psychological impacts on human daily performance. In particular, acute stress detection is becoming a valuable application in contextual human understanding. Two common approaches to training a stress detection model are the subject-dependent and subject-independent training methods. Although subject-dependent training has proven to be the more accurate approach to building stress detection models, subject-independent training is more practical and cost-efficient, as it facilitates the deployment of stress level detection and management systems on consumer-grade wearable devices without requiring additional training data from end users. To improve the performance of subject-independent stress detection models, in this paper we introduce a stress-related bio-signal processing pipeline with a simple neural network architecture that uses statistical features extracted from multimodal contextual sensing sources, namely Electrodermal Activity (EDA), Blood Volume Pulse (BVP), and Skin Temperature (ST), captured by a consumer-grade wearable device. Using our proposed model architecture, we compare the accuracy of stress detection models trained on measures from each individual signal source against a model employing the fusion of multiple sensor sources. Extensive experiments on the publicly available WESAD dataset demonstrate that our proposed model outperforms conventional methods and provides a \(1.63\%\) higher mean accuracy score than the state-of-the-art model while maintaining a low standard deviation. Our experiments also show that combining features from multiple sources produces more accurate predictions than using any single sensor source individually.

Keywords: Affective computing · Stress detection model · Human context · Multimodal sensing
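As a rough illustration of this pipeline, the sketch below extracts sliding-window statistical features from each modality, fuses them by concatenation, and trains a small feed-forward classifier. It is a minimal sketch on synthetic data: the sampling rates follow typical wrist-worn sensors, but the window lengths, feature set, hidden-layer sizes, and labels are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_stats(signal, fs, win_s=60, step_s=30):
    """Statistical features (mean, std, min, max) over sliding windows.

    Window and step lengths are illustrative assumptions, not the
    paper's reported values.
    """
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.min(), w.max()])
    return np.asarray(feats)

# Synthetic stand-ins for wrist-worn EDA (4 Hz), BVP (64 Hz), ST (4 Hz).
rng = np.random.default_rng(0)
n_sec = 600
eda = rng.normal(size=4 * n_sec)
bvp = rng.normal(size=64 * n_sec)
st = rng.normal(size=4 * n_sec)

# Early fusion: concatenate per-window features from all three modalities.
X = np.hstack([window_stats(eda, 4), window_stats(bvp, 64), window_stats(st, 4)])
y = rng.integers(0, 2, size=len(X))  # placeholder stress / non-stress labels

# A simple feed-forward network; the hidden sizes are assumptions.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))
```

Dropping any of the three `window_stats(...)` calls from the `np.hstack` line yields the corresponding single-source model, which is how the fusion-versus-individual-signal comparison described above can be reproduced in this simplified setting.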