Abstract

Existing approaches to pain assessment are mostly based on facial expressions, which cannot fully capture the expressiveness of pain. Although a few researchers have implemented multimodal approaches, they have not considered body movement for pain assessment. Moreover, assessing pain in an unconstrained setting is challenging due to occlusions and poor illumination. To overcome these challenges, we present a multi-stream framework for behavioural multiparametric pain assessment. To detect pain from face images, a domain adaptation technique with a stacked BiLSTM network is implemented for joint spatio-temporal modelling. Further, an FCN-based BiLSTM model is implemented to learn pain-related body dynamics. To learn pain-related features from pain sounds, VGG-based features are extracted and a linear model is used for pain classification. Finally, a decision-level fusion approach is implemented to learn jointly from all pain classifiers. The proposed pain assessment system achieved 92% accuracy when evaluated on a self-created pain dataset.
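The abstract does not state the exact fusion rule, but decision-level fusion of the three streams can be illustrated with a minimal sketch, assuming a weighted-average (soft-voting) combination of per-modality class probabilities; the function name, weights, and example probabilities below are illustrative, not from the paper.

```python
import numpy as np

def decision_level_fusion(probs, weights=None):
    """Fuse per-modality class-probability vectors by a weighted average.

    probs: one probability vector per modality (face, body, sound).
    weights: optional per-modality weights (defaults to uniform).
    Returns the fused class index and the fused probability vector.
    """
    probs = np.asarray(probs, dtype=float)
    if weights is None:
        weights = np.ones(len(probs))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()      # normalise so fused probs sum to 1
    fused = weights @ probs                # weighted average over modalities
    return int(fused.argmax()), fused

# Hypothetical outputs for classes [no-pain, pain] from the three streams:
face_p  = [0.30, 0.70]   # face stream (stacked BiLSTM)
body_p  = [0.45, 0.55]   # body-movement stream (FCN-based BiLSTM)
sound_p = [0.20, 0.80]   # sound stream (VGG features + linear model)

label, fused = decision_level_fusion([face_p, body_p, sound_p])
```

With uniform weights this reduces to averaging the three probability vectors; per-modality weights could instead be tuned on validation data to reflect the reliability of each stream.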
