Abstract

Objective: Pain assessment is of great importance in both clinical research and patient care. Facial expression analysis is becoming a key part of pain detection because it is convenient, automatic, and real-time. The aim of this study is to present a cold pain intensity estimation experiment, investigate the importance of spatial-temporal information in facial-expression-based cold pain estimation, and compare the performance of a personalized model with that of a generalized model. Methods: A cold pain experiment was carried out and facial expressions from 29 subjects were extracted. Three different architectures (Inception V3, VGG-LSTM, and Convolutional LSTM) were used to estimate three intensities of cold pain: no pain, moderate pain, and severe pain. Architectures that use sequential information were compared with a single-frame architecture, showing the importance of spatial-temporal information in pain estimation. The performances of the personalized model and the generalized model were also compared. Results: A mean F1 score of 79.48% was achieved using the Convolutional LSTM with the personalized model. Conclusion: This study demonstrates the potential of estimating cold pain intensity from facial expression analysis and shows that the personalized spatial-temporal framework performs better in cold pain intensity estimation. Significance: This cold pain intensity estimator could allow convenient, automatic, and real-time use to provide continuous objective pain intensity estimations of subjects and patients.

Highlights

  • Pain is an unpleasant sensory and emotional experience due to actual or potential tissue damage or injury [1]

  • We investigate the feasibility of using three deep learning architectures (Inception V3, VGG-Long Short-Term Memory networks (LSTM), and Convolutional LSTM (C-LSTM)) to automatically estimate cold pain intensity in videos based on facial expressions

  • We investigated three main deep learning architectures: a deep Convolutional Neural Network (CNN), Inception V3 [29], which took a single frame as input; a CNN+LSTM architecture [15], in which VGG-16 extracted spatial information and an LSTM exploited the temporal information; and the fully recurrent C-LSTM (a minimal sketch of the CNN+LSTM variant follows this list)
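
As an illustration only, the sketch below shows one way the CNN+LSTM (VGG-LSTM) variant could be assembled in PyTorch: VGG-16 embeds each face frame, an LSTM aggregates the per-frame embeddings over time, and a linear head outputs logits for the three pain intensities. The framework choice, the 224x224 input size, the 256-unit hidden state, the 16-frame clip length, and all class/variable names are illustrative assumptions, not details taken from the study.

import torch
import torch.nn as nn
from torchvision import models

class VGGLSTM(nn.Module):
    """Sketch of a CNN+LSTM pain-intensity estimator (assumed layout, not the authors' exact model):
    VGG-16 extracts per-frame spatial features, an LSTM aggregates them over time,
    and a linear head predicts one of three classes (no / moderate / severe pain)."""

    def __init__(self, hidden_size=256, num_classes=3):
        super().__init__()
        vgg = models.vgg16(weights=None)          # ImageNet weights would be a natural choice in practice
        self.features = vgg.features              # convolutional backbone
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.fc = nn.Sequential(*list(vgg.classifier.children())[:-1])  # 4096-d frame embedding
        self.lstm = nn.LSTM(input_size=4096, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):                     # clips: (batch, time, 3, 224, 224)
        b, t = clips.shape[:2]
        x = clips.flatten(0, 1)                   # fold time into batch for the per-frame CNN
        x = self.fc(self.pool(self.features(x)).flatten(1))
        x = x.view(b, t, -1)                      # restore the temporal axis
        _, (h_n, _) = self.lstm(x)                # last hidden state summarises the clip
        return self.head(h_n[-1])                 # logits over the 3 pain-intensity classes

# Usage on a dummy batch of 16-frame face clips
model = VGGLSTM()
logits = model(torch.randn(2, 16, 3, 224, 224))  # -> shape (2, 3)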


Introduction

Pain is an unpleasant sensory and emotional experience due to actual or potential tissue damage or injury [1]. Pain management and assessment are of importance in health and patient care. Pain is measured by patients' self-reported information. The three most common measurements of self-reported assessment are visual analog scales (VAS), numerical rating scales (NRS), and verbal rating scales (VRS) [2]. Self-reported assessment is considered as a gold standard.
