Abstract

Automated detection of pain intensity from facial expressions remains a significant challenge in medical diagnostics and health informatics. Expert systems that analyse facial expression images with automated machine learning algorithms offer a promising approach to pain intensity analysis in the health domain. Deep neural networks and emerging machine learning techniques have made significant progress in feature identification, mapping, and modelling of pain intensity from facial images, with great potential to aid health practitioners in diagnosing certain medical conditions. Consequently, considerable research in pain recognition and management has applied deep learning algorithms to facial expression datasets, both to detect pain intensity as a binary classification and to distinguish pain from non-pain faces. However, the volume of research on identifying multi-class pain intensity levels remains rather limited. This paper reports a new enhanced deep neural network framework designed for the effective detection of pain intensity at four threshold levels from facial expression images. To explore the robustness of the proposed algorithms, the UNBC-McMaster Shoulder Pain Expression Archive Database, comprising human facial images, was first balanced and then used for training and testing the classification model, with a fine-tuned VGG-Face pre-trained network as the feature extractor. To reduce the dimensionality of the classification model's input data and extract the most relevant features, Principal Component Analysis was applied, improving computational efficiency.
The pre-screened features, used as model inputs, are then fed into a new enhanced joint hybrid CNN-BiLSTM (EJH-CNN-BiLSTM) deep learning algorithm, in which convolutional neural networks are linked to joint bidirectional LSTMs for multi-class pain classification. The resulting EJH-CNN-BiLSTM model, tested on estimating four different levels of pain, achieved a good degree of accuracy across several performance evaluation techniques. The results indicate that the enhanced EJH-CNN-BiLSTM classification algorithm is a potential tool for detecting multi-class pain intensity from facial expression images, and can therefore be adopted as an artificial intelligence tool in medical diagnostics for automatic pain detection and subsequent pain management of patients.
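The dimensionality-reduction step described above can be illustrated with a minimal PCA sketch. This is not the paper's implementation: the function name, feature dimensions, and number of retained components are hypothetical, and the paper does not specify its exact PCA configuration; the sketch only shows how high-dimensional face descriptors (such as VGG-Face embeddings) could be projected onto their principal components before classification.

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project feature vectors onto their top principal components.

    Illustrative sketch only; hypothetical helper, not the paper's code.
    features: array of shape (n_samples, n_features).
    """
    # Centre the data so the covariance matrix reflects variance around the mean
    mean = features.mean(axis=0)
    centred = features - mean
    # Eigendecomposition of the (symmetric) covariance matrix
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; keep the largest ones
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:n_components]]
    # Project the centred data onto the retained components
    return centred @ components

# Hypothetical example: 100 face descriptors of dimension 512 reduced to 32
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 512))
X_reduced = pca_reduce(X, n_components=32)
print(X_reduced.shape)  # (100, 32)
```

Reducing each descriptor in this way shrinks the classifier's input size, which is the computational-efficiency gain the abstract attributes to PCA.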
