Abstract

Neonates cannot verbally communicate pain, which hinders the correct identification of this phenomenon. Several clinical scales have been proposed to assess pain, mainly based on the facial features of the neonate, but a better understanding of these features is still required, since several related works have shown the subjectivity of these scales. Meanwhile, computational methods have been developed to automate neonatal pain assessment and, although accurate, these methods still lack interpretability of their decision-making processes. To address this issue, we propose a facial feature extraction framework to investigate human and machine neonatal pain assessment, comparing the visual attention to facial features perceived by health professionals and parents of neonates with the most relevant features extracted by eXplainable Artificial Intelligence (XAI) methods applied to the VGG-Face and N-CNN deep learning architectures. Our experimental results show that the information extracted by the computational methods is clinically relevant to neonatal pain assessment but does not yet agree with the facial visual attention of health professionals and parents, suggesting that humans and machines can learn from each other to improve their decision-making processes. We believe that these findings may advance our understanding of how humans and machines code and decode neonatal facial responses to pain, enabling further improvements both in clinical scales widely used in practice and in face-based automatic pain assessment tools.
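
The abstract does not state which XAI method produced the relevance maps, so the snippet below is only a minimal sketch of how such maps can be obtained from a convolutional network: a Grad-CAM-style explanation applied to a generic VGG backbone in PyTorch. The model, target layer, and input here are illustrative placeholders, not the VGG-Face or N-CNN pipelines used in the paper.

    # Minimal Grad-CAM-style saliency sketch (illustrative only: the abstract does
    # not name the XAI method, and a generic VGG backbone stands in for
    # VGG-Face / N-CNN).
    import torch
    import torch.nn.functional as F
    from torchvision import models

    def grad_cam(model, image, target_layer):
        """Return a coarse relevance map over the input image."""
        acts, grads = [], []
        target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
        target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

        score = model(image).max()              # score of the top predicted class
        model.zero_grad()
        score.backward()

        weights = grads[0].mean(dim=(2, 3), keepdim=True)   # channel-wise importance
        cam = F.relu((weights * acts[0]).sum(dim=1))        # weighted feature maps
        cam = F.interpolate(cam.unsqueeze(1), size=image.shape[2:], mode="bilinear")
        return (cam / (cam.max() + 1e-8)).squeeze().detach()  # normalised to [0, 1]

    # Usage with placeholder inputs (a real pipeline would load a trained
    # pain-classification model and a cropped neonatal face image).
    model = models.vgg16(weights=None).eval()
    face = torch.rand(1, 3, 224, 224)
    relevance = grad_cam(model, face, model.features[28])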
