Abstract

Orthopedic disorders are common among horses and frequently lead to euthanasia that earlier detection could have prevented. These conditions often cause varying degrees of subtle, long-term pain. Training a visual pain recognition method on video data depicting such pain is challenging, since the resulting pain behavior is likewise subtle, sparsely occurring, and variable, making it difficult even for an expert human labeller to provide accurate ground truth for the data. We show that a model trained solely on a dataset of horses with acute experimental pain (where labeling is less ambiguous) can aid recognition of the more subtle displays of orthopedic pain. Moreover, we present a human expert baseline for the problem, as well as an extensive empirical study of various domain transfer methods and of what the pain recognition method trained on clean experimental pain detects in the orthopedic dataset. Finally, we accompany this with a discussion of the challenges posed by real-world animal behavior datasets and of how best practices can be established for similar fine-grained action recognition tasks. Our code is available at https://github.com/sofiabroome/painface-recognition.

Highlights

  • We present an explainability study of orthopedic pain detection in 25 video clips, first for a human expert baseline consisting of 27 equine veterinarians, and second for one of our neural networks trained to recognize acute pain

  • Multiple-instance learning (MIL) has been used extensively within deep learning for weakly supervised action localization (WSAL), where video-level class labels alone are used to determine the temporal extent of class occurrences in videos [23–27]

  • The convolutional LSTM layer was first introduced by Shi et al. [56] and replaces the matrix-multiplication transforms of the classical LSTM equations with convolutions
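The MIL setting in the second highlight can be illustrated with a minimal NumPy sketch: per-segment class scores are pooled into a single video-level score (here with k-max pooling), so only a video-level label is needed for training, while the per-segment scores can afterwards be thresholded to localize occurrences. The function names, the choice of k, and the thresholding rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mil_video_score(segment_scores, k=3):
    """Aggregate per-segment class scores into one video-level score
    by averaging the k highest-scoring segments (k-max MIL pooling)."""
    top_k = np.sort(np.asarray(segment_scores, dtype=float))[-k:]
    return float(top_k.mean())

def localize(segment_scores, threshold):
    """Return indices of segments whose score reaches the threshold,
    i.e. the segments proposed as temporal class occurrences."""
    return [i for i, s in enumerate(segment_scores) if s >= threshold]

# Example: five temporal segments of one video.
scores = np.array([0.1, 0.9, 0.2, 0.8, 0.7])
video_score = mil_video_score(scores, k=2)   # compared against the video label
proposals = localize(scores, threshold=0.7)  # weakly supervised localization
```

With max- or k-max pooling, gradients from the video-level loss flow only through the highest-scoring segments, which is what lets video-level labels supervise segment-level predictions.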

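The convolutional LSTM gating mentioned in the last highlight can be sketched as follows: each gate pre-activation is a convolution of the input frame and of the hidden state, rather than a matrix multiplication as in the classical LSTM. This is a single-channel NumPy sketch with biases and peephole terms omitted; the helper names and shapes are illustrative, not taken from the paper's code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d_same(x, k):
    """'Same'-padded single-channel 2D cross-correlation."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def convlstm_step(x, h, c, params):
    """One ConvLSTM time step: the input/forget/output gates and the cell
    update use convolutions of the input x and hidden state h, so the
    hidden and cell states keep the spatial layout of the frame."""
    Wxi, Whi, Wxf, Whf, Wxc, Whc, Wxo, Who = params
    i = sigmoid(conv2d_same(x, Wxi) + conv2d_same(h, Whi))  # input gate
    f = sigmoid(conv2d_same(x, Wxf) + conv2d_same(h, Whf))  # forget gate
    c_new = f * c + i * np.tanh(conv2d_same(x, Wxc) + conv2d_same(h, Whc))
    o = sigmoid(conv2d_same(x, Wxo) + conv2d_same(h, Who))  # output gate
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Because the states h and c are feature maps rather than flat vectors, stacking such layers preserves spatiotemporal structure, which is what makes the ConvLSTM suited to video input.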
Summary

Introduction

Equids are prey animals by nature, showing as few signs of pain as possible to avoid predators [1]. In many widely used datasets for action recognition [10–12], specific objects and scenery may add class information. This is not the case in our scenario, since the only valid evidence present in the video is the poses, movements, and facial expressions of the horse. Being an easier-to-observe special case, recordings of acute pain (applied for a short duration and completely reversibly, under ethically controlled conditions) have been used to investigate pain-related facial expressions [21] and for automatic equine pain recognition from video [22]. Until now, it has not been studied how this generalizes to the more clinically relevant orthopedic pain.

Related work
Weakly supervised action recognition and localization
Automatic pain recognition in animals
Pain in horses
Method
Datasets
Cross-validation within one domain
Treating weak labels as dense labels
Architectures
Domain transfer
Veterinary expert baseline experiment
27 Experts
Experiments
Domain transfer to EOP(j)
Why is the expert performance so low?
Significance of results
Expected generalization of results
Differences in pain biology and display of pain in PF and EOP(j)
Pain intensity and binarization of labels
Labels in the real world
Weakly supervised training on EOP(j)
Findings
Conclusions
Future work