Abstract

Introduction/Background

Since nearly all areas of clinical care involve face-to-face interaction, it is important for learners to be able to recognize pain in patients. While patient self-report (i.e., pain scales) can be a helpful tool in the clinical toolbox, in most cases clinicians must rely on keen sensory observation skills.1 Indeed, pain-rating scales may not always be reliable, owing to variability across patients as well as across clinicians' own interpretations of the scale.2 Thus, there is a desire within the clinical education community to help hone the pain observation skills of learners, yet few training tools exist. To help bridge this gap, we have developed technology that may one day become part of an educational tool: a method for expressing pain on virtual patients using a naturalistic database of people expressing pain.

Methods

We extracted source videos from the UNBC-McMaster Pain Archive, a fully labeled, naturalistic data set of 200 video sequences from 25 participants suffering from shoulder pain. Participants performed range-of-motion tests on both their affected and unaffected limbs under the instruction of a physiotherapist. Videos were labeled by facial expression recognition experts with a pain score from zero to twelve.3 We included all source videos with a pain score greater than three. For each video, we used a CLM-based facial-feature tracker4 to track 68 points on the face. Each facial expression can be decomposed into component facial movements called action units (AUs),5 and we mapped these movements onto a virtual patient in real time using a technique called performance-driven animation, so that the virtual patient's expression directly matched that of the real person in the source video (a sketch of this pipeline appears after the abstract). We used several avatars from the Steam Source SDK6 as our virtual patient representations. To validate people's ability to identify pain on virtual avatars animated with our method, we conducted an online study with 50 participants (34 female, 16 male; mean age = 38.6). To test the flexibility of our system, we also created three avatars (male, female, and androgynous) to examine whether avatar gender had any significant effect on participants' perception of pain.

Results

Participants were able to accurately identify naturalistic facial expressions of pain when expressed by a virtual patient using performance-driven pain synthesis (overall pain accuracy = 67.33%). We found no significant difference in participants' accuracy across the three avatar types (all p > .05, with accuracies of 66.67%, 65.33%, and 70% for the female, male, and androgynous avatars, respectively; see the worked example below).

Conclusion

These results are encouraging, suggesting that virtual patients expressing pain via performance-driven animation may be useful as part of a general training tool for learners. One limitation of this study is that it was conducted with the general population, who may interpret pain differently than clinicians.7 Additional work is needed to explore this difference, and we intend to conduct a study with clinical students in the future.
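The abstract does not give implementation details for the tracking-to-animation step, so the following is a minimal sketch of how performance-driven animation of this kind can work: the 68 tracked landmarks are reduced to rough intensities for a few pain-related action units, which are then mapped to avatar blendshape weights each frame. The landmark indices (iBUG 68-point layout), the geometric AU proxies, the gain, and the blendshape names are all assumptions made for illustration, not the authors' actual pipeline.

```python
import numpy as np

# Indices into the common iBUG 68-point landmark layout (an assumption;
# the abstract does not specify the tracker's point ordering).
INNER_BROWS = [21, 22]            # inner eyebrow points
UPPER_EYELIDS = [37, 38, 43, 44]  # upper lids, left then right eye
LOWER_EYELIDS = [41, 40, 47, 46]  # matching lower-lid points
NOSE_BRIDGE = 27                  # stable reference point
CHIN = 8                          # used for scale normalization

def au_intensities(frame, neutral):
    """Estimate rough intensities for two pain-related action units
    (AU4 brow lowerer, AU43 eye closure) from (68, 2) landmark arrays.

    `neutral` is a baseline frame for the same subject; intensities are
    normalized by face size so they are comparable across videos. These
    geometric proxies are illustrative only, not the paper's AU model.
    """
    scale = np.linalg.norm(neutral[CHIN] - neutral[NOSE_BRIDGE])

    def brow_height(lm):
        return np.mean([abs(lm[i][1] - lm[NOSE_BRIDGE][1]) for i in INNER_BROWS])

    def eye_opening(lm):
        return np.mean([abs(lm[u][1] - lm[l][1])
                        for u, l in zip(UPPER_EYELIDS, LOWER_EYELIDS)])

    # AU4: brows pull down toward the nose bridge -> brow height decreases.
    au4 = max(0.0, (brow_height(neutral) - brow_height(frame)) / scale)
    # AU43: eyes squeeze shut -> lid-to-lid distance decreases.
    au43 = max(0.0, (eye_opening(neutral) - eye_opening(frame)) / scale)
    return {"AU4": au4, "AU43": au43}

def to_blendshape_weights(aus, gain=8.0):
    """Map AU intensities to avatar blendshape weights in [0, 1].

    The gain and the "flex_*" names are placeholders; a real rig (e.g.
    Source SDK flex controllers) would be calibrated per avatar.
    """
    return {f"flex_{name.lower()}": min(1.0, gain * value)
            for name, value in aus.items()}

if __name__ == "__main__":
    # Stand-in for tracker output: a neutral frame plus a perturbed frame.
    rng = np.random.default_rng(0)
    neutral = rng.uniform(0, 100, size=(68, 2))
    frame = neutral + rng.normal(0, 0.5, size=(68, 2))
    print(to_blendshape_weights(au_intensities(frame, neutral)))
```

Driving the avatar from AU intensities rather than raw landmark positions is what makes the mapping retargetable: the same intensities can drive the male, female, and androgynous rigs without per-avatar landmark correspondence.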
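The Results report only percentage accuracies and "all p > .05"; the specific test and per-avatar trial counts are not stated in the abstract. The snippet below reproduces a comparison of that shape with a chi-square test over assumed counts (75 trials per avatar type, a hypothetical figure), purely for illustration.

```python
# Hypothetical re-creation of the avatar-type comparison; the trial
# count and the choice of chi-square test are assumptions, not details
# taken from the abstract.
from scipy.stats import chi2_contingency

TRIALS_PER_AVATAR = 75  # assumed, not reported
accuracies = {"female": 0.6667, "male": 0.6533, "androgynous": 0.70}

# 2x3 contingency table: correct vs. incorrect responses per avatar type.
correct = [round(a * TRIALS_PER_AVATAR) for a in accuracies.values()]
incorrect = [TRIALS_PER_AVATAR - c for c in correct]

chi2, p, dof, _ = chi2_contingency([correct, incorrect])
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")  # small differences -> p > .05
```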
