Abstract
The analysis of facial movements in patients with amyotrophic lateral sclerosis (ALS) can provide important information for early diagnosis and for tracking disease progression. However, the use of expensive motion tracking systems has limited the clinical utility of the assessment. In this study, we propose a marker-less, video-based approach to discriminate patients with ALS from neurotypical subjects. Facial movements were recorded using a depth sensor (Intel® RealSense SR300) during speech and non-speech tasks. A small set of kinematic features of the lips was extracted in order to mirror the perceptual evaluation performed by clinicians, considering the following aspects: (1) range of motion, (2) speed of motion, (3) symmetry, and (4) shape. Our results demonstrate that it is possible to distinguish patients with ALS from neurotypical subjects with high overall accuracy (up to 88.9%) during repetitions of sentences, syllables, and labial non-speech movements (e.g., lip spreading). This paper provides a strong rationale for the development of automated systems to detect neurological diseases from facial movements. This work has a high social impact, as it opens new possibilities to develop intelligent systems that support clinicians in their diagnosis, introduce novel standards for assessing oro-facial impairment in ALS, and track disease progression remotely from home.
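The four perceptual aspects listed above could be operationalized as simple kinematic measures over tracked lip landmarks. The sketch below is purely illustrative and is not the authors' implementation: the landmark layout (four points: left corner, upper lip, right corner, lower lip), the frame rate, and the specific feature definitions are all assumptions chosen to show one plausible mapping from landmark trajectories to the four feature families.

```python
# Illustrative sketch, NOT the paper's implementation.
# Input: a time series of 2-D lip landmark positions, shape (T, N, 2),
# with assumed landmark order: 0 = left corner, 1 = upper lip,
# 2 = right corner, 3 = lower lip. Units and fps are hypothetical.
import numpy as np

def lip_features(landmarks, fps=60.0, left=0, top=1, right=2, bottom=3):
    """Compute one example feature per perceptual aspect."""
    lm = np.asarray(landmarks, dtype=float)
    # (1) Range of motion: peak-to-peak vertical lip aperture.
    aperture = lm[:, bottom, 1] - lm[:, top, 1]
    rom = aperture.max() - aperture.min()
    # (2) Speed of motion: maximum frame-to-frame landmark speed.
    disp = np.linalg.norm(np.diff(lm, axis=0), axis=2)  # (T-1, N)
    peak_speed = disp.max() * fps if len(lm) > 1 else 0.0
    # (3) Symmetry: mean absolute difference between the displacement
    #     of the left and right lip corners from their rest positions.
    l_disp = np.linalg.norm(lm[:, left] - lm[0, left], axis=1)
    r_disp = np.linalg.norm(lm[:, right] - lm[0, right], axis=1)
    asymmetry = float(np.mean(np.abs(l_disp - r_disp)))
    # (4) Shape: mean width-to-height ratio of the lip opening.
    width = lm[:, right, 0] - lm[:, left, 0]
    shape_ratio = float(np.mean(width / np.maximum(aperture, 1e-6)))
    return {"range_of_motion": float(rom), "peak_speed": float(peak_speed),
            "asymmetry": asymmetry, "shape_ratio": shape_ratio}
```

A feature vector of this kind, computed per task (sentence, syllable, or non-speech movement), could then feed a standard classifier to separate the two groups.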