Abstract

Stroke survivors often experience unilateral sensorimotor impairment. The restoration of upper limb function is an important determinant of quality of life after stroke. Wearable technologies that can measure hand function at home are needed to assess the impact of new interventions. Egocentric cameras combined with computer vision algorithms have been proposed as a means of capturing hand use in unconstrained environments, and have shown promising results in this application for individuals with cervical spinal cord injury (cSCI). The objective of this study was to examine the generalizability of this approach to individuals who have experienced a stroke. An egocentric camera was used to capture the hand use (hand-object interactions) of 6 stroke survivors performing daily tasks in a home simulation laboratory. The interaction detection classifier, previously trained on 9 individuals with cSCI, was applied to detect hand use in the stroke survivors. The processing pipeline consisted of hand detection, hand segmentation, feature extraction, and interaction detection. The resulting average F1 scores for the affected and unaffected hands were 0.66 ± 0.25 and 0.80 ± 0.15, respectively, indicating that the approach is feasible and has the potential to generalize to stroke survivors. Using stroke-specific training data may further increase the accuracy obtained for the affected hand.
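To make the pipeline described in the abstract concrete, the following is a minimal Python sketch of per-frame processing (hand detection, hand segmentation, feature extraction, interaction classification) and frame-level F1 evaluation against manual annotations. It is an illustration only: the function names, feature choices, and classifier interface are hypothetical placeholders and do not reflect the authors' actual implementation.

```python
# Illustrative sketch of the processing pipeline described in the abstract:
# hand detection -> hand segmentation -> feature extraction -> interaction detection.
# All model-specific functions below are hypothetical placeholders.
from dataclasses import dataclass
from typing import List
import numpy as np
from sklearn.metrics import f1_score


@dataclass
class FrameResult:
    interacting: bool  # predicted hand-object interaction for one video frame


def detect_hands(frame: np.ndarray) -> List[np.ndarray]:
    """Hypothetical hand detector: returns image crops of each visible hand."""
    raise NotImplementedError


def segment_hand(crop: np.ndarray) -> np.ndarray:
    """Hypothetical segmentation step: binary mask of hand pixels within the crop."""
    raise NotImplementedError


def extract_features(crop: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Hypothetical feature extraction, e.g., shape and motion cues around the hand."""
    raise NotImplementedError


def classify_interaction(features: np.ndarray, classifier) -> bool:
    """Binary interaction decision from a pre-trained classifier (scikit-learn style)."""
    return bool(classifier.predict(features.reshape(1, -1))[0])


def process_video(frames, classifier) -> List[FrameResult]:
    """Run the full pipeline on a sequence of egocentric video frames."""
    results = []
    for frame in frames:
        interacting = False
        for crop in detect_hands(frame):
            mask = segment_hand(crop)
            feats = extract_features(crop, mask)
            if classify_interaction(feats, classifier):
                interacting = True
                break
        results.append(FrameResult(interacting))
    return results


def evaluate(predictions: List[FrameResult], ground_truth: List[bool]) -> float:
    """F1 score of frame-level interaction predictions against manual annotations."""
    y_pred = [r.interacting for r in predictions]
    return f1_score(ground_truth, y_pred)
```

In a study like this one, `evaluate` would be applied separately to the affected and unaffected hands of each participant, yielding per-hand F1 scores comparable to the averages reported above.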
