Abstract

Background: The evaluation of hand function after spinal cord injury (SCI) is typically conducted in clinical settings, which may not accurately reflect hand function in the real world, limiting the efficacy assessment of new treatments. Wearable cameras, which capture egocentric (first-person) video, offer a novel means of evaluating hand function in non-clinical environments. However, manually processing large quantities of complex video data is impractical, highlighting the need for automated analysis. The objective of this study was to automatically identify distinct hand postures in egocentric video using unsupervised machine learning.

Methods: Seventeen participants with cervical SCI recorded activities of daily living in a home simulation laboratory. A hand pose estimation algorithm was applied to detected hands to determine 2D joint locations, which were then lifted to 3D coordinates. The resulting hand posture representations were subjected to several clustering techniques. For evaluation, hand grasps were manually labelled into four categories: power, precision, intermediate, and non-prehensile.

Results: K-Means clustering consistently exhibited the highest Silhouette score, which reflects the presence of discrete clusters in the data. When compared against the manual annotations, Spectral Clustering applied to a feature space of 2D pose estimates with confidence scores yielded the best performance, as quantified by maximum match (0.48), Fowlkes-Mallows score (0.46), and normalized mutual information (0.22).

Conclusions: This is the first attempt to develop an unsupervised, data-driven hand taxonomy for individuals with SCI using wearable technology. The findings suggest that the method is capable of grouping similar hand grasps.
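To illustrate the kind of analysis the abstract describes, the following is a minimal Python sketch of the clustering and evaluation step, assuming a scikit-learn-style workflow. The feature matrix, grasp labels, joint count, and cluster count are synthetic placeholders for illustration only; they are not the study's data or its exact pipeline.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.metrics import (
    silhouette_score,
    fowlkes_mallows_score,
    normalized_mutual_info_score,
)

# Hypothetical stand-in for the study's feature matrix: one row per
# detected hand, columns = flattened 2D joint coordinates plus a
# per-joint confidence score (21 joints x (x, y, conf) = 63 features).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 63))

# Hypothetical manual grasp labels (0=power, 1=precision,
# 2=intermediate, 3=non-prehensile), used only for external validation.
y_true = rng.integers(0, 4, size=500)

# Internal validation: the Silhouette score measures cluster separation
# without reference labels (the abstract reports K-Means scoring highest).
kmeans_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("K-Means silhouette:", silhouette_score(X, kmeans_labels))

# External validation against the manual annotations, using the same
# metrics the abstract names: Fowlkes-Mallows and normalized mutual info.
spectral_labels = SpectralClustering(
    n_clusters=4, affinity="nearest_neighbors", random_state=0
).fit_predict(X)
print("Fowlkes-Mallows:", fowlkes_mallows_score(y_true, spectral_labels))
print("NMI:", normalized_mutual_info_score(y_true, spectral_labels))

# "Maximum match": the best accuracy over one-to-one cluster-to-label
# mappings, found with the Hungarian algorithm on the contingency matrix.
cont = np.zeros((4, 4))
for t, p in zip(y_true, spectral_labels):
    cont[t, p] += 1
row, col = linear_sum_assignment(-cont)  # negate to maximize total matches
print("Maximum match:", cont[row, col].sum() / len(y_true))
```

On synthetic random features these metrics will hover near chance level; with real pose features, discrete grasp clusters would pull the internal and external scores apart from that baseline, which is the comparison the study reports.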
