Abstract
We propose a view-invariant method for assessing the quality of human movement that does not rely on skeleton data. Our end-to-end network, VI-Net, consists of two stages: first, a view-invariant trajectory descriptor for each body joint is generated from RGB images; then, the collection of trajectories for all joints is processed by an adapted, pre-trained 2D convolutional neural network (CNN) (e.g., VGG-19 or ResNeXt-50) to learn the relationships amongst the different body parts and deliver a score for the movement quality. We release QMAR, the only publicly available multi-view, non-skeleton, non-mocap rehabilitation movement dataset, and provide results for both cross-subject and cross-view scenarios on this dataset. We show that VI-Net achieves an average rank correlation of 0.66 cross-subject and 0.65 on unseen views when trained on only two views. We also evaluate the proposed method on the single-view rehabilitation dataset KIMORE and obtain a rank correlation of 0.66 against a baseline of 0.62.
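The evaluation metric reported above, rank correlation, compares predicted quality scores against ground-truth ratings by their rank order rather than their raw values, which makes it robust to differences in scoring scale. A minimal sketch of Spearman's rank correlation is shown below; the scores in the usage example are hypothetical, and the paper's exact evaluation protocol is not reproduced here.

```python
def ranks(values):
    """Return 1-based ranks for each value, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical predicted quality scores vs. clinician ratings.
predicted = [0.8, 0.5, 0.9, 0.3]
clinician = [4, 2, 5, 1]
rho = spearman(predicted, clinician)
```

A rho of 1.0 indicates identical rankings, -1.0 fully reversed rankings, and values around 0.66 (as reported for VI-Net) indicate strong but imperfect agreement with the ground-truth ordering.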
Highlights
Beyond the realms of action detection and recognition, action analysis includes the automatic assessment of the quality of human actions or movements, for example in sports action analysis [1,2,3,4], skill assessment [5,6], and patient rehabilitation movement analysis [7,8].
View-invariant human movement analysis from RGB is a significant challenge in action analysis applications, such as sports, skill assessment, and healthcare monitoring.
We proposed a novel RGB-based view-invariant method to assess the quality of human movement which can be trained on a relatively small dataset and without any knowledge of the viewpoints used for data capture.
Summary
Beyond the realms of action detection and recognition, action analysis includes the automatic assessment of the quality of human actions or movements, for example in sports action analysis [1,2,3,4], skill assessment [5,6], and patient rehabilitation movement analysis [7,8]. In the latter application, clinicians observe patients performing specific actions in the clinic, such as walking or sitting-to-standing, to establish an objective marker for their level of functional mobility. By automating such mobility disorder assessment using computer vision, health service authorities can decrease costs, reduce hospital visits, and diminish the variability in clinicians' subjective assessments of patients. While the Kinect can provide 3D pose efficiently under optimal conditions, it is dependent on several parameters, including the distance and viewing direction between the subject and the sensor.