Abstract

Practitioners commonly perform movement quality assessment through qualitative assessment protocols, which can be time-intensive and prone to inter-rater measurement bias. Portable and inexpensive marker-less motion capture systems can improve assessment through objective joint kinematic analysis. The current study aimed to evaluate machine learning models that used kinematic features from Kinect position data to classify a performer's Movement Competency Screen (MCS) score. A Kinect V2 sensor collected position data from 31 physically active males as they performed the bilateral squat, forward lunge, and single-leg squat, and movement quality was rated according to the MCS criteria. Features were extracted and selected from domain knowledge-based kinematic variables as model input. Multiclass logistic regression (MLR) was then performed to translate joint kinematics into an MCS score. Performance indicators were calculated after 10-fold cross-validation of each model developed from Kinect-based kinematic variables. The analyses revealed that the models' sensitivity, specificity, and accuracy ranged from 0.66 to 0.89, 0.58 to 0.86, and 0.74 to 0.85, respectively. In conclusion, Kinect-based automated movement quality assessment is a suitable, novel, and practical approach to movement quality assessment.
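
The abstract does not specify the modeling toolchain, so the following is only an illustrative sketch of the pipeline it describes: multiclass (multinomial) logistic regression evaluated with 10-fold cross-validation, reporting per-class sensitivity and specificity plus overall accuracy. The feature matrix, class labels, and use of scikit-learn are placeholder assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): multinomial logistic regression
# with 10-fold cross-validation on synthetic stand-in "kinematic" features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 90, 8, 3          # placeholder sizes, not the study's
X = rng.normal(size=(n_samples, n_features))         # stand-in for Kinect joint-kinematic features
y = np.repeat(np.arange(n_classes), n_samples // n_classes)  # stand-in for MCS scores

clf = LogisticRegression(max_iter=1000)              # multinomial for >2 classes with default solver
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=cv)         # out-of-fold predictions across the 10 folds

# Per-class sensitivity and specificity from the pooled confusion matrix.
cm = confusion_matrix(y, y_pred)
for k in range(n_classes):
    tp = cm[k, k]
    fn = cm[k, :].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    print(f"class {k}: sensitivity={sens:.2f}, specificity={spec:.2f}")
print(f"overall accuracy={accuracy_score(y, y_pred):.2f}")
```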
