Practitioners commonly assess movement quality through qualitative protocols, which can be time-intensive and prone to inter-rater bias. Portable, inexpensive marker-less motion capture systems can improve assessment through objective joint kinematic analysis. The current study aimed to evaluate machine learning models that used kinematic features from Kinect position data to classify a performer's Movement Competency Screen (MCS) score. A Kinect V2 sensor collected position data from 31 physically active males as they performed the bilateral squat, forward lunge, and single-leg squat, and movement quality was rated according to the MCS criteria. Features were extracted and selected from domain-knowledge-based kinematic variables as model input. Multiclass logistic regression (MLR) was then performed to translate joint kinematics into an MCS score. Performance indicators were calculated after 10-fold cross-validation of each model developed from the Kinect-based kinematic variables. The analyses revealed that the models' sensitivity, specificity, and accuracy ranged from 0.66 to 0.89, 0.58 to 0.86, and 0.74 to 0.85, respectively. In conclusion, Kinect-based automated assessment is a suitable, novel, and practical approach to movement quality screening.
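The evaluation pipeline described above (multiclass logistic regression, 10-fold cross-validation, and per-class sensitivity/specificity/accuracy) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic data, feature dimensions, and three-level class labels are assumptions standing in for the real Kinect-derived kinematic features and MCS ratings.

```python
# Hedged sketch of the abstract's evaluation pipeline: multiclass logistic
# regression scored with 10-fold cross-validation. The data below are
# synthetic placeholders, NOT the study's Kinect measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)
n, d = 310, 8                       # hypothetical: trials x kinematic features
X = rng.normal(size=(n, d))
# Hypothetical three-level label imitating an MCS-style ordinal score (0/1/2)
y = (X[:, :3].sum(axis=1) > 0).astype(int) + (X[:, 0] > 1).astype(int)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
model = LogisticRegression(max_iter=1000)   # multinomial for >2 classes
y_pred = cross_val_predict(model, X, y, cv=cv)

# Per-class sensitivity (recall) and specificity from the confusion matrix
cm = confusion_matrix(y, y_pred)
tp = np.diag(cm)
fn = cm.sum(axis=1) - tp
fp = cm.sum(axis=0) - tp
tn = cm.sum() - tp - fn - fp
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = accuracy_score(y, y_pred)

print("accuracy:", round(accuracy, 3))
print("sensitivity per class:", np.round(sensitivity, 3))
print("specificity per class:", np.round(specificity, 3))
```

Aggregating predictions across the 10 held-out folds, as `cross_val_predict` does here, yields one confusion matrix over all trials; the study's reported ranges (e.g. sensitivity 0.66 to 0.89) would then correspond to the spread of these per-class or per-movement values.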