Abstract

This article examines whether children's eye movements, recorded in association with their body movements, are valid feature quantities for machine learning, specifically for assessing children's maturity level of musical expression. There is growing interest in applying machine-learning techniques to predict developmental degree in music education by observing body movements with motion-capture technology. From a professional teaching perspective, it is believed that only experienced teachers can evaluate children's musical achievement, because the coordinated functioning of body movements must be observed carefully in connection with various musical factors such as rhythm, beat strength, and tone. If new teachers could obtain objective, machine-generated evaluations of children's expression level, educational efficiency could be raised toward the level that experts attain. The author aims to improve the classification accuracy of machine learning by incorporating eye-movement data recorded simultaneously with body movements. In this study, children (n=43) at two childcare facilities participated in the capture of both eye-movement and body-movement data during musical expression. Feature quantities were extracted from the results of a three-way ANOVA and applied to machine learning to improve the classification accuracy of developmental degree in musical expression. Specifically, with several classifiers including a neural network (NN) model, classification of developmental degree in musical expression was more accurate when the feature quantities included both body-movement and eye-movement data than when they included body-movement data alone.
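The comparison described above, training a classifier on body-movement features alone versus body plus eye-movement features, can be sketched as follows. This is a minimal illustration on synthetic placeholder data, not the authors' dataset or model: the feature counts, class structure, and the use of a leave-one-out 1-nearest-neighbour classifier (a simple stand-in for the paper's NN model) are all assumptions made for the example.

```python
# Hypothetical sketch: compare classification accuracy of developmental
# degree using body-movement features alone vs. body + eye-movement
# features. All data here are synthetic placeholders.
import random

random.seed(0)

def make_child(label):
    """Generate one child's features: 3 body features + 2 eye features.
    The eye features carry extra class signal, mimicking the hypothesis
    that eye movements add discriminative information (an assumption)."""
    body = [random.gauss(label * 1.0, 1.0) for _ in range(3)]
    eye = [random.gauss(label * 1.5, 1.0) for _ in range(2)]
    return body, eye

labels = [i % 2 for i in range(43)]  # two developmental-degree classes, n=43
data = [make_child(y) for y in labels]

def dist(a, b):
    # Squared Euclidean distance between feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def loo_1nn_accuracy(features):
    """Leave-one-out 1-nearest-neighbour accuracy (classifier stand-in)."""
    correct = 0
    for i, x in enumerate(features):
        nearest = min((k for k in range(len(features)) if k != i),
                      key=lambda k: dist(x, features[k]))
        correct += labels[nearest] == labels[i]
    return correct / len(features)

body_only = [b for b, e in data]
body_plus_eye = [b + e for b, e in data]

acc_body = loo_1nn_accuracy(body_only)
acc_both = loo_1nn_accuracy(body_plus_eye)
print(f"body-only accuracy: {acc_body:.2f}")
print(f"body+eye accuracy:  {acc_both:.2f}")
```

In the study itself, the feature quantities were not raw coordinates but quantities selected via a three-way ANOVA; the sketch only illustrates the evaluation comparison, not that feature-selection step.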
