Abstract

The purpose of this study is to identify developmental characteristics of musical expression in early childhood from the viewpoint of changes in elements of body movement, and to generate an evaluation model based on those feature quantities. In this paper, the author examined new feature quantities for machine-learning classification and discrimination of the degree of musical development in early childhood. The author first presents a verification based on statistical analysis of movement elements in young children's musical expression, recorded with 3D motion capture, and on machine-learning evaluation of degrees of musical development. Motion-capture data of full-body movements of 3-, 4-, and 5-year-old children in childcare facilities (n=178) was analyzed quantitatively with a three-way non-repeated-measures ANOVA. Statistically significant differences were found in the movement of body parts, in particular in right-hand movement features such as moving distance and moving average acceleration.

Second, using the children's simultaneously recorded video and the associated motion-capture data, machine-learning methods (decision trees, the Sequential Minimal Optimization (SMO) algorithm, Support Vector Machines (SVM), and a neural network (multi-layer perceptron)) were used to build classification models of the degree of musical development as rated by educators. Among the trained classification models, the multi-layer perceptron produced the best confusion-matrix results and demonstrated consistent, reasonable classification accuracy, indicating that the model could help educators assess children's ability to express themselves musically. The multi-layer perceptron results also showed that pelvis movement has a substantial association with the degree of musical development.

Then, as a more recent study building on these classification and discrimination results, the author also presents eye-tracking results on musical expression, with the aim of finding further feature quantities. Eye tracking is now widely used to study physical responses related to cognitive and emotional processes, and the author believes that eye-tracking data, in the form of gaze paths, fixations, and saccades, provides useful information for understanding musical expression. Children aged 3, 4, and 5 in childcare facilities (n=118) were recorded with an eye tracker (Tobii3) while singing a song, and the calculated data was analyzed quantitatively with ANOVA. Increases in the number and size of saccades and in their moving average velocity showed that saccades during musical expression in early childhood tended to be larger in major keys than in minor keys. This result verified that effective feature quantities for machine learning can be extracted from calculated eye-movement data during musical expression.
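For readers who want to reproduce the statistical step, the following is a minimal sketch of a three-way non-repeated-measures (between-subjects) ANOVA on movement features, using pandas and statsmodels. The file name and the factor columns (age_group, body_part, song_key) are hypothetical placeholders; the abstract does not specify the exact factor coding used in the study.

```python
# Minimal sketch of a three-way between-subjects ANOVA on motion-capture
# features. All file and column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("movement_features.csv")  # hypothetical: one row per observation

# Fully crossed three-factor model on one feature, e.g. moving distance.
model = ols("distance ~ C(age_group) * C(body_part) * C(song_key)",
            data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II sums of squares
```

The same call applied to a moving-average-acceleration column would test the kind of right-hand effect reported above.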
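The classification step could be sketched as follows with scikit-learn. This is an illustration under stated assumptions, not the authors' actual model or hyperparameters: synthetic features stand in for the motion-capture feature vectors, and three classes stand in for the educator-assigned development levels.

```python
# Minimal sketch: train a multi-layer perceptron on motion features and
# inspect its confusion matrix. Synthetic data replaces the real
# motion-capture features and educator-assigned development labels.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, classification_report

X, y = make_classification(n_samples=178, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)  # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

scaler = StandardScaler().fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)

y_pred = clf.predict(scaler.transform(X_te))
print(confusion_matrix(y_te, y_pred))       # rows: true class, cols: predicted
print(classification_report(y_te, y_pred))  # per-class precision/recall
```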
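Likewise, saccade feature quantities such as count, size, and velocity could be derived from raw gaze samples with a simple velocity-threshold (I-VT) rule, sketched below. The sampling rate and the 30 deg/s threshold are illustrative assumptions, not the study's actual eye-tracker settings or detection algorithm.

```python
# Minimal sketch: count saccades and compute their mean amplitude and mean
# peak velocity from gaze angles using a velocity-threshold (I-VT) rule.
import numpy as np

def saccade_features(x, y, fs=50.0, vel_thresh=30.0):
    """x, y: gaze angle per sample (deg); fs: sampling rate (Hz)."""
    vx, vy = np.gradient(x) * fs, np.gradient(y) * fs
    speed = np.hypot(vx, vy)                  # angular velocity (deg/s)
    sacc = speed > vel_thresh                 # I-VT classification
    # Pad so every saccade run has a detectable start and end edge.
    edges = np.diff(np.concatenate(([0], sacc.astype(int), [0])))
    starts, ends = np.flatnonzero(edges == 1), np.flatnonzero(edges == -1)
    amps = [np.hypot(x[e - 1] - x[s], y[e - 1] - y[s])
            for s, e in zip(starts, ends)]    # saccade size (deg)
    peaks = [speed[s:e].max() for s, e in zip(starts, ends)]
    return (len(starts),
            np.mean(amps) if amps else 0.0,
            np.mean(peaks) if peaks else 0.0)

# Example with synthetic gaze: a 5-degree step in x yields one saccade.
t = np.arange(0, 2, 1 / 50.0)
x = np.where(t < 1, 0.0, 5.0) + 0.05 * np.random.randn(t.size)
y_gaze = 0.05 * np.random.randn(t.size)
print(saccade_features(x, y_gaze))
```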
