Abstract
This study integrates multi‐node wearable sensor data to improve music performance skills. A windowing method is applied during time‐frequency feature extraction. By incorporating kernel functions, we present a generalized discriminant analysis (GDA) method that reduces the high‐dimensional sensor features while retaining performance traits. Experiments demonstrate that the proposed GDA approach achieves higher accuracy (92.71%), precision (90.54%), and recall (88.68%) than linear discriminant analysis (82.39% accuracy) and principal component analysis (88.56% accuracy) in classifying motions performed by music performers. The integrated analysis of wearable sensor data facilitates comprehensive feedback to strengthen proficiency across various music performance skills.
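The windowed time-frequency feature extraction mentioned above can be illustrated with a minimal sketch. The window length, hop size, Hann taper, and the specific features (spectral energy and centroid) below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def sliding_windows(signal, win_len, hop):
    """Split a 1-D sensor signal into overlapping windows."""
    n = 1 + (len(signal) - win_len) // hop  # number of full windows
    return np.stack([signal[i * hop : i * hop + win_len] for i in range(n)])

def window_features(signal, win_len=64, hop=32):
    """Per-window time-frequency features: spectral energy and centroid."""
    windows = sliding_windows(signal, win_len, hop)
    windows = windows * np.hanning(win_len)  # Hann taper reduces spectral leakage
    spectra = np.abs(np.fft.rfft(windows, axis=1))
    energy = (spectra ** 2).sum(axis=1)
    freqs = np.fft.rfftfreq(win_len)
    centroid = (spectra * freqs).sum(axis=1) / np.maximum(spectra.sum(axis=1), 1e-12)
    return np.column_stack([energy, centroid])

# A 256-sample signal with win_len=64, hop=32 yields 7 overlapping windows,
# each summarized by a 2-dimensional feature vector.
feats = window_features(np.sin(np.linspace(0, 20 * np.pi, 256)))
print(feats.shape)  # (7, 2)
```

Feature matrices of this form, stacked across sensor nodes, would then be the high-dimensional input that the kernel-based GDA step reduces.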