Abstract

Identification and analysis of piano performance music are essential activities in the study and enjoyment of music. The goal of this study is to employ recurrent neural networks (RNNs) to create a multimedia identification and analysis method for piano performance music. RNNs excel at capturing the temporal relationships and dynamics present in musical performances. The project entails gathering a large dataset of piano performance recordings spanning a variety of genres, performers, and playing techniques. The audio and video components of the performances are pre-processed to extract pertinent features. Long Short-Term Memory (LSTM), an RNN architecture, is used to model the sequential nature of the performances. The RNN is trained on the extracted features to discover the patterns and characteristics associated with different piano performances. The similarity between performance representations can be measured with a similarity metric based on Euclidean distance. The RNN-based system can be further developed to perform tasks such as score following, expressive performance analysis, and stylistic variation generation to aid performance analysis. By aligning performance data with the corresponding musical scores, the system can provide insights into timing accuracy, dynamics, phrasing, and other expressive qualities of the piano performances. The proposed RNN-LSTM method achieves approximately 99% accuracy, 97% precision, 98.9% recall, and an F1 score of 97.6%.
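The abstract describes comparing performance representations with a Euclidean-distance similarity measure. The following is a minimal sketch of that idea, assuming (hypothetically) that each performance has already been pre-processed into a (time steps x features) matrix; mean-pooling over time stands in for the LSTM's learned fixed-length representation, and the function names are illustrative, not from the paper.

```python
import numpy as np

def performance_embedding(features):
    """Collapse a (time, feature) matrix into a fixed-length vector by
    mean-pooling over time (a simple stand-in for an LSTM's final
    hidden state)."""
    return np.asarray(features, dtype=float).mean(axis=0)

def euclidean_similarity(a, b):
    """Map Euclidean distance into a similarity score in (0, 1]:
    identical embeddings score 1, distant embeddings approach 0."""
    d = np.linalg.norm(a - b)
    return 1.0 / (1.0 + d)

# Hypothetical pre-processed feature sequences (rows = time steps).
perf_a = np.array([[0.10, 0.50], [0.20, 0.40], [0.30, 0.60]])
perf_b = np.array([[0.10, 0.50], [0.25, 0.45], [0.30, 0.55]])  # close to A
perf_c = np.array([[0.90, 0.10], [0.80, 0.20], [0.95, 0.15]])  # dissimilar

sim_ab = euclidean_similarity(performance_embedding(perf_a),
                              performance_embedding(perf_b))
sim_ac = euclidean_similarity(performance_embedding(perf_a),
                              performance_embedding(perf_c))
# Similar performances score higher than dissimilar ones.
print(sim_ab > sim_ac)
```

In a full system the pooled embedding would be replaced by the trained LSTM's hidden state, but the distance-to-similarity mapping works the same way.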
