Abstract

In this paper, we combine melodic multisensor information fusion with 5G IoT to conduct an in-depth study and analysis of an experience model of piano performance. Audio and MIDI are chosen as the two main storage forms of the multimodal data. First, audio signal processing and deep learning techniques are used to extract shallow and high-level feature sequences in turn; the two modalities are then aligned with a sequence alignment algorithm. To address the problem that encrypted data uploaded to a blockchain cannot be queried directly, this paper proposes an IoT encrypted data query mechanism based on a blockchain and a Bloom filter. The blockchain stores encrypted IoT indexes by temporal attribute to ensure data consistency, tamper evidence, and traceability. A new loss function is designed for training the multimodal model on piano performance signals. Unlike traditional piano performance generation, the model does not require complex performance rules to be added manually; instead, it learns music theory rules directly by training on the initial piano performance dataset, improves the stability of the generated performances through chord constraints, and strengthens the temporal dependence between notes. In the analysis of the experimental results, 50 listeners were invited to evaluate and analyze the generated melodies. Overall, the proposed style-based GAN piano performance generation model produces melodies that are more pleasing to the ear through chord constraints and autonomously learned temporal content, which has important theoretical and practical implications for the creation of piano performances in volume and at scale.
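The abstract does not specify which sequence alignment algorithm is used. As a hedged illustration only, the Python sketch below aligns an audio-derived feature sequence with a MIDI-derived one using dynamic time warping (DTW), a common choice for this step; the feature shapes, the Euclidean frame cost, and the name dtw_align are assumptions for illustration, not the paper's actual method.

    import numpy as np

    def dtw_align(audio_feats, midi_feats):
        """Align two (frames x dims) feature sequences with classic DTW.

        Illustrative only: the paper's alignment algorithm and feature
        definitions are not given in the abstract.
        """
        n, m = len(audio_feats), len(midi_feats)
        # Pairwise Euclidean cost between every audio frame and MIDI frame.
        cost = np.linalg.norm(audio_feats[:, None, :] - midi_feats[None, :, :], axis=-1)
        # Accumulated-cost matrix filled with the standard three-step recursion.
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                acc[i, j] = cost[i - 1, j - 1] + min(
                    acc[i - 1, j - 1],  # frames advance together
                    acc[i - 1, j],      # audio frame repeats
                    acc[i, j - 1],      # MIDI frame repeats
                )
        # Backtrack to recover the frame-to-frame warping path.
        path, i, j = [], n, m
        while i > 1 or j > 1:
            path.append((i - 1, j - 1))
            moves = [(acc[i - 1, j - 1], i - 1, j - 1),
                     (acc[i - 1, j], i - 1, j),
                     (acc[i, j - 1], i, j - 1)]
            _, i, j = min(moves)
        path.append((0, 0))
        return acc[n, m], path[::-1]

    # Example: align 100 audio frames with 90 MIDI-rendered frames.
    audio = np.random.rand(100, 13)  # shallow features, e.g., MFCC-like
    midi = np.random.rand(90, 13)    # features rendered from the MIDI data
    total_cost, path = dtw_align(audio, midi)

DTW is a natural candidate here because it tolerates the local tempo deviations that separate a recorded performance from its score-like MIDI counterpart.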
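The on-chain data structures are likewise not detailed in the abstract. The following minimal sketch shows how a Bloom filter could screen encrypted-index keys before a query touches the chain, assuming keys are formed from a device identifier and a temporal attribute; the filter parameters, key format, and all names are illustrative assumptions.

    import hashlib

    class BloomFilter:
        """Minimal Bloom filter; a stand-in for the paper's filter, whose
        size and hash count are not given in the abstract."""

        def __init__(self, size=4096, num_hashes=4):
            self.size = size
            self.num_hashes = num_hashes
            self.bits = bytearray(size)

        def _positions(self, key):
            # Derive num_hashes bit positions from salted SHA-256 digests.
            for salt in range(self.num_hashes):
                digest = hashlib.sha256(f"{salt}:{key}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, key):
            for pos in self._positions(key):
                self.bits[pos] = 1

        def might_contain(self, key):
            # False means definitely absent; True may be a false positive.
            return all(self.bits[pos] for pos in self._positions(key))

    # Hypothetical query flow: an encrypted index keyed by device and time
    # window is stored on-chain; the filter lets a query skip blocks that
    # cannot contain the key, so only candidate blocks are fetched and their
    # encrypted indexes verified against the chain.
    block_filter = BloomFilter()
    block_filter.add("sensor-42|2023-06-01T10")  # keyed by temporal attribute

    if block_filter.might_contain("sensor-42|2023-06-01T10"):
        print("candidate block: fetch and check its encrypted index on-chain")
    else:
        print("definite miss: skip this block")

Because a Bloom filter never reports a false negative, a miss safely prunes the block without decrypting anything; false positives only cost an extra on-chain lookup.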
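Finally, the new loss function is described only qualitatively. One plausible shape, shown below purely as a hedged PyTorch sketch, combines a standard adversarial term with a penalty on probability mass assigned to out-of-chord pitches; the weight lam, the tensor shapes, and the penalty form are assumptions, not the paper's formulation.

    import torch

    def generator_loss(disc_fake_scores, note_probs, chord_mask, lam=0.1):
        """Hypothetical composite generator loss (not the paper's exact form).

        disc_fake_scores: discriminator logits on generated sequences.
        note_probs:       per-step pitch probabilities, shape (batch, T, 128).
        chord_mask:       1.0 for pitches inside the current chord, else 0.0.
        """
        # Non-saturating adversarial term: push generated scores toward "real".
        adv = -torch.log(torch.sigmoid(disc_fake_scores) + 1e-8).mean()
        # Chord constraint: penalize mass placed on out-of-chord pitches,
        # one way to stabilize the harmonic content over time.
        out_of_chord = (note_probs * (1.0 - chord_mask)).sum(dim=-1)
        return adv + lam * out_of_chord.mean()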
