Abstract

In this paper, we develop an associative memorization architecture that captures musical features from time-sequential music audio signals. The architecture is constructed using deep learning. The challenging goal of our research is to develop a new composition system that automatically creates new music based on existing music. How does a human composer create a musical piece? Generally speaking, a piece emerges from a cyclic procedure of analyzing musical features and re-synthesizing them. This process can be simulated by learning models built on Artificial Neural Network (ANN) architectures. The first and critical problem is how to describe the music data, because in such models the description format strongly influences learning performance and function. Most related work adopts symbolic representations of music data. However, we believe human composers never treat a musical piece as a sequence of symbols. Therefore, raw music audio signals are input to our system. The constructed associative model memorizes the musical features of the audio signals and regenerates the sequential data of that music. Based on experimental results of memorizing music audio data, we verify the performance and effectiveness of our system.
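The abstract describes an associative memory that stores feature vectors and later regenerates them from a cue. As a minimal illustration only (the paper's actual deep learning architecture is not specified here), the following sketch uses a classical Hopfield-style associative memory over synthetic ±1 "feature" vectors: Hebbian storage, then iterative recall from a corrupted cue. All names and dimensions are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

# Illustrative Hopfield-style associative memory (a simplification,
# NOT the paper's deep architecture). Patterns are +/-1 vectors
# standing in for extracted musical feature vectors.

def train(patterns):
    """Hebbian outer-product rule; zero the self-connections."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, cue, steps=10):
    """Settle from a noisy cue toward a stored pattern."""
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))  # 3 stored "feature" vectors
W = train(patterns)

noisy = patterns[0].copy()
noisy[:8] *= -1                     # corrupt 8 of 64 entries
restored = recall(W, noisy)
match = (restored == patterns[0]).mean()      # fraction of recovered entries
```

At this low load (3 patterns in 64 dimensions), recall typically recovers the stored pattern almost perfectly, which is the "memorize then regenerate" behavior the abstract describes at a much larger scale on audio data.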
