Abstract

Computational musicology is a growing focus in computer science. Past research has successfully generated music automatically, both through learning-based agents built on neural networks [1] and through model- and rule-based approaches [2]. These methods require a significant amount of information, either a large training dataset or a comprehensive set of hand-crafted musical rules. This paper explores a model that needs only a minimal amount of musical information to compose music in a desired style. It applies objectness, a concept borrowed from image processing and pattern recognition, to extract distinctive musical objects from a single piece. These objects then serve as the foundation for generating a new piece that is similar in style to the original; the overall piece is assembled through a partial evolutionary process. This approach eliminates the need for large amounts of pre-provided data and composes music directly from a single source piece.
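The abstract leaves the extraction and evolution steps at a high level. The sketch below is a minimal, hypothetical illustration of the idea, not the paper's actual method: it assumes a melody represented as a list of MIDI pitch numbers, uses repeated n-grams as a crude stand-in for "musical objects", and runs a simple mutation-only evolutionary loop whose fitness counts matches against the extracted objects. All of these representation and fitness choices are assumptions for illustration only.

```python
import random
from collections import Counter

def extract_objects(piece, n=4):
    """Collect recurring pitch n-grams as crude 'musical objects':
    subsequences that appear more than once in the source piece.
    (A stand-in for the paper's objectness-based extraction.)"""
    grams = Counter(tuple(piece[i:i + n]) for i in range(len(piece) - n + 1))
    return [list(g) for g, count in grams.items() if count > 1]

def fitness(candidate, object_set, n=4):
    """Score a candidate by how many of its n-grams match extracted objects."""
    grams = (tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    return sum(1 for g in grams if g in object_set)

def mutate(candidate, objects):
    """Splice a randomly chosen extracted object into a random position,
    trimming so the candidate keeps its original length."""
    obj = random.choice(objects)
    i = random.randrange(len(candidate))
    return (candidate[:i] + obj + candidate[i + len(obj):])[:len(candidate)]

def evolve(piece, length=64, population=30, generations=200, n=4):
    """Toy 'partial evolution': keep the fitter half of the population and
    refill it with mutated survivors, guided by the extracted objects."""
    objects = extract_objects(piece, n)           # assumes the piece repeats itself somewhere
    object_set = {tuple(o) for o in objects}
    pool = [[random.choice(piece) for _ in range(length)] for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=lambda c: fitness(c, object_set, n), reverse=True)
        survivors = pool[:population // 2]        # keep the fitter half
        pool = survivors + [mutate(random.choice(survivors), objects)
                            for _ in range(population - len(survivors))]
    return max(pool, key=lambda c: fitness(c, object_set, n))

# Example: a toy source melody as MIDI pitch numbers (with internal repetition).
source = [60, 62, 64, 65, 64, 62, 60, 62, 64, 65, 64, 62, 67, 65, 64, 62]
print(evolve(source))
```

Even this toy version shows the abstract's central trade-off: the only "data" the generator ever sees is the single source piece, so the style of the output is bounded by whatever objects can be recovered from it.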
