Abstract

Musicology is a growing focus in computer science. Past research has succeeded in automatically generating music both through learning-based agents that make use of neural networks and through model- and rule-based approaches. These methods require a significant amount of information, either in the form of a large dataset for learning or a comprehensive set of rules based on musical concepts. This paper explores a model in which a minimal amount of musical information is needed to compose a desired style of music. The model draws on two concepts: objectness and evolutionary computation. The concept of objectness, an idea derived directly from imagery and pattern recognition, is used to extract specific musical objects from single musical inputs; these objects then serve as the foundation for algorithmically producing musical pieces that are similar in style to the original inputs. The pieces are produced by evolutionary algorithms that implement a sequential evolution approach, wherein a generated output may or may not yet fall fully within the fitness thresholds of the input pieces. This method eliminates the need for a large amount of pre-provided data as well as the long processing times commonly associated with machine-learned art pieces. This study aims to show a proof of concept of the described model.
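The sketch below illustrates, in broad strokes, how such a pipeline could look: short note groups are extracted from a single input melody as stand-in "musical objects", and a candidate piece is evolved until it approaches a fitness threshold derived from that one input. This is a minimal, assumed reconstruction for illustration only; the input representation (MIDI pitch numbers), the object size, and the similarity-based fitness function are placeholders and not the authors' exact formulation.

import random

# Hypothetical single input melody as MIDI pitch numbers (illustrative only).
INPUT_MELODY = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72]

def extract_objects(melody, size=3):
    # Slice the single input into short note-group "objects"
    # (a stand-in for the paper's objectness-based extraction).
    return [tuple(melody[i:i + size]) for i in range(len(melody) - size + 1)]

def fitness(candidate, objects, size=3):
    # Fraction of the candidate's note groups that also occur in the input,
    # used here as a simple stylistic-similarity score.
    groups = [tuple(candidate[i:i + size]) for i in range(len(candidate) - size + 1)]
    matches = sum(1 for g in groups if g in objects)
    return matches / max(len(groups), 1)

def mutate(candidate, pitch_pool):
    # Replace one random note with a pitch drawn from the input piece.
    out = list(candidate)
    out[random.randrange(len(out))] = random.choice(pitch_pool)
    return out

def compose(input_melody, length=16, threshold=0.6, generations=200):
    # Sequential evolution: keep improving a single candidate and return it
    # once it crosses the fitness threshold, or when the generation budget
    # runs out, possibly while still below the threshold.
    objects = extract_objects(input_melody)
    pitch_pool = list(set(input_melody))
    candidate = [random.choice(pitch_pool) for _ in range(length)]
    for _ in range(generations):
        if fitness(candidate, objects) >= threshold:
            break
        mutated = mutate(candidate, pitch_pool)
        if fitness(mutated, objects) >= fitness(candidate, objects):
            candidate = mutated
    return candidate

print(compose(INPUT_MELODY))

Because the only data the loop ever consults is the single input melody, no corpus or rule set is required; the fitness threshold plays the role of the stylistic target described in the abstract.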

Highlights

  • Artificial intelligence and machine learning have made great strides in recent years in terms of musicality and human-like algorithmic composition; so much progress has been made that there have already been discussions of whether music can be used as a valid metric for satisfying the Turing Test [1, 14, 26]. This study concedes that these previous studies and techniques are able to emulate human musicality by studying a large corpus of music or rulesets

  • While it is natural for a composer or musician to study a specific style or a specific composer's music and produce a musical piece based on this study, it is also possible for musicians and composers to listen to a single piece of music, take elements from just that piece, and create a new piece inspired by it alone

  • This section was constructed with the aim of determining whether there is a noticeable bias toward either computer- or human-composed pieces when the sample set contains works by both computers and humans


Summary

Introduction

Artificial intelligence and machine learning have made great strides in recent years in terms of musicality and human-like algorithmic composition; so much progress has been made that there have already been discussions of whether music can be used as a valid metric for satisfying the Turing Test [1, 14, 26]. This study concedes that these previous studies and techniques are able to emulate human musicality by studying a large corpus of music or rulesets. This study takes a different approach to musical composition, entirely eliminating the requirement for large datasets and instead focusing on a method dubbed "immediate learning", wherein a single, immediately provided musical input is used as the basis for composing human-like music. While it is natural for a composer or musician to study a specific style or a specific composer's music and produce a musical piece based on this study, it is also possible for musicians and composers to listen to a single piece of music, take elements from just that piece, and create a new piece inspired by it alone. This study focuses primarily on improvised musicality, with the primary objective of algorithmic composition with real-time or instantaneous output.

