Abstract

A system capable of automatic musical accompaniment with human musicians should, at a minimum, be able to listen in real time to incoming music signals from the musicians and to synchronize its own actions in real time with theirs according to a music score. To this, two further requirements must be added to assure correctness: fault-tolerance to human or machine listening errors, and best-effort (in contrast to optimal) strategies for synchronizing heterogeneous flows of information. Our approach in Antescofo consists of a tight coupling of real-time machine listening with reactive and timed-synchronous systems. The machine listening in Antescofo is in charge of encoding the dynamics of the outside environment (i.e., the musicians) in terms of incoming events, tempo, and other parameters extracted from the incoming polyphonic audio signal, whereas the timed-synchronous and reactive component is in charge of assuring the correctness of the generated accompaniment. The novelty of the Antescofo approach lies in its focus on time as a semantic property tied to correctness rather than as a performance metric. Creating automatic accompaniment out of symbolic (MIDI) or audio data follows the same procedure, with explicit attributes in the language for synchronization and fault-tolerance strategies that may vary between different styles of music. In this sense, Antescofo is a cyber-physical system featuring a tight integration of, and coordination between, heterogeneous subsystems, including human musicians in the loop of computing.
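The coupling described above can be illustrated with a minimal sketch. This is not Antescofo's implementation: all names (`Listener`, `Reactor`, `Event`) are hypothetical, the "listening" is simulated rather than decoded from audio, and the sketch only shows the division of labor — a listener that reports score events and tempo estimates, and a reactive component that converts score-time offsets into physical time using the current tempo.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """An event reported by the (simulated) machine-listening module."""
    position: float  # score position, in beats
    tempo: float     # estimated tempo, in beats per minute

class Listener:
    """Toy stand-in for the machine-listening component: in the real
    system this would decode events and tempo from polyphonic audio;
    here it simply walks a list of score positions."""
    def __init__(self, score_positions):
        self.score_positions = score_positions
        self.index = 0

    def detect(self, tempo_estimate):
        ev = Event(position=self.score_positions[self.index],
                   tempo=tempo_estimate)
        self.index += 1
        return ev

@dataclass
class Reactor:
    """Toy stand-in for the reactive timed component: it schedules an
    accompaniment action a given number of beats after the last event,
    translating beats to seconds via the current tempo estimate."""
    scheduled: list = field(default_factory=list)

    def on_event(self, ev, offset_beats):
        delay_seconds = offset_beats * 60.0 / ev.tempo
        self.scheduled.append((ev.position + offset_beats, delay_seconds))
        return delay_seconds

# Usage: at an estimated tempo of 120 BPM, an action one beat after
# the detected event is due 0.5 seconds later.
listener = Listener([0.0, 1.0, 2.0])
reactor = Reactor()
ev = listener.detect(tempo_estimate=120.0)
delay = reactor.on_event(ev, offset_beats=1.0)
```

The point of the separation is the one made in the abstract: the listener encodes the environment's dynamics (events and tempo), while the reactive component alone decides when and how the accompaniment acts on them.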
