Abstract

We present a novel paradigm for the interactive composition and performance of music called Roboser, consisting of a real-world device (i.e., a robot), its control software, and a composition engine that produces streams of MIDI data in real time. To analyze the properties of this framework, we present the application of Roboser to a learning mobile robot, called EmotoBot, that is controlled by the Distributed Adaptive Control (DAC) architecture. The EmotoBot composition is based on the generation of real-time sound events that express the sensory, behavioral, and internal states of the robot's control model. We show that EmotoBot produces a complex set of sonic layers and quantify its ability to generate emergent sonic structures. We subsequently describe further applications of the Roboser framework to other interactive systems, including a large-scale interactive exhibition called Ada. Our results show the potential of the Roboser paradigm to define the central-processing stage of interactive composition systems. Moreover, Roboser provides a general framework for transforming information from real-world systems into complex sonic structures and, as such, constitutes a real-world composition system.
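To make the sonification idea concrete, the following minimal Python sketch illustrates how sensory and internal state values from a robot controller could be translated into MIDI note-on events in real time. This is our illustration only, not the authors' implementation: the state variables (proximity, speed, arousal) and the mapping rules are assumptions, and the actual Roboser/EmotoBot mappings are described in the paper itself.

```python
# Hypothetical sketch: sonifying robot state as MIDI note-on events.
# The state fields and mapping rules below are illustrative assumptions,
# not the mapping used by Roboser or EmotoBot.

from dataclasses import dataclass


@dataclass
class RobotState:
    proximity: float  # nearest-obstacle reading, 0.0 (far) to 1.0 (touching)
    speed: float      # normalized forward speed, 0.0 to 1.0
    arousal: float    # internal "emotional" state variable, 0.0 to 1.0


def state_to_midi(state: RobotState) -> tuple[int, int, int]:
    """Map one state snapshot to a single MIDI note-on event.

    Returns (status_byte, note, velocity): pitch tracks proximity,
    loudness tracks arousal, and the MIDI channel tracks speed.
    """
    note = 36 + int(state.proximity * 48)    # pitch in the C2..C6 range
    velocity = 30 + int(state.arousal * 97)  # louder when more aroused
    channel = min(int(state.speed * 4), 3)   # 4 channels for 4 speed bands
    return (0x90 | channel, note, velocity)  # 0x90 = note-on status byte


if __name__ == "__main__":
    snapshot = RobotState(proximity=0.7, speed=0.4, arousal=0.9)
    print(state_to_midi(snapshot))  # -> (145, 69, 117)
```

In a running system, a loop would sample the controller's state at each control cycle and send the resulting bytes to a MIDI output; layering several such mappings (one per state variable or behavior) is what would yield the kind of parallel sonic layers the abstract describes.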
