Abstract
In multi-agent systems, local interactions among system components following relatively simple rules often result in complex overall systemic behavior. Complex behavioral and morphological patterns have been used to generate and organize audiovisual systems for artistic purposes. In this work, we propose to use the Actor model of social interactions to drive a concatenative synthesis engine called earGram in real time. The Actor model was originally developed to explore the emergence of complex visual patterns, whereas earGram was originally developed to facilitate the creative exploration of concatenative sound synthesis. The integrated audiovisual system allows a human performer to interact with the system dynamics while receiving visual and auditory feedback. The interaction happens indirectly, by disturbing the rules governing the social relationships among the actors, which results in a wide range of dynamic spatiotemporal patterns. A performer thus improvises within the behavioral scope of the system while evaluating the apparent connections between parameter values and the actual complexity of the system output.
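The abstract's core idea, that simple local rules among agents yield complex global patterns whose statistics can drive synthesis, can be illustrated with a minimal sketch. This is not the authors' Actor model or the earGram mapping; the rule (pairwise attraction with short-range repulsion) and the parameters `attraction`, `repulsion`, and `min_dist` are hypothetical stand-ins for the social-relationship rules a performer might disturb at runtime.

```python
import random

class Actor:
    """Minimal agent with a 2-D position, updated by simple social rules."""
    def __init__(self):
        self.x, self.y = random.random(), random.random()

def step(actors, attraction=0.01, repulsion=0.002, min_dist=0.05):
    """One update: each actor drifts toward the others (attraction)
    but is pushed away from any actor closer than `min_dist` (repulsion).
    Disturbing these parameters changes the emergent spatial pattern."""
    for a in actors:
        for b in actors:
            if a is b:
                continue
            dx, dy = b.x - a.x, b.y - a.y
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            gain = -repulsion / dist if dist < min_dist else attraction
            a.x += gain * dx
            a.y += gain * dy

random.seed(1)
swarm = [Actor() for _ in range(20)]
for _ in range(100):
    step(swarm)

# An aggregate statistic of the emergent pattern (here, horizontal spread)
# is the kind of value that could be mapped to a synthesis parameter.
spread = max(a.x for a in swarm) - min(a.x for a in swarm)
```

The indirect-interaction idea corresponds to the performer changing `attraction` or `min_dist` between steps rather than moving agents directly.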
Highlights
In multi-agent systems, local interactions among system components following relatively simple rules often result in complex overall systemic behavior
We propose to use the complex behavior that emerges from a multi-agent system called the Actor model to drive earGram, a concatenative sound synthesis engine, in real time
While adopting a different audio source has a greater impact on the sonic result, changing the feature space that organizes the database of audio segments offers a lower degree of variability, amounting in musical terms to the creation of variations of the same musical material
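The highlight above hinges on how a concatenative engine organizes its segment database in a feature space and selects units by proximity. The following sketch illustrates that general selection principle under assumptions of my own: the toy corpus, the feature choices (spectral centroid, loudness), and the helper `nearest_segment` are hypothetical and do not reproduce earGram's actual implementation.

```python
import math

# Hypothetical corpus: each audio segment is described by a feature vector,
# here (spectral centroid in Hz, loudness in dB). Swapping in different
# features re-organizes this space and hence which segments get selected,
# producing variations of the same material.
corpus = {
    "seg_a": (440.0, -12.0),
    "seg_b": (880.0, -6.0),
    "seg_c": (1760.0, -20.0),
}

def nearest_segment(target, corpus):
    """Return the segment whose feature vector is closest to `target`
    (Euclidean distance), a common selection rule in concatenative synthesis."""
    return min(corpus, key=lambda name: math.dist(target, corpus[name]))

choice = nearest_segment((900.0, -8.0), corpus)  # selects "seg_b"
```

Keeping the corpus but redefining the feature axes changes the neighborhood structure, not the underlying sounds, which is why it yields less sonic variability than replacing the audio source itself.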