Abstract

Sonification is an open-ended design task: constructing sound that informs a listener about data. Understanding the application context is critical for shaping design requirements for translating data into sound. Sonification requires a methodology that maintains reproducibility when data sources exhibit non-linear properties of self-organization and emergent behavior. This research formalizes interactive sonification in an extensible model to support reproducibility when data exhibits emergent behavior. In the absence of a sonification theory, extensibility demonstrates relevant methods across case studies. The interactive sonification framework foregrounds three factors: reproducible system implementation for generating sonification; interactive mechanisms enhancing a listener's multisensory observations; and reproducible data from models that characterize emergent behavior. Supramodal attention research suggests that interactive exploration with auditory feedback can generate context for recognizing irregular patterns and transient dynamics. The sonification framework provides circular causality as a signal pathway for modeling a listener interacting with emergent behavior. The extensible sonification model adopts a data acquisition pathway to formalize functional symmetry across three subsystems: Experimental Data Source, Sound Generation, and Guided Exploration. To differentiate the time criticality and dimensionality of emerging dynamics, tuning functions are applied between subsystems to maintain the scale and symmetry of concurrent processes and temporal dynamics. Tuning functions accommodate sonification design strategies that yield order parameter values, rendering emerging patterns discoverable as well as rehearsable, so that desired instances can be reproduced for clinical listeners. Case studies are implemented with two computational models, Chua's circuit and the Swarm Chemistry social agent simulation, generating data in real time that exhibits emergent behavior.
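As a concrete illustration of the first case study's data source, the standard dimensionless Chua's circuit equations can be integrated numerically and one state variable mapped to pitch. This is a minimal sketch: the parameter values are common double-scroll settings, and the forward-Euler integration and linear frequency mapping are illustrative choices, not the implementation assessed in this work.

```python
# Chua's circuit in dimensionless form, a standard chaotic data source.
# Parameters below are common double-scroll settings (assumption, not
# the paper's values).
ALPHA, BETA = 15.6, 28.0
M0, M1 = -1.143, -0.714

def chua_diode(x):
    """Piecewise-linear nonlinearity f(x) of the Chua diode."""
    return M1 * x + 0.5 * (M0 - M1) * (abs(x + 1.0) - abs(x - 1.0))

def chua_step(state, dt=0.002):
    """One forward-Euler step of the Chua equations."""
    x, y, z = state
    dx = ALPHA * (y - x - chua_diode(x))
    dy = x - y + z
    dz = -BETA * y
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def generate_trajectory(n=20000, state=(0.7, 0.0, 0.0)):
    """Generate n samples of the evolving circuit state."""
    traj = []
    for _ in range(n):
        state = chua_step(state)
        traj.append(state)
    return traj

def sonify_x(traj, f_lo=220.0, f_hi=880.0):
    """Map the x variable linearly onto a frequency range
    (a simple parameter-mapping sonification)."""
    xs = [s[0] for s in traj]
    x_min, x_max = min(xs), max(xs)
    span = (x_max - x_min) or 1.0
    return [f_lo + (x - x_min) / span * (f_hi - f_lo) for x in xs]
```

The frequency list would then drive an oscillator; the double-scroll attractor's jumps between lobes surface as audible register shifts.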
Heuristic Listening is introduced as an informal model of a listener's clinical attention to data sonification through multisensory interaction in a context of structured inquiry. Three methods are introduced to assess the proposed sonification framework: Listening Scenario classification, data flow Attunement, and Sonification Design Patterns for classifying sound control. Case study implementations are assessed against these methods, comparing levels of abstraction between experimental data and sound generation. Outcomes demonstrate the framework's performance as a reference model for representing experimental implementations, identifying common sonification structures that have different experimental implementations, identifying common functions implemented in different subsystems, and comparing the impact of affordances across multiple implementations of listening scenarios.
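The tripartite structure described above can be sketched as a signal pathway in which a tuning function sits between the Experimental Data Source and Sound Generation, while Guided Exploration feeds listener input back to the source, closing the circular-causality loop. All class names, methods, and numeric values below are hypothetical stand-ins for the framework's subsystems, not its actual interfaces.

```python
from typing import List

class ExperimentalDataSource:
    """Stand-in for a model emitting data samples in real time."""
    def __init__(self):
        self.gain = 1.0  # a listener-adjustable model parameter
    def read(self) -> float:
        return self.gain * 0.5  # placeholder sample

def tuning(sample: float, scale: float = 100.0, offset: float = 440.0) -> float:
    """Tuning function: rescales data to the range Sound Generation expects."""
    return offset + scale * sample

class SoundGeneration:
    """Collects control values a synthesizer would render."""
    def __init__(self):
        self.frequencies: List[float] = []
    def render(self, freq: float) -> None:
        self.frequencies.append(freq)

class GuidedExploration:
    """Maps listener interaction back onto the data source."""
    def __init__(self, source: ExperimentalDataSource):
        self.source = source
    def interact(self, delta: float) -> None:
        self.source.gain += delta  # closes the circular-causality loop

def run(steps: int = 10) -> List[float]:
    src = ExperimentalDataSource()
    snd = SoundGeneration()
    explorer = GuidedExploration(src)
    for i in range(steps):
        snd.render(tuning(src.read()))
        if i == steps // 2:
            explorer.interact(0.5)  # listener adjusts mid-stream
    return snd.frequencies
```

Running the loop shows the listener's mid-stream adjustment propagating through the tuning function into the rendered control values, the feedback behavior the framework formalizes.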

Highlights

  • WHAT DO WE LISTEN TO WHEN WE LISTEN TO DATA? Sonification is an open-ended design task

  • The framework is designed as a canonical model of interactive sonification

  • A simple tripartite structure based on symmetry of data flow

Introduction

WHAT DO WE LISTEN TO WHEN WE LISTEN TO DATA? Sonification is an open-ended design task. Understanding the application context is critical for shaping listening scenarios and design requirements, and for the subsequent choice of data translation strategies and sound production. When the experimental data source is unpredictable and the data exhibits emergent behavior, sonification requires a methodology to establish reliable rendition of the dynamics. This research examines the rationale and feasibility of formalizing an extensible model for interactive sonification, applied to data that exhibits emergent behavior. An extensible model is proposed as an interactive sonification framework foregrounding three factors: reproducible system implementation for generating sonification, reproducible data from models that characterize emergent behavior, and interactive mechanisms enhancing a listener's multisensory observations. The work presented here maintains a multisensory and multimodal approach in configuring sound to convey signatures of nonlinear behavior that are characteristic of biological information. This presentation aims to serve that very focus: the intent to listen to biological information when working with extended instrumentation and digital abstraction.

