Abstract

To support spatial audio research, we aim to capture recordings from complex acoustic environments with moving sources and microphones; however, we observe a lack of research tools that can accomplish this. Past approaches recorded people engaging in various tasks, which produced rich data that unfortunately lacks repeatability. We propose using robots to recreate dynamic scenes without the inherent variability of human motion. To be useful, this Mechatronic Acoustic Research System (MARS) must be remotely accessible, offer concise representations of dynamic scenes, support a variety of robot and audio devices, and synchronize robot motion. In this talk, we show how we solved these challenges. Remote experimentation is facilitated by our virtual interface, which uses a simple GUI to describe robot motion and audio playback/recording. A digital-twin physical simulation is used to visualize and validate motion paths. We propose using the Robot Operating System (ROS) for multi-robot coordination so that networked robots can be incorporated with little overhead. We use MARS to run experiments in which a cable-driven parallel robot moves a loudspeaker along a 3D path while being recorded by distributed Matrix Voice microphone arrays. We evaluate the measured audio to demonstrate the repeatability of the system, justifying its use in research.
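To make the ROS-based coordination concrete, the sketch below shows one way a scripted scene might stream loudspeaker waypoints to a cable-driven robot controller while gating recording on distributed microphone arrays. This is our illustration, not the authors' implementation: the topic names (`/cdpr/goal_point`, `/mic_array/record`), message types, and frame are assumptions.

```python
#!/usr/bin/env python
# Minimal sketch (our illustration, not MARS's actual code): a ROS node that
# streams 3D waypoints for the loudspeaker carried by a cable-driven parallel
# robot and toggles recording on networked microphone arrays. Topic names and
# message types are hypothetical.
import rospy
from geometry_msgs.msg import PointStamped
from std_msgs.msg import Bool

def run_scene(waypoints, rate_hz=10):
    """Stream 3D waypoints to the robot and gate recording around the motion."""
    pub_goal = rospy.Publisher('/cdpr/goal_point', PointStamped, queue_size=10)
    pub_rec = rospy.Publisher('/mic_array/record', Bool, queue_size=1, latch=True)
    rate = rospy.Rate(rate_hz)
    rospy.sleep(1.0)                   # allow publishers to connect
    pub_rec.publish(Bool(data=True))   # start distributed recording
    for x, y, z in waypoints:
        msg = PointStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = 'world'
        msg.point.x, msg.point.y, msg.point.z = x, y, z
        pub_goal.publish(msg)
        rate.sleep()
    pub_rec.publish(Bool(data=False))  # stop recording once the path ends

if __name__ == '__main__':
    rospy.init_node('mars_scene_player')
    # Example vertical path for the loudspeaker (metres, assumed world frame).
    path = [(0.0, 0.0, 1.0 + 0.01 * i) for i in range(100)]
    run_scene(path)
```

Because each device is just another ROS node publishing or subscribing on agreed topics, additional networked robots or recorders can join a scene with little overhead, which is the coordination property the abstract claims.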
