Abstract

Large-scale distributed arrays can achieve high spatial resolution, but they typically rely on a rigid array structure. If we want to form distributed arrays from mobile and wearable devices, our models need to account for motion. The motion of multiple microphones worn by humans can be difficult to track, but through manifold techniques we can learn the movement from its acoustic response. We show that the mapping between the array geometry and its acoustic response is locally linear and can be exploited in a semi-supervised manner for a given acoustic environment. We will also investigate generative modelling of microphone positions based on their acoustic responses, on both synthetic and recorded data. Prior work has shown a similar locally linear mapping between source locations and their spatial cues, and we will attempt to combine these findings with our own to develop a localization model suitable for dynamic array geometries.
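
The sketch below is only a rough illustration of the locally linear idea described above, not the authors' implementation. It assumes a synthetic free-field model with a single static source, a smooth microphone trajectory standing in for wearer motion, and scikit-learn's LocallyLinearEmbedding as a stand-in manifold learner; the semi-supervised step is a simple affine fit from a few labelled frames.

```python
# Minimal sketch (assumptions: free-field transfer functions, synthetic path,
# sklearn LLE as the manifold learner) of how acoustic responses sampled along
# a moving microphone can embed into a space that is locally linear in geometry.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
fs, n_freq = 16000, 257                 # sample rate and frequency bins
freqs = np.linspace(0, fs / 2, n_freq)
c = 343.0                               # speed of sound (m/s)
source = np.array([2.0, 1.5])           # fixed source position (m)

# Hidden geometry: microphone positions along a smooth 1-D walking path.
t = np.linspace(0, 1, 400)
mic_xy = np.stack([1.0 + 2.0 * t, 1.0 + 0.5 * np.sin(2 * np.pi * t)], axis=1)

# Free-field transfer function from the source to each microphone position.
d = np.linalg.norm(mic_xy - source, axis=1, keepdims=True)   # (400, 1) distances
tf = np.exp(-2j * np.pi * freqs * d / c) / d                  # (400, 257) complex
features = np.concatenate([tf.real, tf.imag], axis=1)         # real-valued features

# Embed the responses in 2-D; if the geometry-to-response map is locally
# linear, the embedding recovers the path up to an affine transform.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
embedding = lle.fit_transform(features)

# Semi-supervised step: fit an affine map from embedding to position using a
# handful of labelled frames, then predict positions for all other frames.
labelled = rng.choice(len(t), size=20, replace=False)
A, *_ = np.linalg.lstsq(
    np.c_[embedding[labelled], np.ones(len(labelled))],
    mic_xy[labelled],
    rcond=None,
)
pred_xy = np.c_[embedding, np.ones(len(t))] @ A
print("mean position error (m):",
      np.mean(np.linalg.norm(pred_xy - mic_xy, axis=1)))
```

In this toy setting only a small fraction of frames carry position labels, mirroring the semi-supervised use of the manifold; real wearable-array data would of course include reverberation, multiple sources, and sensor noise that this free-field sketch ignores.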
