Abstract

Large-scale distributed microphone arrays can achieve high spatial resolution, but they typically rely on a rigid array structure. If we want to form distributed arrays from mobile and wearable devices, our models need to account for motion. The motion of multiple microphones worn by humans can be difficult to track, but manifold techniques allow us to learn that movement from the array's acoustic response. We show that the mapping between the array geometry and its acoustic response is locally linear and can be exploited in a semi-supervised manner for a given acoustic environment. We will also investigate generative modelling of microphone positions from their acoustic responses, applied to both synthetic and recorded data. Prior work has shown a similar locally linear mapping between source locations and their spatial cues, and we will attempt to combine these findings with our own to develop a localization model suitable for dynamic array geometries.
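
The abstract does not spell out how the locally linear mapping is used, so the following is only a minimal sketch of one semi-supervised scheme consistent with it: embed all acoustic-response feature vectors (labelled and unlabelled) with locally linear embedding, then regress microphone positions from the embedding using a small calibration subset. The feature construction, data, and hyperparameters below are illustrative assumptions, not the paper's method.

# Hedged sketch: semi-supervised position estimation from acoustic-response features
# via a locally linear manifold embedding. All data here is synthetic stand-in data.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in: n_frames microphone snapshots, each summarized by a
# d-dimensional acoustic feature vector (e.g., relative transfer function magnitudes).
n_frames, n_features = 500, 64
positions = rng.uniform(0.0, 3.0, size=(n_frames, 2))   # true (x, y) positions in metres
mixing = 0.3 * rng.normal(size=(2, n_features))          # assumed smooth geometry-to-response map
features = np.tanh(positions @ mixing) + 0.01 * rng.normal(size=(n_frames, n_features))

# Semi-supervised setting: positions are known only for a small calibration subset.
labelled = rng.choice(n_frames, size=40, replace=False)

# 1) Learn a low-dimensional manifold from *all* responses, labelled and unlabelled alike,
#    relying on the assumption that the geometry-to-response mapping is locally linear.
embedding = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(features)

# 2) Fit a simple regressor from the embedding to positions using only the calibration frames.
reg = Ridge(alpha=1e-2).fit(embedding[labelled], positions[labelled])

# 3) Predict positions for every frame, including the unlabelled ones.
est = reg.predict(embedding)
rmse = np.sqrt(np.mean(np.sum((est - positions) ** 2, axis=1)))
print(f"position RMSE over all frames: {rmse:.3f} m")

In this sketch the unlabelled responses contribute only through the manifold embedding; a generative model over positions, as mentioned in the abstract, would replace the simple ridge step.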
