Abstract

Localizing sound sources in a visual scene has many important applications, and a number of traditional and learning-based methods have been proposed for this task. Humans can roughly localize sound sources within or beyond their field of view using their binaural auditory system. However, most existing methods use monaural rather than binaural audio as the modality that aids localization. In addition, prior works usually localize sound sources as object-level bounding boxes in images or videos and evaluate localization accuracy by the overlap between the ground-truth and predicted boxes. This is too coarse, since a real sound source is often only a part of an object. In this paper, we propose a deep learning method for pixel-level sound source localization by leveraging both binaural recordings and the corresponding videos. Specifically, we design a novel Binaural Audio-Visual Network (BAVNet), which concurrently extracts and integrates features from binaural recordings and videos. We also propose a point-annotation strategy to construct pixel-level ground truth for network training and performance evaluation. Experimental results on the FAIR-Play and YT-Music datasets demonstrate the effectiveness of the proposed method and show that binaural audio can greatly improve sound source localization performance, especially when the quality of the visual information is limited.
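To make the two-stream idea concrete, the sketch below shows one plausible way to fuse a binaural audio embedding with spatial video features to produce a pixel-level localization map. This is not the authors' BAVNet: the layer sizes, the cosine-similarity fusion, and the upsampling head are illustrative assumptions only, meant to convey the general structure of concurrent feature extraction and integration described in the abstract.

```python
# Hypothetical two-stream binaural audio-visual localizer (not the paper's BAVNet).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioVisualLocalizer(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=128):
        super().__init__()
        # Audio stream: 2-channel (left/right) spectrogram -> global feature vector.
        self.audio_net = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, audio_dim),
        )
        # Visual stream: RGB frame -> spatial feature map.
        self.visual_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, visual_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Project the audio feature into the visual feature space for fusion.
        self.audio_proj = nn.Linear(audio_dim, visual_dim)

    def forward(self, spectrogram, frame):
        a = self.audio_proj(self.audio_net(spectrogram))       # (B, C)
        v = self.visual_net(frame)                              # (B, C, H', W')
        # Similarity between the audio embedding and every visual location.
        sim = F.cosine_similarity(v, a[:, :, None, None], dim=1)  # (B, H', W')
        # Upsample to input resolution to obtain a pixel-level localization map.
        return F.interpolate(sim.unsqueeze(1), size=frame.shape[-2:],
                             mode="bilinear", align_corners=False).squeeze(1)


if __name__ == "__main__":
    model = AudioVisualLocalizer()
    spec = torch.randn(1, 2, 128, 64)    # binaural (left/right) spectrogram
    frame = torch.randn(1, 3, 224, 224)  # one video frame
    print(model(spec, frame).shape)      # torch.Size([1, 224, 224])
```

With a dense similarity map as output, the point annotations mentioned in the abstract could serve directly as sparse pixel-level supervision and as the basis for evaluation, rather than bounding-box overlap.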
