Abstract

We present an automatic voice activity detection (VAD) method based solely on visual cues. Unlike traditional approaches that process audio, we show that analyzing upper body motion is effective for the VAD task. The proposed method consists of three components: body motion representation, feature extraction with a Convolutional Neural Network (CNN) architecture, and unsupervised domain adaptation. The body motion representations, encoded as images, are fed to the feature extraction component, which is generic and person-invariant and can therefore be applied to subjects who have never been seen before. The final component addresses the domain-shift problem, which arises because the way people move and gesticulate while speaking varies from subject to subject, resulting in disparate body motion features and consequently poorer VAD performance. Experimental analyses on a publicly available real-world VAD dataset show that the proposed method outperforms state-of-the-art video-only and multimodal VAD approaches. Moreover, the proposed method generalizes better, as its VAD results are more consistent across different subjects. As another major contribution, we present a new multimodal dataset (called RealVAD), created from a real-world (not role-played) panel discussion. This dataset contains many realistic situations and challenges that are missing from previous VAD datasets. We benchmarked the RealVAD dataset by applying the proposed method and conducting cross-dataset analyses. In particular, the cross-dataset results highlight the remarkable positive contribution of the applied unsupervised domain adaptation.
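The first stage of the described pipeline, turning upper body motion into an image-like representation, can be illustrated with a minimal sketch. The function name and the frame-differencing scheme below are illustrative assumptions, not the paper's exact representation:

```python
import numpy as np

def motion_image(frames):
    """Collapse a clip of grayscale frames (T, H, W) into a single
    motion representation by accumulating absolute frame differences.
    Hypothetical stand-in for the paper's body-motion-as-image step;
    the resulting image would then be fed to a CNN feature extractor."""
    frames = np.asarray(frames, dtype=np.float32)
    diffs = np.abs(np.diff(frames, axis=0))  # (T-1, H, W) motion energy
    img = diffs.sum(axis=0)                  # accumulate over the clip
    if img.max() > 0:
        img = img / img.max()                # normalize to [0, 1]
    return img

# Toy clip: a bright 2x2 block sliding one pixel per frame.
clip = np.zeros((5, 8, 8), dtype=np.float32)
for t in range(5):
    clip[t, 2:4, t:t + 2] = 1.0

m = motion_image(clip)
print(m.shape)  # (8, 8)
```

Regions where the subject moves accumulate high values, so gesturing during speech produces a distinctive image that a person-invariant CNN can classify as speaking or not speaking.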

