Abstract

Detecting the presence or absence of speech is an important step toward building robust speech-based interfaces. While previous studies have made progress on voice activity detection (VAD), the performance of these systems degrades significantly when subjects employ challenging speech modes that deviate from normal acoustic patterns (e.g., whisper speech), or in noisy/adverse conditions. An appealing approach under these conditions is visual voice activity detection (VVAD), which detects speech using features characterizing orofacial activity. This study proposes an unsupervised approach that relies only on visual features and is therefore insensitive to vocal style or time-varying background noise. We estimate optical flow variance and geometrical features around the lips, extracting short-time zero-crossing rates, short-time variances, and delta features over a small temporal window. These variables are fused using principal component analysis (PCA) to obtain a “combo” feature, which displays a bimodal distribution (speech versus silence). A threshold is automatically determined using the expectation-maximization (EM) algorithm. The approach can be easily transformed into a supervised VVAD, if needed. We evaluate the system on neutral and whisper speech. While speech-based VADs generally fail to detect speech activity in whisper speech, given its important acoustic differences, the proposed VVAD achieves near 80% accuracy in both neutral and whisper speech, highlighting the benefits of the system.

Index Terms: visual voice activity detection, whisper speech
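The abstract compresses the full pipeline into a few sentences; the minimal sketch below illustrates the unsupervised chain it describes (short-time statistics over visual features, PCA fusion into a “combo” feature, and an EM-fitted two-component Gaussian mixture to separate speech from silence). Everything here is illustrative: the function names, window size, and synthetic input stand in for the paper's actual optical-flow and lip-geometry features.

```python
# Hypothetical sketch of the unsupervised VVAD pipeline: per-frame visual
# features -> short-time statistics -> PCA "combo" feature -> two-component
# GMM fitted with EM to label speech vs. silence frames.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def short_time_stats(x, win=15):
    """Short-time zero-crossing rate, variance, and delta of a 1-D feature
    track, computed over a sliding window of `win` frames (illustrative)."""
    n = len(x)
    zcr = np.zeros(n)
    var = np.zeros(n)
    half = win // 2
    for t in range(n):
        seg = x[max(0, t - half):t + half + 1]
        # Fraction of sign changes around the local mean approximates the ZCR.
        zcr[t] = np.mean(np.abs(np.diff(np.sign(seg - seg.mean())))) / 2.0
        var[t] = seg.var()
    delta = np.gradient(x)  # simple first-order delta feature
    return np.column_stack([zcr, var, delta])

def unsupervised_vvad(raw_features):
    """raw_features: (n_frames, n_dims) array, e.g. optical-flow variance
    and lip-geometry measurements per video frame (assumed inputs)."""
    stats = np.hstack([short_time_stats(raw_features[:, d])
                       for d in range(raw_features.shape[1])])
    # Fuse the short-time statistics into a single "combo" feature with PCA.
    combo = PCA(n_components=1).fit_transform(stats).ravel()
    # Fit a two-component Gaussian mixture via EM; the bimodal combo feature
    # splits into speech and silence clusters, giving an automatic threshold.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(combo[:, None])
    labels = gmm.predict(combo[:, None])
    # Assume the higher-mean component corresponds to speech activity.
    speech_label = int(np.argmax(gmm.means_.ravel()))
    return labels == speech_label

# Usage with synthetic per-frame features: low-variance "silence" frames
# followed by high-variance "speech" frames (2 dims, 300 frames).
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0, 0.1, (150, 2)),
                        rng.normal(0, 1.0, (150, 2))])
print(unsupervised_vvad(feats).astype(int))
```

Because the decision comes from an EM fit rather than a labeled threshold, the same sketch applies unchanged to whisper speech; only the visual feature extraction feeding `raw_features` matters.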
