Abstract
This paper proposes a robust, automated applause detection algorithm for meeting speech. The algorithm uses short-time autocorrelation features extracted from the autocorrelation sequence of a windowed audio signal: the autocorrelation energy decay factor, the amplitude and lag of the first local minimum, and the number of zero-crossing points. Decision thresholds on these acoustic features separate applause from non-applause segments in the audio stream. The performance of the proposed algorithm is compared with that of a conventional method using mel-frequency cepstral coefficient (MFCC) feature vectors and a Gaussian mixture model (GMM) classifier. We also analyze the performance of both approaches while varying the number of GMM mixtures (2, 4, 8, 16 and 32) and the decision thresholds in the proposed method. The methods are tested on a multimedia database of 4 hours 37 minutes of meeting speech and the results are compared. The precision rate, recall rate and F1 score of the proposed method are 94.40%, 90.75% and 92.54% respectively, compared with 67.47%, 96.13% and 79.29% for the conventional method.
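To make the feature set concrete, the sketch below shows one plausible way to compute the short-time autocorrelation features named in the abstract and apply decision thresholds per frame. The exact feature formulas and the threshold values are assumptions for illustration, not the paper's definitions; applause is assumed to be noise-like (fast autocorrelation decay, shallow early first minimum, many zero crossings), while voiced speech is more periodic.

```python
import numpy as np

def autocorr_features(frame):
    """Extract short-time autocorrelation features from one windowed frame.
    Feature definitions here are illustrative assumptions, not the paper's
    exact formulas."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / (ac[0] + 1e-12)  # normalize so ac[0] == 1

    # Energy decay factor (assumed): fraction of autocorrelation energy
    # remaining in the second half of the lag range.
    half = len(ac) // 2
    decay = np.sum(ac[half:] ** 2) / (np.sum(ac ** 2) + 1e-12)

    # Amplitude and lag of the first local minimum of the sequence.
    mins = np.where((ac[1:-1] < ac[:-2]) & (ac[1:-1] < ac[2:]))[0] + 1
    lag = int(mins[0]) if mins.size else len(ac) - 1
    amp = float(ac[lag])

    # Number of zero crossings of the autocorrelation sequence.
    zc = int(np.sum(np.signbit(ac[:-1]) != np.signbit(ac[1:])))
    return decay, amp, lag, zc

def is_applause(frame, thresholds=(0.15, 0.2, 40, 10)):
    """Threshold-based decision; threshold values are placeholders."""
    t_decay, t_amp, t_lag, t_zc = thresholds
    decay, amp, lag, zc = autocorr_features(frame)
    # Noise-like frames: fast decay, shallow early first minimum,
    # many zero crossings in the autocorrelation.
    return decay < t_decay and amp > -t_amp and lag < t_lag and zc > t_zc
```

In practice each frame would come from a sliding window over the audio stream, with per-frame decisions smoothed over time to produce applause segments.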