Abstract

Recognizing human actions in realistic scenes has emerged as a challenging topic due to various aspects such as dynamic backgrounds. In this paper, we present a novel approach that takes audio context into account for better action recognition performance, since audio can provide strong evidence for certain actions, such as a ringing phone for the answer-phone action. First, classifiers are established for the visual and audio modalities, respectively. Specifically, a bag-of-visual-words model is employed to represent human actions in the visual modality, a set of audio features is extracted for the audio modality, and a Support Vector Machine (SVM) is employed as the classification technique for both. Then, a decision fusion scheme is utilized to fuse the classification results from the two modalities. Since audio context is not always helpful, two simple yet effective decision rules are developed for selective fusion. Experimental results on the Hollywood Human Actions (HOHA) dataset demonstrate that the proposed approach achieves better recognition performance than integrating scene context. Therefore, our work provides strong motivation for further exploring how audio context influences realistic human action recognition.
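To make the pipeline concrete, the following is a minimal sketch of per-modality SVM classification followed by selective decision fusion. It assumes precomputed feature matrices (random placeholders stand in for the bag-of-visual-words histograms and audio descriptors), and the confidence threshold and fusion weight are illustrative assumptions, not the paper's exact decision rules.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder features standing in for the extracted descriptors:
# bag-of-visual-words histograms for video, audio features per clip.
X_visual = rng.random((200, 100))
X_audio = rng.random((200, 20))
y = rng.integers(0, 8, size=200)   # 8 action classes, as in HOHA

# One SVM per modality, with probability outputs for decision fusion.
visual_clf = SVC(kernel="rbf", probability=True).fit(X_visual, y)
audio_clf = SVC(kernel="rbf", probability=True).fit(X_audio, y)

def selective_fuse(p_visual, p_audio, tau=0.6, w=0.7):
    """Fuse per-class probabilities from the two modalities.

    A hypothetical selective rule standing in for the paper's two
    decision rules: rely on the visual classifier alone unless the
    audio classifier is confident enough to contribute.
    """
    if p_audio.max() < tau:                    # audio judged unhelpful
        return p_visual
    return w * p_visual + (1 - w) * p_audio   # weighted decision fusion

p_v = visual_clf.predict_proba(X_visual)
p_a = audio_clf.predict_proba(X_audio)
fused = np.array([selective_fuse(pv, pa) for pv, pa in zip(p_v, p_a)])
y_pred = fused.argmax(axis=1)
```

Fusing at the decision level, rather than concatenating features, lets each modality keep its own classifier and makes it straightforward to drop the audio vote when it is uninformative.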
