Abstract

Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model for probabilistic topic modeling, which has attracted worldwide interest and touches on many important applications in text mining, computer vision, and computational biology. We first introduce a novel inference algorithm, called belief propagation (BP), for learning LDA, and then show how to speed up BP for fast topic modeling tasks. Following the “bag-of-words” (BOW) representation for video sequences, this chapter also introduces novel type-2 fuzzy topic models (T2 FTM) to recognize human actions. In traditional topic models (TM) for visual recognition, each video sequence is modeled as a “document” composed of spatial–temporal interest points called visual “words”. Topic models automatically assign a “topic” label to explain the action category of each word, so that each video sequence becomes a mixture of action topics for recognition. The T2 FTM differs from previous TMs in that it uses type-2 fuzzy sets (T2 FS) to encode the semantic uncertainty of each topic. We can use the primary membership function (MF) to measure the degree of uncertainty that a document or a visual word belongs to a specific action topic, and use the secondary MF to evaluate the fuzziness of the primary MF itself. In this chapter, we implement two T2 FTMs: (1) the interval T2 FTM (IT2 FTM), with all secondary grades equal to one, and (2) the vertical-slice T2 FTM (VT2 FTM), with unequal secondary grades based on our prior knowledge. To estimate parameters in T2 FTMs, we derive an efficient message-passing algorithm. Experiments on the KTH and Weizmann human action data sets demonstrate that T2 FTMs are better than TMs at encoding visual word uncertainties for human action recognition.
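The distinction between primary and secondary MFs can be illustrated with a minimal sketch. This is not the chapter's implementation; the triangular MF shapes, parameter values, and function names below are illustrative assumptions. It shows the defining property of an interval T2 FS: the primary membership of a point is an interval bounded by a lower and an upper MF, and every secondary grade inside that interval equals one.

```python
def triangular(x, a, b, c):
    """Standard triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def it2_membership(x, lower_params, upper_params):
    """Primary membership of x in an interval T2 FS, returned as the
    interval [lower, upper]. For an IT2 FS the secondary grade is 1 at
    every point of this interval, so the interval alone characterizes
    the uncertainty (the footprint of uncertainty at x)."""
    lo = triangular(x, *lower_params)
    hi = triangular(x, *upper_params)
    return (min(lo, hi), max(lo, hi))

# Hypothetical example: uncertainty about how strongly a visual word
# (here abstracted to a scalar feature x) belongs to one action topic.
interval = it2_membership(0.4, (0.2, 0.5, 0.8), (0.0, 0.5, 1.0))
```

A vertical-slice T2 FTM would instead attach unequal secondary grades to the points of this interval, weighting some primary memberships as more plausible than others based on prior knowledge.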
