Abstract

Learning a hidden Markov model (HMM) typically relies on computing a likelihood that is intractable due to the summation over all possible combinations of states and mixture components. This estimation is often tackled by a maximization strategy known as the Baum-Welch algorithm. However, the drawbacks of this approach have led to the consideration of Bayesian methods, which place a prior over the parameters in order to work with the posterior probability and the marginal likelihood. These approaches can lead to good models, but at the cost of extremely long computations (e.g., Markov chain Monte Carlo). More recently, variational Bayesian frameworks have been proposed as a Bayesian alternative that keeps the computation tractable and the approximation tight. They rely on the introduction of a prior over the parameters to be learned and on an approximation of the true posterior distribution. Having proven effective for finite mixture models and for discrete and Gaussian HMMs, this framework is applied here: we derive the equations of the variational learning of the Dirichlet mixture-based HMM and extend it to the generalized Dirichlet. The latter case presents several properties that make the estimation more accurate. We prove the validity of this approach within the context of unusual event detection in public areas using the University of California San Diego data sets. HMMs are trained over normal video sequences using the typical Baum-Welch approach versus the variational one. The variational learning leads to more accurate models for the detection and localization of anomalies, and the general HMM approach is shown to be versatile enough to handle the detection of various synthetically generated tampering events.
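The abstract contrasts classical Baum-Welch (expectation-maximization) training with variational Bayesian learning. As a rough illustration of the baseline being compared against, the following is a minimal sketch of Baum-Welch for a discrete-observation HMM; it is not the paper's Dirichlet mixture-based model, and all function and variable names here are illustrative assumptions.

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=50, seed=0):
    """Estimate initial probabilities pi, transitions A, and emissions B
    for a discrete HMM from one observation sequence via scaled EM.
    (Illustrative sketch only; not the paper's variational algorithm.)"""
    obs = np.asarray(obs)
    T = len(obs)
    rng = np.random.default_rng(seed)
    A = rng.random((n_states, n_states)); A /= A.sum(1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(1, keepdims=True)
    pi = np.full(n_states, 1.0 / n_states)
    for _ in range(n_iter):
        # E-step: scaled forward pass
        alpha = np.zeros((T, n_states)); scale = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]
        scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
        # E-step: scaled backward pass
        beta = np.zeros((T, n_states)); beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
        # State posteriors gamma and expected transition counts xi
        gamma = alpha * beta
        gamma /= gamma.sum(1, keepdims=True)
        xi = np.zeros((n_states, n_states))
        for t in range(T - 1):
            x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
            xi += x / x.sum()
        # M-step: re-estimate the parameters from expected counts
        pi = gamma[0]
        A = xi / gamma[:-1].sum(0)[:, None]
        B = np.stack([gamma[obs == k].sum(0) for k in range(n_symbols)], axis=1)
        B /= gamma.sum(0)[:, None]
    log_likelihood = np.log(scale).sum()
    return pi, A, B, log_likelihood
```

The variational approach the paper advocates replaces these point estimates with approximate posterior distributions over pi, A, and the emission parameters, which is what makes model comparison via the marginal likelihood tractable.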
