Abstract

Particle methods, also known as Sequential Monte Carlo, have become ubiquitous for Bayesian inference in state-space models, particularly in nonlinear, non-Gaussian scenarios. However, in many practical situations the state-space model contains unknown model parameters that need to be estimated simultaneously with the state. In this paper, we discuss a sequential analysis for combined parameter and state estimation. An online learning method is proposed to approximate the distribution of the model parameter by tuning a flexible proposal mixture distribution to minimize the Kullback-Leibler divergence between them. We derive the sequential learning method using a truncated Dirichlet process normal mixture and present a general algorithm within the framework of auxiliary particle filtering. The proposed algorithm is verified on a blind deconvolution problem, a typical state-space model with unknown model parameters. Furthermore, in a more challenging application that we call meta-modulation, a more complex blind deconvolution problem with sophisticated system evolution equations, the proposed method performs satisfactorily and achieves promising results for high-efficiency communication.
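To make the proposal-tuning idea concrete, below is a minimal sketch (in Python/NumPy) of one way to fit a mixture proposal to weighted parameter particles. It is an illustrative approximation, not the paper's algorithm: a fixed-size Gaussian mixture is fitted by weighted EM, which approximately minimizes the Kullback-Leibler divergence between the weighted particle approximation of the parameter distribution and the mixture proposal, whereas the paper uses a truncated Dirichlet process normal mixture updated sequentially. The function names, the fixed number of components K, and the scalar-parameter assumption are all hypothetical.

import numpy as np

def fit_mixture_proposal(theta, weights, K=5, n_iter=50, seed=0):
    """Fit a K-component Gaussian mixture to weighted scalar parameter particles.

    theta: (N,) array of parameter particles; weights: (N,) normalized importance weights.
    """
    rng = np.random.default_rng(seed)
    # Initialize component means by weighted resampling, shared variance, equal weights.
    means = rng.choice(theta, size=K, p=weights)
    var = np.full(K, np.var(theta) + 1e-12)
    pis = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] proportional to pi_k * N(theta_n | mu_k, var_k).
        d2 = (theta[:, None] - means[None, :]) ** 2
        logp = np.log(pis)[None, :] - 0.5 * (np.log(2.0 * np.pi * var)[None, :] + d2 / var[None, :])
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step with particle weights: each particle counts in proportion to its weight.
        wr = weights[:, None] * r
        nk = wr.sum(axis=0) + 1e-12
        means = (wr * theta[:, None]).sum(axis=0) / nk
        var = (wr * (theta[:, None] - means[None, :]) ** 2).sum(axis=0) / nk + 1e-12
        pis = nk / nk.sum()
    return pis, means, var

def sample_from_mixture(pis, means, var, size, seed=1):
    """Draw fresh parameter proposals from the fitted mixture."""
    rng = np.random.default_rng(seed)
    comps = rng.choice(len(pis), size=size, p=pis)
    return rng.normal(means[comps], np.sqrt(var[comps]))

Fresh parameter values drawn from the fitted mixture can then be attached to the state particles at the next filtering step, which is the role the tuned proposal plays inside an auxiliary particle filter.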

Highlights

  • The state-space model, a class of probabilistic graphical models (Koller and Friedman, 2009) that describes the dependence between the unobserved state variable and the observed measurement, is a fundamental model for statistical inference with diverse applications in fields such as statistics, econometrics, and information engineering (West and Harrison, 1997; Cappé et al., 2005)

  • An online learning method is proposed to approximate the distribution of the model parameter by tuning a flexible proposal mixture distribution to minimize the Kullback-Leibler divergence between them

  • The maximum likelihood approach generally converges rather slowly, but it may be a good choice for large data sets because of its low complexity; Bayesian methods apply directly to the augmented states, and Markov chain Monte Carlo steps are utilized to improve the inference/estimation of the model parameter (Gilks and Berzuini, 2001; Fearnhead, 2002; Andrieu et al., 2010); a sketch of such a rejuvenation step is given after this list
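The last bullet mentions Markov chain Monte Carlo steps for the model parameter; below is a minimal sketch of that rejuvenation idea in the spirit of resample-move schemes (Gilks and Berzuini, 2001), not the method proposed in this paper. For an illustrative scalar AR(1) state equation with known Gaussian noise and a flat prior on the coefficient theta, the conditional posterior of theta given a particle's state trajectory is Gaussian, so the MCMC step reduces to an exact Gibbs draw; the model and all names are assumptions made only for the example.

import numpy as np

def rejuvenate_theta(x_paths, sigma_x=0.5, rng=None):
    """Refresh the static parameter of each augmented particle from its conditional posterior.

    x_paths: (N, t+1) array; row n is particle n's state trajectory x_{0:t} under
    x_s = theta * x_{s-1} + eps_s, eps_s ~ N(0, sigma_x^2), with a flat prior on theta.
    """
    rng = np.random.default_rng() if rng is None else rng
    x_prev, x_next = x_paths[:, :-1], x_paths[:, 1:]
    s_xx = (x_prev * x_prev).sum(axis=1) + 1e-12   # sum over s of x_{s-1}^2, per particle
    s_xy = (x_prev * x_next).sum(axis=1)           # sum over s of x_{s-1} * x_s, per particle
    post_mean = s_xy / s_xx
    post_std = sigma_x / np.sqrt(s_xx)
    # Fresh draws restore the parameter diversity that resampling erodes.
    return rng.normal(post_mean, post_std)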


Summary

Introduction

The state-space model, a class of probabilistic graphical models (Koller and Friedman, 2009) that describes the dependence between the unobserved state variable and the observed measurement, is a fundamental model for statistical inference with diverse applications in fields such as statistics, econometrics, and information engineering (West and Harrison, 1997; Cappé et al., 2005). When the model contains unknown parameters that must be estimated together with the state, one early and straightforward way to deal with the problem is to extend the original state to an augmented state that includes both the state and the parameter, and to apply a standard particle filter to perform inference on both. This naive approach is recognized to be inefficient, because the parameter space is not well explored (Kitagawa, 1998; Liu and West, 2001).
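For concreteness, here is a minimal sketch of the naive augmented-state approach just described: the unknown parameter is appended to the state and a bootstrap particle filter is run on the pair (x_t, theta). The scalar AR(1) model with unknown coefficient theta, the noise levels, and all function names are illustrative assumptions, not the models used in the paper. Because the theta-component of each particle never moves after initialization, repeated resampling depletes parameter diversity, which is the inefficiency noted above.

import numpy as np

def augmented_bootstrap_filter(y, N=2000, sigma_x=0.5, sigma_y=1.0, seed=0):
    """Bootstrap particle filter on the augmented pair (x_t, theta) for an AR(1) toy model."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=N)           # state particles x_0
    theta = rng.uniform(-1.0, 1.0, size=N)     # static parameter particles (never move again)
    theta_means = []
    for y_t in y:
        # Propagate only the dynamic part of the augmented state: x_t = theta * x_{t-1} + noise.
        x = theta * x + rng.normal(0.0, sigma_x, size=N)
        # Weight by the observation likelihood y_t | x_t ~ N(x_t, sigma_y^2).
        logw = -0.5 * ((y_t - x) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Multinomial resampling of the whole augmented particle (x, theta).
        idx = rng.choice(N, size=N, p=w)
        x, theta = x[idx], theta[idx]
        theta_means.append(theta.mean())
    return np.array(theta_means), theta

# Usage: simulate data with true theta = 0.8 and run the filter.
rng = np.random.default_rng(42)
true_theta, T = 0.8, 200
x_true, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = true_theta * x_true[t - 1] + rng.normal(0.0, 0.5)
    y[t] = x_true[t] + rng.normal(0.0, 1.0)
theta_path, theta_final = augmented_bootstrap_filter(y)
print("final posterior-mean estimate of theta:", theta_path[-1])
print("distinct surviving theta particles:", np.unique(theta_final).size)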

Problem Statement and Particle Filter Algorithm
Sequential Learning
Joint Parameter and State Estimation
Blind Deconvolution
Meta-modulation
