Abstract

This article presents a filter for state-space models based on Bellman's dynamic programming principle applied to the mode estimator. The proposed Bellman filter generalises the Kalman filter, including its extended and iterated versions, while remaining equally inexpensive computationally. Unlike the Kalman filter, the Bellman filter is robust under heavy-tailed observation noise and applicable to a wider range of (nonlinear and non-Gaussian) models, e.g. for count, intensity, duration, volatility and dependence data. The Bellman-filtered states are shown to converge, in quadratic mean, towards a small region around the true state. (Hyper)parameters are estimated by numerically maximising a filter-implied log-likelihood decomposition, which serves as an alternative to the classic prediction-error decomposition for linear Gaussian models. Simulation studies reveal that the Bellman filter performs on par with (or even outperforms) state-of-the-art simulation-based techniques, e.g. particle filters and importance samplers, while requiring a fraction (e.g. 1%) of the computational cost, being straightforward to implement and offering full scalability to higher-dimensional state spaces.
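To illustrate the kind of recursion the abstract alludes to, below is a minimal, hypothetical sketch of a mode-based (Bellman-style) filter for a univariate Poisson count model with a Gaussian AR(1) state. The model, parameter values, the quadratic-penalty form of the update and the curvature-based variance refresh are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
# Hypothetical sketch: mode-based filtering for a Poisson count model with an
# AR(1) Gaussian state.  All modelling choices below are assumptions made for
# illustration, not taken verbatim from the paper.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Assumed data-generating process:
#   x_t = phi * x_{t-1} + eta_t,  eta_t ~ N(0, q)
#   y_t | x_t ~ Poisson(exp(x_t))
phi, q, T = 0.95, 0.05, 200
x = np.zeros(T)
y = np.zeros(T, dtype=int)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(scale=np.sqrt(q))
    y[t] = rng.poisson(np.exp(x[t]))

def bellman_style_filter(y, phi, q, a0=0.0, p0=1.0):
    """Each update maximises the observation log-density plus a quadratic
    penalty around the predicted state, then refreshes the state variance
    from the curvature of the objective at the mode (an assumption here)."""
    a, p = a0, p0
    a_filt = np.empty(len(y))
    for t, yt in enumerate(y):
        # Prediction step (linear Gaussian state equation assumed).
        a_pred = phi * a
        p_pred = phi ** 2 * p + q
        # Update step: maximise  y_t*s - exp(s) - (s - a_pred)^2 / (2 p_pred),
        # i.e. the Poisson log-density (up to a constant) plus the penalty.
        neg_obj = lambda s: -(yt * s - np.exp(s)) + 0.5 * (s - a_pred) ** 2 / p_pred
        a = minimize_scalar(neg_obj).x
        # Updated variance from the curvature (second derivative) at the mode.
        p = 1.0 / (np.exp(a) + 1.0 / p_pred)
        a_filt[t] = a
    return a_filt

a_filt = bellman_style_filter(y, phi, q)
print("RMSE of filtered state:", np.sqrt(np.mean((a_filt - x) ** 2)))
```

Each update is a small one-dimensional optimisation, so the cost per observation is comparable to an iterated extended Kalman update; this is consistent with the abstract's claim of Kalman-level computational expense, though the details above remain assumptions.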
