Abstract

This paper deals with a parametrized family of partially observed bivariate Markov chains. We establish that, under very mild assumptions, the limit of the normalized log-likelihood function is maximized when the parameters belong to the equivalence class of the true parameter, which is a key feature for obtaining the consistency of the maximum likelihood estimators (MLEs) in well-specified models. This result is obtained in the general framework of partially dominated models. We examine two specific cases of interest, namely, hidden Markov models (HMMs) and observation-driven time series models. In contrast with previous approaches, identifiability is addressed by relying on the uniqueness of the invariant distribution of the Markov chain associated with the complete data, regardless of its rate of convergence to equilibrium.
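
To fix ideas, here is a minimal sketch of the objects at play, written in our own notation rather than the paper's: only the second component of a bivariate Markov chain (X_k, Y_k) is observed, and the result concerns the limit of the normalized log-likelihood of those observations.

  % Minimal sketch (our notation, not the paper's): L_n is the normalized
  % log-likelihood of the observations Y_1, ..., Y_n under the parameter theta.
  \[
    L_n(\theta) \;=\; \frac{1}{n}\,\ln p^{\theta}(Y_1,\dots,Y_n)
    \;\xrightarrow[n\to\infty]{}\; \ell(\theta),
  \]
  % and the limiting criterion is maximized exactly on the equivalence
  % class [\theta_\star] of parameters inducing the same distribution of
  % the observations as the true parameter \theta_\star:
  \[
    \operatorname*{arg\,max}_{\theta\in\Theta} \ell(\theta) \;=\; [\theta_\star].
  \]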

Highlights

  • Maximum likelihood estimation is a widespread method for identifying a parametric model of a time series from a sample of observations

  • Under a well-specified model assumption, it is of prime interest to show the consistency of the estimator, that is, its convergence to the true parameter, say θ⋆, as the sample size goes to infinity

  • The proof generally involves two important steps: 1) the maximum likelihood estimator (MLE) converges to the maximizing set Θ⋆ of the asymptotic normalized log-likelihood, and 2) the maximizing set reduces to the true parameter
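
Schematically (our paraphrase, not the paper's exact statements, with d denoting a metric on the parameter set Θ), these two steps read:

  % Step 1: the MLE converges to the maximizing set of the limit criterion,
  \[
    \hat\theta_n \in \operatorname*{arg\,max}_{\theta\in\Theta} L_n(\theta),
    \qquad
    d\bigl(\hat\theta_n,\Theta_\star\bigr) \xrightarrow[n\to\infty]{} 0,
    \qquad
    \Theta_\star := \operatorname*{arg\,max}_{\theta\in\Theta} \ell(\theta);
  \]
  % Step 2 (identifiability): the maximizing set reduces to the true parameter.
  \[
    \Theta_\star \;=\; \{\theta_\star\}.
  \]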

Summary

Introduction

Maximum likelihood estimation is a widespread method for identifying a parametric model of a time series from a sample of observations. Under a well-specified model assumption, it is of prime interest to show the consistency of the estimator, that is, its convergence to the true parameter, say θ⋆, as the sample size goes to infinity. In many situations, however, consistency can only be obtained in a weaker sense, namely, that the estimated parameter converges to the set of all the parameters associated with the same distribution as that of the observed sample. This consistency result is referred to as equivalence-class consistency, as introduced by [24]. When the observed variable is discrete, general consistency results have been obtained only recently in [9] and [10] (see [20] for the existence of stationary and ergodic solutions to some observation-driven time series models). In these contributions, proving that the maximizing set Θ⋆ reduces to {θ⋆} requires checking specific conditions in each given example and seems difficult to carry out in a more general context, for instance when the distribution of the observations given the hidden variable depends on an unknown parameter.
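
For concreteness, equivalence-class consistency can be written as follows (a sketch under standard conventions; the symbols are ours, with P^θ_Y denoting the law of the observed process under θ):

  % Two parameters are equivalent when they induce the same distribution
  % of the observed process:
  \[
    \theta \sim \theta' \iff \mathbb{P}^{\theta}_{Y} = \mathbb{P}^{\theta'}_{Y},
    \qquad
    [\theta_\star] := \{\theta\in\Theta : \theta \sim \theta_\star\},
  \]
  % and the MLE is equivalence-class consistent when its distance to this
  % class vanishes as the sample size grows:
  \[
    d\bigl(\hat\theta_n,[\theta_\star]\bigr) \xrightarrow[n\to\infty]{} 0
    \quad \text{almost surely.}
  \]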

A general approach to identifiability
Application to hidden Markov models
Application to observation-driven models