Abstract

In this paper, we extend the framework of the convergence of stochastic approximations. Such procedures are used in many methods, such as parameter estimation inside a Metropolis-Hastings algorithm, stochastic gradient descent, or the stochastic Expectation Maximization algorithm. The procedure is given by θ_{n+1} = θ_n + Δ_{n+1} H_{θ_n}(X_{n+1}), where (X_n)_{n∈N} is a sequence of random variables following a parametric distribution that depends on (θ_n)_{n∈N}, and (Δ_n)_{n∈N} is a step sequence. The convergence of such a stochastic approximation has already been proved under an assumption of geometric ergodicity of the Markov dynamic. However, in many practical situations this hypothesis is not satisfied, for instance for any heavy-tailed target distribution in a Markov chain Monte Carlo Metropolis-Hastings algorithm. In this paper, we relax this hypothesis and prove the convergence of the stochastic approximation assuming only subgeometric ergodicity of the Markov dynamic. This result opens up the possibility of deriving more generic algorithms with proven convergence. As a first example, we study an adaptive Markov chain Monte Carlo algorithm in which the proposal distribution is adapted by learning the variance of a heavy-tailed target distribution. We then apply our work to Independent Component Analysis, where a positive heavy-tailed noise leads to a subgeometric dynamic in an Expectation Maximization algorithm.
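The recursion θ_{n+1} = θ_n + Δ_{n+1} H_{θ_n}(X_{n+1}) can be illustrated with a minimal Robbins-Monro sketch. The code below is not the paper's algorithm: it uses i.i.d. Gaussian draws (rather than a Markov chain whose kernel depends on θ_n) and the simple mean-field H_θ(x) = x − θ, whose root is the target mean. The helper names (`stochastic_approximation`, `h_field`, `sample`) are illustrative, not from the paper.

```python
import random

def stochastic_approximation(h_field, sample, theta0, n_iter=10_000):
    """Run the recursion theta_{n+1} = theta_n + delta_{n+1} * H_{theta_n}(X_{n+1})."""
    theta = theta0
    for n in range(1, n_iter + 1):
        delta = 1.0 / n          # step sequence: sum(delta) = inf, sum(delta^2) < inf
        x = sample(theta)        # draw X_{n+1}; in the paper it depends on theta_n
        theta = theta + delta * h_field(theta, x)
    return theta

# Toy example: estimate the mean mu of a Gaussian.
# With H_theta(x) = x - theta, the root of h(theta) = E[H_theta(X)] is theta* = mu.
random.seed(0)
mu = 3.0
est = stochastic_approximation(
    h_field=lambda theta, x: x - theta,
    sample=lambda theta: random.gauss(mu, 1.0),
    theta0=0.0,
)
```

With the step Δ_n = 1/n and this particular H, the recursion reduces to the running sample mean, so `est` converges to μ as n grows.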

Highlights

  • A common problem across scientific fields is to find the roots of a non-linear function h : Θ → R

  • We relaxed the condition of geometric ergodicity previously needed to ensure the convergence of stochastic approximations with Markovian dynamics

  • We provide theoretical guarantees for a wider class of algorithms that are used in practice


Summary

Introduction

A common problem across scientific fields is to find the roots of a non-linear function h : Θ → R. These examples show that such methods may be used in practice without any theoretical guarantee of convergence. This situation leads us to study the convergence of these stochastic algorithms for Markov chains under a relaxed assumption of subgeometric ergodicity. The first corollary proves the convergence of a stochastic approximation used to adapt the variance of the proposal within a Metropolis-Hastings algorithm. We prove this convergence for two different classes of heavy-tailed target distributions, including the Weibull and Pareto distributions among others. The second corollary concerns the independent component analysis model, where distributions with positive heavy tails lead to a subgeometrically ergodic Markov chain in a Stochastic Approximation Expectation Maximization Markov chain Monte Carlo (SAEM-MCMC) algorithm.
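The first corollary concerns adapting the proposal variance of a Metropolis-Hastings sampler by stochastic approximation. The sketch below shows one common scheme of this style, tuning log σ toward a target acceptance rate; it is an assumed illustration, not the paper's exact adaptation rule, and it runs on a Gaussian target for brevity rather than the heavy-tailed (Pareto, Weibull) targets the paper studies.

```python
import math
import random

def adaptive_rw_metropolis(log_target, x0, n_iter=20_000, target_acc=0.234):
    """Random-walk Metropolis whose proposal scale sigma is tuned on the fly
    by a Robbins-Monro update on log(sigma), aiming at a target acceptance rate."""
    x, log_sigma = x0, 0.0
    samples = []
    for n in range(1, n_iter + 1):
        sigma = math.exp(log_sigma)
        prop = x + sigma * random.gauss(0.0, 1.0)          # symmetric proposal
        log_alpha = min(0.0, log_target(prop) - log_target(x))
        if math.log(random.random()) < log_alpha:
            x = prop
        # Stochastic approximation step: raise sigma when accepting too often,
        # lower it when accepting too rarely.
        log_sigma += (1.0 / n) * (math.exp(log_alpha) - target_acc)
        samples.append(x)
    return samples, math.exp(log_sigma)

random.seed(1)
samples, sigma = adaptive_rw_metropolis(lambda x: -0.5 * x * x, x0=0.0)
```

Adapting σ this way makes the transition kernel itself depend on the running parameter, which is exactly the Markovian-dynamic setting whose convergence the paper analyzes under subgeometric ergodicity.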

Stochastic approximation framework with Markovian dynamic
Markovian dynamic
Truncation process
Control of the fluctuations and main convergence theorem
Sketch of proof
Presentation of the algorithm
Application to the Pareto distribution
Application to independent component analysis
Conclusion

