Abstract
Classical coupling constructions arrange for copies of the same Markov process started at two different initial states to become equal as soon as possible. In this paper, we consider an alternative coupling framework in which one seeks to arrange for two different Markov (or other stochastic) processes to remain equal for as long as possible, when started in the same state. We refer to this “un-coupling” or “maximal agreement” construction as MEXIT, standing for “maximal exit”. After highlighting the importance of un-coupling arguments in a few key statistical and probabilistic settings, we develop an explicit MEXIT construction for stochastic processes in discrete time with countable state-space. This construction is generalized to random processes on general state-space running in continuous time, and then exemplified by discussion of MEXIT for Brownian motions with two different constant drifts.
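The discrete-time, countable-state-space construction described in the abstract can be sketched concretely. For two chains with strictly positive transition matrices P and Q started at the same state, the maximal probability that the coupled pair agrees through time n is the sum over length-n paths of the smaller of the two path probabilities, and this is achieved by keeping the pair together along a path w with conditional probability equal to the ratio of successive path-minima. The following Python sketch is illustrative (the matrices, function names, and the strict-positivity assumption are ours, not quoted from the paper):

```python
import itertools
import random

def path_prob(M, x0, path):
    # Probability that the chain with transition matrix M, started at x0,
    # visits exactly the states in `path` at times 1, 2, ..., len(path).
    p, x = 1.0, x0
    for y in path:
        p *= M[x][y]
        x = y
    return p

def mexit_survival(P, Q, x0, n):
    # Exact maximal-agreement probability P(T > n): the sum over all
    # length-n paths of the smaller of the two path probabilities.
    states = range(len(P))
    return sum(min(path_prob(P, x0, w), path_prob(Q, x0, w))
               for w in itertools.product(states, repeat=n))

def sample_uncoupling_time(P, Q, x0, horizon, rng):
    # Sequential construction: given agreement along path w, the coupled
    # pair moves together to state y with conditional probability
    # min_path_prob(w + [y]) / min_path_prob(w); otherwise it un-couples.
    # Returns the first disagreement time, or horizon + 1 if the pair is
    # still together at time `horizon`. Assumes P and Q have strictly
    # positive entries, so the denominator never vanishes.
    w, a_prev = [], 1.0
    for t in range(horizon):
        u, acc = rng.random(), 0.0
        stayed = False
        for y in range(len(P)):
            a_new = min(path_prob(P, x0, w + [y]),
                        path_prob(Q, x0, w + [y]))
            acc += a_new / a_prev
            if u < acc:
                w.append(y)
                a_prev = a_new
                stayed = True
                break
        if not stayed:
            return t + 1  # the two processes first differ at time t + 1
    return horizon + 1
```

For example, with P = [[0.7, 0.3], [0.4, 0.6]], Q = [[0.6, 0.4], [0.5, 0.5]] and start state 0, `mexit_survival` gives 0.9 at n = 1 and 0.87 at n = 2, and the empirical survival frequencies of `sample_uncoupling_time` match these values.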
Highlights
Coupling is a device commonly employed in probability theory for learning about distributions of certain random variables by means of judicious construction in ways which depend on other random variables (Lindvall [15] and Thorisson [30]).
Such coupling constructions are often used to prove convergence of Markov processes to stationary distributions (Pitman [21]), especially for Markov chain Monte Carlo (MCMC) algorithms (Roberts and Rosenthal [24], and references therein), by seeking to build two different copies of the same Markov process started at two different initial states in such a way that they become equal at a fast rate.
We have studied an alternative coupling framework in which one seeks to arrange for two different Markov processes to remain equal for as long as possible, when started in the same state.
Summary
Coupling is a device commonly employed in probability theory for learning about distributions of certain random variables by means of judicious construction in ways which depend on other random variables (Lindvall [15] and Thorisson [30]). Such coupling constructions are often used to prove convergence of Markov processes to stationary distributions (Pitman [21]), especially for Markov chain Monte Carlo (MCMC) algorithms (Roberts and Rosenthal [24], and references therein), by seeking to build two different copies of the same Markov process started at two different initial states in such a way that they become equal at a fast rate. We believe the current work complements Vollering [31] well. It offers an explicit treatment of the discrete-time countable-state-space case, generalizes it to the continuous-time case, and discusses a number of significant applications of MEXIT.
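The continuous-time example closing the abstract, MEXIT for Brownian motions with two different constant drifts, also admits a concrete numerical check. If the drifts differ by delta, Girsanov's theorem makes the likelihood ratio between the two path laws up to time t a function of the endpoint alone, exp(delta·W_t − delta²t/2), so the maximal agreement probability through time t, the affinity of the two path laws, reduces to the affinity of two Gaussian endpoint laws, 2Φ(−|delta|√t/2). The sketch below is ours: the closed form is derived from the standard Gaussian affinity identity rather than quoted from the paper, and the function names are illustrative.

```python
import math
import random

def std_normal_cdf(x):
    # Standard normal CDF Phi(x), via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bm_mexit_survival(delta, t):
    # P(MEXIT > t) for two Brownian motions whose constant drifts differ
    # by delta: the path-law affinity 2 * Phi(-|delta| * sqrt(t) / 2).
    return 2.0 * std_normal_cdf(-abs(delta) * math.sqrt(t) / 2.0)

def affinity_monte_carlo(delta, t, n_samples, rng):
    # Monte Carlo estimate of E_P[min(1, dQ/dP)] under the drift-0 law P,
    # where by Girsanov dQ/dP = exp(delta * W_t - delta**2 * t / 2) and
    # W_t ~ N(0, t). Should agree with bm_mexit_survival(delta, t).
    total = 0.0
    for _ in range(n_samples):
        w = rng.gauss(0.0, math.sqrt(t))
        total += min(1.0, math.exp(delta * w - 0.5 * delta * delta * t))
    return total / n_samples
```

For delta = 1 and t = 1 the closed form gives 2Φ(−0.5) ≈ 0.617, and the Monte Carlo estimate of the affinity reproduces this; the survival probability decreases in t, as it must for an un-coupling time.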