Abstract
It is possible to represent each of a number of Markov chains as an evolving sequence of connected subsets of a directed acyclic graph that grow in the following way: initially, all vertices of the graph are unoccupied, particles are fed in one-by-one at a distinguished source vertex, successive particles proceed along directed edges according to an appropriate stochastic mechanism, and each particle comes to rest once it encounters an unoccupied vertex. Examples include the binary and digital search tree processes, the random recursive tree process and generalizations of it arising from nested instances of Pitman's two-parameter Chinese restaurant process, tree-growth models associated with Mallows' $\phi$ model of random permutations and with Schützenberger's non-commutative $q$-binomial theorem, and a construction due to Luczak and Winkler that grows uniform random binary trees in a Markovian manner. We introduce a framework that encompasses such Markov chains, and we characterize their asymptotic behavior by analyzing in detail their Doob-Martin compactifications, Poisson boundaries and tail $\sigma$-fields.
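To make the trickle-down dynamics concrete, the following is a minimal Python sketch of the generic mechanism described in the abstract. It is an illustration under our own conventions, not code from the paper: the names `trickle_down` and `route` are hypothetical, and the uniform routing rule used in the example is just one possible choice of stochastic mechanism.

```python
import random

def trickle_down(source, route, n_particles):
    """Generic trickle-down dynamics on a directed acyclic graph.

    source        : the distinguished source vertex where particles enter
    route(v, occ) : stochastic choice of the next vertex along an out-edge of v,
                    possibly depending on the occupied set occ (a stand-in for
                    the model-specific routing mechanism)
    Returns the sequence of occupied vertex sets, one per particle.
    """
    occupied = set()
    history = []
    for _ in range(n_particles):
        v = source
        while v in occupied:           # keep moving while the current vertex is taken
            v = route(v, occupied)
        occupied.add(v)                # the particle comes to rest here
        history.append(frozenset(occupied))
    return history

# Illustration on the complete binary tree: vertices are binary strings,
# the root is the empty string, and routing is uniform over the two children.
if __name__ == "__main__":
    random.seed(0)
    uniform_route = lambda v, occ: v + random.choice("01")
    for occ in trickle_down("", uniform_route, 5):
        print(sorted(occ))
```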
Highlights
Several stochastic processes appearing in applied probability may be viewed as growing connected subsets of a directed acyclic graph that evolve according to the following dynamics: initially, all vertices of the graph are unoccupied, particles are fed in one-by-one at a distinguished source vertex, successive particles proceed along directed edges according to an appropriate stochastic mechanism, and each particle comes to rest once it encounters an unoccupied vertex.
We are interested in the question: “What is the asymptotic behavior of such a set-valued Markov chain?” For several of the models we consider, any finite neighborhood of the source vertex will, with probability one, be eventually occupied by a particle, and so a rather unilluminating answer to our question is to say in such cases that the sequence of sets converges to the entire vertex set $V$.
A prime example of a Markov chain that fits into the trickle-down framework is the binary search tree (BST) process, and so we spend some time describing the BST process in order to give the reader some concrete motivation for the definitions we introduce later.
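As a concrete rendering of the BST process within the trickle-down picture, here is a short Python sketch. The encoding of vertices as binary strings and the helper name `bst_process` are our own illustrative choices; the routing rule (compare the incoming key with the key stored at each occupied vertex and turn left or right accordingly) is the standard BST insertion mechanism.

```python
import random

def bst_process(keys):
    """Grow a binary search tree by inserting keys one at a time.

    Occupied vertices are encoded as binary strings: '' is the root, and
    v+'0' / v+'1' are the left / right children of v.  Each new key trickles
    down from the root, turning left or right according to comparisons with
    the keys already stored along the way, and comes to rest at the first
    unoccupied vertex it reaches.  Returns the sequence of occupied sets.
    """
    stored = {}            # vertex (binary string) -> key resting there
    history = []
    for key in keys:
        v = ""
        while v in stored:
            v += "0" if key < stored[v] else "1"
        stored[v] = key
        history.append(frozenset(stored))
    return history

# Example: feed in i.i.d. uniform keys, a standard way to drive the BST process.
if __name__ == "__main__":
    random.seed(0)
    keys = [random.random() for _ in range(6)]
    for occ in bst_process(keys):
        print(sorted(occ))
```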
Summary
Several stochastic processes appearing in applied probability may be viewed as growing connected subsets of a directed acyclic graph that evolve according to the following dynamics: initially, all vertices of the graph are unoccupied, particles are fed in one-by-one at a distinguished source vertex, successive particles proceed along directed edges according to an appropriate stochastic mechanism, and each particle comes to rest once it encounters an unoccupied vertex. The sequence $(G^u_{S^u_r}, D^u_{S^u_r})$, $r \in \mathbb{N}$, obtained by time-changing the sequence $(G^u_n, D^u_n)$, $n \in \mathbb{N}$, so that we only observe it when it changes state, is a Markov chain with the same distribution as $(G_n, D_n)$, $n \in \mathbb{N}$. It follows from this observation that we may construct the tree-valued stochastic process $(T_n)_{n \in \mathbb{N}}$ from an infinite collection of independent, identically distributed Pólya urns, with one urn for each vertex of the complete binary tree $\{0,1\}^*$, by running the urn for each vertex according to a clock that depends on the evolution of the urns associated with vertices that are on the path from the root to the vertex. We note that there are a number of other papers that investigate the Doob-Martin boundary of Markov chains on various combinatorial structures such as Young diagrams and partitions – see, for example, [43, 34, 19, 24, 21, 20].
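The urn construction can be illustrated with the following sketch, which assumes the standard identification of the routing decision at each occupied vertex with a two-colour Pólya urn started from one ball of each colour; the helper name `bst_via_polya_urns` and the variable names are hypothetical, not taken from the paper. The urn at a vertex is consulted only when a particle actually reaches that vertex, which plays the role of the time change described above.

```python
import random
from collections import defaultdict

def bst_via_polya_urns(n_particles, seed=None):
    """Grow the BST process from one Pólya urn per vertex of the binary tree.

    Each vertex carries an urn starting with one 'left' and one 'right' ball.
    A particle reaching an occupied vertex draws a ball from that vertex's urn,
    reinforces the drawn colour with an extra ball, and moves to the
    corresponding child.  Returns the sequence of occupied vertex sets.
    """
    rng = random.Random(seed)
    urns = defaultdict(lambda: [1, 1])   # vertex -> [#left balls, #right balls]
    occupied = set()
    history = []
    for _ in range(n_particles):
        v = ""
        while v in occupied:
            left, right = urns[v]
            go_left = rng.random() < left / (left + right)
            urns[v][0 if go_left else 1] += 1   # reinforce the drawn colour
            v += "0" if go_left else "1"
        occupied.add(v)
        history.append(frozenset(occupied))
    return history

if __name__ == "__main__":
    for occ in bst_via_polya_urns(5, seed=1):
        print(sorted(occ))
```

With this routing, a particle arriving at a vertex whose left and right subtrees currently hold $k$ and $m$ particles moves left with probability $(k+1)/(k+m+2)$, which matches the usual BST insertion dynamics.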