Abstract

This paper examines how “lumping”, the aggregation of distinct states of a Markov chain into a single state, affects the underlying properties of the Markov process. Specifically, we estimate a Markov chain model of income convergence for US states and test different quantile lumpings to determine whether they preserve the Markov property. This work ties into the broader literature on modelling regional income convergence using Markov processes, specifically attempts to quantify how reasonable particular choices about state-space compression are and what consequences those choices carry. First, we estimate a rank Markov model. From this, we find that Markov models of regional income convergence lose the Markov property when quantile lumps are large and contain many states, but perform well when lumps are smaller and contain fewer states. This positive finding and the accompanying technical work pave the way for broader studies of the lumpability of discrete Markov models for geographic or policy regions.
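To make the lumping operation concrete, the following is a minimal illustrative sketch (not the paper's estimation code) of aggregating a transition matrix over a partition of states and checking the strong-lumpability condition, under which the lumped chain remains Markov. The example matrix and partition are hypothetical; NumPy is assumed.

```python
import numpy as np

def lump(P, partition):
    """Aggregate transition matrix P over a partition (list of lists of state indices)."""
    k = len(partition)
    Q = np.zeros((k, k))
    for i, block_i in enumerate(partition):
        for j, block_j in enumerate(partition):
            # Average, over rows in block_i, of the total mass sent into block_j.
            Q[i, j] = P[np.ix_(block_i, block_j)].sum(axis=1).mean()
    return Q

def is_strongly_lumpable(P, partition, tol=1e-9):
    """Strong lumpability: within each block, every row sends the same total mass to each block."""
    for block_i in partition:
        for block_j in partition:
            mass = P[np.ix_(block_i, block_j)].sum(axis=1)
            if np.ptp(mass) > tol:  # spread of block-to-block mass across rows
                return False
    return True

# Hypothetical 4-state chain that is strongly lumpable into two 2-state blocks.
P = np.array([[0.30, 0.20, 0.25, 0.25],
              [0.10, 0.40, 0.30, 0.20],
              [0.20, 0.20, 0.30, 0.30],
              [0.25, 0.15, 0.10, 0.50]])
partition = [[0, 1], [2, 3]]
print(is_strongly_lumpable(P, partition))  # → True
print(lump(P, partition))                  # → [[0.5 0.5] [0.4 0.6]]
```

When the condition fails, the lumped process generally loses the Markov property, which is the phenomenon the paper tests across quantile lumpings.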
