Abstract

Equilibria in stochastic economic models are often time series that fluctuate in complex ways, but it is sometimes possible to summarize the long-run characteristics of these fluctuations. For example, if the law of motion determined by economic interactions is Markovian, and if the equilibrium time series converges in a specific probabilistic sense, then the long-run behavior is completely determined by an invariant probability distribution. This paper develops and unifies a number of results found in the probability literature which enable one to prove, under very general conditions, the existence of an invariant distribution and the convergence of the corresponding Markov process.

Virtually all of economic theory focuses upon the study of economic equilibrium. This concept has recently undergone several subtle elaborations. No longer must a system of markets in equilibrium be thought of as one at rest in a static steady state. Instead, there is a growing body of literature (e.g., [4, 5, 12, 16, 20, 21]) which defines equilibrium as a stochastic process of market-clearing prices and quantities that is consistent with the self-interested behavior of economic agents. Needless to say, equilibrium stochastic processes can be very complex time series which fluctuate in irregular ways. For theoretical and econometric purposes it is useful to have a convenient way of summarizing the average behavior of such processes over time. This paper draws together and unifies a number of fundamental results from the probability literature which enable one to do this for discrete-time Markov processes on general state spaces.

The starting point of the analysis is a set S of economic states (e.g., prices and/or quantities). The only technical restriction placed upon S is that it be a Borel subset of a complete, separable metric space. The second datum is a transition probability P(s, ·) on S.
The number P(s, A) records the probability that the economic system moves from the state s to some state in the Borel subset A of S during one unit of elapsed time. In economic applications the transition probability is usually derived from hypotheses about market clearing and the maximizing behavior of economic agents. The transition probability (together with an initial probability measure on S) defines a discrete-time Markov process.

One way of summarizing the dynamic behavior implied by P is to look for an invariant probability. A probability measure λ on S is invariant for P if, for all Borel subsets A of S, one has the equality ∫_S P(s, A) λ(ds) = λ(A). An invariant probability is a kind of probabilistic steady state for the dynamics defined by P. Of course there may be no invariant probability for P at all; and even if one exists, it may convey no information about the behavior of the process over time except under very special initial conditions.

There is a second way of summarizing the behavior of Markov processes defined by the transition probability P. Let Pⁿ(s, A) denote the n-step transition
