Abstract

A probability distribution governing the evolution of a stochastic process has infinitely many Bayesian representations of the form μ = ∫_Θ μ_θ dλ(θ). Among these, a natural representation is one whose components (the μ_θ's) are learnable (one can approximate μ_θ by conditioning μ on observation of the process) and sufficient for prediction (the μ_θ's predictions are not aided by conditioning on observation of the process). We show the existence and uniqueness of such a representation under a suitable asymptotic mixing condition on the process. This representation can be obtained by conditioning on the tail-field of the process, and any learnable representation that is sufficient for prediction is asymptotically like the tail-field representation. This result is related to the celebrated de Finetti theorem, but with exchangeability weakened to an asymptotic mixing condition, and with his conclusion of a decomposition into i.i.d. component distributions weakened to components that are learnable and sufficient for prediction.
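As an illustration (this worked instance is not drawn from the paper itself), the classical de Finetti setting provides the simplest example of such a representation. For an exchangeable {0,1}-valued process, the law μ decomposes over i.i.d. Bernoulli components:

μ(X_1 = x_1, …, X_n = x_n) = ∫_0^1 θ^(Σ_i x_i) (1 − θ)^(n − Σ_i x_i) dλ(θ).

Here each component μ_θ is the i.i.d. Bernoulli(θ) law. It is learnable, since the empirical frequency (1/n) Σ_{i≤n} X_i converges to θ under μ_θ, and it is sufficient for prediction, since under μ_θ the conditional probability that X_{n+1} = 1 given any observed history is simply θ.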
