Abstract

Given a sequence X = (X_1, X_2, …) of random observations, a Bayesian forecaster aims to predict X_{n+1} based on (X_1, …, X_n) for each n ≥ 0. To this end, in principle, she only needs to select a collection σ = (σ_0, σ_1, …), called a "strategy" in what follows, where σ_0(·) = P(X_1 ∈ ·) is the marginal distribution of X_1 and σ_n(·) = P(X_{n+1} ∈ · | X_1, …, X_n) is the n-th predictive distribution. By the Ionescu–Tulcea theorem, σ can be assigned directly, without passing through the usual prior/posterior scheme. One main advantage is that no prior probability need be selected. In a nutshell, this is the predictive approach to Bayesian learning. This paper provides a concise review of that approach: we aim to put it in the right framework, to clear up a few misunderstandings, and to offer a unifying view. Some recent results are discussed as well. In addition, some new strategies are introduced and the corresponding distribution of the data sequence X is determined. The strategies concern generalized Pólya urns, random change points, covariates, and stationary sequences.
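To make the notion of a strategy concrete, the sketch below implements one classical example of a directly assigned predictive rule: the Pólya-urn (Dirichlet-process-style) predictive, where P(X_{n+1} = j | X_1, …, X_n) = (c_j + α·G_0(j)) / (n + α), with c_j the count of past observations equal to j, α > 0 a concentration parameter, and G_0 a base distribution on a finite set. The function names and the finite-support setting are illustrative choices, not notation from the paper.

```python
import random

def polya_predictive(counts, alpha, base_probs):
    """The n-th predictive distribution sigma_n under a Polya-urn strategy:
    P(X_{n+1} = j | past) = (counts[j] + alpha * base_probs[j]) / (n + alpha),
    where n = total number of past observations."""
    n = sum(counts.values())
    return {j: (counts.get(j, 0) + alpha * base_probs[j]) / (n + alpha)
            for j in base_probs}

def sample_sequence(n_steps, alpha, base_probs, rng):
    """Generate X_1, ..., X_{n_steps} by sampling each X_{n+1} from sigma_n.
    By Ionescu-Tulcea, these predictives determine the law of the sequence."""
    counts = {}
    seq = []
    for _ in range(n_steps):
        probs = polya_predictive(counts, alpha, base_probs)
        x = rng.choices(list(probs), weights=list(probs.values()))[0]
        counts[x] = counts.get(x, 0) + 1  # reinforce the observed value
        seq.append(x)
    return seq

# Example: sigma_0 equals the base distribution; later predictives
# shift mass toward frequently observed values.
rng = random.Random(0)
seq = sample_sequence(50, alpha=2.0, base_probs={"a": 0.5, "b": 0.5}, rng=rng)
```

Note that the strategy is specified without ever writing down a prior: the predictive rule itself is the primitive object, which is the point of the predictive approach.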
