Abstract

Consider the classical nonparametric regression problem y_i = f(t_i) + ε_i, i = 1, ..., n, where t_i = i/n and the ε_i are i.i.d. zero-mean normal with variance σ². The aim is to estimate the true function f, which is assumed to belong to the smoothness class described by the Besov space B^s_{p,q}. These are functions belonging to L^p whose derivatives up to order s are also in the L^p sense; the parameter q controls a further, finer degree of smoothness. In a Bayesian setting, a prior on B^s_{p,q} is chosen following Abramovich, Sapatinas and Silverman (1998). We show that the optimal Bayesian estimator of f is then also a.s. in B^s_{p,q} if the loss function is chosen to be the norm of B^s_{p,q}. Because it is impossible to compute this optimal Bayesian estimator analytically, we propose a stochastic algorithm based on an approximation of the Bayesian risk and on simulated annealing. Some simulations are presented to show that the algorithm performs well and that the new estimator is competitive when compared to the more standard posterior mean.
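The setting above can be illustrated with a minimal sketch. This is not the authors' algorithm: the choice of f, the cosine basis (standing in for a wavelet expansion), the squared-error energy (standing in for the intractable Bayes risk under the Besov-norm loss), and the cooling schedule are all assumptions made purely for illustration of the regression model and of a generic simulated-annealing minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical nonparametric regression: y_i = f(t_i) + eps_i, t_i = i/n.
n = 128
t = np.arange(1, n + 1) / n
f_true = np.sin(2 * np.pi * t)   # illustrative choice of f (assumption)
sigma = 0.3
y = f_true + rng.normal(0.0, sigma, n)

# Small cosine basis as a stand-in for a wavelet expansion (assumption).
K = 6
X = np.cos(np.outer(t, np.arange(K)) * np.pi)

def energy(beta):
    """Empirical squared-error risk; stands in for the intractable Bayes risk."""
    r = y - X @ beta
    return float(np.mean(r * r))

# Generic simulated annealing over the basis coefficients.
beta = np.zeros(K)
best_beta, best_e = beta.copy(), energy(beta)
temp = 1.0
for _ in range(5000):
    prop = beta + rng.normal(0.0, 0.1, K)       # random local proposal
    de = energy(prop) - energy(beta)
    # Metropolis acceptance: always take improvements, sometimes uphill moves.
    if de < 0 or rng.random() < np.exp(-de / temp):
        beta = prop
        e = energy(beta)
        if e < best_e:
            best_beta, best_e = beta.copy(), e
    temp *= 0.999                                # geometric cooling schedule

f_hat = X @ best_beta                            # fitted curve at the design points
```

Because the energy here is a smooth quadratic, annealing is overkill (least squares would do); the point is only the shape of the accept/cool loop used when the risk has no closed form.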
