Abstract
Consider the classical nonparametric regression problem yᵢ = f(tᵢ) + εᵢ, i = 1, ..., n, where tᵢ = i/n and the εᵢ are i.i.d. zero-mean normal with variance σ². The aim is to estimate the true function f, which is assumed to belong to the smoothness class described by the Besov space B^s_{p,q}. These are functions belonging to Lᵖ with derivatives up to order s, in the Lᵖ sense. The parameter q controls a further, finer degree of smoothness. In a Bayesian setting, a prior on B^s_{p,q} is chosen following Abramovich, Sapatinas and Silverman (1998). We show that the optimal Bayesian estimator of f is then also a.s. in B^s_{p,q} if the loss function is chosen to be the Besov norm of B^s_{p,q}. Because it is impossible to compute this optimal Bayesian estimator analytically, we propose a stochastic algorithm based on an approximation of the Bayesian risk and on simulated annealing. Some simulations are presented to show that the algorithm performs well and that the new estimator is competitive when compared to the more standard posterior mean.
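The overall scheme described above can be sketched with a toy simulated-annealing loop. Everything below is an illustrative assumption, not the paper's construction: the true function, the noise level, and in particular the objective, which here is just a residual sum of squares plus a crude roughness penalty standing in for the approximated Bayes risk under a Besov-norm loss.

```python
import math
import random

random.seed(0)

# Toy data from the model y_i = f(t_i) + eps_i, i = 1, ..., n,
# with t_i = i/n and eps_i i.i.d. N(0, sigma^2).
n = 128
sigma = 0.3
f = lambda t: math.sin(2 * math.pi * t)   # hypothetical true function
t = [i / n for i in range(1, n + 1)]
y = [f(ti) + random.gauss(0, sigma) for ti in t]

def risk(g_vals):
    """Stand-in for the approximated Bayes risk of a candidate fit.

    Residual sum of squares plus a first-difference roughness penalty;
    the paper's actual criterion approximates the posterior expectation
    of a Besov-norm loss.
    """
    rss = sum((yi - gi) ** 2 for yi, gi in zip(y, g_vals))
    rough = sum((g_vals[i + 1] - g_vals[i]) ** 2 for i in range(n - 1))
    return rss + 10.0 * rough

# Simulated annealing over pointwise function values: perturb one
# value at a time and accept worse moves with probability
# exp(-delta / temp), cooling the temperature geometrically.
g = [0.0] * n
current = risk(g)
temp = 1.0
for step in range(20000):
    i = random.randrange(n)
    old = g[i]
    g[i] += random.gauss(0, 0.1)
    proposed = risk(g)
    delta = proposed - current
    if delta < 0 or random.random() < math.exp(-delta / temp):
        current = proposed
    else:
        g[i] = old                 # reject the move
    temp *= 0.9995                 # arbitrary cooling schedule
```

The accept/reject rule lets the chain escape local minima early on, while the cooling schedule makes it behave like a greedy descent near the end; the paper's algorithm applies the same idea to its approximation of the Bayesian risk.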