Abstract

There are essentially two statistical paradigms, the Bayesian and the frequentist. Despite their obvious differences, the two approaches have certain points in common. In particular, both are density (or likelihood) based and neither has a concept of approximation. By a concept of approximation we mean some formal admission of the fact that the statistical models are not true representations of the data. We argue that the relationship between the data and the model is a fundamental one which cannot be reduced to either diagnostics or model validation. We argue further that a concept of approximation must be formulated in a weak topology, different from the strong topology of densities. For this reason there can be no density or likelihood based concept of approximation. The concept of approximation we suggest goes back to [Donoho, D. L. (1988). One-sided inference about functionals of a density. Annals of Statistics, 16, 1390–1420] and [Davies, P. L. (1995). Data features. Statistica Neerlandica, 49, 185–245] and requires that ‘typical’ data sets simulated under the model ‘look like’ the real data set. This idea is developed using examples from nonparametric regression.

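To make the ‘typical data sets simulated under the model look like the real data’ idea concrete, the following is a minimal Python sketch in a nonparametric-regression setting. It is illustrative only: the Gaussian noise model, the partial-sum statistic, and the names kuiper_like_stat and looks_like are assumptions made here for exposition, not the specific multiscale procedures of Donoho (1988) or Davies (1995).

```python
import numpy as np

rng = np.random.default_rng(0)

def kuiper_like_stat(residuals):
    # A weak-topology style feature: the maximum absolute normalized
    # partial sum of residuals. It reacts to systematic deviations of
    # the data from the model, not to pointwise density differences.
    n = len(residuals)
    s = np.cumsum(residuals) / np.sqrt(n)
    return np.max(np.abs(s)) / np.std(residuals, ddof=1)

def looks_like(y, fitted, sigma, n_sim=999, level=0.95):
    # The model is treated as an adequate approximation if the real
    # residuals are 'typical' of residuals simulated under the model
    # y_i = fitted_i + sigma * eps_i with Gaussian eps_i.
    # (Simplification: simulated residuals are drawn directly rather
    # than refitting the model to each simulated data set.)
    t_real = kuiper_like_stat(y - fitted)
    t_sim = np.array([
        kuiper_like_stat(rng.normal(0.0, sigma, size=len(y)))
        for _ in range(n_sim)
    ])
    return t_real <= np.quantile(t_sim, level)

# Toy usage: data with a narrow bump that a constant-mean model misses.
x = np.linspace(0.0, 1.0, 200)
y = 0.3 * np.exp(-((x - 0.5) / 0.05) ** 2) + rng.normal(0.0, 0.1, size=x.size)
print(looks_like(y, fitted=np.full_like(y, y.mean()), sigma=0.1))
```

Whether the constant-mean model is judged adequate here depends on the chosen statistic and level; the point of the sketch is only that adequacy is assessed by comparing a feature of the real data with its distribution over data simulated under the model.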