Abstract
Posterior probabilistic statistical inference without priors is an important but so far elusive goal. Fisher’s fiducial inference, Dempster–Shafer theory of belief functions, and Bayesian inference with default priors are attempts to achieve this goal but, to date, none has given a completely satisfactory picture. This article presents a new framework for probabilistic inference, based on inferential models (IMs), which not only provides data-dependent probabilistic measures of uncertainty about the unknown parameter, but also does so with an automatic long-run frequency-calibration property. The key to this new approach is the identification of an unobservable auxiliary variable associated with observable data and unknown parameter, and the prediction of this auxiliary variable with a random set before conditioning on data. Here we present a three-step IM construction, and prove a frequency-calibration property of the IM’s belief function under mild conditions. A corresponding optimality theory is developed, which helps to resolve the nonuniqueness issue. Several examples are presented to illustrate this new approach.
Highlights
In a statistical inference problem, one attempts to convert experience, in the form of observed data, to knowledge about the unknown parameter of interest
The goal of this paper is to develop a new framework for statistical inference, called inferential models (IMs)
The familiar sampling model appears in the A-step, but it is the corresponding association which is of primary importance
Summary
In a statistical inference problem, one attempts to convert experience, in the form of observed data, to knowledge about the unknown parameter of interest. The classical frequentist approach assigns probabilistic assessments of uncertainty (e.g., confidence levels) by considering repeated sampling from the super-population of possible data sets. These uncertainty measures do not depend on the observed data, so their meaningfulness in a given problem is questionable. Efforts to get probabilistic inference without prior specification include Fisher’s fiducial inference (Zabell 1992) and its variants (Hannig 2009, 2013; Hannig and Lee 2009), confidence distributions (Xie and Singh 2013; Xie et al. 2011), Fraser’s structural inference (Fraser 1968), and the Dempster–Shafer theory (Dempster 2008; Shafer 1976). These methods generate probabilities for inference, but these probabilities may not be easy to interpret; e.g., they may not be properly calibrated across users or experiments. Difficulties remain in choosing good reference priors for high-dimensional problems, so, despite these efforts, a fully satisfactory framework of objective Bayes inference has yet to emerge.
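To make the idea concrete, the following is a minimal sketch (not taken from the article itself) of an IM for a single Gaussian observation X ~ N(θ, 1). The association is X = θ + U with auxiliary variable U ~ N(0, 1); predicting U with the symmetric "default" random set S = {u : |u| ≤ |U′|}, U′ ~ N(0, 1), yields a closed-form plausibility for the singleton assertion {θ}. The function names and the choice of random set here are illustrative assumptions, not notation from the paper.

```python
import math

def Phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def plausibility(theta, x):
    """Plausibility of the singleton assertion {theta}, given observed x,
    for the association X = theta + U, U ~ N(0, 1), with the symmetric
    random set S = {u : |u| <= |U'|}, U' ~ N(0, 1).  In this special case
    the conditioning step has the closed form
        pl(theta) = 1 - |2 * Phi(x - theta) - 1|.
    (Illustrative sketch; the paper's general construction uses Monte Carlo
    over the random set rather than a closed form.)"""
    return 1.0 - abs(2.0 * Phi(x - theta) - 1.0)

# The 100(1 - a)% plausibility region {theta : pl(theta) >= a} is then
# x +/- z_{1 - a/2}, which matches the classical confidence interval --
# a small instance of the frequency-calibration property.
```

For example, with x = 0 the plausibility equals 1 at θ = 0 and falls to 0.05 at θ ≈ ±1.96, so the 95% plausibility region coincides with the usual 95% confidence interval for a normal mean.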