Abstract

We are living in the big data era: current technologies and networks allow for the easy and routine collection of data sets in many disciplines. Bayesian Statistics offers a flexible modeling approach that is attractive for describing the complexity of these datasets. Such models often exhibit a likelihood function that is intractable due to large sample size, a high number of parameters, or functional complexity. Approximate Bayesian Computational (ABC) methods provide likelihood-free techniques for performing statistical inference with Bayesian models defined by intractable likelihood functions. The vastness of the literature on ABC methods has created a need to review and relate all ABC approaches so that scientists can more readily understand and apply them in their own work. This article provides a unifying review, general representation, and classification of all ABC methods from the viewpoint of approximate likelihood theory. This clarifies how ABC methods can be characterized, related, combined, improved, and applied in future research. Possible directions for future research in ABC are then outlined.

Highlights

  • Bayesian models are applied for statistical inference in many scientific fields

  • The posterior distribution is the main object of Bayesian Statistics and it is the result of the combination of two information sources, namely the prior distribution, which reflects extra-experimental knowledge, and the likelihood function, which formalizes the information provided by the data through the use of a given statistical model

  • Approximate Bayesian Computational (ABC) methods are useful in the general scenario where for the given Bayesian model of interest for data analysis, the likelihood is not evaluated or intractable, but it is still possible to either draw samples from this likelihood conditionally on the model parameters [e.g., 98, 277, 216, 23]; or to find a point-estimate of some model parameter function based on a sufficient statistic of the data [e.g., 253], on an empirical likelihood [185], or on a bootstrap method [e.g., 285]


Summary

Introduction

The posterior distribution is the main object of Bayesian Statistics. It results from the combination of two information sources: the prior distribution, which reflects extra-experimental knowledge, and the likelihood function, which formalizes the information provided by the data through a given statistical model. Posterior inferences with such models can be undertaken by applying classical Monte Carlo (MC) methods, which provide iterative algorithms that generate approximate samples from the posterior distribution without requiring the marginal likelihood.
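When even the likelihood itself cannot be evaluated, the simplest likelihood-free alternative is ABC rejection sampling: draw a parameter from the prior, simulate data from the model, and accept the draw when a summary of the simulated data falls close to the observed summary. The sketch below illustrates this on a toy Gaussian-mean model; the function name, the toy model, and all tolerances are illustrative choices, not taken from the article.

```python
import random
import statistics

def abc_rejection(observed, n_samples=100, eps=0.1, seed=0):
    """Minimal ABC rejection sampler (illustrative sketch).

    Toy model assumed here: data ~ Normal(mu, 1), prior mu ~ Uniform(-10, 10).
    The likelihood is never evaluated -- we only simulate from the model.
    """
    rng = random.Random(seed)
    obs_summary = statistics.mean(observed)  # summary statistic of the data
    accepted = []
    while len(accepted) < n_samples:
        mu = rng.uniform(-10, 10)                   # draw from the prior
        sim = [rng.gauss(mu, 1) for _ in observed]  # simulate data given mu
        # accept mu when the simulated summary is within eps of the observed one
        if abs(statistics.mean(sim) - obs_summary) < eps:
            accepted.append(mu)
    return accepted  # approximate draws from the ABC posterior of mu

# usage: observed data generated with true mu = 2.0
data_rng = random.Random(1)
observed = [data_rng.gauss(2.0, 1) for _ in range(50)]
posterior = abc_rejection(observed, n_samples=100, eps=0.1)
```

As `eps` shrinks the accepted draws approximate the exact posterior more closely, at the cost of a lower acceptance rate; this bias-cost trade-off is a recurring theme in the ABC literature reviewed here.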

Examples of Bayesian models with intractable likelihoods
Mixed-effects model
Hidden Potts model
Approximate likelihoods and sampling algorithm
ABC methods
ABC Method
Combining ABC with other algorithms
ABC for model choice
Software packages for ABC
Open problems in ABC
The curse of dimensionality
ABC model choice issues
Conclusions
