Abstract

Given a set of competing models of some phenomenon together with measurement data, Bayesian model selection (BMS) is the process of finding the model that is the best candidate for being the true data-generating process. BMS relies on the computation of Bayesian model evidence, which is defined as the marginal likelihood of the measurement data (i.e., the likelihood averaged over a model's parameter space). In this article, we introduce a new method for computing Bayesian model evidence. Our method consists of three key elements. First, all competing model functions are emulated by Gaussian processes, with the model evaluations used to train the Gaussian processes chosen one by one in a sequential manner. Second, a model-time allocation strategy decides how many model evaluations are spent on each of the competing models. Third, a sequential sampling strategy selects design points in each model's parameter space. In numerical experiments, the method shows a speed-up factor of more than 1,000 compared to Monte Carlo estimation. While, in lower-dimensional cases, the use of Gaussian processes alone is very effective, in higher-dimensional cases, the model-time allocation strategy and the sampling strategy become more important, as they focus the effort on the right model and on the right areas of the parameter domains.
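The core idea of the first element, estimating model evidence from a Gaussian-process emulator of the (log-)likelihood, can be illustrated with a minimal sketch. The toy model below (a 1-D Gaussian likelihood with a uniform prior on [0, 1]), the kernel hyperparameters, and the fixed training design are all illustrative assumptions, not the paper's actual method; in particular, the paper selects training points sequentially, whereas this sketch uses a simple grid design.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=0.2):
    # Squared-exponential kernel between two sets of 1-D points.
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior_mean(X_train, y_train, X_test, jitter=1e-6):
    # Posterior mean of a noise-free GP interpolant (centered for stability).
    K = rbf_kernel(X_train, X_train) + jitter * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    y_mean = y_train.mean()
    return y_mean + K_s @ np.linalg.solve(K, y_train - y_mean)

# Hypothetical "expensive" model: data ~ N(theta, sigma), prior theta ~ U(0, 1).
sigma = 0.2
data = np.array([0.40, 0.50, 0.45])

def log_likelihood(theta):
    return (-0.5 * np.sum((data - theta) ** 2) / sigma**2
            - len(data) * np.log(sigma * np.sqrt(2 * np.pi)))

# A few model evaluations serve as the GP training design.
X_train = np.linspace(0.0, 1.0, 8)
y_train = np.array([log_likelihood(t) for t in X_train])

# Emulate the log-likelihood on a dense grid; with a uniform prior, the
# evidence (marginal likelihood) is just the average exponentiated value.
X_grid = np.linspace(0.0, 1.0, 400)
mu = gp_posterior_mean(X_train, y_train, X_grid)
evidence_emulated = np.mean(np.exp(mu))

# Brute-force reference using the true likelihood on the same grid.
evidence_direct = np.mean(np.exp([log_likelihood(t) for t in X_grid]))
```

With only 8 likelihood evaluations, the emulated evidence should closely track the 400-evaluation brute-force estimate, which is the source of the reported speed-up: the expensive model is queried far less often than a Monte Carlo estimator would require.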
