Abstract

The generation of decision-theoretic Bayesian optimal designs is complicated by the significant computational challenge of minimising an analytically intractable expected loss function over a potentially high-dimensional design space. A new general approach for approximately finding Bayesian optimal designs is proposed which uses computationally efficient normal-based approximations to posterior summaries to aid in approximating the expected loss. This new approach is demonstrated on illustrative, yet challenging, examples including hierarchical models for blocked experiments, and experimental aims of parameter estimation and model discrimination. Where possible, the results of the proposed methodology are compared, both in terms of performance and computing time, to results from using computationally more expensive, but potentially more accurate, Monte Carlo approximations. Moreover, the methodology is also applied to problems where the use of Monte Carlo approximations is computationally infeasible.

Highlights

  • The process of designing a physical experiment fits naturally within the Bayesian approach to statistical inference

  • Even with the approximate coordinate exchange (ACE) algorithm, finding Bayesian optimal designs via the double loop Monte Carlo (DLMC) approximation to the expected loss has been confined to simple problems where the number of models under consideration is |M| = 1 and inference is focused on parameter estimation

  • We have proposed the use of normal-based approximations to posterior summaries to aid in the approximation of the loss function
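The normal-based approximation in the last highlight can be illustrated with a minimal sketch: locate the posterior mode numerically and take the inverse Hessian of the negative log posterior at the mode as the covariance of a normal approximation (a Laplace-type approximation). This is not the authors' implementation; the model, function names, and settings below are illustrative assumptions, checked on a toy conjugate example where the posterior is exactly normal.

```python
# Minimal sketch of a normal-based (Laplace-type) approximation to a posterior:
# approximate the posterior by N(mode, H^{-1}), where H is the Hessian of the
# negative log posterior at the mode. All names here are illustrative.
import numpy as np
from scipy.optimize import minimize

def numerical_hessian(f, x, eps=1e-4):
    """Finite-difference Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei) - f(x + ej) + f(x)) / eps**2
    return H

def laplace_approx(neg_log_post, theta0):
    """Return (mode, covariance) of the normal approximation."""
    mode = minimize(neg_log_post, theta0, method="BFGS").x
    cov = np.linalg.inv(numerical_hessian(neg_log_post, mode))
    return mode, cov

# Toy conjugate check: y_i ~ N(theta, 1), theta ~ N(0, 10^2); here the
# posterior is exactly normal, so the approximation should recover it.
rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, size=50)

def neg_log_post(theta):
    t = theta[0]
    return 0.5 * np.sum((y - t) ** 2) + 0.5 * t**2 / 100.0

mode, cov = laplace_approx(neg_log_post, np.array([0.0]))
exact_var = 1.0 / (len(y) + 1.0 / 100.0)   # posterior variance: 1/(n + 1/sigma0^2)
exact_mean = exact_var * y.sum()
```

Because posterior summaries from such an approximation are cheap to compute, the loss inside the expected-loss integral can be approximated without a nested Monte Carlo loop.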


Summary

Introduction

The process of designing a physical experiment fits naturally within the Bayesian approach to statistical inference. In the approximate coordinate exchange (ACE) algorithm, a Gaussian process prediction of the expected loss is sequentially minimised over each one-dimensional element of the design space. This can be seen as a generalisation of the approaches of Müller and Parmigiani (1995) and Weaver et al. (2016) to higher-dimensional design spaces via coordinate exchange, and it allows consideration of examples with design spaces of dimensionality nearly two orders of magnitude greater than previously addressed in the literature. Even so, finding Bayesian optimal designs with the double loop Monte Carlo (DLMC) approximation to the expected loss has been confined to simple problems where the number of models under consideration is |M| = 1 and inference is focused on parameter estimation. We apply the proposed approach to problems where use of the DLMC approximation to find a design would be computationally infeasible.
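To make the coordinate-exchange idea concrete, the following is a small self-contained sketch, not the authors' ACE implementation: a noisy "expected loss" is evaluated along one coordinate of the design at a time, a Gaussian process emulator is fitted to those evaluations, and the coordinate is exchanged for the emulator's minimiser. The toy loss, kernel settings, and function names are assumptions for illustration.

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length=0.3, noise=1e-2):
    """One-dimensional GP regression with a squared-exponential kernel."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    return k(x_test, x_train) @ np.linalg.solve(K, y_train)

def coordinate_exchange(expected_loss, d0, n_outer=3, n_eval=15, n_grid=200, rng=None):
    """Sequentially minimise a GP emulator of the (noisy) expected loss
    over each one-dimensional coordinate of the design d."""
    rng = np.random.default_rng(rng)
    d = d0.copy()
    for _ in range(n_outer):
        for j in range(len(d)):
            # Evaluate the noisy expected loss along coordinate j.
            xs = np.linspace(0.0, 1.0, n_eval)
            ys = np.empty(n_eval)
            for i, x in enumerate(xs):
                trial = d.copy()
                trial[j] = x
                ys[i] = expected_loss(trial, rng)
            # Exchange the coordinate for the emulator's minimiser.
            grid = np.linspace(0.0, 1.0, n_grid)
            d[j] = grid[np.argmin(gp_predict(xs, ys, grid))]
    return d

# Hypothetical noisy expected loss with known minimiser at (0.7, 0.2),
# standing in for a Monte Carlo approximation of a real expected loss.
def toy_loss(d, rng):
    return (d[0] - 0.7) ** 2 + (d[1] - 0.2) ** 2 + 0.01 * rng.normal()

d_opt = coordinate_exchange(toy_loss, np.array([0.5, 0.5]), rng=1)
```

The emulator matters because each evaluation of the expected loss is itself a noisy Monte Carlo estimate; smoothing those evaluations before minimising avoids chasing the noise.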

Normal-based approximations to posterior quantities
Finding the posterior mode and Fisher information
Examples
Parameter estimation
Model discrimination
Logistic regression
Mechanistic modelling of chemical reactions
Conclusion
Comparison procedure
