Abstract

Complex computer simulations are commonly required for accurate data modelling in many scientific disciplines, making statistical inference challenging due to the intractability of the likelihood evaluation for the observed data. Furthermore, sometimes one is interested only in inference over a subset of the generative model parameters while taking into account model uncertainty or misspecification in the remaining nuisance parameters. In this work, we show how non-linear summary statistics can be constructed by minimising inference-motivated losses via stochastic gradient descent such that they provide the smallest uncertainty for the parameters of interest. As a use case, the problem of confidence interval estimation for the mixture coefficient in a multi-dimensional two-component mixture model (i.e. signal vs background) is considered, where the proposed technique clearly outperforms summary statistics based on probabilistic classification, a commonly used alternative which does not account for the presence of nuisance parameters.
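
As a point of reference for the comparison above, the sketch below illustrates the commonly used alternative: a network trained as a probabilistic classifier to separate signal from background with a cross-entropy loss, whose output is then binned to form the summary statistic. The toy data, architecture, and binning are assumptions made for illustration and are not taken from the paper.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy 2-D "signal" and "background" samples (illustrative only).
    x_sig = torch.randn(5000, 2) + torch.tensor([1.0, 1.0])
    x_bkg = 2.0 * torch.randn(5000, 2) + torch.tensor([-1.0, -1.0])
    x = torch.cat([x_sig, x_bkg])
    y = torch.cat([torch.ones(5000), torch.zeros(5000)])

    # Classification-based summary: the network only learns to separate the two
    # components; nuisance parameters play no role in this loss.
    clf = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(500):
        idx = torch.randint(0, len(x), (256,))
        loss = bce(clf(x[idx]).squeeze(1), y[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The classifier score is then histogrammed to build a likelihood
    # in the mixture coefficient.
    with torch.no_grad():
        scores = torch.sigmoid(clf(x)).squeeze(1)
        summary_hist = torch.histc(scores, bins=10, min=0.0, max=1.0)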

Highlights

  • Simulator-based inference is currently at the core of many scientific fields, such as population genetics, epidemiology, and experimental particle physics

  • The parameters of a neural network are optimised by stochastic gradient descent within an automatic differentiation framework, where the considered loss function accounts for the details of the statistical model as well as the expected effect of nuisance parameters

  • The family of summary statistics s(D) considered in this work is composed of a neural network model f(x; φ) : X ⊆ ℝ^d → Y ⊆ ℝ^b applied to each dataset observation, whose parameters φ will be learned during training by means of stochastic gradient descent, as discussed later (see the sketch following this list)

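To make the two points above concrete, the following is a minimal sketch of how such a summary-statistic network could be trained with an inference-aware loss in PyTorch. The toy simulators, expected yields, Asimov-style Poisson likelihood, network architecture, and the nuisance parameter r shifting the background are illustrative assumptions, not the authors' implementation: the softmax output of the network acts as a soft histogram, a likelihood in the mixture coefficient μ and the nuisance r is built from it, and the loss is the expected variance of μ obtained from the inverse Hessian of that likelihood.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy simulators (illustrative): 2-D Gaussian "signal" and "background";
    # the nuisance parameter r shifts the background along the first feature.
    def sample_signal(n):
        return torch.randn(n, 2) + torch.tensor([1.0, 1.0])

    def sample_background(n):
        return 2.0 * torch.randn(n, 2) + torch.tensor([-1.0, -1.0])

    # Summary-statistic network f(x; phi): R^2 -> R^10; the softmax output softly
    # assigns each observation to one of 10 summary bins.
    net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 10), nn.Softmax(dim=1))

    def neg_log_likelihood(theta, xs, xb):
        """Poisson negative log-likelihood of Asimov (expected) data in theta = (mu, r)."""
        mu, r = theta[0], theta[1]
        shift = torch.stack([r, torch.zeros_like(r)])
        n_tot = 1000.0                                # assumed total expected yield
        counts_s = net(xs).mean(dim=0)                # per-bin signal fractions
        counts_b = net(xb + shift).mean(dim=0)        # background fractions, shifted by r
        lam = n_tot * (mu * counts_s + (1.0 - mu) * counts_b)
        obs = lam.detach()                            # Asimov data: treated as fixed observed counts
        return torch.sum(lam - obs * torch.log(lam))

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    theta0 = torch.tensor([0.5, 0.0])                 # nominal (mu, r)

    for step in range(200):
        xs, xb = sample_signal(512), sample_background(512)
        hess = torch.autograd.functional.hessian(
            lambda t: neg_log_likelihood(t, xs, xb), theta0, create_graph=True)
        # Loss: expected variance of mu from the inverse Hessian (covariance) of the
        # NLL at the nominal parameters; this includes the effect of the nuisance r.
        cov = torch.inverse(hess + 1e-3 * torch.eye(2))   # small ridge for numerical stability
        loss = cov[0, 0]
        opt.zero_grad()
        loss.backward()
        opt.step()

Because the loss is the estimated variance of the parameter of interest rather than a classification error, directions in feature space that are degenerate with the nuisance parameter are penalised during training.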

Summary

Introduction

Simulator-based inference is currently at the core of many scientific fields, such as population genetics, epidemiology, and experimental particle physics. In many cases the implicit generative procedure defined in the simulation is stochastic and/or lacks a tractable probability density p(x|θ), where θ ∈ Θ is the vector of model parameters. Given some experimental observations D = {x0, ..., xn}, a problem of special relevance for these disciplines is statistical inference on a subset of the model parameters ω ∈ Ω ⊆ Θ. This can be approached via likelihood-free inference algorithms such as Approximate Bayesian Computation (ABC) [1], simplified synthetic likelihoods [2] or density estimation-by-comparison approaches [3]. The choice of summary statistics for such cases becomes critical, given that naive choices might cause a loss of relevant information and a corresponding degradation of the power of the resulting statistical inference.
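
As an illustration of how a summary statistic enters such a likelihood-free procedure, the sketch below runs ABC rejection sampling for the mixture coefficient of a one-dimensional toy simulator, using the sample mean as a hand-chosen summary. The simulator, prior, summary, and tolerance are assumptions made for the example; a poorly chosen summary like this one can discard information relevant to the parameter of interest, which is precisely what motivates learning the summary statistic.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulator(mu, n=200):
        """Toy two-component mixture: signal N(2, 1) with weight mu, background N(0, 1)."""
        is_signal = rng.random(n) < mu
        return np.where(is_signal, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))

    def summary(x):
        """Hand-chosen summary statistic: here simply the sample mean."""
        return x.mean()

    # "Observed" data generated at a known mu, for illustration only.
    s_obs = summary(simulator(mu=0.3))

    # ABC rejection sampling: keep prior draws whose simulated summary lands close to s_obs.
    eps, accepted = 0.05, []
    for _ in range(20000):
        mu = rng.uniform(0.0, 1.0)        # uniform prior over the mixture coefficient
        if abs(summary(simulator(mu)) - s_obs) < eps:
            accepted.append(mu)

    print(f"ABC posterior mean of mu: {np.mean(accepted):.3f} ({len(accepted)} accepted draws)")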

