Abstract
This article reviews and studies the properties of Bayesian quadrature weights, which strongly affect stability and robustness of the quadrature rule. Specifically, we investigate conditions that are needed to guarantee that the weights are positive or to bound their magnitudes. First, it is shown that the weights are positive in the univariate case if the design points locally minimise the posterior integral variance and the covariance kernel is totally positive (e.g. Gaussian and Hardy kernels). This suggests that gradient-based optimisation of design points may be effective in constructing stable and robust Bayesian quadrature rules. Secondly, we show that magnitudes of the weights admit an upper bound in terms of the fill distance and separation radius if the RKHS of the kernel is a Sobolev space (e.g. Matérn kernels), suggesting that quasi-uniform points should be used. A number of numerical examples demonstrate that significant generalisations and improvements appear to be possible, manifesting the need for further research.
Highlights
This article is concerned with Bayesian quadrature (O’Hagan 1991; Rasmussen and Ghahramani 2002; Briol et al 2019), a probabilistic approach to numerical integration and an example of a probabilistic numerical method (Larkin 1972; Hennig et al 2015; Cockayne et al 2019).
By regarding the pairs D := {(x_i, f(x_i))}_{i=1}^n obtained as “observed data”, the posterior distribution I_ν(f_GP) | D becomes a Gaussian random variable. This posterior distribution is useful for uncertainty quantification and decision making in subsequent tasks; this is one factor that makes Bayesian quadrature a promising approach in modern scientific computation, where quantification of discretisation errors is of great importance (Briol et al 2019; Oates et al 2017).
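The posterior mean of I_ν(f_GP) | D is a quadrature rule with weights w = K^{-1} z, where K is the kernel Gram matrix at the design points and z is the kernel mean embedding of the measure ν. The following sketch (not from the article; the Gaussian kernel, the uniform measure on [0, 1], and all variable names are illustrative assumptions) computes these weights in closed form, using the error function for the kernel mean:

```python
# Sketch of Bayesian quadrature weights for a Gaussian kernel and the
# uniform (Lebesgue) measure on [0, 1]. Illustrative only.
import math
import numpy as np

ELL = 0.3  # kernel length-scale (assumed value)

def gaussian_kernel(x, y, ell=ELL):
    # k(x, y) = exp(-(x - y)^2 / (2 ell^2)), as a Gram matrix
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell ** 2))

def kernel_mean(x, ell=ELL):
    # z_i = \int_0^1 k(x_i, t) dt, closed form via the error function
    c = math.sqrt(2) * ell
    vals = [math.erf((1 - xi) / c) + math.erf(xi / c) for xi in x]
    return ell * math.sqrt(math.pi / 2) * np.array(vals)

x = np.linspace(0.05, 0.95, 9)                       # design points
K = gaussian_kernel(x, x) + 1e-12 * np.eye(len(x))   # jitter for conditioning
z = kernel_mean(x)
w = np.linalg.solve(K, z)                            # BQ weights w = K^{-1} z

f_vals = x ** 2                                      # integrand; true integral 1/3
estimate = float(w @ f_vals)
print(estimate)
```

Because the integrand x^2 is smooth and the Gaussian kernel is infinitely smooth, nine points already give a very accurate posterior-mean estimate of the integral.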
Using a result on the stability of kernel interpolants by De Marchi and Schaback (2010), we derive an upper bound on the sum of absolute weights for some typical cases where the Gaussian process has a finite degree of smoothness and the reproducing kernel Hilbert space (RKHS) induced by the covariance kernel is norm-equivalent to a Sobolev space.
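The bound is stated in terms of two standard geometric quantities of the point set: the fill distance h (the radius of the largest "hole" left by the design points) and the separation radius q (half the smallest pairwise distance). The snippet below (an illustrative sketch, not code from the article; function names are my own) computes both for a one-dimensional point set, where quasi-uniformity means h and q stay comparable:

```python
# Fill distance and separation radius of a 1-D point set on [0, 1].
# Illustrative sketch; the fill distance is approximated on a fine grid.
import numpy as np

def fill_distance(points, grid):
    # h = sup over the domain of the distance to the nearest design point
    return max(min(abs(g - xi) for xi in points) for g in grid)

def separation_radius(points):
    # q = half the minimal pairwise distance between design points
    xs = np.sort(points)
    return 0.5 * float(np.min(np.diff(xs)))

pts = np.linspace(0.0, 1.0, 11)        # equispaced, hence quasi-uniform
grid = np.linspace(0.0, 1.0, 1001)     # evaluation grid for the supremum
h = fill_distance(pts, grid)
q = separation_radius(pts)
print(h, q)
```

For equispaced points with spacing 0.1, both quantities equal 0.05, i.e. the mesh ratio h/q is 1; point sets with a bounded mesh ratio are exactly those for which the weight-magnitude bound remains controlled.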
Summary
This article is concerned with Bayesian quadrature (O’Hagan 1991; Rasmussen and Ghahramani 2002; Briol et al 2019), a probabilistic approach to numerical integration and an example of a probabilistic numerical method (Larkin 1972; Hennig et al 2015; Cockayne et al 2019). This article reviews existing, and derives new, results on properties of the Bayesian quadrature weights, focusing in particular on their positivity and magnitude. Work [see Oettershagen (2017) for a recent review] done in the 1970s on kernel-based quadrature already revealed certain interesting properties of the Bayesian quadrature weights. These results seem not to be well known in the statistics and machine learning community. Using a result on the stability of kernel interpolants by De Marchi and Schaback (2010), we derive an upper bound on the sum of absolute weights for some typical cases where the Gaussian process has a finite degree of smoothness and the RKHS induced by the covariance kernel is norm-equivalent to a Sobolev space. We discuss the equivalent characterisation of this quadrature rule as the worst-case optimal integration rule in the RKHS H(k) induced by the covariance kernel k of the Gaussian process.
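The worst-case-optimality characterisation mentioned above follows from a standard RKHS identity, sketched here in notation consistent with the summary (the symbols K, z, and w for the Gram matrix, kernel mean vector, and weights are my own shorthand):

```latex
% Worst-case error of a quadrature rule Q(f) = \sum_i w_i f(x_i) in H(k):
e(w)^2
  = \sup_{\|f\|_{H(k)} \le 1}
    \Big( \int f \, \mathrm{d}\nu - \sum_{i=1}^n w_i f(x_i) \Big)^2
  = \iint k(x, y) \, \mathrm{d}\nu(x) \, \mathrm{d}\nu(y)
    - 2\, w^\top z + w^\top K w,
% where z_i = \int k(x_i, x) \, \mathrm{d}\nu(x) and K_{ij} = k(x_i, x_j).
% Minimising this quadratic in w yields w = K^{-1} z, which coincides with
% the Bayesian quadrature (posterior mean) weights, and the minimum value
% equals the posterior integral variance.
```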