Abstract

This paper presents a convergence analysis of kernel-based quadrature rules in misspecified settings, focusing on deterministic quadrature in Sobolev spaces. In particular, we deal with misspecified settings in which a test integrand is less smooth than the Sobolev RKHS on which a quadrature rule is based. We provide convergence guarantees based on two different assumptions on a quadrature rule: one on the quadrature weights and the other on the design points. More precisely, we show that convergence rates can be derived (i) if the sum of absolute weights remains constant (or does not increase quickly), or (ii) if the minimum distance between design points does not decrease very quickly. As a consequence of the latter result, we derive a rate of convergence for Bayesian quadrature in misspecified settings. We reveal a condition on design points that makes Bayesian quadrature robust to misspecification, and show that, under this condition, it may adaptively achieve the optimal rate of convergence in the Sobolev space of a lesser order (i.e., of the unknown smoothness of a test integrand), under a slightly stronger regularity condition on the integrand.
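
To make the quantities in the two assumptions concrete, the sketch below implements a Bayesian-quadrature-style rule on [0, 1] with the uniform distribution and a Matérn-1/2 (exponential) kernel, whose RKHS is a first-order Sobolev space. The kernel, length scale, design points, and test integrand are illustrative assumptions, not choices made in the paper; the sketch simply reports the sum of absolute weights (assumption (i)) and the minimum distance between design points (assumption (ii)).

```python
# Illustrative sketch only: a Bayesian-quadrature-style rule on [0, 1] with
# uniform P, using the Matern-1/2 kernel k(x, y) = exp(-|x - y| / ell),
# whose RKHS is a first-order Sobolev space. Kernel, length scale, design
# points, and integrand are assumptions made for this example.
import numpy as np

def bq_weights(x, ell=0.2):
    """Quadrature weights w = K^{-1} z for design points x in [0, 1]."""
    K = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)  # Gram matrix
    # Kernel mean embedding z_i = \int_0^1 k(t, x_i) dt, available in
    # closed form for the exponential kernel:
    z = ell * (2.0 - np.exp(-x / ell) - np.exp(-(1.0 - x) / ell))
    return np.linalg.solve(K, z)

x = np.linspace(0.0, 1.0, 20)              # equispaced design points
w = bq_weights(x)

f = lambda t: np.sin(2 * np.pi * t) + t    # test integrand, P f = 1/2
print("quadrature error :", abs(0.5 - w @ f(x)))        # |P f - P_n f|
print("sum |weights|    :", np.abs(w).sum())            # assumption (i)
print("min point spacing:", np.diff(np.sort(x)).min())  # assumption (ii)
```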

Highlights

  • This paper discusses the problem of numerical integration, a fundamental task in numerical analysis, statistics, computer science (including machine learning), and other areas

  • A limitation of standard Monte Carlo methods is that a huge number of design points may be needed to provide an accurate approximation of the integral; this comes from the fact that the rate of convergence of Monte Carlo methods is typically of the order $\mathbb{E}[|Pf - P_n f|] = O(n^{-1/2})$ as $n \to \infty$, where $\mathbb{E}[\cdot]$ denotes the expectation with respect to the random sample (see the sketch after this list)

  • The current paper contains significant new contributions, mainly on the following points: (i) we establish the rate of convergence for Bayesian quadrature with deterministic design points and show that it can achieve minimax optimal rates in Sobolev spaces (Sect. 3); (ii) we apply our general convergence guarantees in misspecified settings to the specific case of Bayesian quadrature and reveal the conditions required for Bayesian quadrature to be robust to misspecification (Sect. 5); to make contribution (ii) possible, we derive finite-sample bounds on the quadrature error in misspecified settings
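
The toy experiment below makes the $O(n^{-1/2})$ Monte Carlo rate quoted above concrete: the error of the sample mean shrinks at roughly the same rate as $n^{-1/2}$. The integrand and sample sizes are illustrative assumptions, not an experiment from the paper.

```python
# Toy illustration of the Monte Carlo rate E[|P f - P_n f|] = O(n^{-1/2});
# the integrand and sample sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
f = lambda t: np.sin(2 * np.pi * t) ** 2   # P f = \int_0^1 f(t) dt = 1/2

for n in (10**2, 10**4, 10**6):
    x = rng.uniform(size=n)                # i.i.d. uniform design points
    err = abs(0.5 - f(x).mean())           # |P f - P_n f| for one sample
    print(f"n = {n:>7d}  error = {err:.2e}  n^(-1/2) = {n ** -0.5:.2e}")
```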

Summary

Introduction

This paper discusses the problem of numerical integration (or quadrature), a fundamental task in numerical analysis, statistics, computer science (including machine learning), and other areas. A limitation of standard Monte Carlo methods is that a huge number of design points (i.e., large n) may be needed to provide an accurate approximation of the integral; this comes from the fact that the rate of convergence of Monte Carlo methods is typically of the order $\mathbb{E}[|Pf - P_n f|] = O(n^{-1/2})$ as $n \to \infty$, where $\mathbb{E}[\cdot]$ denotes the expectation with respect to the random sample. Similar examples can be seen in applications to statistics and machine learning, as mentioned below. In these situations, one can only use a limited number of design points, and it is desirable to have quadrature rules with a faster convergence rate in order to obtain a reliable solution [46].

Kernel-Based Quadrature Rules
Misspecified Settings
Contributions
Basic Definitions and Notation
Sobolev Spaces and Reproducing Kernel Hilbert Spaces
Examples of Kernel-Based Quadrature Rules
Convergence Rates of Bayesian Quadrature
Main Results
Convergence Rates Under an Assumption on Quadrature Weights
Convergence Rates Under an Assumption on Design Points
Bayesian Quadrature in Misspecified Settings
Problem Setting
Results
Discussion