Abstract

We present an iterative algorithm for approximating an unknown function sequentially using random samples of its values and gradients. This is an extension of the recently developed sequential approximation (SA) method, which approximates a target function using samples of function values only. The current paper extends the SA method to Sobolev spaces, which allows gradient information to be incorporated naturally. The algorithm is easy to implement, as it requires only vector operations and does not involve any matrices. We present a tight error bound for the algorithm and derive an optimal sampling probability measure that yields the fastest error convergence. Numerical examples are provided to verify the theoretical error analysis and demonstrate the effectiveness of the proposed SA algorithm.
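As a rough illustration of a sequential, matrix-free update of this kind, the sketch below fits Fourier coefficients with randomized Kaczmarz-style projections driven by one random sample of the function value and derivative per step. The target function, basis, uniform sampling density, and iteration count are assumptions chosen for illustration; this is not the paper's exact SA algorithm, its Sobolev-space formulation, or its optimal sampling measure.

```python
# Illustrative sketch only (NOT the paper's SA algorithm): a randomized
# Kaczmarz-style sequential fit of Fourier coefficients using one random
# sample of the target's value and derivative per iteration.  Each update
# is a single vector projection; no matrices are formed or solved.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth periodic target on [0, 2*pi) and its derivative.
f  = lambda x: np.exp(np.sin(x))
df = lambda x: np.cos(x) * np.exp(np.sin(x))

K = 8                      # number of Fourier modes; n = 2K + 1 coefficients
k = np.arange(1, K + 1)

def basis(x):
    """Values and derivatives of the basis {1, cos(kx), sin(kx)} at x."""
    vals = np.concatenate(([1.0], np.cos(k * x), np.sin(k * x)))
    ders = np.concatenate(([0.0], -k * np.sin(k * x), k * np.cos(k * x)))
    return vals, ders

c = np.zeros(2 * K + 1)    # coefficient vector, updated by vector operations only

for _ in range(20000):
    x = rng.uniform(0.0, 2.0 * np.pi)   # random sample (uniform, as an assumption)
    v, d = basis(x)
    # Project onto the hyperplane of coefficients matching the sampled value ...
    c += v * (f(x) - v @ c) / (v @ v)
    # ... then onto the hyperplane matching the sampled derivative.
    c += d * (df(x) - d @ c) / (d @ d)

# Check the approximation error on a grid of the periodic domain.
xs = np.linspace(0.0, 2.0 * np.pi, 400)
vals = np.stack([basis(x)[0] for x in xs])
print("max abs error:", np.max(np.abs(vals @ c - f(xs))))
```

In this toy setting the value and derivative constraints at each sampled point are handled as two successive projections; the sampling density and step rule that make such a scheme converge fastest are precisely the kind of questions the abstract says the paper analyzes.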
