Abstract

In this paper, we introduce a method for multivariate function approximation using function evaluations, Chebyshev polynomials, and tensor-based compression techniques via the Tucker format. We develop novel randomized techniques to accomplish the tensor compression, provide a detailed analysis of the computational costs and insight into the error of the resulting approximations, and discuss the benefits of the proposed approaches. We also apply the tensor-based function approximation to develop low-rank matrix approximations to kernel matrices that describe pairwise interactions between two sets of points; the resulting low-rank approximations are efficient to compute and store, with complexity that is linear in the number of points. We present an adaptive version of the function and kernel approximation that determines an approximation satisfying a user-specified relative error over a set of random points. We extend our approach to the case where the kernel requires repeated evaluations for many values of the (hyper)parameters that govern it. We give detailed numerical experiments on example problems involving multivariate function approximation, low-rank matrix approximations of kernel matrices involving well-separated clusters of source and target points, and a global low-rank approximation of kernel matrices with an application to Gaussian processes. We observe speedups of up to 18X over standard matrix-based approaches.
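To make the high-level description above concrete, the following is a minimal sketch (in Python/NumPy, not the paper's implementation) of trivariate Chebyshev tensor-product interpolation whose coefficient tensor is compressed in Tucker format via a truncated higher-order SVD; the grid size, Tucker rank, and test function are illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm): trivariate Chebyshev
# interpolation with the coefficient tensor compressed in Tucker format
# via a truncated HOSVD. Grid size, rank, and test function are assumptions.
import numpy as np
from numpy.polynomial import chebyshev as cheb

def cheb_nodes(n):
    """Chebyshev points of the first kind on [-1, 1]."""
    return np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

n, r = 20, 6                        # nodes per dimension, Tucker rank
x = cheb_nodes(n)
V = cheb.chebvander(x, n - 1)       # Chebyshev coefficients -> nodal values
Vinv = np.linalg.inv(V)             # nodal values -> coefficients (1D)

# Sample a smooth test function on the 3D tensor grid of Chebyshev nodes.
f = lambda X, Y, Z: np.exp(-(X**2 + 2 * Y**2 + 3 * Z**2))
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
F = f(X, Y, Z)

# Coefficient tensor: apply the 1D Chebyshev transform along each mode.
coeffs = F
for mode in range(3):
    coeffs = mode_product(coeffs, Vinv, mode)

# Truncated HOSVD: one factor matrix per mode from the leading left singular
# vectors of each unfolding, then the (r x r x r) Tucker core.
factors, core = [], coeffs
for mode in range(3):
    unfolding = np.moveaxis(coeffs, mode, 0).reshape(n, -1)
    U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
    factors.append(U[:, :r])
for mode in range(3):
    core = mode_product(core, factors[mode].T, mode)

def evaluate(pt):
    """Evaluate the compressed approximation at a point in [-1, 1]^3."""
    # Chebyshev basis at each coordinate, projected through each factor.
    w = [cheb.chebvander(np.atleast_1d(p), n - 1) @ U
         for p, U in zip(pt, factors)]
    out = core
    for mode in range(3):
        out = mode_product(out, w[mode], mode)
    return out.item()

pt = (0.3, -0.2, 0.5)
print("approx:", evaluate(pt), " exact:", f(*pt))
```

In this sketch the factor matrices come from deterministic SVDs of the unfoldings; in the spirit of the randomized compression techniques mentioned above, a randomized range finder could be substituted to reduce the cost of that step.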
