Abstract

In this paper we develop fast and memory-efficient numerical methods for learning functions of many variables that admit sparse representations in terms of general bounded orthonormal tensor product bases. Such functions appear in many applications, including, e.g., various Uncertainty Quantification (UQ) problems involving the solution of parametric PDEs that are approximately sparse in Chebyshev or Legendre product bases (Chkifa et al. in Polynomial approximation via compressed sensing of high-dimensional functions on lower sets, arXiv:1602.05823, 2016; Rauhut and Schwab in Math Comput 86(304):661–700, 2017). We expect that our results provide a starting point for a new line of research on sublinear-time solution techniques for UQ applications of the type above, which will eventually be able to scale to significantly higher-dimensional problems than are currently computationally feasible. More concretely, let $\mathcal{B}$ be a finite Bounded Orthonormal Product Basis (BOPB) of cardinality $|\mathcal{B}| = N$. Herein we develop methods that rapidly approximate any function $f$ that is sparse in the BOPB, that is, $f: \mathcal{D} \subset \mathbb{R}^D \rightarrow \mathbb{C}$ of the form
$$f(\boldsymbol{x}) = \sum_{b \in \mathcal{S}} c_b \cdot b(\boldsymbol{x})$$
with $\mathcal{S} \subset \mathcal{B}$ of cardinality $|\mathcal{S}| = s \ll N$. Our method adapts the CoSaMP algorithm (Needell and Tropp in Appl Comput Harmon Anal 26(3):301–321, 2009) to use additional function samples from $f$ along a randomly constructed grid $\mathcal{G} \subset \mathbb{R}^D$ with universal approximation properties in order to rapidly identify the multi-indices of the most dominant basis functions in $\mathcal{S}$ component by component during each CoSaMP iteration. It has a runtime of just $(s \log N)^{\mathcal{O}(1)}$, uses only $(s \log N)^{\mathcal{O}(1)}$ function evaluations on the fixed and nonadaptive grid $\mathcal{G}$, and requires no more than $(s \log N)^{\mathcal{O}(1)}$ bits of memory. We emphasize that nothing about $\mathcal{S}$ or any of the coefficients $c_b \in \mathbb{C}$ is assumed in advance other than that $\mathcal{S} \subset \mathcal{B}$ has $|\mathcal{S}| \le s$; both $\mathcal{S}$ and its coefficients $c_b$ are learned from the given function evaluations by the proposed method. For $s \ll N$, the runtime $(s \log N)^{\mathcal{O}(1)}$ is less than what is required to simply enumerate the elements of the basis $\mathcal{B}$; thus our method is the first approach applicable in a general BOPB framework that falls into the class referred to as sublinear-time. This, together with the similarly reduced sample and memory requirements, sets our algorithm apart from previous works based on standard compressive sensing algorithms such as basis pursuit, which typically store and utilize full intermediate basis representations of size $\Omega(N)$ during the solution process.
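To make the problem setup concrete, the following Python sketch (not taken from the paper; all names, dimensions, and parameter values are illustrative assumptions) constructs an $s$-sparse function in a tensor-product Chebyshev basis and samples it on a nonadaptive random grid. The paper's grid $\mathcal{G}$ has a more structured construction with universal approximation properties; here points are simply drawn from the Chebyshev measure in each coordinate for illustration.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

# Illustrative sketch (not the paper's algorithm): an s-sparse function in a
# tensor-product Chebyshev basis over D variables. Each basis function is
# b_n(x) = prod_j T_{n_j}(x_j), indexed by a multi-index n in {0,...,deg}^D.

D = 4        # number of variables (hypothetical choice)
s = 3        # sparsity level
degree = 10  # per-coordinate degree bound, so N = (degree + 1)**D = 14641

rng = np.random.default_rng(0)

# Randomly chosen support S (multi-indices) and coefficients c_b. In the
# paper's setting these are unknown and must be learned from samples of f.
support = rng.integers(0, degree + 1, size=(s, D))
coeffs = rng.standard_normal(s) + 1j * rng.standard_normal(s)

def cheb_1d(n, x):
    """Evaluate the degree-n Chebyshev polynomial T_n at the scalar x."""
    e = np.zeros(n + 1)
    e[n] = 1.0
    return chebval(x, e)

def f(x):
    """Evaluate the s-sparse function f(x) = sum_{b in S} c_b * b(x)."""
    val = 0.0 + 0.0j
    for c, n in zip(coeffs, support):
        val += c * np.prod([cheb_1d(n[j], x[j]) for j in range(D)])
    return val

# Nonadaptive random grid: m points in [-1, 1]^D drawn from the Chebyshev
# measure in each coordinate (a stand-in for the paper's structured grid G).
m = 50
grid = np.cos(np.pi * rng.random((m, D)))
samples = np.array([f(x) for x in grid])
```

Even in this small example the sublinearity gap is visible: the full basis has $N = 11^4 = 14{,}641$ elements, while the sparse model is described by only $s = 3$ multi-indices and coefficients, so any method that enumerates $\mathcal{B}$ or forms the full $m \times N$ sampling matrix (as basis pursuit solvers typically do) already pays far more than the $(s \log N)^{\mathcal{O}(1)}$ budget targeted here.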
