Abstract

We consider sparsity-based techniques for the approximation of high-dimensional functions from random pointwise evaluations. To date, almost all work published in this field relies on a priori assumptions about the error corrupting the samples, and these assumptions are hard to verify in practice. In this paper, we instead focus on the scenario where the error is unknown. We study the performance of four sparsity-promoting optimization problems: weighted quadratically-constrained basis pursuit, weighted LASSO, weighted square-root LASSO, and weighted LAD-LASSO. From the theoretical perspective, we prove uniform recovery guarantees for these decoders, deriving recipes for the optimal choice of the respective tuning parameters. On the numerical side, we compare them in the pure function approximation case and in applications to uncertainty quantification of ODEs and PDEs with random inputs. Our main conclusion is that the lesser-known square-root LASSO is better suited for high-dimensional approximation than the other procedures in the case of bounded noise, since it avoids (both theoretically and numerically) the need for parameter tuning.
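For context, the four decoders have standard formulations in the compressed sensing literature; the following sketch uses generic notation (A denotes the sampling matrix, y the vector of noisy evaluations, w the weights, and eta, lambda the tuning parameters; this notation is ours and may differ from the paper's):

\[
\begin{aligned}
&\text{weighted QCBP:} && \min_{z}\; \|z\|_{1,w} \quad \text{subject to} \quad \|Az - y\|_2 \le \eta,\\
&\text{weighted LASSO:} && \min_{z}\; \lambda \|z\|_{1,w} + \|Az - y\|_2^2,\\
&\text{weighted SR-LASSO:} && \min_{z}\; \lambda \|z\|_{1,w} + \|Az - y\|_2,\\
&\text{weighted LAD-LASSO:} && \min_{z}\; \lambda \|z\|_{1,w} + \|Az - y\|_1,
\end{aligned}
\]

where \(\|z\|_{1,w} = \sum_j w_j |z_j|\) is the weighted \(\ell^1\) norm. Note that the square-root LASSO differs from the LASSO only in that the data-fidelity term enters unsquared; this seemingly small change is what allows the optimal choice of lambda to be made independently of the (unknown) noise level.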
