Abstract

We study the L_2-approximation of functions from a Hilbert space and compare the sampling numbers with the approximation numbers. The sampling number e_n is the minimal worst-case error that can be achieved with n function values, whereas the approximation number a_n is the minimal worst-case error that can be achieved with n pieces of arbitrary linear information (like derivatives or Fourier coefficients). We show that
$$ e_n \,\lesssim\, \sqrt{\frac{1}{k_n} \sum_{j \ge k_n} a_j^2}, $$
where k_n ≍ n/log(n). This proves that the sampling numbers decay with the same polynomial rate as the approximation numbers and therefore that function values are basically as powerful as arbitrary linear information if the approximation numbers are square-summable. Our result applies, in particular, to Sobolev spaces H^s_mix(T^d) with dominating mixed smoothness s > 1/2 and dimension d ∈ N, and we obtain
$$ e_n \,\lesssim\, n^{-s} \log^{sd}(n). $$
For d > 2s+1, this improves upon all previous bounds and disproves the prevalent conjecture that Smolyak's (sparse grid) algorithm is optimal.
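The second bound follows from the first by inserting the known asymptotics of the approximation numbers for these spaces, a_j ≍ j^(-s) log^(s(d-1))(j). The following Python sketch (illustrative only; the model for a_j, the choice k_n = n/log(n) without constants, and the tail cutoff are our assumptions) checks numerically that the ratio of the right-hand side to n^(-s) log^(sd)(n) stays of order one:

```python
import math

def a(j, s, d):
    # Assumed model for the approximation numbers of H^s_mix(T^d):
    # a_j behaves like j^(-s) * log(j)^(s(d-1)) up to constants (j >= 2).
    return j ** (-s) * math.log(j) ** (s * (d - 1))

def sampling_bound(n, s, d, tail=100_000):
    # Right-hand side of the main result:
    #   e_n <~ sqrt( (1/k_n) * sum_{j >= k_n} a_j^2 ),  with k_n ~ n / log(n).
    k = max(2, int(n / math.log(n)))
    total = sum(a(j, s, d) ** 2 for j in range(k, k + tail))
    # Leading-order integral estimate for the remaining tail j >= k + tail
    # (antiderivative of x^(-2s) log(x)^(2s(d-1)), valid since s > 1/2).
    J = k + tail
    total += J ** (1 - 2 * s) * math.log(J) ** (2 * s * (d - 1)) / (2 * s - 1)
    return math.sqrt(total / k)

# The ratio of the bound to n^(-s) log^(sd)(n) should stay of order one,
# matching the stated rate e_n <~ n^(-s) log^(sd)(n).
s, d = 1.0, 3
for n in (10**3, 10**4, 10**5):
    ratio = sampling_bound(n, s, d) / (n ** (-s) * math.log(n) ** (s * d))
    print(n, round(ratio, 3))
```

Since constants are ignored throughout, only the boundedness of the ratio is meaningful, not its value.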

Highlights

  • $e_n \lesssim \sqrt{\frac{1}{k_n} \sum_{j \ge k_n} a_j^2}$, where k_n ≍ n/log(n)

  • This proves that the sampling numbers decay with the same polynomial rate as the approximation numbers and that function values are basically as powerful as arbitrary linear information if the approximation numbers are square-summable

  • The approximation numbers are quite well understood in many cases because they are equal to the singular values of the embedding operator id : H → L_2


Summary

Introduction

We show that $e_n \lesssim \sqrt{\frac{1}{k_n} \sum_{j \ge k_n} a_j^2}$, where k_n ≍ n/log(n). This proves that the sampling numbers decay with the same polynomial rate as the approximation numbers and that function values are basically as powerful as arbitrary linear information if the approximation numbers are square-summable. The result applies, in particular, to Sobolev spaces H^s_mix(T^d) with dominating mixed smoothness s > 1/2 and dimension d ∈ N, where we obtain $e_n \lesssim n^{-s} \log^{sd}(n)$.

Keywords: L_2-approximation · Sampling numbers · Rate of convergence · Random matrices · Sobolev spaces with mixed smoothness

