Abstract

We study algorithms for the approximation of functions, where the error is measured in an $L_2$ norm. We consider the worst case setting for a general reproducing kernel Hilbert space of functions. We analyze algorithms that use standard information consisting of $n$ function values, and we are interested in the optimal order of convergence. This is the maximal exponent $b$ for which the worst case error of such an algorithm is of order $n^{-b}$. Let $p$ be the optimal order of convergence of all algorithms that may use arbitrary linear functionals, in contrast to function values only. So far it was not known whether $p > b$ is possible, i.e., whether the approximation numbers or linear widths can be essentially smaller than the sampling numbers. This is (implicitly) posed as an open problem in the recent paper [F.Y. Kuo, G.W. Wasilkowski, H. Woźniakowski, On the power of standard information for multivariate approximation in the worst case setting, J. Approx. Theory, to appear], where the authors prove that $p > 1/2$ implies $b \geq 2p^2/(2p+1) > p - 1/2$. Here we prove that the case $p = 1/2$ and $b = 0$ is possible; hence general linear information can be exponentially better than function evaluation. Since the case $p > 1/2$ is quite different, it is still open whether $b = p$ always holds in that case.
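For orientation, the following sketch records the standard definitions behind the exponents $p$ and $b$, as commonly used in information-based complexity; the symbols $a_n$ (approximation numbers) and $g_n$ (sampling numbers) are our notation and do not appear in the abstract itself.

% A sketch of the assumed definitions (notation a_n, g_n is ours, not the abstract's).
% Let H be a reproducing kernel Hilbert space embedded in L_2, and let phi range
% over arbitrary (possibly nonlinear) reconstruction maps.
\[
  a_n \;=\; \inf_{L_1,\dots,L_n}\; \inf_{\varphi}\;
    \sup_{\|f\|_H \le 1} \bigl\| f - \varphi\bigl(L_1(f),\dots,L_n(f)\bigr) \bigr\|_{L_2},
\]
\[
  g_n \;=\; \inf_{x_1,\dots,x_n}\; \inf_{\varphi}\;
    \sup_{\|f\|_H \le 1} \bigl\| f - \varphi\bigl(f(x_1),\dots,f(x_n)\bigr) \bigr\|_{L_2},
\]
% In a_n arbitrary continuous linear functionals L_i on H are allowed; in g_n only
% point evaluations f(x_i), i.e., standard information. The optimal orders are
\[
  p \;=\; \sup\{\beta \ge 0 : a_n = \mathcal{O}(n^{-\beta})\},
  \qquad
  b \;=\; \sup\{\beta \ge 0 : g_n = \mathcal{O}(n^{-\beta})\}.
\]

In this notation, the result announced above exhibits a space where $a_n$ decays like $n^{-1/2}$ ($p = 1/2$) while $g_n$ admits no polynomial decay at all ($b = 0$).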
