Abstract
In this paper, we consider the problem of approximating functions from a Korobov space on $[-1,1]^d$ by ReLU shallow neural networks and present a rate $O\big(m^{-\frac{2}{5}(1+\frac{2}{d})}\log m\big)$ of uniform approximation by networks of $m$ hidden neurons. This is achieved by combining a novel Fourier analysis approach with a probabilistic argument. We apply our approximation theory to a learning algorithm for regression based on ReLU shallow neural networks and derive learning rates of order $O\big(N^{-\frac{4(d+2)}{9d+8}}\log N\big)$ for the excess generalization error in terms of the sample size $N$ when the regression function lies in the Korobov space.
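For reference, a ReLU shallow (one-hidden-layer) neural network with $m$ hidden neurons takes the standard form displayed below; the parameter names $\alpha_k$, $b_k$, $\beta_k$ are illustrative and not necessarily the paper's own notation.
$$
f_m(x) = \sum_{k=1}^{m} \beta_k \, \sigma(\alpha_k \cdot x + b_k),
\qquad \sigma(t) = \max\{t, 0\},
\qquad \alpha_k \in \mathbb{R}^d,\; b_k, \beta_k \in \mathbb{R},
$$
where $\sigma$ is the ReLU activation. The stated rate bounds the uniform error $\|f - f_m\|_{C([-1,1]^d)}$ over the Korobov space as the width $m$ grows.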