Abstract

In this paper, we consider the problem of approximating functions from a Korobov space on $[-1,1]^d$ by ReLU shallow neural networks and present a rate $O\left(m^{-\frac{2}{5}\left(1+\frac{2}{d}\right)}\log m\right)$ of uniform approximation by networks of $m$ hidden neurons. This is achieved by combining a novel Fourier analysis approach with a probability argument. We apply our approximation theory to a learning algorithm for regression based on ReLU shallow neural networks and derive learning rates of order $O\left(N^{-\frac{4(d+2)}{9d+8}}\log N\right)$ for the excess generalization error, where $N$ is the sample size, when the regression function lies in the Korobov space.
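For concreteness, a ReLU shallow (one-hidden-layer) network with $m$ hidden neurons, the class studied here, takes the standard form below; the parameter names $a_k$, $b_k$, $c_k$ are generic labels, not notation taken from the paper:
$$
f_m(x) = \sum_{k=1}^{m} c_k \, \sigma(a_k \cdot x + b_k), \qquad \sigma(t) = \max\{t, 0\}, \quad x \in [-1,1]^d,
$$
where $a_k \in \mathbb{R}^d$ and $b_k, c_k \in \mathbb{R}$ are the trainable parameters. The stated approximation rate then bounds the uniform error $\inf_{f_m} \|f - f_m\|_{C([-1,1]^d)}$ over this class as $m$ grows.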
