We study the necessary and sufficient complexity of ReLU neural networks – in terms of depth and number of weights – required for approximating classifier functions in an $L^p$-sense.

As a model class, we consider the set $\mathcal{E}^\beta(\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$ functions $f : [-\tfrac{1}{2}, \tfrac{1}{2}]^d \to \mathbb{R}$, where the different "smooth regions" of $f$ are separated by $C^\beta$ hypersurfaces. For given dimension $d \ge 2$, regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial neural networks with ReLU activation function that approximate functions from $\mathcal{E}^\beta(\mathbb{R}^d)$ up to an $L^2$ error of $\varepsilon$. The constructed networks have a fixed number of layers, depending only on $d$ and $\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights, which we prove to be optimal. For the proof of optimality, we establish a lower bound on the description complexity of the class $\mathcal{E}^\beta(\mathbb{R}^d)$. By showing that a family of approximating neural networks gives rise to an encoder for $\mathcal{E}^\beta(\mathbb{R}^d)$, we then prove that one cannot approximate a general function $f \in \mathcal{E}^\beta(\mathbb{R}^d)$ using neural networks that are less complex than those produced by our construction.

In addition to the optimality in terms of the number of weights, we show that in order to achieve this optimal approximation rate, one needs ReLU networks of a certain minimal depth. Precisely, for piecewise $C^\beta(\mathbb{R}^d)$ functions, this minimal depth is given – up to a multiplicative constant – by $\beta/d$. Up to a log factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions.

Finally, we analyze approximation in high-dimensional spaces where the function $f$ to be approximated can be factorized into a smooth dimension-reducing feature map $\tau$ and a classifier function $g$ – defined on a low-dimensional feature space – as $f = g \circ \tau$. We show that in this case the approximation rate depends only on the dimension of the feature space and not on the input dimension.
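For concreteness, the central upper bound can be summarized as follows; the symbols $\Phi_\varepsilon$ for the approximating network, $W(\Phi_\varepsilon)$ for its number of nonzero weights, and the constants $L(d,\beta)$, $c(d,\beta)$ are illustrative shorthand, not notation fixed by the paper. For every $f \in \mathcal{E}^\beta(\mathbb{R}^d)$ and every accuracy $\varepsilon > 0$ there is a ReLU network $\Phi_\varepsilon$ with at most $L(d,\beta)$ layers such that
\[
  \| f - \Phi_\varepsilon \|_{L^2([-1/2,\,1/2]^d)} \le \varepsilon
  \qquad \text{and} \qquad
  W(\Phi_\varepsilon) \le c(d,\beta)\, \varepsilon^{-2(d-1)/\beta},
\]
and, by the encoder-based lower bound, no family of ReLU networks with essentially fewer nonzero weights can achieve error $\varepsilon$ uniformly over $\mathcal{E}^\beta(\mathbb{R}^d)$.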