Abstract

In recent years, deep neural networks have been employed to approximate nonlinear continuous functionals F defined on L^p([−1,1]^s) for 1 ≤ p ≤ ∞. However, the existing theoretical analyses in the literature either yield unsatisfactorily poor approximation rates or do not apply to the rectified linear unit (ReLU) activation function. This paper investigates the approximation power of functional deep ReLU networks in two settings: F is continuous with restrictions on its modulus of continuity, and F has higher-order Fréchet derivatives. A novel functional network structure is proposed to extract the features of higher-order smoothness harbored by the target functional F. Quantitative rates of approximation in terms of the depth, width, and total number of weights of the neural networks are derived for both settings. We obtain logarithmic rates when the approximation error is measured on the unit ball of a Hölder space. In addition, we establish nearly polynomial rates (i.e., rates of the form exp(−a(log M)^b) with a > 0 and 0 < b < 1) when the approximation error is measured on a space of analytic functions.
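To make the setting concrete, the sketch below illustrates one common way a functional deep ReLU network can be realized: the input function f ∈ L^p([−1,1]^s) (here s = 1) is encoded by its values at fixed sample points, and the resulting vector is passed through a standard deep ReLU network whose scalar output approximates F(f). This is a minimal illustrative sketch under that discretization assumption, not necessarily the network structure proposed in the paper; all names and sizes are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class FunctionalReLUNet:
    """Toy functional network: sample f on [-1, 1], then apply a deep ReLU MLP.

    Illustrative sketch only; not the architecture proposed in the paper.
    """

    def __init__(self, n_samples=32, widths=(64, 64), seed=0):
        rng = np.random.default_rng(seed)
        self.grid = np.linspace(-1.0, 1.0, n_samples)  # fixed sample points in [-1, 1]
        dims = [n_samples, *widths, 1]
        # He-style random initialization, suited to ReLU layers
        self.weights = [rng.normal(0.0, np.sqrt(2.0 / m), size=(n, m))
                        for m, n in zip(dims[:-1], dims[1:])]
        self.biases = [np.zeros(n) for n in dims[1:]]

    def __call__(self, f):
        """Return the network's scalar output, an approximation of F(f)."""
        x = f(self.grid)  # encode the input function by its samples
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            x = relu(W @ x + b)  # hidden ReLU layers
        W, b = self.weights[-1], self.biases[-1]
        return (W @ x + b).item()  # linear output layer

# Example: evaluate an (untrained) network on f(t) = cos(pi * t).
# A trained version would approximate a target functional such as
# F(f) = the integral of f^2 over [-1, 1].
net = FunctionalReLUNet(seed=0)
print(net(lambda t: np.cos(np.pi * t)))
```

The depth, width, and total number of weights of such a network are exactly the quantities in which the approximation rates of the abstract are expressed.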
