Abstract
In neural network theory, the complexity of constructing networks to approximate input-output functions is of interest. We study this in the more general context of approximating elements f of a normed space F using partial information about f. We assume that both the information about f and the size of the network are limited, as is typical, for example, in radial basis function networks. We show that the total complexity essentially splits into two independent parts: the information ε-complexity and the neural ε-complexity. We work in a worst-case setting and combine elements of information-based complexity with elements of nonlinear approximation theory. We consider deterministic as well as randomized approximations, using information that may be corrupted by noise. The results are illustrated by examples, including approximation by piecewise polynomial neural networks.
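Schematically, the splitting result can be read as in the sketch below; the symbols comp, comp^info, and comp^neur are illustrative notation of our own choosing, not taken verbatim from the paper. With e(A) denoting the worst-case error of an approximation A over a problem class F_0 ⊂ F:

% Worst-case error of an approximation operator A over a class F_0
% of elements of F (illustrative notation):
\[
  e(A) \;=\; \sup_{f \in F_0} \| f - A(f) \|_F .
\]
% The total cost of producing an epsilon-approximation then splits,
% up to constants, into an information part and a neural part:
\[
  \mathrm{comp}(\varepsilon) \;\asymp\;
  \mathrm{comp}^{\mathrm{info}}(\varepsilon)
  \;+\;
  \mathrm{comp}^{\mathrm{neur}}(\varepsilon) .
\]

Here comp^info(ε), the information ε-complexity, stands for the minimal cost of (possibly noisy) information about f sufficient for an ε-approximation, while comp^neur(ε), the neural ε-complexity, stands for the minimal network size needed to achieve error ε once adequate information is available.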