Abstract

It has been known for some years that the uniform-density problem for feedforward neural networks has a positive answer: any real-valued, continuous function on a compact subset of R^d can be uniformly approximated by a sigmoidal neural network with one hidden layer. We design here algorithms for efficient uniform approximation by a certain class of neural networks with one hidden layer which we call nearly exponential. This class contains, e.g., all networks with the activation functions 1/(1+e^{-t}), tanh(t), or e^t ∧ 1 in their hidden layers. The algorithms flow from a theorem stating that such networks attain the order of approximation O(N^{-1/d}), where d is the dimension and N the number of hidden neurons. This theorem, in turn, is a consequence of a close relationship between neural networks of nearly exponential type and multivariate algebraic and exponential polynomials. The algorithms need neither a starting point nor learning parameters; they do not get stuck in local minima, and the gain in execution time relative to the backpropagation algorithm is enormous. The size of the hidden layer can be bounded analytically as a function of the precision required.
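The abstract does not reproduce the paper's constructive algorithms, so the following is only a minimal sketch of the objects involved: the three "nearly exponential" activations named above, and the one-hidden-layer network form they enter into, evaluated as a weighted sum of activations of affine functions of the input. All names and the random-parameter usage example are illustrative assumptions, not the authors' notation or construction.

```python
import numpy as np

# Activations from the "nearly exponential" class named in the abstract.
def logistic(t):
    # 1 / (1 + e^{-t}); np.tanh covers the tanh(t) case directly.
    return 1.0 / (1.0 + np.exp(-t))

def capped_exp(t):
    # e^t ∧ 1, i.e. min(e^t, 1).
    return np.minimum(np.exp(t), 1.0)

def one_hidden_layer(x, W, b, c, activation=logistic):
    """Evaluate sum_i c_i * sigma(w_i . x + b_i) for an input x in R^d.

    W : (N, d) hidden-layer weights; b : (N,) biases; c : (N,) output
    weights, where N is the number of hidden neurons. (Hypothetical
    helper for illustration; the paper's algorithms choose these
    parameters analytically rather than by gradient descent.)
    """
    return c @ activation(W @ x + b)

# Usage: a network with N = 8 hidden neurons on inputs in R^2,
# with randomly drawn parameters purely to show the evaluation.
rng = np.random.default_rng(0)
d, N = 2, 8
W, b, c = rng.normal(size=(N, d)), rng.normal(size=N), rng.normal(size=N)
print(one_hidden_layer(np.array([0.5, -0.3]), W, b, c))
```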
