Abstract

This article presents an organized review of the representational capabilities of artificial neural networks and the questions that arise in their implementation. It covers Kolmogorov's superposition theorem and the various claims about how it relates to the representational power of neural networks. The generalization capability of neural networks is then considered, and methods for improving it are discussed. Theorems and results concerning bounds on the number of hidden layers, the form of the activation function, and the time complexity of training neural networks are also surveyed. © 1995 John Wiley & Sons, Inc.
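For context, Kolmogorov's superposition theorem, mentioned above, states that every continuous function of several variables on the unit cube can be written as a finite superposition of continuous univariate functions and addition; the form below is the standard statement (the specific symbols $\Phi_q$, $\phi_{q,p}$ are conventional, not taken from this article):

$$
f(x_1,\ldots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left(\sum_{p=1}^{n} \phi_{q,p}(x_p)\right),
$$

where $f:[0,1]^n \to \mathbb{R}$ is continuous, the inner functions $\phi_{q,p}$ are continuous and independent of $f$, and only the outer functions $\Phi_q$ depend on $f$. The resemblance of this two-stage composition to a network with one hidden layer is what motivates the representational-power claims the article reviews.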
