Abstract

This paper examines the architecture of back-propagation neural networks used as approximators by addressing the interrelationship between the number of training pairs and the number of input, output, and hidden layer nodes required for a good approximation. It concentrates on nets with an input layer, one hidden layer, and one output layer. It shows that many of the currently proposed schemes for selecting network architecture for such nets are deficient. It demonstrates in numerous examples that overdetermined neural networks tend to give good approximations over a region of interest, while underdetermined networks give approximations which can satisfy the training pairs but may give poor approximations over that region of interest. A scheme is presented that adjusts the number of hidden layer nodes in a neural network so as to give an overdetermined approximation. The advantages and disadvantages of using multiple output nodes are discussed. Guidelines for selecting the number of output nodes are presented.
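As an illustration only (the abstract does not give the paper's exact procedure), the sketch below assumes the overdetermination criterion is that the number of scalar training equations (training pairs times output nodes) is at least the number of adjustable weights in a one-hidden-layer net; the function names and the inclusion of bias terms are assumptions.

```python
def num_weights(n_in: int, n_hid: int, n_out: int) -> int:
    """Free parameters in a one-hidden-layer net (weights plus biases)."""
    return n_hid * (n_in + 1) + n_out * (n_hid + 1)


def max_hidden_for_overdetermined(n_in: int, n_out: int, n_pairs: int) -> int:
    """Largest hidden-layer size for which the training set still supplies
    at least as many scalar equations (pairs * outputs) as there are free
    parameters, i.e. the fit stays overdetermined.  Returns 0 if even one
    hidden node would make the net underdetermined."""
    n_hid = 0
    while num_weights(n_in, n_hid + 1, n_out) <= n_pairs * n_out:
        n_hid += 1
    return n_hid


if __name__ == "__main__":
    # Hypothetical example: 2 inputs, 1 output, 50 training pairs
    # gives at most 12 hidden nodes under this counting rule.
    print(max_hidden_for_overdetermined(n_in=2, n_out=1, n_pairs=50))
```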
