Abstract
A crucial issue in the successful use of artificial neural networks for pattern classification problems is how the appropriate network size can be determined automatically. This issue is addressed by formulating the process as an automatic search in the space of functions corresponding to a subclass of multilayer feedforward networks. Learning is thus a dynamic network construction process in which both the network weights and the topology are adjusted. Adding new hidden units corresponds to extracting higher-level features from the original input features in order to reduce the residual classification error. It can be shown that the resultant network approximates a Bayesian classifier that implements the Bayesian decision rule for classification. Empirical results from several pattern classification experiments are also reported.
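For intuition only, the sketch below shows one way such a constructive scheme can be organized; it is an illustrative assumption, not the algorithm of this paper. A single-hidden-layer sigmoid network is retrained with progressively more hidden units until the residual classification error on the training data falls below a tolerance; the names grow_network, max_hidden, and error_tol are hypothetical.

```python
# Minimal sketch of constructive (network-growing) learning.
# Assumed, illustrative design: retrain from scratch with one more hidden
# unit whenever the residual classification error is still too high.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, n_hidden, epochs=5000, lr=1.0, seed=0):
    """Train a one-hidden-layer sigmoid network with plain gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)             # hidden-layer "features"
        P = sigmoid(H @ W2 + b2)             # estimated class membership
        dP = (P - y[:, None]) * P * (1 - P)  # squared-error output delta
        dH = (dP @ W2.T) * H * (1 - H)       # back-propagated hidden delta
        W2 -= lr * H.T @ dP / len(X)
        b2 -= lr * dP.mean(axis=0)
        W1 -= lr * X.T @ dH / len(X)
        b1 -= lr * dH.mean(axis=0)
    return W1, b1, W2, b2

def error_rate(params, X, y):
    """Residual misclassification rate under a 0.5 decision threshold."""
    W1, b1, W2, b2 = params
    P = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((P[:, 0] > 0.5) != y))

def grow_network(X, y, max_hidden=10, error_tol=0.0):
    """Add hidden units one at a time until the residual error is acceptable."""
    for n_hidden in range(1, max_hidden + 1):
        params = train(X, y, n_hidden, seed=n_hidden)
        err = error_rate(params, X, y)
        if err <= error_tol:
            break
    return params, n_hidden, err

if __name__ == "__main__":
    # XOR cannot be separated with a single hidden unit, so the topology
    # must grow before the residual classification error can reach zero.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)
    params, n_hidden, err = grow_network(X, y)
    print(f"hidden units used: {n_hidden}, training error: {err:.2f}")
```

The sketch captures only the weight/topology interplay described in the abstract; the paper's actual construction criterion, training rule, and Bayesian interpretation are given in the body of the paper.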