Abstract

Neural network-based modeling often involves trying multiple networks with different architectures and training parameters in order to achieve acceptable model accuracy. Typically, one of the trained networks is chosen as best, while the rest are discarded. Hashem and Schmeiser (1995) proposed using optimal linear combinations of a number of trained neural networks instead of using a single best network. Combining the trained networks may help integrate the knowledge acquired by the component networks and thus improve model accuracy. In this paper, we extend the idea of optimal linear combinations (OLCs) of neural networks and discuss issues related to the generalization ability of the combined model. We then present two algorithms for selecting the component networks for the combination to improve the generalization ability of OLCs. Our experimental results demonstrate significant improvements in model accuracy, as a result of using OLCs, compared to using the apparent best network. © 1997 Elsevier Science Ltd.
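To make the idea concrete, below is a minimal sketch of one common way to compute MSE-optimal combination weights: treat each trained network's predictions on a held-out set as a column of a matrix and solve an unconstrained least-squares problem for the weights. This is a generic least-squares formulation, not necessarily the paper's exact OLC variant (the paper also considers issues such as constrained weights and component selection), and the function and variable names are hypothetical.

```python
# Sketch: MSE-optimal linear combination (OLC) of trained networks' outputs,
# assuming the unconstrained least-squares formulation
#   y_hat(x) = sum_j alpha_j * y_j(x),
# with alpha chosen to minimize squared error on a held-out set.
import numpy as np

def olc_weights(component_outputs, targets):
    """Solve for weights alpha minimizing ||Y alpha - t||^2.

    component_outputs: (n_samples, n_networks) matrix; column j holds
                       network j's predictions on a held-out set.
    targets:           (n_samples,) vector of true target values.
    """
    Y = np.asarray(component_outputs, dtype=float)
    t = np.asarray(targets, dtype=float)
    # lstsq solves the (possibly ill-conditioned) normal equations
    # Y^T Y alpha = Y^T t; near-collinearity among component networks
    # is one reason careful selection of components matters.
    alpha, *_ = np.linalg.lstsq(Y, t, rcond=None)
    return alpha

def olc_predict(component_outputs, alpha):
    """Combined prediction: weighted sum of component networks' outputs."""
    return np.asarray(component_outputs, dtype=float) @ alpha
```

In this framing, the selection algorithms the abstract mentions would decide which networks' columns enter the matrix Y before the weights are fit; fitting the weights on data held out from training is what guards the generalization ability of the combination.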


