Abstract

Neural-network-based modeling often involves trying multiple networks with different architectures and/or training parameters in order to achieve acceptable model accuracy. Typically, one of the trained networks is chosen as best, while the rest are discarded. Hashem and Schmeiser (1992) propose using optimal linear combinations of a number of trained neural networks instead of using a single best network. In this paper, we discuss and extend the idea of optimal linear combinations of neural networks. Optimal linear combinations are constructed by forming weighted sums of the corresponding outputs of the networks. The combination-weights are selected to minimize the mean squared error with respect to the distribution of random inputs. Combining the trained networks may help integrate the knowledge acquired by the component networks and thus improve model accuracy. We investigate some issues concerning the estimation of the optimal combination-weights and the role of the optimal linear combination in improving model accuracy for both well-trained and poorly trained component networks. Experimental results based on simulated data are included. For our examples, the model accuracy resulting from using estimated optimal linear combinations is better than that of the best trained network and that of simple averaging of the outputs of the component networks.
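To make the mechanics concrete, here is a minimal sketch of estimating combination-weights by unconstrained least squares on held-out data, which minimizes the empirical mean squared error of the weighted sum. This is an illustration, not the paper's exact procedure: the paper considers several weight-estimation variants (e.g., constrained weights or a constant term), and the function names below are hypothetical.

```python
import numpy as np

def estimate_combination_weights(outputs, targets):
    """Least-squares estimate of MSE-optimal combination weights.

    outputs: (n_samples, n_networks) matrix whose columns are the
             component networks' predictions on a held-out set
    targets: (n_samples,) vector of true responses
    """
    # Solve min_w ||outputs @ w - targets||^2. lstsq uses the SVD,
    # which tolerates near-collinear network outputs.
    w, *_ = np.linalg.lstsq(outputs, targets, rcond=None)
    return w

def combine(outputs, w):
    """Combined prediction: weighted sum of the component outputs."""
    return outputs @ w

# Toy usage with simulated component-network outputs.
rng = np.random.default_rng(0)
y = rng.normal(size=200)                      # true responses
F = y[:, None] + rng.normal(scale=0.3, size=(200, 3))  # 3 noisy "networks"
w = estimate_combination_weights(F, y)
mse = np.mean((combine(F, w) - y) ** 2)
print(w, mse)
```

Note that, unlike simple averaging, the estimated weights need not be equal or sum to one; the combination can down-weight poorly trained component networks.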
