Abstract
A stacked generalization strategy in which an evolutionary algorithm generates baselevel predictive models is described. The evolutionary algorithm incorporates model validation and an inductive ranking criterion which encourages diversity of prediction errors. Higher levels of the stack of generalizers yield predictors that work through memory-based correction and combination of predictions produced by populations of models obtained in lower levels of the stack. The strategy has been applied to the classic problem of predicting annual sunspot activity. Baselevel predictors are drawn from a class of recurrent neural networks. Normalized mean squared errors of 0.064 and 0.19 for the conventional test intervals of 1921–1955 and 1956–1979, respectively, improve upon previously published results. The strategy has also been used to accurately forecast the behaviour of a synthetic system which makes random transitions between two states of low-dimensional chaos.
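The sketch below illustrates, under stated assumptions, the general shape of the stacking scheme the abstract describes: a population of diverse baselevel (level-0) one-step-ahead predictors, whose out-of-sample predictions feed a memory-based (nearest-neighbour) level-1 combiner. It substitutes simple autoregressive models of different lags for the paper's evolved recurrent networks and a toy noisy-sine series for the sunspot record; all names and model choices are illustrative, not the authors' implementation.

```python
# Minimal sketch of stacked generalization for one-step time-series prediction.
# Assumptions: AR(lag) models stand in for evolved recurrent networks; a k-NN
# combiner over level-0 prediction vectors stands in for the paper's
# memory-based correction/combination stage.
import numpy as np

def make_ar_predictor(series, lag, ridge=1e-3):
    """Fit a linear AR(lag) model by ridge-regularized least squares."""
    X = np.array([series[i - lag:i] for i in range(lag, len(series))])
    y = series[lag:]
    w = np.linalg.solve(X.T @ X + ridge * np.eye(lag), X.T @ y)
    return lambda history: float(np.dot(history[-lag:], w))

def level0_predictions(models, series, start):
    """Collect each level-0 model's one-step-ahead prediction at every index >= start."""
    preds = np.array([[m(series[:t]) for m in models]
                      for t in range(start, len(series))])
    return preds, series[start:]

def knn_combiner(train_preds, train_targets, query_pred, k=5):
    """Memory-based level-1 generalizer: average the true targets of the k
    training cases whose level-0 prediction vectors lie closest to the query."""
    d = np.linalg.norm(train_preds - query_pred, axis=1)
    nearest = np.argsort(d)[:k]
    return float(train_targets[nearest].mean())

# Toy usage on a noisy sine series standing in for the sunspot record.
rng = np.random.default_rng(0)
series = np.sin(np.arange(300) * 0.3) + 0.1 * rng.standard_normal(300)
train = series[:250]

models = [make_ar_predictor(train, lag) for lag in (2, 4, 8)]   # diverse lags
tr_preds, tr_targets = level0_predictions(models, train, start=16)

errors = []
for t in range(250, len(series)):
    q = np.array([m(series[:t]) for m in models])   # level-0 predictions for time t
    yhat = knn_combiner(tr_preds, tr_targets, q)    # level-1 combined prediction
    errors.append((yhat - series[t]) ** 2)
nmse = np.mean(errors) / np.var(series[250:])
print(f"toy NMSE: {nmse:.3f}")
```

In the paper's strategy the diversity of the level-0 population comes from the evolutionary algorithm's ranking criterion rather than from hand-chosen lags as above; the sketch only shows how a stack of generalizers combines such a population's predictions.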