Abstract

As a generalization of the multi-layer perceptron (MLP), the circular back-propagation neural network (CBP) possesses better adaptability. An improved version of the CBP (the ICBP) is presented in this paper. Despite having fewer adjustable weights, the ICBP adapts better than the CBP, in keeping with the famous Occam's razor principle for model selection. For application to time series prediction, and to account for both structural changes in a series and its own correlations, we introduce the principle of discounted least squares (DLS) into the CBP and the ICBP, respectively, and further investigate their predictive capability. Introducing DLS improves the prediction performance of both networks on a benchmark time series data set. Finally, a comparison of experimental results shows that the ICBP with DLS (DLS-ICBP) achieves better prediction performance than DLS-CBP.
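For context, the DLS principle replaces the ordinary sum-of-squares training error with a geometrically discounted one, so that recent observations dominate the fit while older ones are down-weighted. A minimal sketch of such a discounted loss follows; the function name `dls_loss` and the discount factor value 0.95 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dls_loss(y_true, y_pred, theta=0.95):
    """Discounted least squares error: recent residuals weigh more.

    theta is the discount factor with 0 < theta <= 1; theta = 1
    recovers ordinary least squares. theta = 0.95 is an assumed
    example value, not the paper's setting.
    """
    n = len(y_true)
    # Weights decay geometrically going back in time:
    # theta^(n-1) for the oldest sample, theta^0 = 1 for the newest.
    weights = theta ** np.arange(n - 1, -1, -1)
    residuals = np.asarray(y_true) - np.asarray(y_pred)
    return np.sum(weights * residuals ** 2)
```

Minimizing this weighted criterion instead of the plain squared error is what lets a network track structural changes in a time series, since the influence of stale observations fades as new data arrive.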
