Abstract
We propose an improved method for reducing forgetfulness in incremental learning with a feedforward multilayer perceptron. In incremental learning, the network learns from new data items one at a time, so it is shaped mainly by the most recent data and past data is forgotten. The usual remedy is to store all past data and relearn it repeatedly, but this is an inefficient use of computing time and memory. We regard the forgetfulness as an increase in the error function's value for past training data, and argue that it can therefore be suppressed by minimizing the change in that value. The change is approximated using the eigenvalues and eigenvectors of the coefficient matrix of the second-order term in the error function's Taylor expansion. By introducing a constraint, based on this approximation, that minimizes the change in the value when the weight parameters are updated, the forgetfulness can be suppressed. Based on this idea, we previously proposed a method that assigns an eigenvector with a small eigenvalue as the initial value of the momentum term. However, that method imposes only a weak constraint on minimizing the change: while it is effective for learning a sine function, it is not effective for learning a chaotic sequence generated by a logistic map. In this paper, we modify the way the initial value of the momentum term is estimated and propose a method that imposes a stronger constraint on updating the weight parameters so as to minimize the change in the error function's value, thereby suppressing the forgetfulness more effectively. The method was tested on a chaotic sequence generated by a logistic map, and the results show greater suppression of the forgetfulness than the original method.
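The core approximation behind this idea can be illustrated with a small numerical sketch. Near a minimum of the error on past data (gradient roughly zero), the second-order Taylor expansion gives the change in error as roughly one half of the quadratic form of the weight update with the Hessian, so an update aligned with a small-eigenvalue eigenvector changes the past error far less than one aligned with a large-eigenvalue eigenvector. The matrix `H` below is a random positive semi-definite stand-in for the true Hessian, not the paper's actual network; the sketch only demonstrates the eigenvalue argument, not the proposed momentum-term method itself.

```python
import numpy as np

# Hypothetical stand-in for the Hessian of the error on past training data,
# evaluated at the current weights (assumed to lie near a minimum, so the
# gradient term of the Taylor expansion is neglected):
#   dE ≈ 0.5 * dw^T H dw = 0.5 * sum_i lam_i * (v_i . dw)^2
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T  # symmetric positive semi-definite by construction

lam, V = np.linalg.eigh(H)  # eigenvalues in ascending order; columns of V are eigenvectors

step = 0.1
dw_small = step * V[:, 0]   # weight update along the smallest-eigenvalue eigenvector
dw_large = step * V[:, -1]  # weight update along the largest-eigenvalue eigenvector

dE_small = 0.5 * dw_small @ H @ dw_small  # equals 0.5 * lam[0]  * step**2
dE_large = 0.5 * dw_large @ H @ dw_large  # equals 0.5 * lam[-1] * step**2

# Updating along small-eigenvalue directions changes the past error least,
# which is why constraining updates to those directions suppresses forgetting.
assert dE_small < dE_large
```

In the paper's setting, the constraint is realized through the initial value of the momentum term rather than by an explicit projection, but the quantity being minimized is the same quadratic form.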