Abstract

In this paper, the relevance of developing methods and algorithms for neural network incremental learning is shown. Families of incremental learning techniques are presented, and the possibility of using the extreme learning machine for incremental learning is assessed. Experiments show that the extreme learning machine is suitable for incremental learning, but as the number of training examples grows, the network becomes unsuitable for further learning. To solve this problem, we propose an incremental learning algorithm that alternates between the extreme learning machine, which corrects only the output-layer weights of the network (operation mode), and the backpropagation method (deep learning), which corrects all network weights (sleep mode). In the operation mode, the neural network is assumed to produce results or learn new tasks, while its weights are further optimized in the sleep mode. The proposed algorithm therefore allows real-time adaptation to changing external conditions during operation. Its effectiveness is demonstrated on an approximation problem: approximation results after each step of the algorithm are presented, and the mean square error obtained when using the extreme learning machine alone for incremental learning is compared with that of the proposed alternating incremental learning algorithm.

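The abstract does not give implementation details, but the alternating scheme it describes can be sketched as follows. The code below is a minimal illustration under our own assumptions: a single-hidden-layer network, a 1-D regression target, an ELM-style least-squares update of the output layer in operation mode, and plain gradient descent over all weights in sleep mode. The class and method names (`AlternatingELM`, `operation_step`, `sleep_step`) and all hyperparameters are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of alternating incremental learning:
# operation mode = ELM-style update of the output layer only,
# sleep mode = backpropagation over all weights.
import numpy as np

class AlternatingELM:
    def __init__(self, n_in, n_hidden, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))   # hidden weights (random, ELM-style)
        self.b = rng.normal(size=n_hidden)           # hidden biases
        self.beta = np.zeros((n_hidden, 1))          # output-layer weights
        self.reg = reg
        # Sufficient statistics for the incremental least-squares solution
        self.HtH = reg * np.eye(n_hidden)
        self.Hty = np.zeros((n_hidden, 1))

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def predict(self, X):
        return self._hidden(X) @ self.beta

    def operation_step(self, X, y):
        """Operation mode: correct only the output-layer weights."""
        H = self._hidden(X)
        self.HtH += H.T @ H
        self.Hty += H.T @ y
        self.beta = np.linalg.solve(self.HtH, self.Hty)

    def sleep_step(self, X, y, lr=1e-2, epochs=200):
        """Sleep mode: backpropagation adjusting all network weights."""
        for _ in range(epochs):
            H = self._hidden(X)                        # forward pass
            err = H @ self.beta - y                    # prediction error
            grad_beta = H.T @ err / len(X)
            grad_H = err @ self.beta.T * (1.0 - H**2)  # tanh derivative
            grad_W = X.T @ grad_H / len(X)
            grad_b = grad_H.mean(axis=0)
            self.beta -= lr * grad_beta
            self.W -= lr * grad_W
            self.b -= lr * grad_b
        # The hidden layer changed, so rebuild the ELM statistics
        H = self._hidden(X)
        self.HtH = self.reg * np.eye(self.W.shape[1]) + H.T @ H
        self.Hty = H.T @ y
        self.beta = np.linalg.solve(self.HtH, self.Hty)

# Toy approximation task: learn sin(x) from batches arriving over time.
rng = np.random.default_rng(1)
net = AlternatingELM(n_in=1, n_hidden=50)
X_all, y_all = np.empty((0, 1)), np.empty((0, 1))
for step in range(10):
    X = rng.uniform(-3, 3, size=(100, 1))
    y = np.sin(X)
    net.operation_step(X, y)                 # fast incremental update
    X_all, y_all = np.vstack([X_all, X]), np.vstack([y_all, y])
    if (step + 1) % 5 == 0:                  # periodic "sleep" phase
        net.sleep_step(X_all, y_all)
mse = np.mean((net.predict(X_all) - y_all) ** 2)
print(f"MSE after alternating training: {mse:.4f}")
```

The design choice illustrated here is the one the abstract emphasizes: the cheap, closed-form output-layer update keeps the network responsive to new data in real time, while the slower full-network optimization is deferred to a separate phase.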