Abstract

Neural networks and deep learning algorithms are loosely modeled on biological synaptic structures. However, classical deep learning algorithms do not accommodate continuous learning, a limitation that has led to the advent of incremental learning. Incremental learning introduces new challenges that modern state-of-the-art approaches handle in different ways. These include managing network memory, since accumulating knowledge grows the size of the network; open-set recognition, to identify inputs from previously unseen classes; and efficient knowledge distillation, since most incremental learning algorithms are prone to catastrophic forgetting of previously learned knowledge. Recent advancements achieve incremental learning through a multitude of methods, most of which augment standard neural network training by directly modifying the network structure, by adding learning steps, or both. This paper analyzes and provides a comprehensive survey of existing methods and techniques for incremental learning. A novel categorization of these methods is also introduced, based on recent trends in state-of-the-art solutions. The study focuses on methods that achieve successful incremental learning and discusses emerging patterns in new research.
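To make the knowledge-distillation idea mentioned above concrete, the following is a minimal sketch of a single distillation-regularized training step, in the spirit of methods such as Learning without Forgetting. It is illustrative only, not the algorithm of any particular method surveyed here; all names (old_model, new_model, temperature, lambda_distill) and the PyTorch-style setup are assumptions.

```python
# Hypothetical sketch: one incremental-learning training step that
# combines cross-entropy on new-class labels with a distillation term
# that preserves the frozen old model's responses, mitigating
# catastrophic forgetting. Names and setup are illustrative.
import torch
import torch.nn.functional as F

def incremental_step(new_model, old_model, x, y, old_num_classes,
                     temperature=2.0, lambda_distill=1.0):
    logits = new_model(x)              # [batch, old + new classes]
    with torch.no_grad():
        old_logits = old_model(x)      # frozen model: [batch, old classes]

    # Standard cross-entropy on the current task's labels.
    ce_loss = F.cross_entropy(logits, y)

    # Distillation: match the softened old-class probabilities of the
    # frozen previous model on the same inputs.
    soft_targets = F.softmax(old_logits / temperature, dim=1)
    log_probs = F.log_softmax(logits[:, :old_num_classes] / temperature, dim=1)
    distill_loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")

    # The temperature**2 factor keeps gradient scales comparable.
    return ce_loss + lambda_distill * (temperature ** 2) * distill_loss
```

Surveyed methods differ mainly in what plays the role of the distillation term, which exemplars (if any) are replayed, and whether the network structure itself is expanded.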
