Abstract

As a special case of machine learning, incremental learning acquires useful knowledge from incoming data continuously, without needing to access the original data. It is expected to retain what it has learned over time and is regarded as one of the ultimate goals of artificial intelligence. However, incremental learning remains a long-term challenge. Modern deep neural network models achieve outstanding performance when trained in batches on stationary data distributions. This restriction leads to catastrophic forgetting in incremental learning scenarios, where the distribution of incoming data is unknown and may differ substantially from that of the old data. Therefore, a model must be plastic enough to acquire new knowledge and stable enough to consolidate existing knowledge. This article provides a systematic review of the state of the art in incremental learning methods. Published reports were selected from the Web of Science, IEEE Xplore, and DBLP databases up to May 2020. Each paper is reviewed according to its strategy type: architectural, regularization, or rehearsal and pseudo-rehearsal. We compare and discuss the different methods, and outline development trends and research foci. We conclude that incremental learning is still an active research area and will remain so for a long time, and that more attention should be paid to the exploration of both biological systems and computational models.
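
To make the stability-plasticity trade-off concrete, the following is a minimal, illustrative sketch of a regularization-strategy objective in the style of elastic weight consolidation. The function, its arguments, and the penalty strength lam are our own illustration under stated assumptions, not a method taken from the reviewed papers:

    import torch

    def regularized_loss(model, task_loss, old_params, importance, lam=100.0):
        """Generic regularization-strategy objective for incremental learning.

        task_loss  : loss on the new task's data (the plasticity term).
        old_params : {name: tensor} snapshot of parameters after old tasks.
        importance : {name: tensor} per-parameter importance weights, e.g. a
                     diagonal Fisher estimate (hypothetical inputs here).
        lam        : strength of the stability term.
        """
        penalty = 0.0
        for name, p in model.named_parameters():
            # Stability: pull parameters that mattered for old tasks
            # back toward their old values, weighted by importance.
            penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
        return task_loss + (lam / 2.0) * penalty

    # Hypothetical usage: total = regularized_loss(net, ce_loss, snapshot, fisher)
    # total.backward()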

Highlights

  • Incremental learning (IL) refers to a learning system that can continuously learn new knowledge from new samples and can maintain most of the previously learned knowledge

  • Lopez-Paz and Ranzato [102] pointed out that a learner's ability to transfer knowledge deserves attention, and proposed the concepts of backward transfer (BWT, the influence that learning a task has on performance on previous tasks) and forward transfer (FWT, the influence that learning a task has on performance on future tasks)

  • For BWT, positive backward transfer can increase performance on some preceding tasks, while large negative backward transfer is known as catastrophic forgetting (CF); a sketch of how these metrics can be computed is given after this list

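As a concrete illustration of these metrics, the following is a minimal sketch using notation consistent with [102]: R is a T x T matrix in which R[i, j] is the test accuracy on task j after training on task i, and b[i] is the accuracy of a randomly initialized model on task i. The code itself is our own illustration of those definitions:

    import numpy as np

    def bwt(R):
        """Backward transfer: mean change in accuracy on each earlier task
        between right after it was learned (R[i, i]) and after all T tasks
        (R[T-1, i]). Negative values indicate forgetting; large negative
        BWT is catastrophic forgetting."""
        T = R.shape[0]
        return np.mean([R[T - 1, i] - R[i, i] for i in range(T - 1)])

    def fwt(R, b):
        """Forward transfer: mean gain on each task i > 0 evaluated before
        training on it (R[i-1, i]) over a random-initialization baseline b[i]."""
        T = R.shape[0]
        return np.mean([R[i - 1, i] - b[i] for i in range(1, T)])

    # Example with 3 tasks: rows = after training task i, columns = test task j.
    R = np.array([[0.80, 0.12, 0.10],
                  [0.72, 0.85, 0.14],
                  [0.70, 0.81, 0.90]])
    b = np.array([0.10, 0.10, 0.10])  # random-initialization accuracies
    print(bwt(R))     # ((0.70 - 0.80) + (0.81 - 0.85)) / 2 = -0.07: forgetting
    print(fwt(R, b))  # ((0.12 - 0.10) + (0.14 - 0.10)) / 2 =  0.03: positive transfer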

Introduction

Incremental learning (IL) refers to a learning system that can continuously learn new knowledge from new samples and maintain most of the previously learned knowledge. The external environment of the real world changes dynamically, which requires an intelligent agent to be capable of continuous learning and memorization. An incremental learning model can learn new knowledge and retain old knowledge throughout its lifetime. It works like the brain of an organism, and this capability is one of the ultimate goals of artificial intelligence systems. In recent years, it has played increasingly important roles in fields such as intelligent robots, autonomous driving, and unmanned aerial vehicles [2,3,4].
