Abstract

Modern artificial intelligence systems deployed in real-world applications must learn new classes incrementally, yet they suffer from catastrophic forgetting: because knowledge of past data is no longer available, performance on previously learned classes degrades substantially. Recent methods often employ knowledge distillation and bias correction to counter the catastrophic forgetting caused by cognitive bias. However, because these methods learn from all samples indiscriminately, the model struggles to extract what it truly needs from the data stream to balance new and old knowledge, leading to inevitable forgetting. Instead of treating every sample identically, the model should automatically learn from the samples it is curious about. To tackle this problem, we propose a curiosity-driven class-incremental learning approach that uses adaptive sample selection to learn a more generalized model with fewer ineffective updates. Specifically, our method quantifies the model's curiosity about each sample via two properties: uncertainty and novelty. Using the proposed uncertainty property, the model selectively learns from informative samples during training, which benefits the classification decision boundary. To address imbalanced data, the novelty property selectively optimizes the model on dissimilar samples, making it more robust and less prone to cognitive bias. Our method effectively reduces catastrophic forgetting and can be flexibly combined with other techniques. Extensive experiments and in-depth analysis on the CIFAR-100, Tiny-ImageNet, and Caltech-101 datasets show that our approach outperforms competing class-incremental learning methods in preventing catastrophic forgetting.
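The abstract does not give the exact formulations of the two curiosity properties. The snippet below is a minimal, hypothetical sketch of curiosity-driven sample selection, assuming prediction entropy as the uncertainty measure and cosine dissimilarity to stored exemplar features as the novelty measure; all function names, thresholds, and tensor shapes are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: score samples by uncertainty (entropy) and novelty
# (dissimilarity to old-class exemplars), then keep only "curious" ones.
import torch
import torch.nn.functional as F


def curiosity_scores(logits, features, exemplar_features):
    """Return per-sample uncertainty and novelty scores (assumed definitions)."""
    # Uncertainty: entropy of the softmax prediction for each sample.
    probs = F.softmax(logits, dim=1)
    uncertainty = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

    # Novelty: 1 minus the maximum cosine similarity to any stored exemplar feature.
    feats = F.normalize(features, dim=1)
    exemplars = F.normalize(exemplar_features, dim=1)
    novelty = 1.0 - (feats @ exemplars.t()).max(dim=1).values
    return uncertainty, novelty


def select_curious(logits, features, exemplar_features, u_thresh=1.0, n_thresh=0.3):
    """Keep samples whose uncertainty or novelty exceeds a (hypothetical) threshold."""
    u, n = curiosity_scores(logits, features, exemplar_features)
    return (u > u_thresh) | (n > n_thresh)


# Toy usage with random tensors standing in for model outputs and exemplar memory.
logits = torch.randn(8, 10)              # batch of 8, 10 classes
features = torch.randn(8, 64)            # backbone features for the batch
exemplar_features = torch.randn(20, 64)  # features of stored old-class exemplars
mask = select_curious(logits, features, exemplar_features)
selected_idx = mask.nonzero(as_tuple=True)[0]  # indices used for the parameter update
```

Under these assumptions, only the selected indices would contribute to the loss for the current step, so low-information samples produce no gradient update.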
