Abstract
Incremental learning is an emerging machine-learning approach designed to prevent catastrophic forgetting when learning a new task while retaining knowledge from previous tasks. However, existing methods often fail to fully utilize the distributional information in the old task model's outputs. To address this, we propose a class-incremental learning method grounded in evidence theory that leverages this distributional information by integrating uncertainty estimation into the knowledge distillation process, ensuring the new task model learns effectively from the old task model's output distribution. Specifically, we use evidence-theory-based uncertainty estimation when computing the knowledge distillation loss during training: uncertainty estimates are derived from the outputs of both the new and old task models and incorporated into the loss. We further propose a novel classification strategy that considers both the output probabilities and their uncertainty estimates, determining a sample's category without an additional training phase or supplementary models. Experiments on the CIFAR-100 and ImageNet datasets demonstrate the effectiveness of the proposed method, which outperforms existing methods by 1%-2% in accuracy. The results suggest that leveraging the distributional information of model outputs can effectively mitigate catastrophic forgetting in deep learning models under incremental learning scenarios.
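The abstract does not give the exact formulation, but the two core ingredients it names (evidence-theory-based uncertainty estimates of model outputs, and an uncertainty-aware knowledge distillation loss) can be illustrated with a minimal sketch. The sketch below assumes a common evidential-deep-learning form: logits are mapped to non-negative evidence, interpreted as parameters of a Dirichlet distribution, from which a per-sample uncertainty in (0, 1] is derived; that uncertainty then down-weights the distillation term for samples on which the old model is itself unsure. The softplus evidence function, the weighting scheme `1 - u`, and the temperature `T` are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus, used here as the evidence function (assumed)
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def evidential_outputs(logits):
    """Dirichlet-based class probabilities and uncertainty (assumed form)."""
    evidence = softplus(logits)                  # non-negative evidence per class
    alpha = evidence + 1.0                       # Dirichlet parameters
    strength = alpha.sum(axis=-1, keepdims=True) # Dirichlet strength S
    prob = alpha / strength                      # expected class probabilities
    num_classes = logits.shape[-1]
    u = num_classes / strength.squeeze(-1)       # total uncertainty, in (0, 1]
    return prob, u

def uncertainty_weighted_kd(new_logits, old_logits, T=2.0):
    """Hypothetical distillation loss: KL(old || new) at temperature T,
    weighted down where the old model's evidential uncertainty is high."""
    def softmax(x):
        z = x - x.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)
    p_old = softmax(old_logits / T)
    p_new = softmax(new_logits / T)
    kl = (p_old * (np.log(p_old) - np.log(p_new))).sum(axis=-1)
    _, u_old = evidential_outputs(old_logits)
    weights = 1.0 - u_old                        # confident old outputs count more
    return float((weights * kl).mean() * T * T)
```

The same uncertainty estimate could also serve the classification strategy mentioned in the abstract, e.g. by combining `prob` with `u` when assigning a sample's category, though the abstract does not specify how the two are combined.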