Abstract

Methods based on dynamic structures are effective at addressing catastrophic forgetting in class-incremental learning (CIL). However, they often isolate sub-networks and overlook the integration of overall information, resulting in a performance decline. To overcome this limitation, we recognize the importance of knowledge sharing among sub-networks. Building on dynamic networks, we propose a novel two-stage CIL method called SCREAM, comprising an Expandable Network (EN) learning stage and a Compact Representation (CR) stage: (1) a clustering loss function for the EN stage that aggregates related instances and promotes information sharing; (2) dynamic weight alignment that alleviates the classifier's bias towards new-class knowledge (a minimal sketch follows below); and (3) a balanced decoupled distillation for the CR stage that mitigates the long-tail effect accumulated over repeated compressions. To validate SCREAM, we evaluate it on three widely used datasets under different buffer sizes (replay buffers) against current state-of-the-art models. The results show that SCREAM surpasses them in average accuracy by 2.46% on CIFAR-100, 1.22% on ImageNet-100/1000, and 1.52% on Tiny-ImageNet. With a smaller buffer size, SCREAM still leads by an average accuracy margin of 4.60%. Furthermore, SCREAM performs well in terms of the resources required.
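
To make the bias-correction step in (2) concrete, here is a minimal, hypothetical sketch of classifier weight alignment, assuming a linear classifier whose rows are per-class weight vectors. The function name `align_classifier_weights` and the norm-ratio scaling are illustrative and follow the standard weight-aligning idea from the CIL literature; the paper's "dynamic" variant may compute the scaling differently.

```python
import torch

def align_classifier_weights(fc_weight: torch.Tensor, n_old: int) -> torch.Tensor:
    """Rescale new-class weight rows so their mean norm matches the old classes'.

    fc_weight: (n_classes, feat_dim) weight matrix of the linear classifier.
    n_old:     number of previously learned classes (rows 0 .. n_old-1).
    """
    old_norm = fc_weight[:n_old].norm(dim=1).mean()   # mean norm of old-class rows
    new_norm = fc_weight[n_old:].norm(dim=1).mean()   # mean norm of new-class rows
    gamma = old_norm / new_norm                       # shrink new-class logits toward the old scale
    aligned = fc_weight.clone()
    aligned[n_old:] *= gamma
    return aligned

# Usage: after training on a new task with 10 old and 5 new classes
fc = torch.nn.Linear(512, 15, bias=False)
fc.weight.data = align_classifier_weights(fc.weight.data, n_old=10)
```

The intuition is that new-class weight vectors tend to grow larger norms during incremental training, inflating new-class logits; matching the mean norms counteracts that bias without retraining the feature extractor.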
