Abstract

Incremental learning with neural networks has achieved great success in semantic segmentation but still suffers from catastrophic forgetting. In this article, we propose an effective class-incremental segmentation method that does not store old data. To alleviate forgetting, we present two key modules: the deep feature distillation (DFD) module and the label mixed (LM) module. The DFD module learns a good feature representation of old classes by distilling a new compact feature representation from different layers of the network. The LM module first identifies the pixels of old classes that the old model predicts with high confidence and then combines them with pixels of new classes to supervise the training of the new model, striking a good balance between learning new classes and avoiding forgetting old ones. Our ablation studies show that the DFD and LM modules yield performance gains of 6.2% and 15% mean Intersection over Union (mIoU), respectively. Furthermore, with the additional supervision of an output distillation loss, we compare our method with several state-of-the-art methods in extensive experiments, and the results show that our method significantly outperforms them on a dataset of aerial images.
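The two modules described above can be illustrated with a minimal NumPy sketch. The function names, the confidence threshold, and the use of an `ignore_index` convention are assumptions for illustration only, not details taken from the article: `mix_labels` builds the mixed supervision by keeping high-confidence old-class pseudo-labels from the frozen old model wherever the new annotation is unlabeled, and `deep_feature_distill_loss` is a generic L2 distillation term over matched intermediate feature maps.

```python
import numpy as np

def mix_labels(old_probs, new_labels, ignore_index=255, threshold=0.9):
    """Mix old-class pseudo-labels with new ground truth (hypothetical sketch).

    old_probs:  (C_old, H, W) softmax output of the frozen old model.
    new_labels: (H, W) annotation for the new step; pixels of old classes
                are unlabeled there and carry ignore_index.
    """
    conf = old_probs.max(axis=0)       # per-pixel confidence of the old model
    pseudo = old_probs.argmax(axis=0)  # per-pixel old-class prediction
    mixed = new_labels.copy()
    # Adopt a pseudo-label only where the new annotation is empty
    # and the old model is sufficiently confident.
    take_old = (new_labels == ignore_index) & (conf >= threshold)
    mixed[take_old] = pseudo[take_old]
    return mixed

def deep_feature_distill_loss(old_feats, new_feats):
    """L2 distillation between matched intermediate feature maps
    of the old and new networks (one generic form of DFD)."""
    return sum(float(np.mean((o - n) ** 2))
               for o, n in zip(old_feats, new_feats))
```

In training, the mixed labels would supervise the usual cross-entropy loss while the distillation term is added with a weighting factor; both the threshold and that weight would be tuned on a validation split.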
