Abstract

Object detection with the capacity to incrementally adapt to new domains is a crucial yet relatively under-explored research topic. The catastrophic forgetting problem poses a significant challenge to achieving this goal: the model's performance improves quickly in new conditions but deteriorates sharply in old ones after several incremental learning sessions. Drawing on recent findings about visual memory in the human brain, we introduce the Topology-Preserving Domain Incremental Object Detection (TP-DIOD) approach, which addresses catastrophic forgetting by extracting the topological structure of the feature space learned by the Convolutional Neural Network (CNN) model and preserving this topology during subsequent incremental learning sessions. Specifically, we model the feature space topology with a self-organizing map (SOM) and construct an anchor image set based on the centroid vectors of the SOM nodes to memorize this topology. We then develop an anchor loss function that penalizes topological changes of the feature space during the subsequent incremental learning sessions. Experimental evaluations on two groups of datasets demonstrate the effectiveness of the proposed TP-DIOD method in mitigating catastrophic forgetting and achieving high accuracy on both old and new domain datasets.
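As a rough illustration only, and not the paper's implementation, the sketch below shows one way the described pipeline could look in PyTorch: fitting a small SOM on CNN backbone features, selecting an anchor image nearest to each node centroid, and penalizing drift of the anchor features during later sessions. All names here (fit_som, select_anchors, anchor_loss, backbone) are hypothetical placeholders.

```python
# Minimal illustrative sketch (not the authors' code) of the TP-DIOD idea:
# 1) fit a 2-D self-organizing map on old-domain backbone features,
# 2) keep the image closest to each SOM node centroid as an "anchor",
# 3) in later incremental sessions, penalize drift of anchor features.

import torch
import torch.nn.functional as F

def fit_som(features, rows=8, cols=8, epochs=20, lr=0.5, sigma=2.0):
    """Fit a simple 2-D SOM on feature vectors of shape (N, D)."""
    n, _ = features.shape
    weights = features[torch.randperm(n)[: rows * cols]].clone()   # node centroids
    gy, gx = torch.meshgrid(torch.arange(rows), torch.arange(cols), indexing="ij")
    grid = torch.stack([gy.flatten(), gx.flatten()], dim=1).float()
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs
        for x in features[torch.randperm(n)]:
            bmu = torch.argmin(torch.norm(weights - x, dim=1))     # best-matching unit
            grid_dist = torch.norm(grid - grid[bmu], dim=1)
            h = torch.exp(-(grid_dist ** 2) / (2 * (sigma * decay) ** 2))
            weights += (lr * decay) * h.unsqueeze(1) * (x - weights)
    return weights  # centroid vector of each SOM node

def select_anchors(features, centroids):
    """For each SOM node, pick the old-domain image whose feature is nearest its centroid."""
    dists = torch.cdist(centroids, features)      # (num_nodes, N)
    return torch.argmin(dists, dim=1)             # indices of anchor images

def anchor_loss(backbone, anchor_images, anchor_targets):
    """Penalize drift of current anchor features from their memorized values."""
    current = backbone(anchor_images).flatten(1)
    return F.mse_loss(current, anchor_targets)
```

In an actual incremental session, such an anchor loss would presumably be added to the detection loss on the new domain, weighted by a trade-off hyperparameter, so that the old feature-space topology is preserved while the detector adapts to new conditions.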
