Abstract

Object detection networks have advanced rapidly, with substantial gains in both accuracy and detection speed, and they are now widely deployed in intelligent robots, self-driving cars, and other edge-intelligent terminals. Unfortunately, when a detector learns new objects in an unfamiliar environment, it can catastrophically forget the objects it has already learned; in particular, reliable and stable knowledge cannot be extracted from the old model. To address this, we present a new multinetwork mean distillation loss function for open-world domain-incremental object detection. To better extract reliable and stable knowledge from old models, we enhance the distillation of the detector's ResNet50 backbone features and RoI head outputs, and we soften the distillation output of the intermediate RPN with adaptive distillation. To obtain more stable results, the outputs of the ResNet50 backbone and the RPN are zero-mean normalized along the channel dimension. We conduct experiments with various incremental steps and stability settings on two benchmark datasets, PASCAL VOC and MS COCO. The results show that our method performs well across different experimental scenarios and outperforms state-of-the-art methods; for example, in the batch-task setting, incremental object detection is improved by 3.4% on PASCAL VOC and 2.1% on MS COCO.
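To make the zero-mean feature distillation idea concrete, the following is a minimal sketch (not the authors' released code) of a channel-wise zero-mean L2 distillation loss between an old (teacher) and new (student) detector backbone. It assumes PyTorch, NCHW feature maps, and the hypothetical function name `zero_mean_distillation_loss`; the exact formulation in the paper may differ.

```python
import torch
import torch.nn.functional as F

def zero_mean_distillation_loss(student_feat: torch.Tensor,
                                teacher_feat: torch.Tensor) -> torch.Tensor:
    """L2 distillation loss after removing the per-channel mean.

    Both tensors are assumed to have shape (N, C, H, W). Subtracting the
    per-channel mean (computed over the spatial dimensions) is one way to
    stabilise distillation, in the spirit of the zero-averaging described
    in the abstract.
    """
    # Per-channel spatial means, kept broadcastable as (N, C, 1, 1)
    s_centered = student_feat - student_feat.mean(dim=(2, 3), keepdim=True)
    t_centered = teacher_feat - teacher_feat.mean(dim=(2, 3), keepdim=True)
    # Teacher features are detached so only the student is updated
    return F.mse_loss(s_centered, t_centered.detach())

# Hypothetical usage with backbone feature maps from the old and new detectors:
# loss_backbone = zero_mean_distillation_loss(new_backbone_feat, old_backbone_feat)
```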
