Abstract

Deep convolutional networks have been widely used in remote sensing target detection for various applications in recent years. Target detection models with many parameters achieve better results but are unsuitable for resource-constrained devices because of their high computational cost and storage requirements. Furthermore, current lightweight target detection models for remote sensing imagery rarely retain the advantages of existing models. Knowledge distillation can improve the learning ability of a small student network from a large teacher network, enabling model acceleration and compression. However, current knowledge distillation methods typically use mature backbones as the teacher and student networks, which are unsuitable for target detection in remote sensing imagery. In this paper, we propose a target detection model distillation (TDMD) framework using feature transition and label registration for remote sensing imagery. A lightweight attention network is designed by ranking the importance of the convolutional feature layers in the teacher network. Multi-scale feature transition based on a feature pyramid is used to constrain the feature maps of the student network. A label registration procedure is proposed to improve the TDMD model's ability to learn the output distribution of the teacher network. The proposed method is evaluated on the DOTA and NWPU VHR-10 remote sensing image datasets. The results show that TDMD achieves a mean Average Precision (mAP) of 75.47% and 93.81% on the DOTA and NWPU VHR-10 datasets, respectively, while the model size is 43% smaller than that of the predecessor model (11.8 MB and 11.6 MB for the two datasets).
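The abstract describes two distillation signals: a multi-scale feature constraint over pyramid levels and a soft-label (output-distribution) transfer from teacher to student. The sketch below is not the authors' code; it is a minimal PyTorch illustration, under assumed names (DistillationLoss, adapters, temperature, feat_weight, label_weight), of how such a combined loss is commonly formed: 1x1 convolutions align student pyramid features with the teacher's, and a temperature-scaled KL term matches the output distributions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DistillationLoss(nn.Module):
    """Illustrative combined loss: multi-scale feature matching + soft-label transfer."""

    def __init__(self, student_channels, teacher_channels,
                 temperature=4.0, feat_weight=1.0, label_weight=1.0):
        super().__init__()
        # 1x1 convs project each student pyramid level to the teacher's channel width.
        self.adapters = nn.ModuleList(
            nn.Conv2d(s, t, kernel_size=1)
            for s, t in zip(student_channels, teacher_channels)
        )
        self.temperature = temperature
        self.feat_weight = feat_weight
        self.label_weight = label_weight

    def forward(self, student_feats, teacher_feats, student_logits, teacher_logits):
        # Feature term: match each student pyramid level to the corresponding teacher level.
        feat_loss = 0.0
        for adapter, s_feat, t_feat in zip(self.adapters, student_feats, teacher_feats):
            s_proj = adapter(s_feat)
            if s_proj.shape[-2:] != t_feat.shape[-2:]:
                s_proj = F.interpolate(s_proj, size=t_feat.shape[-2:],
                                       mode="bilinear", align_corners=False)
            feat_loss = feat_loss + F.mse_loss(s_proj, t_feat.detach())

        # Soft-label term: student mimics the teacher's output distribution.
        T = self.temperature
        label_loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits.detach() / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)

        return self.feat_weight * feat_loss + self.label_weight * label_loss
```

In practice this loss would be added to the detector's ordinary classification and regression losses; the paper's specific attention-based layer ranking and label registration steps are not reproduced here.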
