Abstract

In recent years, lightweight object detection networks have been increasingly applied to remote sensing platforms because of their fast inference and flexible deployment. Knowledge distillation is widely used to narrow the performance gap between large and small models, and many studies have combined it with object detection. However, existing knowledge distillation methods often overlook the transfer of localization knowledge. This paper therefore proposes Discretized Position Knowledge Distillation (DPKD) to improve the transfer of localization knowledge in object detection distillation. Specifically, DPKD incorporates a Discretization Algorithm Module (DAM), which leverages both a general probability distribution and a cross-Gaussian distribution to transfer high-quality bounding box position and pose information. In addition, the Position Knowledge Distillation (PKD) component splits target and non-target bounding boxes when forming the distillation loss, addressing the missing transfer of background knowledge during distillation. To further encourage learning from high-quality bounding boxes, DPKD introduces a Region Weighting Module (RWM) based on EIoU, which assigns weights to the bounding boxes in the teacher's output. The effectiveness of DPKD for multi-modal remote sensing image object detection was verified through multi-modal training on the publicly available DOTA and HRSID datasets.
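To make the core idea concrete, the following is a minimal sketch of discretized position distillation with an EIoU-based region weight. It assumes each box edge is predicted as a discrete distribution over bins (as in distribution-style box regression) and that teacher boxes are scored against ground truth with EIoU; all function names, shapes, and the temperature value are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch (PyTorch): EIoU-weighted KL distillation over discretized
# box-edge distributions. Names and tensor layouts are assumptions.
import torch
import torch.nn.functional as F


def eiou(pred, gt, eps=1e-7):
    """EIoU between boxes in (x1, y1, x2, y2) format; both shaped (N, 4)."""
    # Intersection area
    x1 = torch.max(pred[:, 0], gt[:, 0])
    y1 = torch.max(pred[:, 1], gt[:, 1])
    x2 = torch.min(pred[:, 2], gt[:, 2])
    y2 = torch.min(pred[:, 3], gt[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_g = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    iou = inter / (area_p + area_g - inter + eps)

    # Smallest enclosing box (for the penalty terms)
    cw = torch.max(pred[:, 2], gt[:, 2]) - torch.min(pred[:, 0], gt[:, 0])
    ch = torch.max(pred[:, 3], gt[:, 3]) - torch.min(pred[:, 1], gt[:, 1])

    # Center-distance and width/height-difference penalties
    pcx, pcy = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    gcx, gcy = (gt[:, 0] + gt[:, 2]) / 2, (gt[:, 1] + gt[:, 3]) / 2
    center = ((pcx - gcx) ** 2 + (pcy - gcy) ** 2) / (cw ** 2 + ch ** 2 + eps)
    dw = ((pred[:, 2] - pred[:, 0]) - (gt[:, 2] - gt[:, 0])) ** 2 / (cw ** 2 + eps)
    dh = ((pred[:, 3] - pred[:, 1]) - (gt[:, 3] - gt[:, 1])) ** 2 / (ch ** 2 + eps)
    return iou - center - dw - dh


def position_distill_loss(student_logits, teacher_logits, weights, T=2.0):
    """KL distillation over discretized edge distributions.

    student_logits / teacher_logits: (N, 4, n_bins), one discrete
    distribution per box edge. weights: (N,) EIoU-based region weights.
    """
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    p_t = F.softmax(teacher_logits / T, dim=-1)
    kl = F.kl_div(log_p_s, p_t, reduction="none").sum(-1).mean(-1)  # (N,)
    return (weights * kl).mean() * T * T


# Example usage: weight each teacher box by its (clamped) EIoU with the
# matched ground-truth box, then distill the student's edge distributions.
if __name__ == "__main__":
    N, n_bins = 8, 16
    teacher_boxes = torch.rand(N, 4).sort(dim=-1).values * 100
    gt_boxes = teacher_boxes + torch.randn(N, 4) * 2
    w = eiou(teacher_boxes, gt_boxes).clamp(min=0)
    s_logits, t_logits = torch.randn(N, 4, n_bins), torch.randn(N, 4, n_bins)
    print(position_distill_loss(s_logits, t_logits, w).item())
```

In this sketch the target and non-target (background) boxes would simply be passed through the same loss with separate weight vectors, mirroring the split described above; how those sets are partitioned is left to the detector's assignment scheme.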
