Underwater object detection faces unique challenges such as color distortion, reduced visibility, and blurred edges caused by aquatic conditions, which degrade the performance of traditional methods. Furthermore, the limited memory and computational power of underwater equipment call for lightweight, efficient real-time detection algorithms. To address these issues, this paper proposes an online knowledge distillation (KD) framework based on mutual knowledge transfer, named Online-XKD, which aims to improve the efficiency and generalizability of knowledge distillation through mutual knowledge exchange while keeping the model lightweight. The core of Online-XKD is a mutual knowledge transfer structure for online distillation within the detection head, which improves learning by raising distillation efficiency and broadening generalization. In addition, we introduce a mask-generation feature distillation architecture on the backbone network, which significantly strengthens the student model's feature extraction capability by combining feature masking and feature generation. Moreover, by incorporating the PSA and AUGFPN attention modules, feature extraction in complex underwater environments is further enhanced without adding extra parameters, thereby improving detection accuracy. Experimental results on the URPC2020 dataset show that Online-XKD improves the detection accuracy of the student model by 3.6 mAP, exceeds the initial KD models by 3.3 mAP, and outperforms most existing KD methods on underwater detection tasks. These results confirm the effectiveness and superiority of Online-XKD for underwater object detection and its practical value in enhancing the feature extraction and generalization abilities of lightweight models.
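The sketch below illustrates, in PyTorch-style pseudocode, the two distillation signals summarized above: a mutual (bidirectional) distillation loss between the two detection heads' class outputs, and a masked feature-generation loss on the backbone features. It is a minimal sketch under assumed tensor shapes; the function and class names, the temperature, and the mask ratio are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def mutual_head_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Symmetric (mutual) KL divergence between the two detection heads'
    class distributions, so both models learn from each other online."""
    t = temperature
    log_p_s = F.log_softmax(student_logits / t, dim=-1)
    log_p_t = F.log_softmax(teacher_logits / t, dim=-1)
    kl_st = F.kl_div(log_p_s, log_p_t.exp(), reduction="batchmean")
    kl_ts = F.kl_div(log_p_t, log_p_s.exp(), reduction="batchmean")
    return 0.5 * (kl_st + kl_ts) * (t * t)


class MaskedFeatureGeneration(nn.Module):
    """Randomly mask spatial positions of the student's backbone feature map,
    then train a small generation block to reconstruct the teacher's features."""

    def __init__(self, channels, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.generator = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, student_feat, teacher_feat):
        # Binary mask over spatial positions: 1 = keep, 0 = masked out.
        n, _, h, w = student_feat.shape
        keep = (torch.rand(n, 1, h, w, device=student_feat.device)
                > self.mask_ratio).float()
        generated = self.generator(student_feat * keep)
        # The student must recover the full teacher feature map, including
        # regions it never saw, which pushes it toward stronger features.
        return F.mse_loss(generated, teacher_feat)


if __name__ == "__main__":
    # Toy shapes: 4 proposals x 5 classes for the heads, 2x64x32x32 backbone maps.
    s_logits, t_logits = torch.randn(4, 5), torch.randn(4, 5)
    s_feat, t_feat = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
    mfg = MaskedFeatureGeneration(channels=64)
    total = mutual_head_kd_loss(s_logits, t_logits) + mfg(s_feat, t_feat)
    print(float(total))
```

In an online KD setup of this kind, both losses would typically be added to the ordinary detection loss and optimized jointly for the student and teacher, rather than distilling from a frozen teacher.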