Abstract

Adverse weather conditions degrade image quality, leading to a sharp decline in detection accuracy. Most research focuses on object detection in clear weather rather than in adverse weather conditions. Recently, some methods have attempted to reduce the gap between degraded and clean images to improve detection accuracy in adverse weather. Specifically, these methods usually perform image restoration and object detection either sequentially or by joint learning. While they can improve detection accuracy to some extent, image restoration models may introduce noise or artifacts and add computational burden, limiting both the accuracy and the efficiency of object detection in adverse weather conditions. In this paper, we propose a knowledge distillation-based method, Localization-aware Logit Mimicking (LaLM), which improves detection accuracy in adverse weather by reducing the gap between degraded and clean images at the prediction level rather than the image level. Moreover, localization quality is used as the mimicking target to make the knowledge distillation more effective. Experiments on three popular benchmarks (i.e., RTTS, ExDark, and RID) demonstrate that LaLM achieves state-of-the-art detection accuracy and inference speed in foggy, rainy, and low-light conditions. Code is available at: https://github.com/VIPLab-CQU/LaLM.
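As a rough illustration of the prediction-level idea, a logit-mimicking distillation loss weighted by localization quality could be sketched as below. This is only a minimal sketch, not the authors' implementation: the function name, the use of a temperature-softened KL divergence, and the IoU-based quality weight are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def localization_aware_logit_mimicking(student_logits: torch.Tensor,
                                       teacher_logits: torch.Tensor,
                                       loc_quality: torch.Tensor,
                                       tau: float = 2.0) -> torch.Tensor:
    """Mimic a clean-image teacher's class logits with a degraded-image
    student, re-weighting each prediction by its localization quality
    (e.g., IoU of the predicted box with the ground truth, in [0, 1]).

    student_logits, teacher_logits: (N, num_classes)
    loc_quality: (N,)
    """
    # Temperature-softened class distributions.
    p_teacher = F.softmax(teacher_logits / tau, dim=-1)
    log_p_student = F.log_softmax(student_logits / tau, dim=-1)
    # Per-prediction KL divergence between teacher and student.
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=-1)
    # Emphasize well-localized predictions in the mimicking signal;
    # tau**2 rescales gradients as in standard logit distillation.
    return (loc_quality * kl).mean() * tau ** 2
```

In this sketch, predictions with higher localization quality contribute more to the distillation loss, so the student is pushed hardest toward the teacher on boxes that are already well localized.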
