Abstract

Object detection accuracy degrades severely in visually degraded scenes. A natural solution is to first enhance the degraded image and then perform object detection. However, this pipeline is suboptimal and does not necessarily improve detection, because the image enhancement and object detection tasks are optimized separately. To address this problem, we propose an image enhancement guided object detection method, which refines the detection network with an additional enhancement branch in an end-to-end manner. Specifically, the enhancement branch and the detection branch are arranged in parallel, and a feature guided module is designed to connect them: it encourages the shallow features of the input image in the detection branch to be as consistent as possible with those of the enhanced image. Because the enhancement branch is frozen during training, this design uses the features of enhanced images to guide the learning of the detection branch, making the learned detection branch aware of both image quality and object detection. At test time, the enhancement branch and the feature guided module are removed, so no additional computational cost is introduced for detection. Extensive experimental results on underwater, hazy, and low-light object detection datasets demonstrate that the proposed method significantly improves the detection performance of popular detection networks (YOLO v3, Faster R-CNN, DetectoRS) in visually degraded scenes.
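To make the training scheme concrete, below is a minimal PyTorch-style sketch of the idea described above: a frozen enhancement branch runs in parallel with the detection branch, and a feature-consistency term pulls the shallow detection features of the degraded input toward those of the enhanced image. The class and function names (EnhancementGuidedDetector, shallow_stem, lambda_guide), the use of an MSE consistency loss, and the choice to reuse the detection stem for the enhanced image are assumptions for illustration; the paper's actual module structure and loss may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def shallow_stem(in_ch: int = 3, out_ch: int = 32) -> nn.Module:
    # Stand-in for the shallow layers of a detection backbone (assumed structure).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class EnhancementGuidedDetector(nn.Module):
    """Parallel enhancement / detection branches joined by a feature-guidance loss (sketch)."""

    def __init__(self, enhancer: nn.Module):
        super().__init__()
        self.enhancer = enhancer  # pretrained enhancement branch, frozen during training
        for p in self.enhancer.parameters():
            p.requires_grad = False
        self.det_stem = shallow_stem()  # shallow layers of the detection branch
        # ... deeper detection layers and heads would follow in a real detector.

    def forward(self, degraded: torch.Tensor):
        f_det = self.det_stem(degraded)  # shallow features of the raw degraded input
        with torch.no_grad():
            enhanced = self.enhancer(degraded)      # enhanced image, no gradients
            f_target = self.det_stem(enhanced)      # target features from the enhanced image
        guide_loss = F.mse_loss(f_det, f_target)    # feature-guidance (consistency) term
        return f_det, guide_loss


if __name__ == "__main__":
    identity_enhancer = nn.Identity()  # placeholder for a real enhancement network
    model = EnhancementGuidedDetector(identity_enhancer)
    x = torch.randn(2, 3, 256, 256)
    feats, guide_loss = model(x)
    # total_loss = detection_loss + lambda_guide * guide_loss   # joint training objective
    print(feats.shape, guide_loss.item())
```

At inference, only the detection branch (det_stem and the detector built on top of it) would be kept, matching the claim that the enhancement branch and feature guided module add no test-time cost.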
