Abstract

Although deep learning-based object detection methods have achieved superior performance on conventional benchmark datasets, it remains difficult to detect objects in low-resolution (LR) images under diverse degradation conditions. To this end, a two-stage enhancement framework for LR image object detection (TELOD) is proposed. In the first stage, an extremely lightweight task disentanglement enhancement network (TDEN) is developed as a super-resolution (SR) sub-network placed before the detector. In the TDEN, SR images are obtained by applying a recurrent connection between an image restoration branch (IRB) and a resolution enhancement branch (REB) to enhance the input LR images. Specifically, the TDEN reduces the difficulty of image reconstruction by dividing the overall image enhancement task into two sub-tasks, accomplished by the IRB and the REB, respectively. Furthermore, a shared feature extractor is applied across the two sub-tasks to learn common and accurate feature representations. In the second stage, an auxiliary feature enhancement head (AFEH) driven by high-resolution (HR) image priors is designed to improve the task-specific features produced by the detection neck without any extra inference cost. In particular, a feature interaction module is built into the AFEH to integrate features from the enhancement and detection phases and thus learn comprehensive information for detection. Extensive experiments show that the proposed TELOD significantly outperforms existing methods; specifically, TELOD achieves mAP improvements of 1.8% and 3.3% over the second-best method, AERIS, on the degraded VOC and COCO datasets, respectively.
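The two-stage dataflow described above can be illustrated with a toy sketch: a shared extractor feeds two branches, the restoration branch (IRB) is applied recurrently to the LR input, and the resolution branch (REB) produces the upscaled output that would then be passed to the detector. All function bodies here are placeholder stand-ins (a mean filter, a blend, and nearest-neighbour upsampling) chosen only to show the wiring; the paper's actual TDEN branches are learned networks, and the exact recurrence scheme is an assumption.

```python
import numpy as np

def shared_extractor(img):
    # Stand-in for the TDEN's shared feature extractor: a 3x3 mean filter.
    # (Assumption: the real extractor is a learned CNN shared by both branches.)
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def irb(feat, img):
    # Image restoration branch (toy): blend shared features back into the
    # image to suppress degradations such as noise or blur.
    return 0.5 * img + 0.5 * feat

def reb(feat):
    # Resolution enhancement branch (toy): 2x nearest-neighbour upsampling.
    return np.repeat(np.repeat(feat, 2, axis=0), 2, axis=1)

def tden(lr_img, steps=2):
    # Recurrently apply the restoration sub-task, then run the resolution
    # sub-task once on the restored result (one possible recurrence scheme).
    x = lr_img
    for _ in range(steps):
        x = irb(shared_extractor(x), x)
    return reb(shared_extractor(x))

lr = np.random.rand(16, 16)
sr = tden(lr)
print(sr.shape)  # (32, 32) -- SR output fed to the downstream detector
```

The sketch makes the task disentanglement concrete: restoration and upscaling are separate callables sharing one feature extractor, so each sub-task only has to solve part of the overall enhancement problem.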
