Abstract

Low-light environments are part of everyday life but pose significant challenges for object detection: the low brightness, noise, and insufficient illumination of the acquired images degrade a model's detection performance. Unlike recent studies that rely mainly on supervised learning, this paper proposes LIDA-YOLO, an approach for unsupervised domain adaptation of low-illumination object detectors. The model extends YOLOv3 by treating normal-illumination images as the source domain and low-illumination images as the target domain, achieving detection in low-illumination images through an unsupervised learning strategy. Specifically, multi-scale local feature alignment and global feature alignment modules are proposed to align the overall attributes of the images, thereby reducing feature biases arising from differences in background, scene, and target layout. On the ExDark dataset, LIDA-YOLO achieves the highest mAP of 56.65% among several current state-of-the-art unsupervised domain adaptation object detection methods, improving on I3Net by 4.04% and on OSHOT by 6.5%. It also improves on the supervised baseline YOLOv3 by 2.7%. Overall, the proposed LIDA-YOLO model requires fewer labeled samples and generalizes better than previous works.
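The abstract does not give implementation details of the alignment modules, but the general idea of unsupervised feature alignment between a source and a target domain is commonly realized with a gradient-reversal layer and per-scale domain classifiers. The following is a minimal PyTorch sketch of that idea, assuming the three feature scales of a YOLOv3-style backbone; all class names, channel counts, and the loss weighting are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients in backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class ScaleDomainClassifier(nn.Module):
    """Predicts source (0) vs. target (1) domain from one feature scale."""
    def __init__(self, channels, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(
            nn.Conv2d(channels, 256, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(256, 1),
        )

    def forward(self, feat):
        feat = GradientReversal.apply(feat, self.lambd)
        return self.net(feat)

# Hypothetical setup: one classifier per YOLOv3 feature scale (channel counts assumed).
classifiers = nn.ModuleList(ScaleDomainClassifier(c) for c in (256, 512, 1024))
bce = nn.BCEWithLogitsLoss()

def alignment_loss(source_feats, target_feats):
    """Sum of domain-classification losses over all scales. Because of the
    reversed gradients, minimizing this loss pushes the shared backbone
    toward domain-invariant (illumination-invariant) features."""
    loss = 0.0
    for clf, fs, ft in zip(classifiers, source_feats, target_feats):
        loss = loss + bce(clf(fs), torch.zeros(fs.size(0), 1))
        loss = loss + bce(clf(ft), torch.ones(ft.size(0), 1))
    return loss
```

In such a setup, the alignment loss would be added to the standard detection loss computed on labeled source-domain images, while the unlabeled target-domain (low-illumination) images contribute only through the alignment term.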
