Abstract

Low-light object detection is an important but challenging problem in computer vision. To address the insufficient illumination and noise interference that object detection faces in low-light environments, this paper presents NLE-YOLO, a low-light object detection network based on YOLOv5. The network first preprocesses the input image with an enhancement technique, then suppresses high-frequency noise and strengthens essential information with C2fLEFEM, a novel feature extraction module. We also designed a multi-scale feature extraction module, AMC2fLEFEM, and an attention-mechanism receptive field module, AMRFB, which extract features at multiple scales and enlarge the receptive field. Specifically, the C2fLEFEM module merges the LEF and FEM modules on top of the C2f module: the LEF module applies a low-frequency filter to remove high-frequency noise, the FEM module fuses the low-frequency-enhanced features with the original features through dual inputs, and the C2f module uses a gradient-retention strategy to minimize information loss. The AMC2fLEFEM module incorporates the SimAM attention mechanism and exploits pixel-level relationships to obtain features from different receptive fields, adapt to brightness changes, capture the difference between target and background, improve the network's feature extraction capability, and effectively reduce the impact of noise. The AMRFB module employs atrous convolution to enlarge the receptive field, preserve global information, and adapt to targets of various scales. Finally, we replaced the original YOLOv5 detection head with a decoupled head suited to low-light settings. Experiments on the ExDark dataset show that our method outperforms previous methods in detection accuracy and overall performance.
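The abstract states that AMC2fLEFEM incorporates the SimAM attention mechanism to weight pixels by their relationship to per-channel statistics. For reference, the sketch below shows a standard, parameter-free SimAM block in PyTorch; it is not the authors' implementation, and how the block is wired into C2fLEFEM is not specified in the abstract, so the class name and the default `e_lambda` value are assumptions.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free SimAM attention (sketch): each activation is reweighted
    by an energy-based importance score derived from per-channel spatial
    statistics. e_lambda is a small regularizer; its value here is assumed."""
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W); n = number of spatial positions minus one
        n = x.shape[2] * x.shape[3] - 1
        # squared deviation from the per-channel spatial mean
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        # per-channel spatial variance estimate
        v = d.sum(dim=(2, 3), keepdim=True) / n
        # inverse energy: more distinctive activations receive larger weights
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * torch.sigmoid(e_inv)
```

In practice such a block is dropped in after a convolutional stage (e.g. `SimAM()(features)`); it adds no learnable parameters, which is consistent with the abstract's goal of suppressing noise without extra cost.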
