Abstract

At night or in other low-illumination environments, optical imaging devices cannot accurately capture detail and color information because fewer photons are captured and the signal-to-noise ratio is low. Consequently, the image is noisy, with low contrast and inaccurate color, which degrades human visual perception and creates significant challenges for computer vision tasks. Low-light image enhancement therefore has great research value, as it aims to reduce image noise and improve image quality. In this study, we propose an LBP-based progressive feature aggregation network (P-FANet) for low-light image enhancement. The LBP feature is insensitive to illumination and contains rich texture information. We feed the LBP feature into each iteration of the network alongside the image features, which helps restore fine details of the low-light image. First, the low-light image is passed through a dual attention mechanism module to extract global features. Second, the extracted features enter the feature aggregation module (FAM) for feature fusion. Third, a recurrent layer shares the features extracted at different stages, and a residual layer extracts deeper features. Finally, the enhanced image is output. Ablation experiments verify the rationality of the proposed design. Extensive experimental results show that, compared with many other advanced methods, the proposed method offers clear advantages in both subjective and objective evaluations.
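The abstract does not include implementation details of P-FANet; as a point of reference for the texture descriptor it relies on, the following is a minimal NumPy sketch of the classic 3x3, 8-neighbor LBP operator (each neighbor is compared against the center pixel and the comparison results are packed into an 8-bit code). The paper may use a different LBP variant, and the function name lbp_map is ours, not the authors'.

    import numpy as np

    def lbp_map(gray):
        """Classic 3x3, 8-neighbor LBP code for each interior pixel.

        gray: 2D array (H, W), a grayscale image.
        Returns an (H-2, W-2) array of 8-bit LBP codes in [0, 255].
        """
        c = gray[1:-1, 1:-1]  # center pixels
        # Neighbors enumerated clockwise starting from the top-left.
        neighbors = [
            gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
            gray[1:-1, 2:], gray[2:, 2:],    gray[2:, 1:-1],
            gray[2:, :-2],  gray[1:-1, :-2],
        ]
        codes = np.zeros_like(c, dtype=np.uint8)
        for bit, n in enumerate(neighbors):
            # Each neighbor contributes one bit: 1 if it is >= the center.
            codes |= ((n >= c).astype(np.uint8) << bit)
        return codes

Because the code depends only on local intensity orderings rather than absolute brightness, it is largely insensitive to illumination changes, which is the property the abstract exploits when injecting LBP features into each iteration of the network.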
