Abstract

In realistic low-light environments, images captured by imaging devices often suffer from low brightness, low contrast, severe loss of detail, and heavy noise, which poses major challenges for computer vision tasks. Low-light image enhancement effectively improves overall image quality and is therefore of significant practical value. In this study, an attention-based multi-channel feature fusion enhancement network (M-FFENet) is proposed to process low-light images. In this network, a feature extraction model first obtains deep features from the downsampled low-light image and fits them to an affine bilateral grid. Second, attention-based residual dense blocks (ARDB) allow the network to attend to finer details and spatial information, while all color channels are taken into account. The channel features and the bilateral grid are then linearly interpolated by the feature reconfiguration model (FRM) to obtain high-quality features containing rich color and texture information. Next, a feature fusion module (FFM) fuses the features carrying different information, and an enhancement model further recovers texture and detail in the image. Finally, the enhanced image is output. Extensive experiments show that the proposed method outperforms competing methods both quantitatively and qualitatively.
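The affine bilateral grid mentioned above is typically applied by "slicing": each grid cell stores the coefficients of a per-pixel affine color transform, and at full resolution the coefficients are linearly interpolated by spatial position and intensity, then applied to the pixel's RGB value. The sketch below illustrates this general mechanism only; the grid dimensions, the 3x4 affine parameterization, and all function names are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of affine bilateral-grid slicing (not the paper's exact code).
# grid[y][x][z] holds a flattened 3x4 affine matrix (12 floats); (y, x) are
# normalized spatial coordinates and z is a normalized intensity (luma) value.

def lerp(a, b, t):
    """Element-wise linear interpolation between two coefficient vectors."""
    return [ai + (bi - ai) * t for ai, bi in zip(a, b)]

def slice_grid(grid, gy, gx, gz, y, x, z):
    """Trilinearly interpolate a 12-float affine matrix from a
    grid of shape [gy][gx][gz][12] at continuous coords (y, x, z) in [0, 1]."""
    fy, fx, fz = y * (gy - 1), x * (gx - 1), z * (gz - 1)
    y0, x0, z0 = int(fy), int(fx), int(fz)
    y1, x1, z1 = min(y0 + 1, gy - 1), min(x0 + 1, gx - 1), min(z0 + 1, gz - 1)
    ty, tx, tz = fy - y0, fx - x0, fz - z0
    c00 = lerp(grid[y0][x0][z0], grid[y0][x0][z1], tz)
    c01 = lerp(grid[y0][x1][z0], grid[y0][x1][z1], tz)
    c10 = lerp(grid[y1][x0][z0], grid[y1][x0][z1], tz)
    c11 = lerp(grid[y1][x1][z0], grid[y1][x1][z1], tz)
    return lerp(lerp(c00, c01, tx), lerp(c10, c11, tx), ty)

def apply_affine(coeffs, rgb):
    """Apply a row-major 3x4 affine color transform to an RGB triple."""
    r, g, b = rgb
    return [coeffs[4 * i] * r + coeffs[4 * i + 1] * g
            + coeffs[4 * i + 2] * b + coeffs[4 * i + 3] for i in range(3)]

# Demo with an identity grid: every cell holds the identity affine matrix,
# so slicing and applying it leaves the pixel color unchanged.
IDENTITY = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
gy = gx = gz = 2
grid = [[[list(IDENTITY) for _ in range(gz)] for _ in range(gx)]
        for _ in range(gy)]

pixel = [0.2, 0.5, 0.8]
luma = sum(pixel) / 3.0
out = apply_affine(slice_grid(grid, gy, gx, gz, 0.3, 0.7, luma), pixel)
```

In a learned setting the grid entries would come from the network's feature extraction branch rather than being identity matrices; the interpolation step is what allows a coarse, downsampled grid to drive a full-resolution enhancement.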
