Abstract

The central challenge in multi-focus image fusion is accurately preserving the complementary information of the source images in the fused result. However, existing datasets often lack ground-truth images, which prevents full-reference loss functions (such as SSIM) from participating effectively in model training and, in turn, degrades the preservation of source-image details. To address this issue, this paper proposes DCD, an unsupervised dual-channel dense convolutional method for multi-focus image fusion. DCD introduces Patch processing blocks designed specifically for the fusion task: they segment the source image pairs into equally sized patches and evaluate the information in each patch to produce a reconstructed image and a set of adaptive weight coefficients. The reconstructed image serves as the reference image, enabling the unsupervised method to employ full-reference loss functions during training and overcoming the lack of labeled data in the training set. Furthermore, because the human visual system (HVS) is more sensitive to brightness than to color, DCD trains the dual-channel network on both the RGB images and their luminance components. The network can thus focus on brightness information while preserving the color and gradient details of the source images, yielding fused images that are better matched to the HVS. The adaptive weight coefficients obtained from the Patch processing blocks are also used to control how much of each source image's brightness information is retained. Finally, comparative experiments on several datasets demonstrate that DCD produces fused images of higher quality than competing methods.
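As an illustration only, since the abstract does not specify the exact evaluation rule inside the Patch processing blocks, the Python sketch below shows one plausible way such a block could build a pseudo reference image and adaptive weights from a pair of source images. The patch size, the variance-of-luminance focus measure, and all function names are assumptions, not the paper's actual implementation.

```python
import numpy as np

def patch_reference(img_a, img_b, patch=16):
    """Hypothetical sketch of a Patch processing step: split two aligned RGB
    source images into equally sized patches, score each patch's information
    content (variance of luminance is assumed here as a stand-in focus
    measure), and keep the sharper patch to assemble a pseudo reference
    image plus a per-patch adaptive weight map."""
    def luminance(rgb):
        # ITU-R BT.601 luma approximation of the Y (brightness) component
        return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

    h, w = img_a.shape[:2]
    ref = img_a.copy()                                # reconstructed reference
    weights = np.zeros((h // patch, w // patch))      # adaptive weights toward A

    for i in range(0, (h // patch) * patch, patch):
        for j in range(0, (w // patch) * patch, patch):
            pa = img_a[i:i + patch, j:j + patch]
            pb = img_b[i:i + patch, j:j + patch]
            sa = luminance(pa).var()                  # assumed focus score, image A
            sb = luminance(pb).var()                  # assumed focus score, image B
            weights[i // patch, j // patch] = sa / (sa + sb + 1e-12)
            ref[i:i + patch, j:j + patch] = pa if sa >= sb else pb
    return ref, weights
```

Under this sketch, `ref` would play the role of the reference image for full-reference losses such as SSIM, and `weights` would modulate how strongly each source image's brightness is preserved in the corresponding region.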
