Abstract
For multi-focus image fusion, existing deep learning-based methods cannot effectively learn the texture features and semantic information of the source images to generate high-quality fused images. We therefore develop a new adaptive feature concatenate attention network, named AFCANet, which adaptively learns cross-layer features and retains the texture features and semantic information of the source images to generate visually appealing, fully focused images. AFCANet uses an encoder-decoder network as its backbone. In the unsupervised training stage, we design an adaptive cross-layer skip-connection scheme and build a cross-layer adaptive coordinate attention module that extracts meaningful information from the image while suppressing unimportant information, yielding a better fusion effect. In addition, we introduce an effective channel attention module between the encoder and decoder to fully exploit the encoder output and accelerate network convergence. In the inference stage, we apply a pixel-based spatial frequency fusion rule to fuse the adaptive features learned by the encoder, which successfully combines the texture and semantic information of the images and produces a more precise decision map. Extensive experiments on public datasets and the HBU-CVMDSP dataset show that AFCANet effectively improves the accuracy of the decision map in focused and defocused regions and better retains the abundant details and edge features of the source images.
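As a rough illustration of the inference-stage fusion rule described above (not the paper's exact implementation), the sketch below computes per-pixel spatial frequency SF = sqrt(RF² + CF²) over a local window, derives a binary decision map by comparing the two sources' activity, and selects pixels accordingly. The function names, the window size, and the use of single-channel activity maps are assumptions made for the sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_frequency(feat, win=7):
    """Per-pixel spatial frequency of a 2-D activity/feature map.

    SF = sqrt(RF^2 + CF^2), where RF and CF are local RMS values of the
    horizontal and vertical first differences over a win x win window.
    Window size 7 is an assumption, not a value from the paper.
    """
    feat = feat.astype(np.float64)
    rf2 = np.zeros_like(feat)
    cf2 = np.zeros_like(feat)
    rf2[:, 1:] = (feat[:, 1:] - feat[:, :-1]) ** 2   # row-frequency term
    cf2[1:, :] = (feat[1:, :] - feat[:-1, :]) ** 2   # column-frequency term
    # local mean of the squared differences over the window
    return np.sqrt(uniform_filter(rf2, size=win) + uniform_filter(cf2, size=win))

def decision_map(feat_a, feat_b, win=7):
    """Binary map: 1 where source A has higher spatial frequency, else 0."""
    return (spatial_frequency(feat_a, win) > spatial_frequency(feat_b, win)).astype(np.uint8)

def fuse(img_a, img_b, dmap):
    """Pixel-wise selection of the fused image from the decision map."""
    mask = dmap.astype(bool)
    if img_a.ndim == 3:           # color images: broadcast the mask over channels
        mask = mask[..., None]
    return np.where(mask, img_a, img_b)
```

In AFCANet the comparison is applied to the adaptive features produced by the encoder rather than to raw pixel intensities, so in practice `feat_a` and `feat_b` would be per-source activity maps aggregated from those features; the pixel-level version shown here only demonstrates the spatial frequency criterion itself.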