Abstract

Existing multi-focus image fusion (MFIF) methods struggle to achieve satisfactory fusion performance and speed simultaneously. Spatial-domain methods have difficulty determining the focus/defocus boundary (FDB), while transform-domain methods tend to damage the content of the source images. Moreover, deep learning-based MFIF methods usually suffer from low speed due to complex models and large numbers of learnable parameters. To address these issues, we propose a multi-domain lightweight network (MLNet) for MFIF that achieves competitive results in both performance and speed. The proposed MLNet mainly comprises three modules: focus extraction (FE), focus measure (FM), and image fusion (IF). In the interpretable FE module, the image features extracted by a discrete cosine transform-based convolution (DCTConv) and a local binary pattern-based convolution (LBPConv) are concatenated and fed into the FM module. DCTConv, operating in the transform domain, uses DCT coefficients to construct a fixed convolution kernel without parameter learning, which effectively captures the high- and low-frequency content of the image. LBPConv, operating in the spatial domain, extracts structural features and gradient information from the source images. In the FM module, a 3-layer 1×1 convolution with few learnable parameters generates the initial decision map and accepts inputs of flexible size. The fused image is then produced by the IF module according to the final decision map. Extensive experiments on three public datasets show that the proposed method outperforms existing state-of-the-art methods in both quantitative and qualitative evaluations. In addition, MLNet contains only 0.01M parameters, 0.2% of those in the first CNN-based MFIF method [25].
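
The abstract does not give implementation details, but the two key ideas it names, a fixed DCT-coefficient convolution kernel with no learnable parameters and a 3-layer 1×1 convolution that produces a decision map, can be illustrated with a minimal PyTorch sketch. The kernel size (8×8), channel counts, module name `FocusMeasure`, and the omission of the LBPConv branch are assumptions for illustration only and are not taken from the paper.

```python
import math
import torch
import torch.nn as nn

def dct_kernel(n=8):
    """Build an n*n 2-D DCT-II basis as fixed convolution filters.
    Kernel size and normalization are assumptions; the paper's exact
    DCTConv configuration may differ."""
    basis = torch.zeros(n * n, 1, n, n)
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
            for x in range(n):
                for y in range(n):
                    basis[u * n + v, 0, x, y] = (
                        cu * cv
                        * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                        * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    )
    return basis

class FocusMeasure(nn.Module):
    """Hypothetical FM head: 3-layer 1x1 convolution mapping features
    to a single-channel decision map; works for any spatial size."""
    def __init__(self, in_ch, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# DCTConv sketch: DCT coefficients loaded into a frozen conv kernel,
# so this branch contributes no learnable parameters.
dct_conv = nn.Conv2d(1, 64, kernel_size=8, padding=4, bias=False)
with torch.no_grad():
    dct_conv.weight.copy_(dct_kernel(8))
dct_conv.weight.requires_grad_(False)

img = torch.rand(1, 1, 256, 256)              # grayscale source image
freq_feat = dct_conv(img)                      # high/low-frequency responses
# An LBPConv branch would be concatenated with freq_feat here (omitted).
decision = FocusMeasure(in_ch=64)(freq_feat)   # initial decision map in [0, 1]
```

In this sketch, only the 1×1 convolutions carry trainable weights, which is consistent with the abstract's claim of a very small parameter budget (about 0.01M parameters).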
