Abstract

Multifocus image fusion is an important technique that aims to generate a single all-in-focus image by fusing multiple input images of the same scene. In this paper, we propose a novel multilevel features convolutional neural network (MLFCNN) architecture for image fusion. In the MLFCNN model, all features learned from previous layers are passed to the subsequent layer. Inside every path between a previous layer and the subsequent layer, we add a 1 × 1 convolution module to reduce redundancy. In our method, the source images are first fed to the pre-trained MLFCNN model to obtain an initial focus map. The initial focus map is then refined by morphological opening and closing operations, followed by a Gaussian filter, to obtain the final decision map. Finally, the fused all-in-focus image is generated by a weighted-sum strategy based on the decision map. The experimental results demonstrate that the proposed method outperforms several state-of-the-art image fusion algorithms in terms of both qualitative and objective evaluations.
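
The post-processing and fusion stage described above can be illustrated with a minimal sketch, assuming two source images and a network-produced focus map with values in [0, 1]. The threshold, kernel size, Gaussian sigma, and the use of OpenCV are illustrative assumptions, not values taken from the paper.

```python
import cv2
import numpy as np

def fuse_with_focus_map(img_a, img_b, focus_map, ksize=5, sigma=2.0):
    """Refine an initial focus map and fuse two source images.

    img_a, img_b : float32 source images in [0, 1], same shape (H, W[, C]).
    focus_map    : float32 initial focus map in [0, 1], shape (H, W);
                   values near 1 mean img_a is in focus at that pixel.
    """
    # Binarize the initial focus map (0.5 threshold is an assumed choice).
    binary = (focus_map > 0.5).astype(np.uint8)

    # Morphological opening then closing removes small specks and holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    refined = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    refined = cv2.morphologyEx(refined, cv2.MORPH_CLOSE, kernel)

    # Gaussian filtering softens region boundaries to form the decision map.
    decision = cv2.GaussianBlur(refined.astype(np.float32), (0, 0), sigma)
    decision = np.clip(decision, 0.0, 1.0)

    # Weighted-sum fusion with the decision map as per-pixel weights.
    if img_a.ndim == 3:  # broadcast weights over color channels
        decision = decision[..., None]
    fused = decision * img_a + (1.0 - decision) * img_b
    return fused
```

The soft (Gaussian-filtered) decision map lets the weighted sum blend smoothly across focus boundaries instead of producing hard seams between the two source images.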
