Abstract

Multi-focus image fusion is an important approach to obtaining a composite image with all objects in focus, and it can be treated as an image segmentation problem solvable by convolutional neural networks (CNNs). For CNN-based multi-focus image fusion methods, no public training dataset exists, and the network model determines the recognition accuracy of focused and defocused pixels. Considering these problems, we proposed a novel CNN-based multi-focus image fusion method that combines simplified very deep convolutional networks with a patch-based sequential reconstruction strategy. First, defocused images with five blur levels were simulated with a Gaussian filter, and a novel training dataset was constructed for multi-focus image fusion. Second, the very deep convolutional network model was simplified into a Siamese CNN model, which was used to recognize focused and defocused pixels. Third, focused and defocused regions were detected by the patch-based sequential reconstruction strategy, and the final decision map was refined by a morphological operator. Finally, multi-focus image fusion was performed. The public Lytro multi-focus image dataset was used to validate the proposed method. Information entropy, mutual information, the universal image quality index, visual information fidelity, and edge retention were adopted as evaluation metrics, and the proposed method was compared with state-of-the-art methods. Experimental results demonstrated that the proposed method achieves state-of-the-art fusion results in terms of both visual quality and objective assessment.
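The training-data construction step described above, simulating defocused images at five blur levels with a Gaussian filter, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the sigma values and image size are assumptions chosen for demonstration.

```python
# Hypothetical sketch of the defocus-simulation step: one sharp image is
# blurred with a Gaussian filter at five increasing sigma values to produce
# five defocus levels. The sigmas below are illustrative, not the paper's.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_defocus_levels(image, sigmas=(1.0, 2.0, 3.0, 4.0, 5.0)):
    """Return one blurred copy of `image` per Gaussian sigma (blur level)."""
    image = image.astype(np.float64)
    return [gaussian_filter(image, sigma=s) for s in sigmas]

# Example: a synthetic sharp image and its five simulated defocused versions.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = simulate_defocus_levels(sharp)
print(len(blurred))  # five blur levels
```

Sharp/blurred pairs of this kind could then supply the focused and defocused patches needed to train the Siamese CNN.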
