The objective of multi-focus image fusion (MFIF) is to generate a fully focused image by integrating multiple partially focused source images. Most existing methods do not fully consider the local gradient variation rate of the source images, which makes it difficult to accurately distinguish small defocused (focused) regions within large focused (defocused) regions. In addition, these methods cause edge blurring because they do not account for misregistration of the source images. To address these issues, in this paper we propose a simple and effective multi-focus image fusion framework based on a multi-channel Rybak neural network (MCRYNN) model. Specifically, the proposed MCRYNN model is highly sensitive to local gradient changes through its input receptive fields, allowing it to process multiple source images in parallel and extract features of focused regions. Moreover, the proposed method accurately generates decision maps by exploiting the information interaction of the parallel network structure for the multi-focus image fusion task. Finally, we conduct qualitative and quantitative experiments on public datasets, and the results show that the proposed method outperforms state-of-the-art methods.
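The abstract does not specify the MCRYNN architecture itself; as a minimal point of reference for the decision-map paradigm it builds on, the sketch below fuses two grayscale sources by comparing a local gradient-energy focus measure and selecting pixels from the sharper source. The function names (`focus_measure`, `fuse`) and the Sobel/Gaussian/morphology choices are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy import ndimage

def focus_measure(img, sigma=1.0):
    """Local gradient energy as a simple focus measure (higher = sharper).
    This is a generic stand-in, not the MCRYNN gradient response."""
    gx = ndimage.sobel(img, axis=0, mode="reflect")
    gy = ndimage.sobel(img, axis=1, mode="reflect")
    energy = gx ** 2 + gy ** 2
    # Aggregate gradient energy over a local neighborhood.
    return ndimage.gaussian_filter(energy, sigma=sigma)

def fuse(src_a, src_b, sigma=1.0):
    """Fuse two grayscale source images via a binary decision map."""
    fa = focus_measure(src_a, sigma)
    fb = focus_measure(src_b, sigma)
    decision = fa >= fb  # True where src_a appears sharper
    # Light morphological cleanup to suppress isolated misclassified pixels.
    decision = ndimage.binary_opening(decision, iterations=2)
    decision = ndimage.binary_closing(decision, iterations=2)
    fused = np.where(decision, src_a, src_b)
    return fused, decision
```

In this generic scheme the decision map is computed per pixel pair; the paper's contribution, by contrast, is to let a parallel multi-channel network derive the map from gradient-sensitive receptive fields across all sources jointly.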