Abstract

Multi-focus image fusion is an important branch of image processing, and many methods have been developed from different perspectives to address it. Among them, sparse representation (SR)-based and convolutional neural network (CNN)-based fusion methods are widely used. The SR-based model fuses source image patches and is essentially a local method with a nonlinear fusion rule. The CNN-based method, in contrast, learns a decision map that directly maps the source images to the fused result, so the fusion is global with a linear fusion rule. Combining the advantages of these two approaches, a novel fusion method that applies a CNN to assist SR is proposed in order to obtain a fused image with more precise and abundant information. In the proposed method, source image patches are fused based on SR and a new weight obtained from the CNN. Experimental results demonstrate that the proposed method clearly outperforms the SR and CNN methods, as well as other state-of-the-art methods, in terms of both visual perception and objective evaluation metrics, while greatly reducing computational complexity.
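As a rough illustration of the combined rule described above (a sketch, not the authors' exact implementation), the snippet below assumes that the sparse coefficient vectors of a pair of corresponding source patches have already been computed (e.g., by OMP over a learned dictionary) and that the CNN decision-map value w for that patch location is available; the patch whose CNN-weighted l1-norm activity is larger is kept. All names and numeric values are illustrative.

    import numpy as np

    def fuse_patch(patch_a, patch_b, alpha_a, alpha_b, w):
        """Choose between two corresponding source patches.

        patch_a, patch_b : flattened patches from the two source images
        alpha_a, alpha_b : their sparse coefficient vectors (assumed to be
                           precomputed, e.g. by OMP over a shared dictionary)
        w                : CNN decision-map value in [0, 1] for this patch
                           location; w close to 1 means source A is in focus
        """
        # Activity level = CNN weight imposed on the l1-norm of the sparse vector.
        activity_a = w * np.sum(np.abs(alpha_a))
        activity_b = (1.0 - w) * np.sum(np.abs(alpha_b))
        return patch_a if activity_a >= activity_b else patch_b

    # Toy usage with made-up numbers (8x8 patches, flattened).
    patch_a = np.full(64, 0.8)
    patch_b = np.full(64, 0.3)
    alpha_a = np.array([1.2, 0.0, -0.4])   # illustrative sparse codes
    alpha_b = np.array([0.2, 0.1, 0.0])
    fused_patch = fuse_patch(patch_a, patch_b, alpha_a, alpha_b, w=0.7)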

Highlights

  • Multi-focus image fusion is a significant branch of image processing [1,2,3]

  • The highlights of the hybrid method based on sparse representation (SR) and a convolutional neural network (CNN) include: (1) sorting the image patches according to the CNN model reduces the computational complexity of SR [24,25,26] (a minimal sketch of this screening step follows the highlights); (2) the pixel values of the decision map obtained from the CNN model are imposed on the norm of the sparse vectors, which measures the activity level of the source image patches more accurately and fully exploits the strong spatial correlation between patches; (3) SR can handle the junction areas between in-focus and out-of-focus regions that a black-box CNN cannot handle properly, making the patches in the junction area interpretable; and (4) SR performs a nonlinear fusion of the patches at the junction of the in-focus and out-of-focus areas

  • We propose a multi-focus image fusion method based on CNN and SR
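One way to read highlight (1) is that patches whose CNN decision-map values are confidently near 0 or 1 can bypass the expensive sparse coding stage altogether, so only the ambiguous patches around the in-focus/out-of-focus junction are passed to the SR-based rule. The sketch below is a minimal illustration of such a screening step; the thresholds low and high are hypothetical and not taken from the paper.

    import numpy as np

    def split_patches_by_confidence(weights, low=0.1, high=0.9):
        """Screen patch indices by the CNN decision map before running SR.

        weights : 1-D array of per-patch decision-map values in [0, 1]
        low/high: hypothetical confidence thresholds (illustrative only)

        Returns (take_from_a, take_from_b, needs_sr):
          weight >= high       -> copy the patch from source A directly
          weight <= low        -> copy the patch from source B directly
          low < weight < high  -> junction patch, fuse with the SR-based rule
        """
        weights = np.asarray(weights)
        take_from_a = np.where(weights >= high)[0]
        take_from_b = np.where(weights <= low)[0]
        needs_sr = np.where((weights > low) & (weights < high))[0]
        return take_from_a, take_from_b, needs_sr

    # Example: only 2 of the 6 patches need the costly sparse coding step.
    w = np.array([0.98, 0.95, 0.55, 0.40, 0.05, 0.02])
    a_idx, b_idx, sr_idx = split_patches_by_confidence(w)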


Summary



Introduction
Sparse Representation
CNN-Based Image Fusion Method
Complementarity of the Two Methods
Proposed Fusion Algorithm
CNN-Based Weight Map Generation
Fusion of Image Patches Based on the New SR
Fast Image Fusion Based on Patches
Experiments
Source Images
Evaluation Metrics
Parameter Settings
The Compared Methods
Computational Complexity Analysis
Validity of the Proposed Fusion Method
Fusion of Multi-Focus Color Images
Conclusions
