Abstract

Multi-focus image fusion is used in image processing to generate an all-in-focus image with a large depth of field (DOF) from the original multi-focus source images. Different approaches have been applied in the spatial and transform domains to fuse multi-focus images. As one of the most popular image processing methods, dictionary-learning-based sparse representation achieves great performance in multi-focus image fusion. Most existing dictionary-learning-based multi-focus image fusion methods use the whole source images directly for dictionary learning. However, using the whole source images incurs a high error rate and a high computation cost in the dictionary learning process. This paper proposes a novel stochastic coordinate coding-based image fusion framework integrated with local density peaks. The proposed multi-focus image fusion method consists of three steps. First, the source images are split into small image patches, and the patches are classified into a small number of groups by local density peaks clustering. Second, the grouped image patches are used for sub-dictionary learning by stochastic coordinate coding, and the trained sub-dictionaries are combined into one dictionary for sparse representation. Finally, the simultaneous orthogonal matching pursuit (SOMP) algorithm is used to compute the sparse coefficients, and the obtained coefficients are fused following the max-L1-norm rule. The fused coefficients are inversely transformed into an image using the learned dictionary. The results and analyses of comparison experiments demonstrate that the fused images of the proposed method have higher quality than those of existing state-of-the-art methods.
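To make the final fusion step concrete, the following minimal Python sketch applies the max-L1-norm rule to two sets of per-patch sparse coefficients and maps the fused codes back to patches through the learned dictionary. The array shapes, function names, and the `patch_means` term (re-adding patch means on the assumption that means were subtracted before sparse coding, a common pre-processing step) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def fuse_max_l1(coeffs_a, coeffs_b):
    """Max-L1-norm fusion of sparse codes.

    coeffs_a, coeffs_b: (n_atoms, n_patches) sparse coefficient matrices
    of the two source images, coded over the same learned dictionary
    (e.g. by SOMP). For each patch, the coefficient vector with the
    larger L1 norm is assumed to come from the better-focused source.
    """
    l1_a = np.abs(coeffs_a).sum(axis=0)   # per-patch L1 norms, shape (n_patches,)
    l1_b = np.abs(coeffs_b).sum(axis=0)
    # Broadcasting selects whole coefficient columns, not single entries.
    return np.where(l1_a >= l1_b, coeffs_a, coeffs_b)

def reconstruct_patches(dictionary, fused_coeffs, patch_means):
    """Inverse transform: map the fused codes back to vectorised patches.

    dictionary:   (patch_dim, n_atoms) learned dictionary
    fused_coeffs: (n_atoms, n_patches) output of fuse_max_l1
    patch_means:  (1, n_patches) hypothetical mean term, assuming patch
                  means were removed before sparse coding
    """
    return dictionary @ fused_coeffs + patch_means
```

Reassembling the reconstructed patch columns at their original positions (averaging overlapping pixels) would then yield the fused image.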

Highlights

  • High-quality images are widely used in many areas of modern society

  • This paper proposes a novel stochastic coordinate coding (SCC)-based image fusion framework integrated with local density peaks clustering

  • The source images are fused by Laplacian energy (LE), discrete wavelet transform (DWT), dual-tree complex wavelet transform (DT-CWT), curvelet transform (CVT), non-subsampled contourlet transform (NSCT), sparse representation with a fixed DCT dictionary (SR-DCT), SR-KSVD, and the proposed method to obtain an all-in-focus image, and the corresponding fusion results are shown in Figures 6, 7, and 8c–j, respectively

Summary

Introduction

High-quality images are widely used in many areas of modern society. Yang and Li [21] first applied sparse representation theory to the image fusion field and proposed a multi-focus image fusion method with an MST dictionary. Nejati and Samavi proposed a K-SVD dictionary-learning-based sparse representation for the decision-map construction of multi-focus fusion [6]. However, these sparse-representation-based methods do not take into account the high computation costs of dictionary learning algorithms such as K-SVD and online dictionary learning. The contributions of this paper are twofold: (1) an integrated sparse representation framework for multi-focus image fusion is proposed that combines local-density-peaks-based image-patch clustering with stochastic coordinate coding; (2) an SCC-based dictionary construction method is proposed and applied to the sparse representation process, which obtains a more accurate dictionary and decreases the computation cost of dictionary learning; a sketch of the SCC update loop is given below. The rest of this paper is structured as follows: Section 2 presents and specifies the proposed framework; Section 3 simulates the proposed solutions and analyzes the experiment results; and Section 4 concludes this paper.
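The sketch below illustrates the core loop of stochastic coordinate coding for one patch cluster: a few coordinate-descent passes update each sample's lasso code, after which only the dictionary atoms with non-zero coefficients are updated by stochastic gradient descent and re-normalised. The hyper-parameters (`lam`, `eta`, `n_epochs`, `cd_steps`) and the random initialisation are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def soft_threshold(v, lam):
    # Shrinkage operator for the lasso coordinate update.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def scc_learn_subdictionary(patches, n_atoms, lam=0.1, eta=0.01,
                            n_epochs=5, cd_steps=3, seed=0):
    """Minimal sketch of stochastic coordinate coding for one patch cluster.

    patches: (n_samples, patch_dim) vectorised patches of one cluster.
    Returns a (patch_dim, n_atoms) dictionary with unit-norm columns.
    """
    rng = np.random.default_rng(seed)
    dim = patches.shape[1]
    # Initialise with random unit-norm atoms (columns).
    D = rng.standard_normal((dim, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    z = np.zeros((patches.shape[0], n_atoms))  # warm-started sparse codes
    for _ in range(n_epochs):
        for i in rng.permutation(patches.shape[0]):
            x, zi = patches[i], z[i]
            # (1) A few coordinate-descent passes on the lasso code
            #     (assumes unit-norm atoms, so no division is needed).
            for _ in range(cd_steps):
                for j in range(n_atoms):
                    r = x - D @ zi + D[:, j] * zi[j]   # residual excluding atom j
                    zi[j] = soft_threshold(D[:, j] @ r, lam)
            # (2) SGD update of only the atoms with non-zero coefficients,
            #     followed by re-normalisation of those atoms.
            residual = x - D @ zi
            support = np.flatnonzero(zi)
            if support.size > 0:
                D[:, support] += eta * np.outer(residual, zi[support])
                D[:, support] /= np.linalg.norm(D[:, support], axis=0,
                                                keepdims=True)
    return D
```

Updating only the support of each code is what keeps the per-sample cost low compared with full dictionary updates such as those in K-SVD.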

Introduction of Framework
Dictionary Construction
Sub-Dictionary Learning Approach
Fusion Scheme
Experiments and Analyses
Experiment Setup
Edge Intensity
Mutual Information
Visual Information Fidelity
Image Quality Comparison
Method
Dictionary Construction Time Comparison
Conclusions