Abstract

Multi-modality image fusion, applied to improve image quality, has drawn great attention from researchers in recent years. However, noise is inevitably introduced by the different types of imaging sensors, and it can seriously degrade the performance of multi-modality image fusion. The conventional approach to noisy image fusion is to denoise the source images first and then fuse the denoised images. However, denoising can reduce the sharpness of the source images and thereby harm fusion performance; moreover, performing denoising and fusion as separate stages increases the computational cost. To fuse noisy multi-modality image pairs accurately and efficiently, a method that denoises and fuses multi-modality images simultaneously is proposed. In the proposed method, noisy source images are decomposed into cartoon and texture components. Cartoon-texture decomposition not only separates the source images into structure and detail components suited to different fusion schemes, but also isolates image noise within the texture components. A Gaussian scale mixture (GSM) based sparse representation model is presented for the denoising and fusion of the texture components, while a spatial-domain fusion rule is applied to the cartoon components. Comparative experimental results confirm that the proposed simultaneous image denoising and fusion method is superior to state-of-the-art methods in both visual and quantitative evaluations.

Highlights

  • Since an image obtained by a single sensor rarely contains sufficient information about a scene, additional information from other images captured of the same scene can be used as a complement to reduce the limitations of a single image and enhance visibility [1,2,3].

  • According to the Gaussian scale mixture (GSM) statistical model shown in Equation (4) and the image degradation model shown in Equation (5), the denoising and reconstruction of the texture component of image patch l can be formulated as Equation (6).

  • The close-up views of the labeled regions show that the fusion details produced by the proposed method retain better contrast and sharpness as the noise level rises to σ = 20 and σ = 50.
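The GSM-based sparse representation model referenced in the highlights shrinks sparse coefficients of noisy texture patches. As a rough illustration of the general sparse-shrinkage idea only (not the paper's GSM formulation in Equation (6)), the following numpy sketch denoises a single patch by transforming it with an orthonormal DCT dictionary and soft-thresholding the coefficients. The fixed DCT basis (standing in for a learned dictionary), the threshold `tau * sigma`, and the choice to leave the DC coefficient untouched are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def denoise_patch(patch, sigma, tau=2.5):
    """Denoise one square patch by sparse-coefficient shrinkage:
    analysis transform, soft threshold, synthesis transform."""
    n = patch.shape[0]
    D = dct_matrix(n)
    coeff = D @ patch @ D.T              # sparse code in the DCT dictionary
    dc = coeff[0, 0]                     # keep the mean (DC) untouched
    t = tau * sigma                      # noise-adaptive threshold
    coeff = np.sign(coeff) * np.maximum(np.abs(coeff) - t, 0.0)
    coeff[0, 0] = dc
    return D.T @ coeff @ D               # reconstruct the denoised patch
```

In a full pipeline, this shrinkage step would run on every overlapping texture patch before the fused image is reassembled by averaging the overlaps.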


Summary

Introduction

Since an image obtained by a single sensor rarely contains sufficient information about a scene, additional information from other images captured of the same scene can be used as a complement to reduce the limitations of a single image and enhance visibility [1,2,3]. Most existing SR-based simultaneous image fusion and denoising methods do not specialize in image restoration, so both structural and detailed information may be degraded during denoising. To address this limitation, a novel simultaneous multi-modality image denoising and fusion method is proposed, built on a cartoon-texture decomposition that separates image noise from detailed information. Source images are decomposed into cartoon and texture components by a total variation-based method; in this step, image noise is confined to the texture components, which are then fused and denoised simultaneously by an SR-based method. The rest of this paper is structured as follows: Section 2 discusses the related work; Section 3 presents the proposed framework; Section 4 reports and analyzes the experimental results; and Section 5 concludes the paper.
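The total variation-based cartoon-texture split described above can be sketched in a few lines of numpy. This is a minimal illustration using plain gradient descent on the ROF energy, not the authors' exact decomposition; the step size `dt`, fidelity weight `lam`, smoothing parameter `eps`, and iteration count are illustrative assumptions. The smoothed result serves as the cartoon (structure) component, and the residual, which carries fine detail and noise, serves as the texture component.

```python
import numpy as np

def cartoon_texture_decompose(img, lam=0.1, n_iter=100, dt=0.02, eps=0.1):
    """Approximate TV (ROF) cartoon-texture decomposition by
    explicit gradient descent; returns (cartoon, texture)."""
    f = img.astype(np.float64)
    u = f.copy()
    for _ in range(n_iter):
        # forward differences (periodic boundary via np.roll)
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        norm = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / norm, uy / norm
        # backward-difference divergence of the normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # curvature flow plus fidelity term pulling u back toward f
        u = u + dt * (div + lam * (f - u))
    cartoon = u
    texture = f - cartoon          # detail and noise end up here
    return cartoon, texture
```

Because the texture component is defined as the residual, the two components sum exactly back to the input, so the subsequent fusion stages lose no information from the decomposition itself.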

Sparse Representation in Image Denoising
Dictionary Construction and Image Decomposition
Simultaneous Image Denoising and Fusion Method
The Proposed Simultaneous Denoising and Fusion Framework
Image Cartoon-Texture Decomposition
Details of Fusion Process
Experiment Setup
Comparison of Simultaneous Fusion and Denoising Methods
Multi-Focus Image Fusion
Multi-Modality Medical Image Fusion
Infrared-Visible Image Fusion
Comparison of Computational Efficiency
Comparison of Processing Results
Conclusions and Future Work