Abstract

In most existing sparse representation-based models for noisy images, fusion and denoising proceed simultaneously using the coefficients of a universal dictionary. This paper proposes an image fusion method based on a cartoon + texture dictionary pair combined with a deep neural network combination (DNNC). In our model, denoising and fusion are carried out alternately, in three main steps: denoising, fusion, and network denoising. More specifically, we (1) denoise the source images using external and internal methods separately; (2) fuse the preliminarily denoised results with the external and internal cartoon + texture dictionary pairs to obtain the external cartoon + texture sparse representation result (E-CTSR) and the internal cartoon + texture sparse representation result (I-CTSR); and (3) combine E-CTSR and I-CTSR using the DNNC to obtain the final result (EI-CTSR). Experimental results demonstrate that EI-CTSR outperforms not only the stand-alone E-CTSR and I-CTSR methods but also state-of-the-art methods such as sparse representation (SR) and adaptive sparse representation (ASR) on isomorphic images, and that E-CTSR outperforms SR and ASR on heterogeneous multi-mode images.
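
For readers who want the data flow at a glance, the following Python sketch traces the three steps with simple stand-ins: Gaussian smoothing in place of the external/internal denoisers, a naive cartoon/texture split in place of the dictionary-pair sparse fusion, and plain averaging in place of the trained DNNC. Every function name here is hypothetical and illustrates only the pipeline's structure, not the paper's actual algorithms.

    # Minimal sketch of the pipeline: denoise -> fuse -> combine.
    # All helpers are illustrative stand-ins, not the authors' method.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def denoise(img, sigma):
        # Stand-in for the external/internal denoising step (step 1).
        return gaussian_filter(img, sigma=sigma)

    def ct_fuse(a, b):
        # Stand-in for cartoon + texture dictionary-pair fusion (step 2):
        # split each image into a smooth "cartoon" layer and a residual
        # "texture" layer, then fuse the layers with simple rules.
        cartoon_a, cartoon_b = gaussian_filter(a, 3), gaussian_filter(b, 3)
        texture_a, texture_b = a - cartoon_a, b - cartoon_b
        cartoon = 0.5 * (cartoon_a + cartoon_b)        # average smooth layers
        texture = np.where(np.abs(texture_a) > np.abs(texture_b),
                           texture_a, texture_b)       # keep stronger detail
        return cartoon + texture

    def dnnc_combine(e_ctsr, i_ctsr):
        # Stand-in for the DNNC combination (step 3); the real method
        # weights the two results with a trained network.
        return 0.5 * (e_ctsr + i_ctsr)

    def ei_ctsr(src1, src2):
        e = ct_fuse(denoise(src1, 1.0), denoise(src2, 1.0))  # "external" branch
        i = ct_fuse(denoise(src1, 2.0), denoise(src2, 2.0))  # "internal" branch
        return dnnc_combine(e, i)

The point of the sketch is the alternation the abstract describes: each source image is denoised before fusion, the two fusion branches run independently, and only their outputs are merged at the end.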
