Abstract

Image recovery from compressive sensing (CS) measurement data, especially noisy data, has always been challenging because of the implicitly ill-posed nature of the problem; hence, seeking a domain in which a signal exhibits a high degree of sparsity and designing effective recovery algorithms have drawn increasing attention. Among various sparsity-based models, structured or group sparsity often leads to more powerful signal reconstruction techniques. In this paper, we propose a novel entropy-based algorithm for CS recovery that enhances image sparsity by learning the group sparsity of the residual. To reduce the residual of groups of similar packed patches, the group sparsity of the residual is described by a Laplacian scale mixture (LSM) model: each singular value of the residual of a group of similar packed patches is modeled as a Laplacian distribution with a variable scale parameter, which exploits the high-order dependency among sparse coefficients. Because of the latent variables, the maximum a posteriori (MAP) estimate of the sparse coefficients cannot be obtained directly; we therefore design a loss function for the expectation–maximization (EM) method based on relative entropy. Within the EM iteration, the sparse coefficients are estimated with the denoising-based approximate message passing (D-AMP) algorithm. Experimental results show that the proposed algorithm significantly outperforms existing CS techniques for image recovery.
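The modeling step described above, where each singular value of a packed patch-group residual gets its own Laplacian scale parameter, leads to a weighted singular value shrinkage operation. The sketch below is illustrative only, not the paper's implementation: the matrix `Y` stands in for a residual matrix of stacked similar patches, and the weight rule `0.5 / s` is an assumed placeholder for the paper's adaptive, SNR-driven weighting.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular value thresholding: shrink each singular value
    of Y by its own threshold (a larger weight means stronger shrinkage)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
# Stand-in for a packed patch-group residual: low-rank structure plus noise.
L = rng.standard_normal((32, 2)) @ rng.standard_normal((2, 16))
Y = L + 0.1 * rng.standard_normal((32, 16))

# Assumed adaptive weights: small for large (signal-dominated) singular
# values, large for small (noise-dominated) ones.
s = np.linalg.svd(Y, compute_uv=False)
weights = 0.5 / (s + 1e-8)
X = weighted_svt(Y, weights)
```

Because each singular value is shrunk by its own data-dependent threshold, strong structured components of the patch group survive while noise-level components are suppressed.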

Highlights

  • Compressive sensing (CS) [1,2] has drawn a great deal of attention as a novel digital signal sampling theory for signals that are sparse in some domain

  • The visual results recovered by the NLR-CS algorithm were always inferior to those of RL-DAMP, with lower PSNR or Structural Similarity Index (SSIM) values and fewer details

  • The iterative curves in Figure 9b and Figure 10b demonstrate that RL-DAMP can converge to a good reconstructed result in a reasonable amount of time (around 150 s)


Summary

Introduction

Compressive sensing (CS) [1,2] has drawn a great deal of attention as a novel digital signal sampling theory for signals that are sparse in some domain. Each singular value of the residual matrices, packed and rearranged from similar patches of the intermediate noisy image and the pre-estimations, is modeled as a Laplacian distribution with a variable scale parameter, resulting in weighted singular value minimization problems in which the weights are adaptively assigned according to the signal-to-noise ratio. To solve this model, the expectation–maximization (EM) [39] method with a relative-entropy loss function is adopted, turning the CS recovery problem into a prior-information estimation problem and a singular value minimization problem. Experimental results on natural images show that our approach achieves more accurate reconstruction than other competing approaches.
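The alternation described here, estimating prior information and then solving a weighted singular value minimization, can be sketched as a toy EM-style loop. This is a simplified illustration under assumed forms, not the paper's algorithm: the weight update `sigma**2 / s` is a hypothetical stand-in for the scale-parameter (E-step) estimate of the LSM prior, and the M-step is plain weighted singular value thresholding.

```python
import numpy as np

def weighted_svt(Y, w):
    """Shrink each singular value of Y by its own weight w."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt

def em_recover(Y, sigma=0.5, n_iter=5, eps=1e-8):
    """Toy EM-style alternation: the E-step re-estimates per-singular-value
    scale parameters (weights) from the current estimate, in the spirit of
    a Laplacian scale mixture prior; the M-step solves the resulting
    weighted singular value minimization by thresholding."""
    X = Y.copy()
    for _ in range(n_iter):
        s = np.linalg.svd(X, compute_uv=False)
        w = sigma**2 / (s + eps)   # E-step: weights grow as SNR drops (assumed rule)
        X = weighted_svt(Y, w)     # M-step: weighted singular value minimization
    return X
```

In this sketch, large singular values of the current estimate receive small weights (light shrinkage) and small ones receive large weights, mimicking the SNR-adaptive weighting the summary describes.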

Compressed Sensing
Denoising-Based Approximate Message Passing
Residual Learning
LSM Prior Modeling
Entropy-Based Algorithm for CS Recovery
Experiments
Parameter Settings
Experiments on Noiseless Data
Conclusions
