Abstract

Low-dose computed tomography (LDCT) is an important imaging modality, but LDCT images are often severely degraded by mottle noise and streak artifacts. The recently proposed nonlocally centralized sparse representation (NCSR) algorithm performs well in natural image denoising, but when applied to LDCT denoising it suffers from residual streak artifacts, fails to preserve edge structure well, and has high computational complexity. To address these problems, this paper proposes an improved model, the SNCSR model, based on stationary PCA sub-dictionaries, nonlocally centralized sparse representation, and relative total variation. In the SNCSR model, to learn more accurate sub-dictionaries, the LDCT image is first preprocessed by an improved total variation (ITV) model in which the weighting coefficient of the regularization term is constructed from a clipped and normalized local activity measure. In addition, the maximum eigenvalue of the gradient covariance matrix of each image patch is used to distinguish edge structure from background regions, so that the restored image can be represented more sparsely. Moreover, unlike the NCSR model, which relearns sub-dictionaries in every outer loop, the proposed model learns stationary sub-dictionaries only once before the iterations start, which shortens the computation time significantly. Finally, the relative total variation (RTV) algorithm is applied to remove residual artifacts from the recovered image more thoroughly. Experiments are performed on a simulated pelvis phantom, an actual thoracic phantom, and clinical abdominal data. Compared with several competitive denoising algorithms, both subjective visual inspection and objective evaluation criteria show that the proposed SNCSR model has lower computational complexity and improves LDCT image quality more effectively.
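As a rough illustration of the patch classification described above (a sketch, not the paper's implementation): the largest eigenvalue of a patch's 2×2 gradient covariance matrix is large for patches containing edge structure and near zero for smooth background. The function names, patch size, and threshold below are illustrative assumptions.

```python
import numpy as np

def max_gradient_eigenvalue(patch):
    """Largest eigenvalue of the 2x2 gradient covariance matrix of a patch."""
    gy, gx = np.gradient(patch.astype(float))       # per-pixel gradients
    g = np.stack([gx.ravel(), gy.ravel()], axis=1)  # N x 2 gradient samples
    cov = g.T @ g / g.shape[0]                      # 2x2 gradient covariance
    return np.linalg.eigvalsh(cov)[-1]              # eigvalsh sorts ascending

def is_edge_patch(patch, threshold=1e-3):
    # Patches whose dominant eigenvalue exceeds the (illustrative) threshold
    # are treated as edge/structure patches; the rest as smooth background.
    return max_gradient_eigenvalue(patch) > threshold
```

A flat patch yields a zero covariance matrix, while a patch containing a vertical step produces a dominant eigenvalue aligned with the horizontal gradient direction, which is what makes this statistic usable as an edge detector at the patch level.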

Highlights

  • The SNCSR algorithm takes a low-dose computed tomography (LDCT) image as input, sets the initial regularization parameters λ and δ, clusters all image patches extracted from the ITV-preprocessed image yITV into two clusters (smooth and structural), and learns one stationary smooth sub-dictionary and K − 1 stationary structural sub-dictionaries φ from the smooth and structural image patches via PCA, respectively

  • To overcome the original nonlocally centralized sparse representation (NCSR) model's inability to preserve edge structure, its high computational complexity, and its residual streak artifacts, a modified model (SNCSR) is proposed for LDCT image denoising
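A minimal Python sketch of the dictionary-learning step outlined in the highlights above, under illustrative assumptions: patches arrive as flattened rows, a boolean mask (e.g. from the gradient-covariance edge test) marks structural patches, structural patches are grouped by a simple k-means with farthest-point initialization (an illustrative choice, not necessarily the paper's clustering), and one PCA basis is computed per cluster, once, before any iteration starts. The function name and interface are hypothetical.

```python
import numpy as np

def learn_pca_subdictionaries(patches, edge_mask, K):
    """patches: (N, d) flattened patches; edge_mask: (N,) bool; K: total #sub-dictionaries."""
    def pca_basis(X):
        Xc = X - X.mean(axis=0)
        # Right singular vectors of the centered patch matrix are the
        # eigenvectors of the patch covariance: an orthogonal PCA dictionary.
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        return vt.T

    dictionaries = [pca_basis(patches[~edge_mask])]  # one smooth sub-dictionary
    structural = patches[edge_mask]

    # Farthest-point initialization of K-1 cluster centers, then Lloyd steps.
    centers = [structural[0]]
    while len(centers) < K - 1:
        d2 = np.min([((structural - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(structural[int(np.argmax(d2))])
    centers = np.array(centers)
    for _ in range(10):
        labels = np.argmin(((structural[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([structural[labels == k].mean(axis=0)
                            if np.any(labels == k) else centers[k]
                            for k in range(K - 1)])

    # One stationary structural sub-dictionary per cluster, learned once.
    for k in range(K - 1):
        dictionaries.append(pca_basis(structural[labels == k]))
    return dictionaries
```

Because the sub-dictionaries are computed a single time before the iterative reconstruction, the per-iteration cost drops compared with relearning them in every outer loop, which is the speedup the SNCSR model claims over NCSR.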

Summary

INTRODUCTION

X-ray computed tomography (CT) has been extensively applied in medical diagnosis due to its high spatial resolution and good image quality. The most straightforward and cost-effective way to reduce the radiation dose is to lower the x-ray tube current (mAs), which, however, severely degrades CT image quality with mottle noise and streak artifacts and increases the misdiagnosis rate. Under such circumstances, many noise-reduction approaches have been reported to address this problem.

TV MODEL
TRADITIONAL NCSR MODEL
PREPROCESSING LDCT IMAGE VIA THE PROPOSED ITV MODEL
EXPERIMENT
Preprocessing
Postprocessing step
CONCLUSION