Abstract

The aim of this work is to introduce a new joint estimation approach for anatomical content and artifacts in digital radiography. Herein, we apply this approach to the anti-scattering grid suppression problem. These grids are generally used in low-dose radiography to improve image contrast by absorbing scattered radiation; however, they introduce an irregular stripe pattern into the clinical images which can hinder the radiologist's diagnosis. The problem is tackled in a sparse dictionary representation framework, with observation and prior models accounting for the physical model of the image contents. The anatomical content is assumed to be sparsely represented by a Fourier or K-SVD dictionary, while the artifact is modeled by a frequency dictionary. In addition, the noise is modeled by a locally stationary Gaussian distribution with zero mean and unknown variance. The joint estimation of all the projections is then performed using two methods: beamforming (IAA) and Bayesian (SUBLIM). Moreover, patch-wise processing is applied to avoid model errors due to spatial variations of the clinical content and the artifact. The performance of IAA and SUBLIM is demonstrated on the anti-scattering grid extraction problem using experimental and real clinical images. The proposed methods remove gridline artifacts while preserving image details and avoiding ringing artifacts. They also produce better results than conventional filtering approaches, especially when metallic materials are present in or on the patient. SUBLIM outperforms IAA in those regions thanks to a better clinical content model and a physical noise prior.
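As a toy illustration of the frequency-dictionary artifact model described above (not of the IAA or SUBLIM algorithms themselves), the sketch below separates a synthetic gridline stripe from smooth content within a single 1-D patch by identifying the artifact's dominant Fourier component. All signal parameters (patch length, gridline frequency, amplitudes) are assumptions made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                                          # patch length (assumed)
n = np.arange(N) / N
content = np.exp(-((n - 0.5) ** 2) / 0.02)       # smooth "anatomical" content (toy)
f_grid = 60                                      # gridline frequency in cycles/patch (assumed)
artifact = 0.3 * np.sin(2 * np.pi * f_grid * n)  # periodic stripe pattern from the grid
noise = 0.02 * rng.standard_normal(N)            # zero-mean Gaussian noise
y = content + artifact + noise                   # observed patch

# Frequency-dictionary model: the artifact is sparse in the Fourier basis,
# so it concentrates in one high-frequency bin of the patch's spectrum.
Y = np.fft.rfft(y)
k = int(np.argmax(np.abs(Y[10:])) + 10)          # dominant non-low-frequency bin
A = np.zeros_like(Y)
A[k] = Y[k]                                      # keep only the artifact component
artifact_hat = np.fft.irfft(A, N)
content_hat = y - artifact_hat                   # artifact-suppressed patch
```

In this toy setting the estimated artifact bin `k` coincides with `f_grid`, and subtracting the reconstructed sinusoid leaves the smooth content plus noise. The actual methods in the paper estimate content, artifact, and noise variance jointly per patch rather than by this simple spectral peak picking.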

