Abstract

The signal-to-noise ratio (SNR) in computed tomography (CT) data can be improved by adaptive noise estimation for level-dependent threshold determination in the wavelet domain. The projection data measured in CT and, thus, the slices reconstructed from these data are noisy. For a reliable diagnosis and subsequent image processing, such as segmentation, the ratio between relevant tissue contrasts and the noise amplitude must be sufficiently large. By separate reconstructions from disjoint subsets of projections, e.g., even- and odd-numbered projections, two CT volumes can be computed that differ only with respect to noise. We show that these images allow a position- and orientation-adaptive noise estimation for level-dependent threshold determination in the wavelet domain. The computed thresholds are applied to the averaged wavelet coefficients of the input data. The final result therefore contains data from the complete set of projections, but shows approximately 50% improvement in signal-to-noise ratio. The proposed noise reduction method adapts itself to the noise power in the images and allows for the reduction of spatially varying and oriented noise.
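As an illustration only, and not the authors' exact algorithm, the following Python sketch (using NumPy and PyWavelets) shows the basic idea for a single 2-D slice: two reconstructions from disjoint projection subsets are decomposed into wavelet coefficients, the coefficient difference, which contains only noise, yields a per-level and per-orientation noise estimate, and the resulting threshold is applied to the averaged coefficients. The threshold scaling factor k, the use of soft thresholding, and the global (rather than position-adaptive) noise estimate per subband are assumptions made for brevity.

```python
import numpy as np
import pywt


def denoise_from_disjoint_recons(slice_a, slice_b,
                                 wavelet="haar", levels=3, k=3.0):
    """Sketch: wavelet-domain noise reduction driven by two CT slices
    reconstructed from disjoint projection subsets (e.g. even/odd
    projections), which differ only with respect to noise.

    k is an assumed threshold scaling factor, not a value from the paper.
    """
    coeffs_a = pywt.wavedec2(slice_a, wavelet, level=levels)
    coeffs_b = pywt.wavedec2(slice_b, wavelet, level=levels)

    # Averaging the coefficients of both inputs corresponds to a
    # reconstruction from the complete set of projections.
    denoised = [0.5 * (coeffs_a[0] + coeffs_b[0])]

    for detail_a, detail_b in zip(coeffs_a[1:], coeffs_b[1:]):
        level_out = []
        # Each detail level holds three orientation subbands (H, V, D),
        # so the noise estimate is orientation dependent.
        for sub_a, sub_b in zip(detail_a, detail_b):
            mean_sub = 0.5 * (sub_a + sub_b)
            # The half-difference contains only noise; its standard
            # deviation estimates the noise level of the averaged band.
            sigma = np.std(0.5 * (sub_a - sub_b))
            level_out.append(pywt.threshold(mean_sub, k * sigma, mode="soft"))
        denoised.append(tuple(level_out))

    return pywt.waverec2(denoised, wavelet)
```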

