Abstract
Due to metallic implants in certain patients, the Computed Tomography (CT) images of these patients are often corrupted by undesirable metal artifacts. Although many methods have been proposed for metal artifact reduction, the task remains challenging: corrected results often suffer from symptom variance, secondary artifacts, and poor subjective evaluation. To address these issues, we propose a novel method based on generative adversarial networks (GANs) to reduce metal artifacts. Specifically, we first encode interactive information (text) and the CT image (image) to yield a multi-modal feature-fusion representation, which overcomes the limited representative ability of single-modal CT images. Incorporating the interactive information constrains feature generation, which ensures symptom consistency between the corrected and target CT. We then design an enhancement network to avoid secondary artifacts, enhance edges, and suppress noise. In addition, three radiology physicians were invited to evaluate the corrected CT images. Experiments show that our method gains significant improvement over other methods. Objectively, it achieves average increments of 7.44% in PSNR and 6.12% in SSIM on two medical image datasets. Subjectively, it outperforms the compared methods in terms of sharpness, resolution, invariance, and acceptability.
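The abstract does not give implementation details, but as a rough illustration of the multi-modal feature fusion it describes, the following PyTorch-style sketch encodes a text (interactive information) embedding and a corrupted CT slice and fuses them into a joint representation. All module names, dimensions, and the concatenation-based fusion are assumptions for illustration, not the authors' architecture.

```python
# Hypothetical sketch of text/image feature fusion for metal-artifact reduction.
# Names, sizes, and the concatenation-based fusion are illustrative assumptions,
# not the architecture from the paper.
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    def __init__(self, text_dim=256, img_channels=1, feat_channels=64):
        super().__init__()
        # Image branch: shallow CNN over the artifact-corrupted CT slice.
        self.img_enc = nn.Sequential(
            nn.Conv2d(img_channels, feat_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(),
        )
        # Text branch: project a precomputed interaction/report embedding
        # to the same channel width so it can be broadcast spatially.
        self.txt_proj = nn.Linear(text_dim, feat_channels)
        # Fusion: concatenate along channels, then mix with a 1x1 convolution.
        self.fuse = nn.Conv2d(2 * feat_channels, feat_channels, 1)

    def forward(self, ct_slice, text_embedding):
        f_img = self.img_enc(ct_slice)                      # (B, C, H, W)
        f_txt = self.txt_proj(text_embedding)               # (B, C)
        f_txt = f_txt[:, :, None, None].expand_as(f_img)    # broadcast over H, W
        return self.fuse(torch.cat([f_img, f_txt], dim=1))  # fused representation

# Usage: the fused features would feed the GAN generator and enhancement network.
fusion = FusionEncoder()
ct = torch.randn(2, 1, 128, 128)   # dummy corrupted CT slices
txt = torch.randn(2, 256)          # dummy text/interaction embeddings
features = fusion(ct, txt)         # -> (2, 64, 128, 128)
```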