Abstract

CT metal artefact reduction (MAR) methods based on supervised deep learning are often troubled by the domain gap between the simulated training dataset and the real-application dataset, i.e., methods trained on simulation cannot generalize well to practical data. Unsupervised MAR methods can be trained directly on practical data, but they learn MAR with indirect metrics and often perform unsatisfactorily. To tackle the domain gap problem, we propose a novel MAR method called UDAMAR based on unsupervised domain adaptation (UDA). Specifically, we introduce a UDA regularization loss into a typical image-domain supervised MAR method, which mitigates the domain discrepancy between simulated and practical artefacts through feature-space alignment. Our adversarial-based UDA focuses on a low-level feature space, where the domain difference between metal artefacts mainly lies. UDAMAR can simultaneously learn MAR from simulated data with known labels and extract critical information from unlabeled practical data. Experiments on both clinical dental and torso datasets show the superiority of UDAMAR: it outperforms its supervised backbone and two state-of-the-art unsupervised methods. We carefully analyze UDAMAR through experiments on simulated metal artefacts and various ablation studies. On simulation, its performance close to that of the supervised methods, together with its advantages over the unsupervised methods, justifies its efficacy. Ablation studies on the weight of the UDA regularization loss, the choice of UDA feature layers, and the amount of practical data used for training further demonstrate the robustness of UDAMAR. UDAMAR offers a simple, clean design and is easy to implement. These advantages make it a highly feasible solution for practical CT MAR.
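
The abstract gives no implementation details, so the following is only a minimal PyTorch-style sketch of the kind of training scheme it describes: a supervised image-domain MAR loss on labeled simulated data, plus an adversarial alignment loss that pushes the low-level features of unlabeled practical data toward the simulated feature distribution. All module names, architectures, the L1 supervised loss, and the loss weight `lam` are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

# Hypothetical components; architectures are illustrative, not the paper's.
class MARNet(nn.Module):
    """Image-domain MAR backbone, split so low-level features are exposed."""
    def __init__(self):
        super().__init__()
        self.low = nn.Sequential(                       # low-level feature extractor
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.rest = nn.Sequential(                      # remainder of the backbone
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        f = self.low(x)                                 # features used for UDA alignment
        return self.rest(f), f

discriminator = nn.Sequential(                          # domain classifier on low-level features
    nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
)

net = MARNet()
opt_g = torch.optim.Adam(net.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lam = 0.1                                               # weight of the UDA regularization loss (ablated in the paper)

def train_step(x_sim, y_sim, x_real):
    """One step: supervised MAR on simulated pairs + adversarial alignment on practical data."""
    # --- update discriminator: simulated features -> 0, practical features -> 1 ---
    _, f_sim = net(x_sim)
    _, f_real = net(x_real)
    d_loss = bce(discriminator(f_sim.detach()), torch.zeros(x_sim.size(0), 1)) + \
             bce(discriminator(f_real.detach()), torch.ones(x_real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- update MAR network: supervised loss on labels + fooling the discriminator ---
    pred, _ = net(x_sim)
    _, f_real = net(x_real)
    sup_loss = nn.functional.l1_loss(pred, y_sim)       # supervised image-domain MAR loss
    adv_loss = bce(discriminator(f_real),               # align practical features with simulated ones
                   torch.zeros(x_real.size(0), 1))
    g_loss = sup_loss + lam * adv_loss
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return sup_loss.item(), adv_loss.item()
```

In this sketch, `lam` plays the role of the UDA regularization weight and `net.low` the role of the UDA feature layers studied in the ablations; both choices are assumptions made for illustration.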
