Abstract

Computed tomography (CT) is the primary diagnostic tool for brain diseases. To determine an appropriate treatment plan, the patient's bleeding volume must be ascertained. Automatic segmentation algorithms for hemorrhagic lesions can significantly improve efficiency and avoid treatment delays. However, deep supervised learning algorithms usually require a large amount of labeled training data, making them difficult to apply clinically. In this study, we propose AMD-DAS, an unsupervised domain adaptation segmentation model for brain CT hemorrhage segmentation that can be trained across modalities and diseases. It circumvents heavy data labeling by converting the source-domain data (MRI with glioma) into the data required by our task (CT with intraparenchymal hemorrhage, IPH). Our model implements a two-stage domain adaptation process to achieve this objective. In the first stage, we train a pseudo-CT image synthesis network based on the CycleGAN architecture, using a matching mechanism and a domain adaptation approach. In the second stage, we use the network trained in the first stage to synthesize pseudo-CT images, then train a domain-adaptation segmentation model on the pseudo-CT images with source-domain labels together with real CT images. Our method outperforms the basic one-stage domain adaptation segmentation method (+11.55 Dice) and achieves an 86.93 Dice score on the IPH unsupervised segmentation task. Because our model can be trained without ground-truth labels, its application potential is increased. Our implementation is publicly available at https://github.com/GuanghuiFU/AMD-DAS-Brain-CT-Segmentation.
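
The sketch below illustrates the two-stage pipeline summarized in the abstract: stage 1 trains a CycleGAN-style generator to map MRI to pseudo-CT, and stage 2 trains a segmenter on pseudo-CT with source labels while adversarially aligning its predictions on real CT. This is a minimal toy reconstruction, not the authors' implementation (see the repository linked above); all networks, random stand-in data, loss weights, and learning rates here are illustrative assumptions.

```python
# Toy sketch of the two-stage AMD-DAS idea (assumed structure, not the paper's code).
import torch
import torch.nn as nn

def conv_net(in_ch, out_ch):
    """Tiny stand-in for a real generator/discriminator/segmenter."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_ch, 3, padding=1),
    )

# Random stand-ins: labeled "MRI with glioma" slices and unlabeled "CT with IPH".
mri = torch.randn(4, 1, 64, 64)
mri_labels = torch.randint(0, 2, (4, 1, 64, 64)).float()
ct = torch.randn(4, 1, 64, 64)

# ---- Stage 1: pseudo-CT synthesis (heavily simplified CycleGAN) ----
g_mri2ct, g_ct2mri = conv_net(1, 1), conv_net(1, 1)
d_ct = conv_net(1, 1)  # discriminator on the CT domain
gan_loss, cyc_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(
    list(g_mri2ct.parameters()) + list(g_ct2mri.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(d_ct.parameters(), lr=2e-4)

for _ in range(2):  # a couple of toy iterations
    fake_ct = g_mri2ct(mri)
    # Generator: fool the discriminator + cycle-consistency MRI -> CT -> MRI.
    pred = d_ct(fake_ct)
    loss_g = gan_loss(pred, torch.ones_like(pred)) \
           + 10.0 * cyc_loss(g_ct2mri(fake_ct), mri)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    # Discriminator: real CT vs. synthesized pseudo-CT.
    pred_real, pred_fake = d_ct(ct), d_ct(fake_ct.detach())
    loss_d = gan_loss(pred_real, torch.ones_like(pred_real)) \
           + gan_loss(pred_fake, torch.zeros_like(pred_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# ---- Stage 2: domain-adaptive segmentation on pseudo-CT + real CT ----
seg = conv_net(1, 1)       # toy segmenter
d_out = conv_net(1, 1)     # domain discriminator on the prediction maps
opt_s = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_do = torch.optim.Adam(d_out.parameters(), lr=1e-4)
seg_loss = nn.BCEWithLogitsLoss()

for _ in range(2):
    pseudo_ct = g_mri2ct(mri).detach()  # stage-1 generator, now frozen
    # Supervised loss: pseudo-CT inherits the source-domain labels.
    loss_sup = seg_loss(seg(pseudo_ct), mri_labels)
    # Adversarial loss: make predictions on real CT look like
    # predictions on pseudo-CT (output-space alignment).
    pred_ct = torch.sigmoid(seg(ct))
    adv = d_out(pred_ct)
    loss_adv = gan_loss(adv, torch.ones_like(adv))
    opt_s.zero_grad(); (loss_sup + 0.01 * loss_adv).backward(); opt_s.step()
    # Train the domain discriminator: source-like (1) vs. target (0).
    src = d_out(torch.sigmoid(seg(pseudo_ct)).detach())
    tgt = d_out(pred_ct.detach())
    loss_do = gan_loss(src, torch.ones_like(src)) \
            + gan_loss(tgt, torch.zeros_like(tgt))
    opt_do.zero_grad(); loss_do.backward(); opt_do.step()
```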
