Computed tomography (CT) has been routinely used for decades in clinical neuroimaging. Compared to magnetic resonance (MR) imaging, CT is more readily available and more cost-effective, but its soft-tissue contrast is much lower. In brain CT images, unclear soft-tissue boundaries and a high noise level hamper accurate segmentation of the gray and white matter, creating obstacles for subsequent geometric quantification of brain structures. To address this challenge, we acquire paired same-patient CT and MR images and propose a multi-task learning model for simultaneous tissue segmentation and modality transfer. The modality transfer task learns corresponding MR and CT features, assisting the segmentation task in achieving more accurate results than single-modality learning. Moreover, we add a Shannon entropy term to the training loss to further suppress the influence of noise and reduce fragmentation in the segmentation results. Experimental results show that our multi-task framework achieves more accurate segmentation than training on the segmentation task alone, and that the Shannon entropy loss yields far fewer fragmented brain regions than the state-of-the-art (SOTA) U-Net method. Our study provides a useful tool for clinical brain CT image analysis.
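For illustration only, the sketch below shows one plausible way such a Shannon entropy penalty could be combined with a standard cross-entropy segmentation loss in PyTorch; the exact formulation, the weight `lambda_ent`, and the tensor shapes are assumptions, not the authors' published implementation.

```python
# Minimal sketch (assumed formulation): cross-entropy segmentation loss plus a
# Shannon entropy penalty on the softmax probability map, which discourages
# low-confidence, noisy predictions that tend to fragment the segmentation.
import torch
import torch.nn.functional as F

def entropy_regularized_loss(logits, target, lambda_ent=0.1, eps=1e-8):
    """logits: (N, C, H, W) raw network outputs; target: (N, H, W) class labels.
    lambda_ent is a hypothetical weight balancing the two terms."""
    ce = F.cross_entropy(logits, target)                 # supervised segmentation term
    probs = F.softmax(logits, dim=1)                     # per-pixel class probabilities
    entropy = -(probs * torch.log(probs + eps)).sum(1)   # Shannon entropy per pixel
    return ce + lambda_ent * entropy.mean()              # combined training loss

# Example usage with random tensors standing in for network outputs and labels.
if __name__ == "__main__":
    logits = torch.randn(2, 3, 64, 64)                   # e.g., 3 tissue classes
    target = torch.randint(0, 3, (2, 64, 64))
    print(entropy_regularized_loss(logits, target))
```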