Abstract

Lesion detection and segmentation in 3D medical images (e.g., MRI, CT-PET, or other hybrid modalities) is a challenging task that aims to automatically mark abnormal regions in a given input image. Methods for this problem typically construct a U-shaped autoencoder that maps the input to a compressed latent representation and predicts the final probability map. However, approaches to 3D multimodal image segmentation remain underexplored, and localization accuracy can be improved by designing the network structure around priors on the input data. In this paper, we propose a supervised contrastive approach for 3D multi-modality medical image segmentation. A hybrid 3D-2D multi-modality fusion model is designed based on the input image prior, and the 3D segmentation problem is converted into a sequence prediction task handled by a 2D decoder with contrastive modeling. The combined framework, HDC (Hybrid Dimension Contrastive), achieves performance comparable to or better than its fully 3D counterpart while being more memory efficient. We conduct extensive experiments on two multimodal brain image datasets, BraTS and our epilepsy MRI-PET dataset; the validation results demonstrate the effectiveness of our framework.
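The abstract does not include implementation details, so the following PyTorch sketch only illustrates the general idea it names: a 3D convolutional encoder fuses the stacked modalities, a shared 2D decoder then predicts the mask slice by slice (i.e., as a sequence of 2D probability maps), and a SupCon-style supervised contrastive loss can be applied to pooled embeddings. All module names, channel sizes, strides, and the exact loss form are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the paper's code): hybrid 3D-encoder / 2D-decoder
# segmentation plus a supervised contrastive loss. Shapes and hyperparameters
# are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Hybrid3D2DSegmenter(nn.Module):
    def __init__(self, in_modalities=2, feat=16, num_classes=2):
        super().__init__()
        # 3D encoder: fuses the stacked modalities; stride (1, 2, 2) keeps the
        # depth axis intact while downsampling each slice spatially.
        self.enc3d = nn.Sequential(
            nn.Conv3d(in_modalities, feat, 3, stride=(1, 2, 2), padding=1),
            nn.InstanceNorm3d(feat), nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat * 2, 3, stride=(1, 2, 2), padding=1),
            nn.InstanceNorm3d(feat * 2), nn.ReLU(inplace=True),
        )
        # 2D decoder: shared across depth slices, so the 3D mask is predicted
        # as a sequence of 2D probability maps.
        self.dec2d = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):                        # x: (B, M, D, H, W)
        z = self.enc3d(x)                        # (B, C, D, H/4, W/4)
        b, c, d, h, w = z.shape
        # Fold depth into the batch axis so every slice goes through the
        # same 2D decoder.
        slices = z.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        logits = self.dec2d(slices)              # (B*D, K, H, W)
        k, oh, ow = logits.shape[1:]
        return logits.reshape(b, d, k, oh, ow), z

def supervised_contrastive_loss(feats, labels, temperature=0.1):
    """SupCon-style loss: feats (N, C) embeddings, labels (N,) class ids."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / temperature        # pairwise similarities
    mask_pos = (labels[:, None] == labels[None, :]).float()
    mask_pos.fill_diagonal_(0)                   # drop self-pairs
    # Log-softmax over all other samples (self excluded from denominator).
    logits_mask = 1.0 - torch.eye(len(feats), device=feats.device)
    log_prob = sim - torch.log((logits_mask * sim.exp()).sum(1, keepdim=True))
    pos_count = mask_pos.sum(1).clamp(min=1)
    return -(mask_pos * log_prob).sum(1).div(pos_count).mean()

if __name__ == "__main__":
    model = Hybrid3D2DSegmenter(in_modalities=2)
    x = torch.randn(1, 2, 8, 64, 64)             # (B, modalities, D, H, W)
    logits, latent = model(x)
    print(logits.shape)                          # torch.Size([1, 8, 2, 64, 64])
    emb = torch.randn(6, 32)                     # e.g., pooled region embeddings
    lab = torch.tensor([0, 0, 1, 1, 0, 1])
    print(supervised_contrastive_loss(emb, lab))
```

Running the decoder in 2D means full-resolution activations are only materialized per slice rather than for the whole volume at once, which is consistent with the memory-efficiency claim in the abstract.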
