Abstract

Lesion detection and segmentation in 3D medical images (e.g., MRI, CT-PET, or hybrid modalities) is a challenging task that aims to automatically mark abnormal regions in a given input image. Methods that tackle this problem often construct a U-shaped autoencoder to map the input to a compressed vector in latent space and predict the final probability map. However, approaches to 3D multimodal image segmentation remain underexplored, and localization accuracy can be improved by designing an architecture informed by priors about the input. In this paper, we propose a supervised contrastive approach for 3D multi-modality medical image segmentation. A hybrid 3D-2D multi-modality fusion model is designed based on priors about the input images, and the 3D segmentation problem is converted into a sequence prediction task solved by a 2D decoder with contrastive modeling. The combined framework, HDC (Hybrid Dimension Contrastive), achieves performance comparable or even superior to its 3D counterpart while remaining memory efficient. We conduct extensive experiments on two multimodal brain imaging datasets: BraTS and our epilepsy MRI-PET dataset. The validation results demonstrate the effectiveness of our framework.
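To make the hybrid-dimension idea concrete, below is a minimal PyTorch sketch of the pattern the abstract describes: a shallow 3D encoder fuses the modalities, the fused volume is unrolled along the depth axis into a sequence of 2D feature slices, a shared 2D decoder predicts a probability map per slice, and a supervised contrastive term acts on per-slice embeddings. All names (HybridDimNet, slice_contrastive_loss), channel sizes, and the exact contrastive objective are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridDimNet(nn.Module):
    """Hypothetical hybrid 3D-2D model: 3D fusion encoder + shared 2D decoder."""

    def __init__(self, in_modalities=2, feat=16, num_classes=1):
        super().__init__()
        # 3D stage: fuse the input modalities (stacked as channels) with
        # volumetric convolutions so cross-slice context is captured early.
        self.encoder3d = nn.Sequential(
            nn.Conv3d(in_modalities, feat, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # 2D stage: one decoder shared across all depth slices, so the 3D
        # segmentation becomes a sequence of cheaper 2D predictions.
        self.decoder2d = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, num_classes, 1),
        )

    def forward(self, x):
        # x: (batch, modalities, depth, height, width)
        f = self.encoder3d(x)                                # (B, C, D, H, W)
        b, c, d, h, w = f.shape
        # Unroll depth into the batch axis: the volume becomes a slice sequence.
        slices = f.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        logits = self.decoder2d(slices)                      # (B*D, cls, H, W)
        # Per-slice embeddings for the contrastive term.
        emb = F.normalize(slices.mean(dim=(2, 3)), dim=1)    # (B*D, C)
        return logits.view(b, d, -1, h, w), emb


def slice_contrastive_loss(emb, labels, temperature=0.1):
    """Simplified supervised contrastive loss over slice embeddings: slices
    sharing a label (e.g. lesion present) attract, others repel. A stand-in
    for whatever contrastive objective the paper actually uses."""
    n = emb.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=emb.device)
    sim = (emb @ emb.t() / temperature).masked_fill(eye, -1e9)  # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]).float().masked_fill(eye, 0.0)
    denom = pos.sum(1).clamp(min=1)
    return -(log_prob * pos).sum(1).div(denom).mean()


# Toy usage: 2 modalities (e.g. MRI + PET), 8 slices of 32x32.
model = HybridDimNet()
x = torch.randn(1, 2, 8, 32, 32)
logits, emb = model(x)
slice_labels = torch.randint(0, 2, (emb.size(0),))  # lesion present/absent
loss = slice_contrastive_loss(emb, slice_labels)
print(logits.shape, loss.item())
```

The memory saving claimed in the abstract comes from the decoder operating on 2D slices: only the shallow fusion stage pays the cost of full 3D convolutions, while the bulk of the decoding reuses one 2D network across the depth dimension.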
