Abstract

3D medical image segmentation plays an essential role in medical image analysis, and attention mechanisms have improved its performance by a large margin. However, existing methods compute attention coefficients within a small receptive field, which can limit performance. In clinical practice, radiologists usually scan all the slices first to form an overall idea of the target, and then analyze regions of interest in multiple 2D views. We simulate this recognition process and propose to exploit 3D context information more deeply for accurate 3D medical image segmentation. Because of the similarity of human body structure, medical images from different populations have highly similar shape and location information, so we use target region distillation to extract the information common to segmented regions. In particular, we propose two optimizations: Target Area Distillation and Section Attention. Target Area Distillation adds position information to the original input so that the network has an initial attention on the target, while Section Attention extracts attention in three 2D sections and thus covers a large receptive field. We compare our method against several popular networks on two public datasets, ImageCHD and COVID-19. Experimental results show that our proposed method improves the segmentation Dice score by 2-4% over state-of-the-art methods. Our code has been released to the public (Anonymous link).
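
A minimal sketch of the section-attention idea described above, assuming a PyTorch setting: attention maps are computed on the three orthogonal 2D sections (axial, coronal, sagittal) of a 3D feature map, so each map has a full-plane receptive field, and the three maps are then fused and applied to the input. Module structure, layer choices, and names here are illustrative assumptions, not the authors' released code.

    # Hypothetical sketch of Section Attention over three orthogonal 2D views.
    import torch
    import torch.nn as nn

    class SectionAttention(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            # One lightweight 2D attention head per sectional view (axial, coronal, sagittal).
            self.heads = nn.ModuleList(
                [nn.Conv2d(channels, 1, kernel_size=7, padding=3) for _ in range(3)]
            )

        def _plane_attention(self, x, head, dim):
            # Fold the sectional axis `dim` (D, H, or W) into the batch so the 2D conv
            # sees whole slices spanning the remaining two axes.
            moved = x.movedim(dim, 1)                       # (B, S, C, A1, A2)
            b, s, c = moved.shape[:3]
            slices = moved.reshape(b * s, c, *moved.shape[3:])
            att = torch.sigmoid(head(slices))               # (B*S, 1, A1, A2)
            att = att.reshape(b, s, 1, *moved.shape[3:])    # (B, S, 1, A1, A2)
            return att.movedim(1, dim)                      # broadcastable to (B, 1, D, H, W)

        def forward(self, x):
            # x: (B, C, D, H, W); fuse the three sectional attention maps by averaging.
            atts = [self._plane_attention(x, h, d + 2) for d, h in enumerate(self.heads)]
            return x * torch.stack(atts).mean(dim=0)

    if __name__ == "__main__":
        feat = torch.randn(1, 16, 32, 64, 64)
        out = SectionAttention(16)(feat)
        print(out.shape)  # torch.Size([1, 16, 32, 64, 64])

Computing attention per 2D section keeps the cost close to slice-wise 2D attention while letting each coefficient see an entire anatomical plane, which is the larger receptive field the abstract motivates; how the paper actually fuses the three views is not specified here, so the averaging step is an assumption.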
