Abstract

In three-dimensional (3D) medical image segmentation, it remains challenging to capture, from a single view, the multidimensional feature information contained in voxel images, especially for small segmentation targets, and models trained with a segmentation network alone lack robustness. In this article, we propose a three-view contextual cross-slice difference 3D segmentation adversarial network, in which three-view contextual cross-slice difference decoding blocks improve the segmentation decoder’s ability to perceive edge features. Meanwhile, dense skip connections mitigate both the loss of shallow feature information during encoding and the insufficient information that a single long skip connection provides during image reconstruction. The adversarial network improves the performance of the segmentation network by classifying each patch of the predicted image as real or fake, and adversarial training further improves the robustness of the segmentation model. Evaluated on the publicly available BraTS2019 brain tumor dataset and the ADNI1 dataset, our model achieves the best results among the recent models compared.
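To illustrate the patch-level real/fake judgment described above, here is a minimal NumPy sketch, not the authors' implementation: it tiles a predicted segmentation slice into non-overlapping patches and assigns each patch a score, where a learned discriminator would replace the placeholder scoring heuristic.

```python
import numpy as np

def to_patches(img, patch):
    """Split a 2D map of shape (H, W) into non-overlapping (patch x patch) tiles."""
    h, w = img.shape
    assert h % patch == 0 and w % patch == 0
    # Result shape: (H // patch, W // patch, patch, patch)
    return img.reshape(h // patch, patch, w // patch, patch).transpose(0, 2, 1, 3)

def patch_scores(pred, patch=16):
    """Toy per-patch score in [0, 1]: the mean activation of each tile.
    In the paper's setting, a trained discriminator would produce these scores."""
    tiles = to_patches(pred, patch)
    return tiles.mean(axis=(2, 3))  # one score per patch: (H // patch, W // patch)

pred = np.random.rand(64, 64)      # stand-in for one predicted mask slice
scores = patch_scores(pred)
print(scores.shape)                # (4, 4): a 4x4 grid of per-patch scores
```

Scoring patches instead of the whole image gives the segmentation network a spatially dense training signal, which is the motivation for patch-wise discrimination in adversarial segmentation setups.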
