Abstract

In this paper, we propose a new learning-based interactive editing method for prostate segmentation. Although many automatic methods have been proposed to segment the prostate, their limited accuracy still makes laborious manual correction necessary in many clinical applications. The proposed method can quickly and flexibly correct erroneous parts of a segmentation, even when only a few scribbles or dots are provided. To obtain robust corrections from so few interactions, discriminative features that represent mid-level cues beyond image intensity or gradient are adaptively extracted from a local region of interest, guided by both the training set and the user interaction. The labeling problem is then formulated as a semi-supervised learning task that aims to preserve the manifold configuration between the labeled and unlabeled voxels. The proposed method is evaluated on a challenging prostate CT data set with large shape and appearance variations. With a total of 22 interactions across the 12 test images, our method improved the automatic segmentation results from an average Dice of 0.766 to an average Dice of 0.866.
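To make the semi-supervised formulation concrete, below is a minimal sketch (not the authors' implementation) of graph-based label propagation in the style of Zhou et al., the general family of techniques the abstract describes: labels seeded by user scribbles are spread to unlabeled voxels in a way that respects the manifold structure of the feature space. The function name `propagate_labels` and the parameters `sigma`, `alpha`, and `n_iter` are hypothetical, and the feature extraction and ROI handling are assumed to have happened upstream.

```python
import numpy as np

def propagate_labels(features, labels, sigma=1.0, alpha=0.99, n_iter=50):
    """Spread scribble labels to unlabeled voxels over a feature-space graph.

    features: (n, d) array of per-voxel feature vectors inside the ROI.
    labels:   (n,) array with 1 (prostate) or 0 (background) for scribbled
              voxels, and -1 for unlabeled voxels.
    Returns a hard label (0 or 1) for every voxel.
    """
    n = features.shape[0]

    # Gaussian affinity between voxels in feature space.
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    deg = W.sum(axis=1)
    S = W / np.sqrt(np.outer(deg, deg))

    # One-hot seed matrix; rows of unlabeled voxels stay all-zero.
    Y = np.zeros((n, 2))
    for c in (0, 1):
        Y[labels == c, c] = 1.0

    # Iterative spreading: each voxel absorbs its neighbors' labels while
    # being pulled back toward its own seed label.
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1.0 - alpha) * Y

    return F.argmax(axis=1)
```

Here `alpha` trades off smoothness along the feature-space graph against fidelity to the user's scribbles, which is one simple way to preserve the manifold configuration between labeled and unlabeled voxels; the paper's actual objective may differ.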
