Abstract

Extraction of the prostate contour from transrectal ultrasound (TRUS) images plays an important role in prostate cancer diagnosis. Because TRUS images have low contrast and contain imaging artifacts, accurately detecting the prostate in them is difficult. In this paper, we present four strategies to improve segmentation precision on difficult images. First, to obtain the data sequence, we adopt a variant of a principal curve-based algorithm, in which a small amount of prior point information is used as an approximate initialization. Second, we devise an evolutionary neural network approach to assist in finding the optimal neural network. Third, we train a fractional-order-based neural network using the data sequence as input, thereby decreasing the model error and increasing the precision of the results during training. Finally, a smooth and explainable mathematical function of the organ boundary is represented by the parameters of the fractional-order-based neural network. The performance of our method was evaluated on two clinical datasets comprising 1,338 TRUS prostate images using accuracy (ACC), the Dice similarity coefficient (DSC), and the Jaccard similarity coefficient (Ω). Results show that our model 1) achieves better performance than state-of-the-art segmentation models, with ACC, DSC, and Ω of 95.3 ± 2.2 %, 95.9 ± 2.3 %, and 94.9 ± 2.4 %, respectively, and 2) detects unseen or vague regions satisfactorily.
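To make the fractional-order training step concrete, the sketch below fits a smooth parametric boundary function with a tiny one-hidden-layer network whose weights are updated by a Caputo-type fractional-order gradient rule. This is a minimal illustration under stated assumptions, not the authors' exact formulation: the scaling term |w - w_prev|^(1-α) / Γ(2-α), the order `alpha`, and helper names such as `frac_step` are illustrative choices.

```python
import numpy as np
from math import gamma

# Sketch: fit a smooth boundary function r(t) with a small network, using a
# Caputo-type fractional-order gradient step of order `alpha` (illustrative).

rng = np.random.default_rng(0)

def forward(t, W1, b1, W2, b2):
    """Network output and hidden activations for inputs t of shape [n, 1]."""
    h = np.tanh(t @ W1 + b1)          # hidden layer, shape [n, H]
    return h @ W2 + b2, h             # output, shape [n, 1]

def gradients(t, target, W1, b1, W2, b2):
    """Mean-squared-error gradients w.r.t. all parameters."""
    y, h = forward(t, W1, b1, W2, b2)
    err = (y - target) / len(t)       # d(MSE)/dy
    gW2 = h.T @ err
    gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = t.T @ dh
    gb1 = dh.sum(axis=0)
    return (gW1, gb1, gW2, gb2), float(np.mean((y - target) ** 2))

def frac_step(w, g, w_prev, lr, alpha):
    """Caputo-type update: scale the gradient by |w - w_prev|^(1-alpha) / Gamma(2-alpha)."""
    scale = np.abs(w - w_prev) ** (1.0 - alpha) / gamma(2.0 - alpha)
    return w - lr * g * scale

# Toy "data sequence": noisy samples of a smooth closed-boundary radius r(t).
t = np.linspace(0.0, 1.0, 64).reshape(-1, 1)
target = 1.0 + 0.2 * np.sin(2 * np.pi * t) + 0.01 * rng.standard_normal(t.shape)

H, alpha, lr = 16, 0.9, 0.1
W1 = 0.5 * rng.standard_normal((1, H)); b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 1)); b2 = np.zeros(1)
prev = [W1 + 1e-3, b1 + 1e-3, W2 + 1e-3, b2 + 1e-3]   # previous iterate for the fractional term

for _ in range(2000):
    grads, loss = gradients(t, target, W1, b1, W2, b2)
    params = [W1, b1, W2, b2]
    W1, b1, W2, b2 = [frac_step(w, g, wp, lr, alpha) for w, g, wp in zip(params, grads, prev)]
    prev = params

print(f"final MSE: {loss:.5f}")
```

With `alpha` set to 1 the scaling reduces to ordinary gradient descent, so the order parameter can be read as interpolating between integer-order and fractional-order training; the trained parameters then define a smooth, differentiable boundary function of t, in the spirit of the explainable boundary representation described above.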
