Abstract

Transrectal ultrasound (TRUS) is an important tool for prostate imaging, playing a crucial role in clinical workflows such as multimodal image registration and surgical planning. However, TRUS images are often plagued by blurred boundaries and artifacts, making manual annotation time-consuming and labor-intensive, which leaves insufficient pixel-labeled data for training neural networks. Therefore, this paper introduces a semi-supervised neural network based on joint tasks for TRUS image segmentation, which combines confidence information from target localization and semantic segmentation to generate pseudo-labels for jointly training on unlabeled samples. Additionally, the method supports generating pseudo-labels from location information alone, enabling weakly supervised training. Compared with other state-of-the-art models, our approach achieves superior segmentation performance on a smaller dataset, reaching a mean Dice similarity coefficient of 92.8% and a mean Jaccard similarity coefficient of 88.4%. Furthermore, the experiments show that reducing the parameter coupling between the segmentation and detection decoders improves segmentation accuracy as the dataset size decreases.
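The abstract does not specify how the two confidence sources are fused, so the following is only a minimal illustrative sketch of one plausible reading: keep a pixel in the pseudo-label only if the segmentation decoder is confident about it and it lies inside a confidently detected box, and mark everything else as ignored. All function names, thresholds, and the ignore-index convention here are hypothetical, not taken from the paper.

```python
import numpy as np

def fuse_pseudo_labels(seg_prob, det_box, det_score,
                       seg_thresh=0.9, det_thresh=0.5):
    """Hypothetical pseudo-label fusion (not the paper's actual code).

    seg_prob:  (H, W) foreground probability map from the segmentation decoder.
    det_box:   (x1, y1, x2, y2) box predicted by the localization decoder.
    det_score: scalar confidence of that box.

    Returns an (H, W) uint8 map with 1 = foreground, 0 = background,
    255 = ignored when computing the loss on unlabeled samples.
    """
    h, w = seg_prob.shape
    pseudo = np.full((h, w), 255, dtype=np.uint8)  # start fully ignored
    if det_score < det_thresh:
        return pseudo  # no confident localization: skip this sample

    # Restrict confident foreground to the detected region.
    x1, y1, x2, y2 = det_box
    box_mask = np.zeros((h, w), dtype=bool)
    box_mask[y1:y2, x1:x2] = True

    confident_fg = (seg_prob >= seg_thresh) & box_mask
    confident_bg = seg_prob <= 1.0 - seg_thresh
    pseudo[confident_bg] = 0
    pseudo[confident_fg] = 1
    return pseudo
```

Under this reading, the weakly supervised variant mentioned above would correspond to building `pseudo` from `det_box` alone, without the segmentation probability map.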
