Abstract

Purpose: As a portable and radiation-free imaging modality, ultrasound is widely used to image various types of tissue structures. It is therefore important to develop a method that supports co-segmentation of multi-type ultrasound images. However, state-of-the-art ultrasound segmentation methods commonly focus on a single image type or ignore type-aware information.

Methods: To address this problem, this work proposes a novel type-aware encoder-decoder network (TypeSeg) for co-segmentation of multi-type ultrasound images. First, we develop a type-aware metric learning module that finds an optimal latent feature space in which ultrasound images of the same type are close together and images of different types are separated by a certain margin. Second, based on the extracted features, a decision module determines whether the input ultrasound images share a common tissue type, and the encoder-decoder network produces the segmentation masks accordingly.

Results: We evaluate the proposed TypeSeg model on an ultrasound dataset containing four tissue types. TypeSeg achieves the overall best results, with a mean IoU of 87.51% ± 3.93% on the multi-type ultrasound images.

Conclusion: The experimental results indicate that the proposed method outperforms all compared state-of-the-art algorithms on the multi-type ultrasound image co-segmentation task.
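The abstract does not spell out the exact metric learning objective, only that same-type embeddings are pulled together and different-type embeddings are pushed apart by a margin. The sketch below is a minimal, assumed pairwise contrastive formulation consistent with that description; the function name `type_contrastive_loss`, the argument names, and the default margin are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F


def type_contrastive_loss(emb_a: torch.Tensor,
                          emb_b: torch.Tensor,
                          same_type: torch.Tensor,
                          margin: float = 1.0) -> torch.Tensor:
    """Margin-based contrastive loss over pairs of image embeddings.

    emb_a, emb_b : (N, D) encoder embeddings for the two images of each pair.
    same_type    : (N,) float tensor, 1.0 if the pair shares a tissue type, else 0.0.
    margin       : minimum distance enforced between embeddings of different types.
    """
    dist = F.pairwise_distance(emb_a, emb_b)                 # Euclidean distance per pair
    pos = same_type * dist.pow(2)                            # pull same-type pairs together
    neg = (1.0 - same_type) * F.relu(margin - dist).pow(2)   # push different-type pairs apart
    return 0.5 * (pos + neg).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    a, b = torch.randn(8, 128), torch.randn(8, 128)          # dummy embedding pairs
    same = torch.randint(0, 2, (8,)).float()                 # dummy same/different-type labels
    print(type_contrastive_loss(a, b, same, margin=1.0))
```

In this reading, the decision module would consume the resulting pair distance (or the learned embeddings) to judge whether the two inputs share a tissue type before the encoder-decoder produces the masks; the precise decision mechanism is not described in the abstract.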
