Abstract

Supervised medical image segmentation models typically require large amounts of labeled training data; when labeled data are scarce, models suffer from over-fitting, low accuracy, and poor generalization. This dilemma is especially acute in medical image analysis, where annotation is labor-intensive and requires expert knowledge. In this work, we propose a novel shape- and boundary-aware deep learning model for semi-supervised medical image segmentation. The model makes full use of labeled data and also exploits unlabeled data through a task-consistency loss. First, we adopt a V-Net backbone to predict a Pixel-wise Segmentation Map (PSM) and regress a Signed Distance Map (SDM). In addition, multi-scale features extracted from the input X by a Pyramid Pooling Module (PPM) are multiplied by 2 − |SDM| to enhance the features around the boundary of the segmentation target, and then fed into a Feature Fusion Module (FFM) for fine segmentation. Beyond the boundary loss, the high-level shape semantics encoded in the SDM facilitate accurate segmentation of boundary regions. The final result is obtained by fusing the coarse and boundary-enhanced features. Finally, to exploit unlabeled training data, we impose consistency constraints on the three core outputs of the model, namely PSM1, the SDM, and PSM3. Extensive experiments on three representative yet challenging medical image datasets (LA2018, BraTS2019, and ISIC2018), together with comparisons against representative existing methods, validate the practicality and superiority of our model.
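To make the two core mechanisms concrete, below is a minimal PyTorch sketch of (a) the 2 − |SDM| boundary weighting applied to PPM features and (b) a task-consistency term tying PSM1, the SDM, and PSM3 together on unlabeled data. The function names, the assumption that the SDM head is tanh-bounded in [−1, 1], and the sharpness constant k are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def boundary_enhance(ppm_feats: torch.Tensor, sdm: torch.Tensor) -> torch.Tensor:
    """Weight multi-scale PPM features by 2 - |SDM|: voxels near the boundary
    (|SDM| ~ 0) get weight ~2, voxels far from it get weight ~1.
    Assumes the SDM regression head is tanh-bounded in [-1, 1]."""
    weight = 2.0 - sdm.abs()          # values in [1, 2]
    return ppm_feats * weight         # weight broadcasts over the channel dim

def task_consistency_loss(psm1: torch.Tensor, sdm: torch.Tensor,
                          psm3: torch.Tensor, k: float = 1500.0) -> torch.Tensor:
    """Hypothetical consistency term for unlabeled data: map the SDM to a soft
    segmentation with a sharp sigmoid (negative SDM = inside -> ~1) and pull
    all three outputs together with MSE. The transform and k are assumptions."""
    psm_from_sdm = torch.sigmoid(-k * sdm)
    return (F.mse_loss(psm1, psm_from_sdm)
            + F.mse_loss(psm3, psm_from_sdm)
            + F.mse_loss(psm1, psm3))
```

On labeled batches, the usual supervised segmentation and SDM-regression losses would apply; the consistency term above is what lets unlabeled batches contribute a training signal.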
