Abstract

Despite the remarkable performance of deep learning methods on various tasks, most cutting-edge models rely heavily on large-scale annotated training examples, which are often unavailable for clinical and health-care tasks. The labeling cost for medical images is very high, especially in medical image segmentation, which typically requires intensive pixel/voxel-wise labeling. Therefore, a strong capability to learn and generalize from limited supervision, including a limited amount of annotations, sparse annotations, and inaccurate annotations, is crucial for the successful application of deep learning models to medical image segmentation. However, segmentation with limited supervision is intrinsically difficult and calls for dedicated model designs and/or learning strategies. In this paper, we provide a systematic and up-to-date review of solutions to these problems, with summaries of and comments on the methodologies. We also highlight several open problems in this field and discuss future directions that deserve further investigation.

Highlights

  • Medical image segmentation, identifying the pixels/voxels of anatomical or pathological structures from background biomedical images, is of vital importance in many biomedical applications, such as computer-assisted diagnosis, radiotherapy planning, surgery simulation, and the treatment and follow-up of many diseases

  • Qu et al. [256], [265] addressed a more challenging setting in which only sparse point annotations are available, i.e., only a small portion of the nuclei in each image are annotated with center points. Their method consists of two stages: the first conducts nuclei detection with a self-training strategy, and the second performs semi-supervised segmentation with pseudo-labels generated from a Voronoi diagram and k-means clustering (a sketch of this pseudo-labeling step follows this list)

  • We reviewed a diverse set of methods for these problems
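
The pseudo-labeling step mentioned above can be sketched as follows. This is a minimal illustration under simplifying assumptions (RGB input, un-normalized features, integer point coordinates), not Qu et al.'s exact pipeline: each pixel is assigned to its nearest annotated center point (a Voronoi partition), cell boundaries provide background cues between touching nuclei, and k-means on color and distance-to-center features separates nuclei from background.

```python
import numpy as np
from scipy.spatial import cKDTree  # nearest annotated point -> Voronoi cell
from sklearn.cluster import KMeans

def pseudo_labels_from_points(image, points, n_clusters=3):
    """image: (H, W, 3) float array; points: (N, 2) integer (row, col) centers.
    Returns a (H, W) uint8 map (1 = nuclei, 0 = background) and a Voronoi-edge
    mask usable as a 'background' cue between adjacent nuclei."""
    h, w = image.shape[:2]
    rr, cc = np.mgrid[0:h, 0:w]
    pix = np.stack([rr.ravel(), cc.ravel()], axis=1)

    # Voronoi partition: label each pixel by its nearest annotated center.
    tree = cKDTree(points)
    dist, owner = tree.query(pix)
    owner_map = owner.reshape(h, w)
    dist_map = dist.reshape(h, w)

    # Voronoi edges = pixels whose neighbors belong to a different cell;
    # these lie between nuclei and are treated as likely background.
    edges = np.zeros((h, w), dtype=bool)
    edges[:-1, :] |= owner_map[:-1, :] != owner_map[1:, :]
    edges[:, :-1] |= owner_map[:, :-1] != owner_map[:, 1:]

    # k-means on (color, distance-to-center) features; in practice these
    # features should be normalized to comparable scales. The cluster that
    # most annotated points fall into is taken as the nuclei cluster.
    feats = np.concatenate([image.reshape(-1, 3), dist_map.reshape(-1, 1)], axis=1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats)
    labels = km.labels_.reshape(h, w)
    fg_cluster = np.bincount(labels[points[:, 0], points[:, 1]]).argmax()

    pseudo = (labels == fg_cluster) & ~edges
    return pseudo.astype(np.uint8), edges
```

The resulting pseudo-label map can then supervise a segmentation network in the second, semi-supervised stage, with the Voronoi edges serving as additional background supervision.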


Summary

INTRODUCTION

Medical image segmentation, identifying the pixels/voxels of anatomical or pathological structures from background biomedical images, is of vital importance in many biomedical applications, such as computer-assisted diagnosis, radiotherapy planning, surgery simulation, and the treatment and follow-up of many diseases. Peng et al. [174] applied the idea of co-training to semi-supervised segmentation of medical images. They trained multiple models on different subsets of the labeled training data and used a common set of unlabeled training images to let the models exchange information with each other. In this way, unlabeled training data can be leveraged to acquire generic knowledge under different concepts, which can be transferred to various downstream tasks. In [195], Taleb et al. introduced a multimodal puzzle task to pretrain a model on multi-modal images; the pretrained model was then fine-tuned on a limited set of labeled data for the downstream segmentation task.
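
To make the co-training idea concrete, the following is a minimal sketch, not Peng et al.'s exact method: two segmentation networks are trained on disjoint labeled subsets, and each additionally learns from the other's confident pixel-wise pseudo-labels on a shared unlabeled batch. It assumes PyTorch; the function names and the 0.9 confidence threshold are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def masked_ce(logits, target, mask):
    """Per-pixel cross-entropy averaged over a confidence mask."""
    loss = F.cross_entropy(logits, target, reduction="none")  # (B, H, W)
    return (loss * mask).sum() / mask.sum().clamp(min=1)

def co_training_step(model_a, model_b, opt_a, opt_b,
                     batch_a, batch_b, x_unlab, threshold=0.9):
    """One update: each model trains on its own labeled subset and on the
    other model's confident pseudo-labels for a shared unlabeled batch."""
    (xa, ya), (xb, yb) = batch_a, batch_b

    # Supervised loss on each model's own (disjoint) labeled subset.
    loss_a = F.cross_entropy(model_a(xa), ya)
    loss_b = F.cross_entropy(model_b(xb), yb)

    # Pixel-wise pseudo-labels: each model teaches the other, so the two
    # views exchange information through the shared unlabeled images.
    with torch.no_grad():
        prob_a = torch.softmax(model_a(x_unlab), dim=1)
        prob_b = torch.softmax(model_b(x_unlab), dim=1)
        conf_a, pseudo_a = prob_a.max(dim=1)  # teacher for model_b
        conf_b, pseudo_b = prob_b.max(dim=1)  # teacher for model_a

    loss_a = loss_a + masked_ce(model_a(x_unlab), pseudo_b, (conf_b > threshold).float())
    loss_b = loss_b + masked_ce(model_b(x_unlab), pseudo_a, (conf_a > threshold).float())

    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()
```

The puzzle pretext task can likewise be sketched as permutation classification over shuffled tiles drawn from co-registered modalities; the 2x2 grid, tile size, and tiny network below are illustrative assumptions, not the architecture from [195]. After pretraining, the tile encoder would be reused and fine-tuned for the downstream segmentation task.

```python
import itertools, random
import torch
import torch.nn as nn

# Fixed set of permutations of a 2x2 tile grid; the network classifies
# which permutation was applied (24 classes).
PERMS = list(itertools.permutations(range(4)))

def make_multimodal_puzzle(modalities, tile=32):
    """modalities: list of co-registered (1, 64, 64) tensors (e.g., T1/T2/FLAIR).
    Each tile is sampled from a random modality, then the tiles are shuffled."""
    tiles = []
    for i in range(2):
        for j in range(2):
            m = random.choice(modalities)
            tiles.append(m[:, i*tile:(i+1)*tile, j*tile:(j+1)*tile])
    k = random.randrange(len(PERMS))
    shuffled = [tiles[p] for p in PERMS[k]]
    return torch.stack(shuffled), k  # (4, 1, 32, 32), permutation class

class PuzzleNet(nn.Module):
    """Shared tile encoder + classifier over the concatenated tile embeddings."""
    def __init__(self, emb=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, emb), nn.ReLU())
        self.head = nn.Linear(4 * emb, len(PERMS))

    def forward(self, tiles):               # tiles: (B, 4, 1, 32, 32)
        b = tiles.shape[0]
        z = self.enc(tiles.flatten(0, 1))   # encode each tile: (B*4, emb)
        return self.head(z.reshape(b, -1))  # permutation logits: (B, 24)
```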

