Abstract

In the context of automatic medical image segmentation based on statistical learning, inter- and intra-rater variability of ground truth segmentations in training datasets is a widely recognized issue. Reference annotations are provided by experts, but bias stemming from their individual knowledge can affect the quality of the ground truth data, hindering the creation of robust and reliable datasets for segmentation, classification, or detection tasks. In this context, training data preparation for automatic medical image segmentation would benefit significantly from some form of presegmentation, which could lower the impact of individual expert bias and reduce time-consuming labeling effort. The present manuscript proposes a superpixel-driven procedure for annotating medical images. Three superpixel methods, each with two different numbers of superpixels, were evaluated on three medical segmentation tasks and compared with manual annotations. In the superpixel-based annotation procedure, medical experts interactively select superpixels of interest and apply manual corrections where necessary; the accuracy of the annotations, the time needed to prepare them, and the number of manual corrections are then assessed. The study shows that the proposed procedure reduces inter- and intra-rater variability, leading to more reliable annotation datasets which, in turn, may support the development of more robust classification and segmentation models. In addition, the proposed approach reduces the time needed to prepare the annotations.
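To illustrate the core idea, the following is a minimal sketch of superpixel-guided annotation, assuming a grayscale image, SLIC superpixels from scikit-image, and a hypothetical list `selected_ids` standing in for an expert's interactive selections; it is not the authors' implementation, and the superpixel count and compactness values are illustrative choices only.

```python
# Minimal sketch of superpixel-guided annotation (assumptions: scikit-image SLIC,
# a grayscale image, and a hypothetical expert selection of superpixel ids).
import numpy as np
from skimage import data, segmentation

# Placeholder grayscale image; in practice this would be a medical image slice.
image = data.camera().astype(float) / 255.0

# Generate superpixels; n_segments controls the number of superpixels,
# which in the study is one of the evaluated parameters.
labels = segmentation.slic(image, n_segments=300, compactness=0.1, channel_axis=None)

# Hypothetical superpixel ids the expert marked as belonging to the structure of interest.
selected_ids = [12, 13, 27]

# Binary annotation mask as the union of the selected superpixels.
mask = np.isin(labels, selected_ids)

# Manual corrections (e.g., brushing individual pixels in or out) could then be
# applied to `mask` before it is stored as a ground truth annotation.
print(mask.shape, int(mask.sum()))
```

In this scheme the expert chooses among precomputed regions rather than delineating boundaries pixel by pixel, which is the mechanism by which the procedure aims to reduce both annotation time and rater variability.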
