Abstract

Supervised deep learning approaches for automated diagnosis support require datasets annotated by experts. Intra-annotator variability (inconsistencies within a single annotator's labels) and inter-annotator variability (disagreement between annotators) can degrade the quality of the resulting diagnosis support. Since medical experts will always differ in annotation details, quantitative studies of annotation quality are of particular interest. Consistent, noise-free annotation of large-scale datasets by, for example, dermatologists or pathologists remains an open challenge, so methods are needed to automatically inspect the annotations in such datasets. In this paper, we categorize annotation noise in image segmentation tasks, present methods to simulate annotation noise, and examine its impact on segmentation quality. We propose two novel automated methods, based on uncertainty-aware deep neural networks, that identify intra-annotator and inter-annotator inconsistencies. Using the biomedical ISIC 2017 Melanoma image segmentation dataset, we demonstrate the benefits of our automated inspection methods, such as focused re-inspection of noisy annotations and the detection of systematically different annotation styles.
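To illustrate the kind of annotation-noise simulation the abstract refers to, the sketch below perturbs a binary segmentation mask by randomly dilating or eroding its boundary, mimicking over- and under-segmentation by an annotator. This is a minimal sketch, not the paper's actual method: it assumes masks are binary NumPy arrays, and the function name `perturb_mask` and its parameters are hypothetical.

```python
# Hypothetical sketch of boundary-level annotation noise on a binary
# segmentation mask. All names and parameters are illustrative and
# are not taken from the paper.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def perturb_mask(mask: np.ndarray, max_iters: int = 5, rng=None) -> np.ndarray:
    """Return a noisy copy of a binary (H, W) mask by shifting its boundary."""
    rng = np.random.default_rng() if rng is None else rng
    iters = int(rng.integers(1, max_iters + 1))  # random perturbation strength
    if rng.random() < 0.5:
        noisy = binary_dilation(mask, iterations=iters)  # over-segmentation
    else:
        noisy = binary_erosion(mask, iterations=iters)   # under-segmentation
    return noisy.astype(mask.dtype)

# Example: simulate two annotators with systematically different styles,
# e.g. on a ground-truth mask from the ISIC 2017 dataset.
# annotator_a = perturb_mask(mask, max_iters=2)  # tight, conservative contours
# annotator_b = perturb_mask(mask, max_iters=8)  # loose, generous contours
```

Applying such perturbations with different strengths per simulated annotator yields controlled inter-annotator variability, while re-sampling the perturbation for the same annotator yields intra-annotator variability.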
