Abstract

Medical image segmentation from noisy labels is an important task, since obtaining high‐quality annotations is extremely difficult and expensive. Many approaches have been proposed for this task. However, several issues remain unsolved, including overfitting on noisy annotations, limited learning of boundary features, and the lack of consideration for corrupted local pixels. Therefore, a novel approach named uncertainty‐aware iterative learning (UaIL) is proposed for medical image segmentation with noisy labels. UaIL iteratively and jointly trains two deep networks on the original images and their augmented counterparts through a joint loss function comprising a softened label loss, a hard label loss, and a consistency loss, which encourages UaIL to produce segmentations that are robust to perturbations in arbitrary semantic space. The uncertainty of labels is estimated from the predictions during iterative learning, and the original labels are then refined, which improves the learning of boundary features in segmentation. To avoid overfitting, a stopping strategy based on the dice coefficient is designed for the iterative learning. Experiments on two public datasets verify the effectiveness of UaIL under different levels of annotation noise. In particular, under severe label noise, the dice achieved by UaIL is 1.43% to 15.03% higher than that of the competing approaches on the two public datasets. UaIL is further verified on a private dataset, demonstrating its applicability to real-world settings with noisy labels.
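The joint loss composition and the dice coefficient used by the stopping strategy can be sketched as follows. This is a minimal illustration, not the paper's implementation: the choice of cross-entropy for the two label terms, mean squared error for the consistency term, and the equal default weights are all assumptions for the sake of the example.

```python
import math

def cross_entropy(pred, target, eps=1e-12):
    """Cross-entropy between predicted and target class probabilities."""
    return -sum(t * math.log(p + eps) for p, t in zip(pred, target))

def mse(a, b):
    """Mean squared error; assumed here as the consistency term between
    predictions on an original image and its augmented counterpart."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def joint_loss(pred_orig, pred_aug, soft_label, hard_label,
               w_soft=1.0, w_hard=1.0, w_cons=1.0):
    """Hypothetical combination of the three terms named in the abstract:
    softened label loss + hard label loss + consistency loss."""
    l_soft = cross_entropy(pred_orig, soft_label)   # refined (softened) labels
    l_hard = cross_entropy(pred_orig, hard_label)   # original one-hot labels
    l_cons = mse(pred_orig, pred_aug)               # original vs. augmented view
    return w_soft * l_soft + w_hard * l_hard + w_cons * l_cons

def dice(mask_a, mask_b, eps=1e-12):
    """Dice coefficient between two binary masks, as monitored by the
    stopping strategy to decide when to halt iterative refinement."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    return (2.0 * inter + eps) / (sum(mask_a) + sum(mask_b) + eps)
```

Under this sketch, the consistency term vanishes when the two views agree, so the networks are only penalized for prediction drift under augmentation; the dice coefficient would be tracked across refinement rounds and training stopped once it plateaus.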
