Abstract

The performance of deep learning models for semantic segmentation depends on the availability of large amounts of labeled data. However, the influence of label noise, in the form of incorrect annotations, on model performance is significant and mostly ignored. This is a major concern in remote sensing applications, where acquired datasets are spatially limited and labeling is done by domain experts, with high inter- and intra-observer variability as a possible source of erroneous labels. In this paper, we first simulate label noise in experiments on two different datasets consisting of very high-resolution aerial images, height data, and inaccurate labels used to train deep learning models. We then analyze the effect of this noise on model performance. Different classes respond differently to label noise: the typical size of an object belonging to a class is a crucial factor in the class-specific performance of a model trained with erroneous labels. Errors caused by relative shifts of labels are the most influential type of label error, and the model is generally more tolerant of random label noise than of other label errors. We observe that accuracy drops by at least 3% when 5% of the label pixels are erroneous. Our study thus provides a new perspective on evaluating and quantifying how label noise propagates into model performance, which is important for establishing reliable semantic segmentation practices.
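
The abstract does not spell out how the label noise is injected. As a rough illustration only, the sketch below implements two corruption modes of the kind it mentions: random per-pixel label flips and a relative shift of the whole label map. The function names, the numpy-array representation of the label map, and all parameter defaults are assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical noise-injection helpers; the paper's actual procedure is not
# given in the abstract, so names, signatures, and defaults are assumptions.

def random_label_noise(mask, frac=0.05, num_classes=6, seed=None):
    """Reassign a fraction `frac` of pixels to a random *different* class."""
    rng = np.random.default_rng(seed)
    noisy = mask.copy()
    flat = noisy.ravel()  # view into `noisy`, so writes below modify it in place
    idx = rng.choice(flat.size, size=int(frac * flat.size), replace=False)
    # An offset in [1, num_classes) guarantees the new label differs from the old.
    offsets = rng.integers(1, num_classes, size=idx.size)
    flat[idx] = (flat[idx] + offsets) % num_classes
    return noisy

def shifted_label_noise(mask, dy=3, dx=3, fill=0):
    """Shift the label map by (dy, dx) pixels, padding exposed borders with `fill`."""
    h, w = mask.shape
    noisy = np.full_like(mask, fill)
    noisy[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        mask[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return noisy

# Example: corrupt a toy 512x512 six-class label map.
labels = np.random.default_rng(0).integers(0, 6, size=(512, 512))
noisy_random = random_label_noise(labels, frac=0.05, num_classes=6, seed=1)
noisy_shift = shifted_label_noise(labels, dy=3, dx=3)
```

The 5% default for `frac` mirrors the abstract's reported threshold at which accuracy degrades by at least 3%; the shift magnitude is an arbitrary placeholder.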
