Abstract

Deep learning methods are nowadays considered the state-of-the-art approach to many sophisticated problems, such as computer vision, speech understanding, or natural language processing. However, their performance relies on the quality of large annotated datasets. If the data are not well annotated and label noise occurs, such data-driven models become less reliable. In this Letter, the authors present a very simple way to make the training process robust to noisy labels. Without changing the network architecture or the learning algorithm, the authors apply a modified error measure that improves network generalisation when training with label noise. Preliminary results obtained for deep convolutional neural networks, trained with the novel trimmed categorical cross-entropy loss function, reveal improved robustness across several levels of label noise.
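The Letter does not reproduce the loss formula in this abstract, but a common reading of a "trimmed" loss is to average only the smallest per-sample losses in each batch, discarding the largest ones on the assumption that they come from mislabelled examples. The sketch below illustrates that idea with NumPy; the function name, the `trim_fraction` parameter, and the batch-level trimming strategy are illustrative assumptions, not the authors' published definition.

```python
import numpy as np

def trimmed_categorical_cross_entropy(y_true, y_pred,
                                      trim_fraction=0.1, eps=1e-12):
    """Batch cross-entropy that discards the highest per-sample losses.

    y_true: one-hot labels, shape (n, k)
    y_pred: predicted class probabilities, shape (n, k)
    trim_fraction: fraction of highest-loss samples to drop
        (assumed, for illustration, to correspond to noisy labels)
    """
    # Standard categorical cross-entropy per sample, clipped for stability.
    per_sample = -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)), axis=1)
    # Keep only the (1 - trim_fraction) fraction with the smallest losses.
    n_keep = max(1, int(round(len(per_sample) * (1.0 - trim_fraction))))
    kept = np.sort(per_sample)[:n_keep]
    return float(np.mean(kept))
```

With three confidently correct predictions and one confidently wrong (noisy) sample, trimming 25% of the batch drops the outlier's large loss, so the trimmed loss stays close to the clean samples' loss while the untrimmed mean is dominated by the mislabelled example.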
