Abstract

Deep neural networks trained on large annotated datasets are often considered universal, easy-to-use tools for achieving top performance on many computer vision, speech understanding, and language processing tasks. Unfortunately, these data-driven classifiers depend strongly on the quality of the training patterns. Since large datasets often suffer from label noise, training deep neural structures on them can yield unreliable results. In this paper, we present an experimental study showing that a simple regularization technique, namely dropout, improves robustness to mislabeled training data and, even in its standard form, can be considered a remedy for label noise. We demonstrate this on the popular MNIST and CIFAR-10 datasets, presenting results for several probabilities of noisy labels and several dropout levels.

Keywords: Neural networks, Deep learning, Dropout, Label noise, Categorical cross-entropy
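The two ingredients of the experimental setup described above, injecting label noise at a given probability and applying standard (inverted) dropout, can be sketched in NumPy as follows. This is a minimal illustration, not the paper's implementation; the function names and the use of NumPy are assumptions made for clarity.

```python
import numpy as np


def corrupt_labels(y, noise_prob, num_classes, rng):
    """Illustrative label-noise model: with probability `noise_prob`,
    replace each label with a different, uniformly random class."""
    y = y.copy()
    flip = rng.random(len(y)) < noise_prob
    # Offsets in [1, num_classes) guarantee the new label differs
    # from the original one.
    offsets = rng.integers(1, num_classes, size=int(flip.sum()))
    y[flip] = (y[flip] + offsets) % num_classes
    return y


def dropout(activations, rate, rng, training=True):
    """Standard (inverted) dropout: zero each unit with probability
    `rate` and scale survivors by 1/(1-rate), so the expected
    activation is unchanged and no rescaling is needed at test time."""
    if not training or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)
```

In the study's setting, `corrupt_labels` would be applied once to the training labels (with the chosen noise probability), while `dropout` would be applied to hidden activations on every forward pass during training only.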
