Abstract

How can we explain the wrong predictions of a black-box algorithm? In this paper, I develop a method that uses majority vote, a classic technique from noisy-data cleansing, to classify the noise level of each instance in a dataset, and I manipulate the noise levels of the training and test sets using simple convolutional neural networks (CNNs). In experiments on the MNIST dataset, I show that an ensemble of CNNs trained on the noise-level-0 training set (88% of the original training set) achieves 100% accuracy on the noise-level-0 test set (89% of the original test set). I also show that even a single mislabeled instance in the training set can mislead predictions on the clean test set. As the number of noisy instances increases, this misleading effect can arise from different groups of noisy instances. There is also a neutralizing effect, in which the misleading effect is canceled out and the instance is predicted correctly again, and a compounding effect, in which a wrong prediction comes not from any single group of noisy instances but from their combination. On the high-noise test set, the neutralizing effect dominates the misleading and compounding effects.
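The abstract's majority-vote step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an instance's noise level is the number of ensemble members whose prediction disagrees with its given label, so level-0 instances are those on which the whole ensemble agrees with the label.

```python
def noise_levels(ensemble_preds, labels):
    # Assumed criterion: the noise level of an instance is the count of
    # ensemble members whose prediction disagrees with its given label.
    # Level-0 instances form the "clean" subset kept for training/testing.
    return [sum(pred[i] != y for pred in ensemble_preds)
            for i, y in enumerate(labels)]

# Toy example: predictions from 3 models on 4 instances.
preds = [[0, 1, 2, 3],
         [0, 1, 2, 0],
         [0, 1, 1, 0]]
labels = [0, 1, 2, 3]
print(noise_levels(preds, labels))  # [0, 0, 1, 2]
```

Here the first two instances are noise level 0 (every model agrees with the label), while the last is flagged by two of the three models.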
