Abstract

Noise in training data increases the tendency of many machine learning methods to overfit, which undermines their performance. Outliers occur in big data as a result of various factors, including human error. In this work, we present a novel discriminator model for identifying outliers in training data. We propose a systematic approach for creating training datasets to train the discriminator based on a small number of genuine instances (trusted data). The noise discriminator is a convolutional neural network (CNN). We evaluate the discriminator's performance on several benchmark datasets and at different noise ratios. We inserted random noise into each dataset and trained discriminators to clean them. Different discriminators were trained using different numbers of genuine instances, with and without data augmentation. We compare the performance of the proposed noise-discriminator method with seven other methods from the literature on several benchmark datasets. Our empirical results indicate that the proposed method is highly competitive with the other methods and outperforms them for pair noise.

Highlights

  • While the effectiveness of supervised machine learning algorithms relies on the existence of large and high-quality labeled datasets, it is a time-consuming and challenging matter to create clean datasets that are free from noise [1, 2]

  • We propose a method to train a noise discriminator (ND). The ND is trained using automatically generated datasets based on a small number of genuine instances. The NDs that we propose are convolutional neural network (CNN) classifiers

  • The false-positive rate decreases as the noise ratio increases, which is expected: as the number of outliers increases, the likelihood that the discriminator is correct when it classifies an instance as an outlier increases. This observation explains the inverse correlation identified between the overall recall values and the noise ratio


Introduction

While the effectiveness of supervised machine learning algorithms relies on the existence of large and high-quality labeled datasets, it is a time-consuming and challenging matter to create clean datasets that are free from noise (i.e., incorrectly labeled instances) [1, 2]. The aim of this research is to propose a machine learning method for identifying and eliminating noise from datasets. We propose a method to train a noise discriminator (ND). Deep learning (DL) models, including CNNs, have been applied with great success in diverse areas, often exceeding human performance [3, 4]. DL models are valuable in domains where large amounts of training data are available. Given the negative effects of outliers on DL methods, a range of solutions has been proposed to mitigate these effects [6]
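The core idea described above, automatically generating the ND's training data by injecting label noise into a small set of trusted instances, can be sketched as follows. This is a minimal, hypothetical illustration of the label-flipping step only (the function name, the uniform "symmetric" noise choice, and the use of NumPy are assumptions; the paper's NDs are CNNs trained on the resulting pairs):

```python
import numpy as np

def make_discriminator_data(y, n_classes, noise_ratio, rng):
    """Flip the labels of a random subset of trusted instances.

    Returns (y_noisy, is_noisy): the corrupted labels and a binary
    target marking which instances were corrupted. Pairs of the form
    (instance features, y_noisy) -> is_noisy can then be used to train
    a discriminator that flags mislabeled instances.
    """
    y = np.asarray(y)
    y_noisy = y.copy()
    n_flip = int(round(noise_ratio * len(y)))
    flip_idx = rng.choice(len(y), size=n_flip, replace=False)
    for i in flip_idx:
        # Replace the true label with a different class chosen
        # uniformly at random (symmetric/uniform label noise).
        wrong = [c for c in range(n_classes) if c != y[i]]
        y_noisy[i] = rng.choice(wrong)
    is_noisy = np.zeros(len(y), dtype=int)
    is_noisy[flip_idx] = 1
    return y_noisy, is_noisy
```

Because the corrupted indices are known by construction, the generated dataset comes with perfect supervision for the discriminator, which is what makes training it from only a small pool of genuine instances feasible.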

