Abstract

In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach.
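
To make the setting concrete, here is a minimal simulation sketch (not taken from the paper): observed samples for each class are drawn from contaminated mixtures of the true class-conditional distributions, with asymmetric noise proportions that would be unknown in practice. The Gaussian class-conditionals, sample sizes, and noise levels are illustrative assumptions.

```python
# Minimal simulation sketch of the label noise model (illustrative only):
# observed class samples are drawn from the contaminated mixtures
#   P~0 = (1 - pi0) * P0 + pi0 * P1   and   P~1 = (1 - pi1) * P1 + pi1 * P0,
# with asymmetric noise proportions pi0 != pi1 that are unknown in practice.
import numpy as np

rng = np.random.default_rng(0)

m, n = 5000, 5000          # observed sample sizes for the two classes
pi0, pi1 = 0.2, 0.3        # noise proportions (assumed values; pi0 + pi1 < 1)

def sample_P0(size):       # true class-conditional P0, assumed N(-1, 1)
    return rng.normal(-1.0, 1.0, size)

def sample_P1(size):       # true class-conditional P1, assumed N(+1, 1)
    return rng.normal(+1.0, 1.0, size)

# Each point observed with label 0 is actually drawn from P1 with probability pi0.
corrupt0 = rng.random(m) < pi0
x0_obs = np.where(corrupt0, sample_P1(m), sample_P0(m))

# Each point observed with label 1 is actually drawn from P0 with probability pi1.
corrupt1 = rng.random(n) < pi1
x1_obs = np.where(corrupt1, sample_P0(n), sample_P1(n))

print("corrupted fraction, observed class 0:", corrupt0.mean())
print("corrupted fraction, observed class 1:", corrupt1.mean())
```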

Highlights

  • In binary classification, one observes multiple realizations of two different classes, X_1^0, …, X_m^0 ~ P_0 and X_1^1, …, X_n^1 ~ P_1 (all i.i.d.), where P_0 and P_1, the class-conditional distributions, are probability distributions on a Borel space (X, S)

  • We examine some standard benchmark data sets as well as a real data set from a nuclear particle classification problem that is naturally described by our label noise model

  • As in the rest of the paper, we focus on label noise that is independent of the feature vector X, meaning that the conditional distribution of the observed label Ỹ given X and Y depends only on Y (formalized in the sketch after this list)
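
To state the assumption in the last bullet precisely, here is a hedged formalization; the notation Ỹ for the observed label and P̃_0, P̃_1 for the observed class-conditional distributions is assumed here, chosen to be consistent with the abstract's noise proportions π0 and π1.

```latex
% Noise independent of the feature vector: the observed label \tilde{Y}
% satisfies, for all x and all y, \tilde{y} \in \{0, 1\},
P(\tilde{Y} = \tilde{y} \mid X = x, Y = y) = P(\tilde{Y} = \tilde{y} \mid Y = y).
% Consequently the observed class-conditional distributions are mixtures of
% the true ones,
\tilde{P}_0 = (1 - \pi_0) P_0 + \pi_0 P_1,
\qquad
\tilde{P}_1 = (1 - \pi_1) P_1 + \pi_1 P_0,
% where \pi_0 and \pi_1 are the (unknown) label noise proportions.
```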


Summary

Introduction

One observes multiple realizations of two different classes, X_1^0, …, X_m^0 ~ P_0 and X_1^1, …, X_n^1 ~ P_1 (all i.i.d.), where P_0 and P_1, the class-conditional distributions, are probability distributions on a Borel space (X, S). A key aspect of our contribution is that the label noise proportions π0 and π1 are unknown, in contrast to previous work, and the linchpin of our solution is a method for accurately estimating π0 and π1. We argue that these proportions can be estimated using methods for mixture proportion estimation (MPE), which is the problem of estimating the mixing proportion of one distribution in another. Portions of this work appeared earlier in Scott et al. [41] and Scott [40]. This longer version integrates those versions and extends them by establishing the necessity of the proposed conditions, a consistency analysis featuring clippable losses, a connection to class probability estimation, and a more thorough literature review.
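
The estimator analyzed in the paper is more refined than what follows; as a rough sketch of the MPE idea under the same assumptions as the earlier simulation, one can estimate the maximal proportion of one sample contained in another by minimizing a ratio of empirical bin probabilities over a histogram partition, and then invert the resulting mixture identities to recover π0 and π1. The partition, smoothing constant, and helper names (mpe_hat, noise_proportions) are illustrative choices, not the authors' method.

```python
# Rough illustrative sketch of mixture proportion estimation (MPE), not the
# estimator analyzed in the paper: nu(F | H) = max{ a : F = a*H + (1-a)*G }
# is approximated by the smallest ratio of empirical bin probabilities over a
# fixed histogram partition (partition and smoothing are ad hoc choices).
import numpy as np

def mpe_hat(sample_f, sample_h, n_bins=20, eps=1e-3):
    """Crude plug-in estimate of the maximal proportion of H inside F."""
    edges = np.histogram_bin_edges(np.concatenate([sample_f, sample_h]), n_bins)
    f_mass, _ = np.histogram(sample_f, edges)
    h_mass, _ = np.histogram(sample_h, edges)
    f_prob = (f_mass + eps) / (f_mass.sum() + eps * n_bins)
    h_prob = (h_mass + eps) / (h_mass.sum() + eps * n_bins)
    return float(np.clip(np.min(f_prob / h_prob), 0.0, 1.0))

# x0_obs ~ P~0 and x1_obs ~ P~1 are the contaminated samples (see the sketch
# after the abstract).  Under mutual irreducibility of P0 and P1, the paper
# relates nu(P~0 | P~1) and nu(P~1 | P~0) to the noise proportions; the
# inversion below is the corresponding mixture algebra.
def noise_proportions(x0_obs, x1_obs):
    a = mpe_hat(x0_obs, x1_obs)   # estimate of nu(P~0 | P~1)
    b = mpe_hat(x1_obs, x0_obs)   # estimate of nu(P~1 | P~0)
    pi0_hat = a * (1 - b) / (1 - a * b)   # assumes a*b < 1
    pi1_hat = b * (1 - a) / (1 - a * b)
    return pi0_hat, pi1_hat
```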

Motivating application
Label flipping model for label noise
Related work
Some initial notation
Outline
The challenge of label noise
Alternate contamination model
Irreducibility and mixture proportion estimation
Mutual irreducibility
Sufficiency of mutual irreducibility for identifiability
Necessity
Maximal denoising
Mixture proportion estimation and a rate of convergence
Consistent classification with unknown label noise proportions
Problem formulation
Surrogate losses
Estimating α
Algorithm
First consistency result
Alternate consistency result with clippable losses
A more general analysis of co-training
Mutual irreducibility and class probability estimation
Implementation of estimators
Experiments
Conclusion
Proof of Proposition 1
Proof of Theorem 11
Proof of Theorem 12
Proof of Theorem 14
Proof of Theorem 18
Proof of Theorem 19