Abstract

Deep AUC maximization (DAM) is a popular approach to complex imbalanced classification problems. It learns a deep neural network by minimizing a surrogate AUC loss. The most commonly used surrogates are the pairwise square loss and its variants. However, these losses originate in regression and are sensitive to noisy samples: easy, correctly classified samples can suffer larger losses than misclassified ones, which is unreasonable for an optimization objective. In this paper, we propose a focus AUC loss based on samples (FAUC-S) by constructing a differentiable weight function that distinguishes hard samples from easy ones, so that easy samples incur small losses and hard samples incur large losses. It is more reasonable than the traditional AUC loss while retaining the same advantages on large-scale datasets. In our experiments, we use an end-to-end compositional DAM framework to investigate the effect of different weight functions on the AUC loss and compare it with other methods. Experimental results on several benchmark datasets demonstrate that FAUC-S achieves superior AUC performance compared to existing state-of-the-art methods.
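To make the weighting idea concrete, the following is a minimal sketch, not the paper's exact FAUC-S formulation: it applies a differentiable weight to the pairwise squared AUC surrogate so that pairs already satisfying the margin (easy) contribute small losses while violated pairs (hard) contribute large ones. The sigmoid weight and the `gamma` sharpness parameter are illustrative assumptions, not definitions from the paper.

```python
import torch

def weighted_pairwise_square_auc_loss(pos_scores, neg_scores,
                                       margin=1.0, gamma=2.0):
    """Illustrative sample-focused AUC surrogate (hypothetical weighting).

    pos_scores: model scores for positive examples, shape (P,)
    neg_scores: model scores for negative examples, shape (N,)
    """
    # Score differences over all positive-negative pairs, shape (P, N).
    diff = pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0)

    # Plain pairwise square surrogate: (margin - (s_pos - s_neg))^2.
    # Note it grows even when diff > margin, i.e. for already-easy pairs,
    # which is the behavior the abstract criticizes.
    square_loss = (margin - diff) ** 2

    # Differentiable weight: near 0 when the pair is easy (diff >> margin),
    # near 1 when it is hard (diff << margin). gamma controls sharpness.
    weight = torch.sigmoid(gamma * (margin - diff))

    return (weight * square_loss).mean()

# Usage sketch with random scores for an imbalanced batch.
pos = torch.randn(8, requires_grad=True)
neg = torch.randn(64, requires_grad=True)
loss = weighted_pairwise_square_auc_loss(pos, neg)
loss.backward()
```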
