Abstract

Area Under the ROC Curve (AUC) is a widely used metric for measuring classification performance, and developing AUC maximization algorithms is of significant theoretical and academic value. Traditional methods often apply batch learning algorithms to maximize AUC, which is inefficient and unscalable for large-scale applications. Recently, several online learning algorithms have been introduced to maximize AUC by going through the data only once. However, these methods sometimes fail to converge to an optimal solution due to fixed or rapidly decaying learning rates. To tackle this problem, we propose AdmOAM, an Adaptive Moment estimation method for Online AUC Maximization. It applies estimates of the moments of the gradients to accelerate convergence and to mitigate the rapid decay of the learning rates. We establish the regret bound of the proposed algorithm and conduct extensive experiments to demonstrate its effectiveness and efficiency.
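At the core of AdmOAM is an Adam-style update [22]: exponential moving averages of the gradient and of its elementwise square rescale each coordinate's step, so the effective learning rate neither stays fixed nor decays too rapidly. The Python sketch below shows this style of update in isolation; the function name, hyperparameter defaults, and variable names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One adaptive-moment update of the model vector w from a stochastic gradient.

    Illustrative sketch only: AdmOAM applies this style of update to pairwise
    AUC gradients, but the constants and names here are assumptions.
    """
    m = beta1 * m + (1 - beta1) * grad           # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction for the warm-up phase
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-coordinate adaptive step
    return w, m, v
```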

Highlights

  • Area Under the ROC Curve (AUC) [1] plays an important role in measuring classification performance, and quantifies the ability of a classifier to assign a higher score to a randomly chosen positive instance than to a randomly drawn negative instance [2] (illustrated by the sketch after this list).

  • We demonstrate the effectiveness of the proposed AdmOAM in experiments on several benchmark datasets, in comparison with four state-of-the-art online AUC maximization algorithms.

  • We develop a novel adaptive online AUC maximization algorithm called AdmOAM, which uses the square loss function in a one-pass framework and applies Adam [22] to mitigate the rapid decay of the learning rates.
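The probabilistic reading of AUC in the first highlight corresponds to the Wilcoxon-Mann-Whitney statistic: the fraction of positive-negative pairs in which the positive instance receives the higher score. A small, self-contained illustration (the function and the example scores are ours, not taken from the paper):

```python
def empirical_auc(scores_pos, scores_neg):
    """AUC as the fraction of positive-negative pairs ranked correctly,
    counting ties as half a correct pair (Wilcoxon-Mann-Whitney statistic)."""
    correct = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                correct += 1.0
            elif sp == sn:
                correct += 0.5
    return correct / (len(scores_pos) * len(scores_neg))

# Example: three positives and two negatives; 5 of 6 pairs are ranked correctly.
print(empirical_auc([0.9, 0.8, 0.4], [0.5, 0.3]))  # ~0.833
```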


Summary

Introduction

AUC [1] plays an important role in measuring classification performance, and quantifies the ability of a classifier to assign a higher score to a randomly chosen positive instance than to a randomly drawn negative instance [2]. Despite the success of batch AUC optimization algorithms, they all require the whole set of training instances to be available before training and update the model over all training instances in every epoch, which is neither efficient nor scalable for large-scale applications. To address this challenge, online learning techniques have been introduced to maximize AUC and have been shown to handle large-scale scenarios [13,14,15]. Online learning methods update the model with only one instance at a time.
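To make the one-pass setting concrete, the sketch below processes each arriving instance exactly once, pairs it with buffered instances of the opposite class, and takes a pairwise square-loss gradient step. This is only a simplified skeleton of the general online AUC framework, with assumed names and a plain SGD step; AdmOAM would replace that step with the adaptive-moment update sketched under the abstract.

```python
import numpy as np

def online_auc_pass(stream, dim, lr=0.01, buffer_size=100):
    """One pass over a stream of (x, y) pairs with y in {+1, -1}.

    Simplified illustration: each new instance is paired with buffered
    instances of the opposite class, and the pairwise square loss
    (1 - w.(x_pos - x_neg))^2 is reduced by a plain gradient step.
    """
    w = np.zeros(dim)
    buf_pos, buf_neg = [], []
    for x, y in stream:
        opposite = buf_neg if y == +1 else buf_pos
        for x_opp in opposite:
            diff = (x - x_opp) if y == +1 else (x_opp - x)
            grad = -2.0 * (1.0 - w @ diff) * diff  # gradient of the pairwise square loss
            w -= lr * grad
        own = buf_pos if y == +1 else buf_neg
        if len(own) < buffer_size:                 # simple fixed-size buffer (first instances kept)
            own.append(x)
    return w
```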

