Abstract

Learning to improve AUC performance on imbalanced data is an important machine learning research problem. Most AUC maximization methods assume that the model is linear in the original feature space, an assumption that fails for nonlinearly separable problems. Although several nonlinear AUC maximization methods exist, scaling up nonlinear AUC maximization remains an open question. To address this challenging problem, we propose a novel large-scale nonlinear AUC maximization method (named TSAM) based on triply stochastic gradient descent. Specifically, we first use random Fourier features to approximate the kernel function. We then iteratively update the solution using triply stochastic gradients, whose three sources of randomness are the random positive example and random negative example sampled for the pairwise loss, together with the random features. Finally, we prove that TSAM converges to the optimal solution at a rate of O(1/t) after t iterations. Experimental results on a variety of benchmark datasets not only confirm the scalability of TSAM, but also show a significant reduction in computational time compared with existing batch learning algorithms, while retaining similar generalization performance.
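
The sketch below illustrates the two ingredients the abstract names: a random Fourier feature map approximating an RBF kernel, and a stochastic pairwise update that samples one positive and one negative example per step. It is a minimal illustration, not the authors' exact algorithm: it assumes an RBF kernel and a pairwise hinge loss, fixes the random features once up front for simplicity (the paper's triply stochastic scheme draws them afresh at each iteration), and uses illustrative step sizes and dimensions throughout.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, W, b):
    """Approximate an RBF kernel, k(x, y) ~ phi(x) . phi(y) (Rahimi & Recht)."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def pairwise_auc_update(w, x_pos, x_neg, W, b, eta, margin=1.0):
    """One stochastic step on the pairwise hinge loss: two sources of
    randomness come from sampling the positive/negative pair; the random
    features W, b are the third (drawn once here, per-iteration in TSAM)."""
    phi_p = random_fourier_features(x_pos[None, :], W, b)[0]
    phi_n = random_fourier_features(x_neg[None, :], W, b)[0]
    diff = phi_p - phi_n
    if w @ diff < margin:        # hinge active: f(x+) not far enough above f(x-)
        w = w + eta * diff       # descend the pairwise hinge loss
    return w

# Usage on toy data (all sizes and the bandwidth gamma are assumptions).
d, D, gamma = 20, 200, 0.5
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))   # RFF frequencies
b = rng.uniform(0, 2 * np.pi, size=D)                   # RFF phases
w = np.zeros(D)
X_pos = rng.normal(+1.0, 1.0, size=(100, d))
X_neg = rng.normal(-1.0, 1.0, size=(100, d))
for t in range(1, 1001):
    xp = X_pos[rng.integers(100)]
    xn = X_neg[rng.integers(100)]
    w = pairwise_auc_update(w, xp, xn, W, b, eta=1.0 / np.sqrt(t))
```

Because AUC counts mis-ranked positive/negative pairs, each step only needs one sampled pair rather than a full pass over all pairs, which is what makes the stochastic scheme scale to large datasets.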
