Abstract

Optimizing pseudo-linear performance measures has gained increasing interest in recent years. Compared with existing works that focus on the L2-norm, designing a model with the L1-norm is more challenging, especially when the data are large-scale. To address this issue, we propose a sparse stochastic method for optimizing pseudo-linear metrics. The algorithm first formulates the problem as cost-sensitive classification with L1 regularization, and then adopts composite objective mirror descent (COMID) for the inner optimization, which yields a sparse solution with promising performance. However, the original COMID method only attains an $\mathcal{O}(\log(T)/T)$ convergence rate for strongly convex objectives. To this end, a simple yet efficient polynomial-decay averaging strategy is suggested. We prove that with this strategy the proposed algorithm not only achieves the optimal convergence rate of $\mathcal{O}(1/T)$ but also obtains a classifier with higher sparsity. Empirical studies on several public benchmark data sets demonstrate the superior performance of the proposed algorithm in terms of both efficiency and sparsity.
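
For illustration, the following is a minimal sketch (not the authors' implementation) of one possible inner loop: a cost-sensitive logistic loss with L1 regularization optimized by a COMID-style proximal step, where the L1 term is handled by soft-thresholding and the iterates are combined with a polynomial-decay running average. The names and parameters (`costs`, `gamma`, `eta0`, the logistic surrogate) are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def l1_comid_poly_avg(X, y, costs, lam=1e-3, eta0=1.0, gamma=3, T=1000, seed=0):
    """Sketch of cost-sensitive L1-COMID with polynomial-decay averaging.

    Assumed inputs: X is (n, d), y has labels in {-1, +1}, `costs` are
    per-class misclassification costs, `gamma` controls how quickly
    old iterates are forgotten in the average.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)
        xi, yi = X[i], y[i]
        ci = costs[0] if yi < 0 else costs[1]          # class-dependent cost
        grad = -ci * yi * xi / (1.0 + np.exp(yi * xi.dot(w)))  # cost-weighted logistic gradient
        eta_t = eta0 / t                               # step size for a strongly convex objective
        w = soft_threshold(w - eta_t * grad, eta_t * lam)      # COMID / proximal step for the L1 term
        rho = (gamma + 1.0) / (t + gamma)              # polynomial-decay averaging weight
        w_avg = (1.0 - rho) * w_avg + rho * w
    return w_avg
```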

Highlights

  • Classification under class imbalance, where one class is rare compared to another, is a common yet important problem in supervised learning

  • In this paper, we focus on the indirect method and present the first work on designing a sparse stochastic algorithm with an optimal convergence rate for pseudo-linear metrics

  • The reason lies in the fact that the proposed algorithm adopts polynomial-decay averaging, which effectively uses only the average of the last few iterates as the final output, whereas L1-composite objective mirror descent (COMID) averages all the iterates, which deteriorates the sparsity of the classifier (see the sketch after this list)
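
As a quick illustration of why this matters for sparsity, the snippet below (an assumption-laden sketch, not code from the paper) unrolls one common polynomial-decay recursion and compares it with uniform averaging: the decayed average concentrates almost all of its weight on the most recent iterates, so the zero coordinates produced by the latest soft-thresholding steps survive in the final output.

```python
import numpy as np

def averaging_weights(T, gamma=None):
    """Effective weight each iterate receives in the final average.

    gamma=None -> uniform averaging (as in plain averaged L1-COMID);
    gamma>=1   -> polynomial-decay averaging with parameter gamma (assumed scheme).
    """
    if gamma is None:
        return np.full(T, 1.0 / T)
    w = np.zeros(T)
    coeff = 1.0
    # Unroll avg_t = (1 - rho_t) * avg_{t-1} + rho_t * w_t backwards from t = T.
    for t in range(T, 0, -1):
        rho = (gamma + 1.0) / (t + gamma)
        w[t - 1] = coeff * rho
        coeff *= (1.0 - rho)
    return w

print(averaging_weights(10))            # uniform: every iterate weighted 0.1
print(averaging_weights(10, gamma=3))   # decay: the last iterates dominate the average
```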


Summary

Introduction

Classification under class imbalance, where one class is rare compared to another, is a common yet important problem in supervised learning. It arises in many applications, ranging from medical diagnosis and text retrieval to credit risk prediction and fraud detection [1]–[4]. The usual error rate is ill-suited as a performance measure in such settings, as it may bias the classifier towards the majority class and result in lower sensitivity in detecting the minority class. Classifiers for imbalanced settings are therefore often designed by optimizing alternative performance measures, and various algorithms have been proposed for this purpose; they are mainly categorized into two groups: direct methods and indirect methods.
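
To make this concrete, consider a toy example (the numbers are assumed for illustration, not taken from the paper): with 990 negative and 10 positive instances, the trivial classifier that always predicts the majority class already reaches 99% accuracy while never detecting a single minority example, which is exactly the bias that measures such as the F-measure, a pseudo-linear metric, are meant to expose.

```python
# Toy imbalance example (numbers assumed for illustration).
n_neg, n_pos = 990, 10
tp, fp, fn, tn = 0, 0, n_pos, n_neg              # "always predict majority" classifier
accuracy = (tp + tn) / (n_neg + n_pos)           # 0.99 -- looks excellent
recall = tp / (tp + fn)                          # 0.0  -- no minority example found
f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0  # 0.0  -- the F-measure exposes the failure
print(accuracy, recall, f1)
```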


