Abstract

An accelerated version of the proximal stochastic dual coordinate ascent (SDCA) algorithm for norm-regularised loss minimisation is presented, which introduces a momentum term while retaining the strong theoretical guarantees of SDCA. The method is applicable to several key machine learning optimisation problems, including the support vector machine (SVM), multiclass SVM, logistic regression, and ridge regression. In particular, Nesterov's estimate sequence technique is adopted to adjust the momentum weight coefficient dynamically and conveniently. The method is applied to training linear SVMs on large training datasets. Experimental results show that the proposed method achieves competitive classification performance and faster convergence than state-of-the-art algorithms.
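The abstract does not include pseudocode, so the following is only a minimal sketch of the general idea: SDCA inner epochs (here for ridge regression, where the dual coordinate update has a closed form) wrapped in a Catalyst-style outer loop with a Nesterov-style extrapolation step. The fixed momentum coefficient `beta`, the proximal weight `kappa`, and all iteration counts are illustrative assumptions, not the paper's estimate-sequence schedule.

```python
import numpy as np

def sdca_epoch(X, y, alpha, lam):
    """One SDCA epoch for ridge regression:
    min_w (1/2n)||Xw - y||^2 + (lam/2)||w||^2.
    For the squared loss the dual coordinate maximiser is closed-form."""
    n, d = X.shape
    w = X.T @ alpha / (lam * n)  # primal iterate induced by the duals
    for i in np.random.permutation(n):
        delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + (X[i] @ X[i]) / (lam * n))
        alpha[i] += delta
        w += delta * X[i] / (lam * n)  # keep w = X^T alpha / (lam n)
    return alpha, w

def accelerated_sdca(X, y, lam, kappa=0.1, outer_iters=30, inner_epochs=20, beta=0.5):
    """Hypothetical accelerated wrapper (fixed momentum beta, not the
    paper's estimate-sequence schedule). Each outer step solves
    min_w P(w) + (kappa/2)||w - z||^2 with SDCA, then extrapolates."""
    n, d = X.shape
    w = np.zeros(d)
    z = np.zeros(d)
    for _ in range(outer_iters):
        # With the change of variable v = w - c, the perturbed problem is
        # again ridge regression with shifted targets and penalty lam + kappa.
        c = (kappa / (lam + kappa)) * z
        y_shift = y - X @ c
        alpha = np.zeros(n)
        v = np.zeros(d)
        for _ in range(inner_epochs):
            alpha, v = sdca_epoch(X, y_shift, alpha, lam + kappa)
        w_prev, w = w, v + c
        z = w + beta * (w - w_prev)  # Nesterov-style momentum extrapolation
    return w
```

On a small ridge-regression instance the iterates approach the closed-form solution `(X^T X / n + lam I)^{-1} X^T y / n`; the momentum step reuses the displacement between consecutive outer iterates, which is what distinguishes the accelerated scheme from plain SDCA.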
