Abstract

This paper formulates supervised machine learning as a regularized loss-minimization problem in order to obtain a model that generalizes well. Recent studies have demonstrated the effectiveness of nonsmooth loss functions for supervised learning problems (Lyaqini et al. [1]). Motivated by this, we formulate the supervised learning problem with an L1 fidelity term. To solve the resulting nonsmooth optimization problem, we transform it into a min-max problem and propose a primal-dual method to handle it. This method yields an efficient and significantly faster numerical algorithm for supervised learning problems in the general case. To illustrate the effectiveness of the proposed approach, we present numerical validation experiments on both synthetic and real-life data, showing that our approach outperforms existing methods in terms of convergence speed and the quality and stability of the predicted models.
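The paper's exact model and algorithm are not reproduced here, but the pipeline it describes (L1 fidelity, min-max reformulation, primal-dual iteration) can be sketched for a linear model with an assumed ridge regularizer. The function name, the regularizer, and the step-size choice below are illustrative assumptions; the sketch follows a Chambolle-Pock-style iteration for min_w ||Xw - y||_1 + (lam/2)||w||^2, using the dual rewriting min_w max_{||p||_inf <= 1} <p, Xw - y> + (lam/2)||w||^2.

```python
import numpy as np

def primal_dual_l1_regression(X, y, lam=0.1, n_iter=500):
    """Chambolle-Pock-style primal-dual iteration (illustrative sketch) for
    min_w ||Xw - y||_1 + (lam/2)||w||^2, written as the min-max problem
    min_w max_{||p||_inf <= 1} <p, Xw - y> + (lam/2)||w||^2."""
    m, n = X.shape
    L = np.linalg.norm(X, 2)       # operator norm of X
    tau = sigma = 1.0 / L          # step sizes satisfying sigma * tau * L**2 <= 1
    w = np.zeros(n)
    w_bar = w.copy()               # extrapolated primal iterate
    p = np.zeros(m)                # dual variable
    for _ in range(n_iter):
        # dual ascent step, then projection onto the unit inf-norm ball
        p = np.clip(p + sigma * (X @ w_bar - y), -1.0, 1.0)
        # primal descent step; the prox of (lam/2)||w||^2 is a simple scaling
        w_new = (w - tau * (X.T @ p)) / (1.0 + tau * lam)
        # over-relaxation (extrapolation) step
        w_bar = 2.0 * w_new - w
        w = w_new
    return w
```

Because the L1 term and the indicator of the inf-norm ball are conjugate to each other, both proximal steps are closed-form, which is what makes the primal-dual scheme cheap per iteration despite the nonsmooth fidelity term.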
