Abstract

This study proposes a new regression method: lp-norm support vector regression (lp SVR). Classical SVRs minimize a hinge-type loss subject to a fixed l2-norm or l1-norm penalty; such methods are non-adaptive because the penalty form is pre-determined regardless of the data. Our model is an adaptive learning procedure with an lp-norm penalty (0 < p < 1), where the best p is chosen automatically from the data. By adjusting the parameter p, lp SVR can not only select relevant features but also improve regression accuracy. An iterative algorithm is suggested to solve lp SVR efficiently. Simulations and real data applications support the effectiveness of the proposed procedure.
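The abstract does not specify the iterative algorithm, but the general idea of lp-penalized regression with 0 < p < 1 can be sketched. The following is a minimal, hypothetical illustration, not the paper's method: it replaces the hinge-type loss with a smooth squared eps-insensitive surrogate and handles the non-convex lp penalty by iterative reweighting (majorize-minimize), solving a small weighted ridge problem at each step. The function name, parameters, and smoothing constant `delta` are all assumptions introduced for illustration.

```python
import numpy as np

def lp_svr_sketch(X, y, p=0.5, C=1.0, eps=0.1, n_iter=30, delta=1e-6):
    """Hypothetical sketch of an lp-penalized SVR solver (not the paper's algorithm).

    Objective: C * sum_i L_eps(y_i - x_i.w) + sum_j |w_j|^p, where L_eps is a
    squared eps-insensitive loss used as a smooth stand-in for the hinge-type
    loss.  The non-convex |w_j|^p term is majorized at the previous iterate by
    a quadratic in w, giving a weighted ridge subproblem at each step.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        # Quadratic surrogate weights for the lp penalty at the current w:
        # |w_j|^p <= const + (p/2) * (w_old_j^2 + delta)^{(p-2)/2} * w_j^2,
        # by concavity of u -> u^{p/2} for 0 < p < 1.
        pen = (p / 2.0) * (w ** 2 + delta) ** ((p - 2.0) / 2.0)
        r = y - X @ w
        active = np.abs(r) > eps          # points outside the eps-tube
        if not active.any():
            break
        Xa = X[active]
        # Shrink active targets onto the tube edge: (|r| - eps)^2 becomes an
        # ordinary squared error against y - eps * sign(r).
        ya = y[active] - eps * np.sign(r[active])
        # Weighted ridge step; A is positive definite since pen > 0.
        A = C * Xa.T @ Xa + np.diag(pen)
        b = C * Xa.T @ ya
        w = np.linalg.solve(A, b)
    return w
```

On sparse synthetic data this reweighting drives irrelevant coefficients toward zero while fitting the relevant ones, which mirrors the feature-selection behavior the abstract claims for the lp penalty.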
