Abstract

When the input features are generated by factors in a classification problem, it is more meaningful to identify important factors rather than individual features. The F∞-norm support vector machine (SVM) has been developed to perform automatic factor selection in classification. However, the F∞-norm SVM may suffer from estimation inefficiency and model selection inconsistency because it applies the same amount of shrinkage to each factor without assessing its relative importance. To overcome this limitation, we propose the adaptive F∞-norm (AF∞-norm) SVM, which penalizes the empirical hinge loss by the sum of adaptively weighted factor-wise ∞-norm penalties. The AF∞-norm SVM computes the weights from the 2-norm SVM estimator and can be formulated as a linear programming (LP) problem similar to that of the F∞-norm SVM. Simulation studies show that the proposed AF∞-norm SVM improves upon the F∞-norm SVM in terms of classification accuracy and factor selection performance.
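The LP formulation described above can be sketched in code. The sketch below is an illustrative reconstruction, not the authors' implementation: each factor g gets a sup-norm bound M_g on its coefficients, the objective is the summed hinge slacks plus the weighted sum of the M_g, and the adaptive weights are set to inverse pilot-estimate norms. The function name `af_inf_svm`, the `gamma` exponent, and the least-squares pilot fit (standing in for the 2-norm SVM estimator the paper uses) are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def af_inf_svm(X, y, groups, lam=1.0, beta_init=None, gamma=1.0):
    """Sketch of the adaptive F-infinity-norm SVM as a linear program.

    X: (n, p) feature matrix; y: labels in {-1, +1};
    groups: length-p array mapping each feature to its factor index.
    beta_init: pilot coefficients (the paper uses the 2-norm SVM
    estimate; here a least-squares fit is a hedged stand-in).
    """
    n, p = X.shape
    groups = np.asarray(groups)
    G = groups.max() + 1
    if beta_init is None:
        # Stand-in pilot estimator (assumption; the paper uses the 2-norm SVM).
        beta_init = np.linalg.lstsq(X, y, rcond=None)[0]
    # Adaptive weights: w_g = 1 / ||beta_init over factor g||_inf ** gamma.
    w = np.array([1.0 / (np.max(np.abs(beta_init[groups == g])) ** gamma + 1e-8)
                  for g in range(G)])

    # Variable layout: [beta0, beta (p), M (G), xi (n)].
    nv = 1 + p + G + n
    c = np.zeros(nv)
    c[1 + p:1 + p + G] = lam * w   # weighted factor-wise sup-norm penalty
    c[1 + p + G:] = 1.0            # hinge slacks

    A, b = [], []
    # Hinge constraints: -y_i * (beta0 + x_i . beta) - xi_i <= -1.
    for i in range(n):
        row = np.zeros(nv)
        row[0] = -y[i]
        row[1:1 + p] = -y[i] * X[i]
        row[1 + p + G + i] = -1.0
        A.append(row)
        b.append(-1.0)
    # Sup-norm linking: |beta_j| <= M_{g(j)}, written as two inequalities.
    for j in range(p):
        for s in (1.0, -1.0):
            row = np.zeros(nv)
            row[1 + j] = s
            row[1 + p + groups[j]] = -1.0
            A.append(row)
            b.append(0.0)

    bounds = [(None, None)] * (1 + p) + [(0, None)] * (G + n)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=bounds, method="highs")
    return res.x[0], res.x[1:1 + p]
```

Because large pilot estimates yield small weights, important factors are shrunk less than noise factors, which is the mechanism behind the improved factor selection the abstract reports.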
