Abstract

When the input features of a classification problem are generated by factors, it is more meaningful to identify important factors than individual features. The F∞-norm support vector machine (SVM) was developed to perform automatic factor selection in classification. However, the F∞-norm SVM may suffer from estimation inefficiency and model selection inconsistency because it applies the same amount of shrinkage to every factor without assessing its relative importance. To overcome this limitation, we propose the adaptive F∞-norm (AF∞-norm) SVM, which penalizes the empirical hinge loss by the sum of adaptively weighted factor-wise L∞-norm penalties. The AF∞-norm SVM computes the weights from the 2-norm SVM estimator and can be formulated as a linear programming (LP) problem similar to that of the F∞-norm SVM. Simulation studies show that the proposed AF∞-norm SVM improves upon the F∞-norm SVM in both classification accuracy and factor selection performance.
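The factor-wise sup-norm penalty described above admits an LP formulation: minimize the hinge slacks plus weighted per-factor bound variables t_g, with each coefficient constrained by |beta_j| <= t_g for its factor g. The sketch below is illustrative, not the paper's implementation: it uses `scipy.optimize.linprog`, uniform factor weights, and a made-up toy dataset; in the adaptive method the weights would instead be derived from a 2-norm SVM estimate (e.g., inversely proportional to each factor's fitted sup-norm).

```python
import numpy as np
from scipy.optimize import linprog

def finf_svm_lp(X, y, groups, weights, lam):
    """Sketch of a sup-norm (F-infinity-type) SVM as a linear program.

    minimize  sum_i xi_i + lam * sum_g w_g * t_g
    s.t.      xi_i >= 1 - y_i * (b + x_i' beta)   (hinge slack)
              |beta_j| <= t_{g(j)}                (factor-wise sup-norm)
    Decision variables, stacked: [b, beta (p), xi (n), t (G)].
    """
    n, p = X.shape
    G = len(set(groups))
    nvar = 1 + p + n + G

    c = np.zeros(nvar)
    c[1 + p:1 + p + n] = 1.0                   # hinge slacks
    c[1 + p + n:] = lam * np.asarray(weights)  # weighted factor bounds

    # Hinge constraints: -y_i*b - y_i*x_i'beta - xi_i <= -1
    A1 = np.zeros((n, nvar))
    A1[:, 0] = -y
    A1[:, 1:1 + p] = -y[:, None] * X
    A1[np.arange(n), 1 + p + np.arange(n)] = -1.0
    b1 = -np.ones(n)

    # Sup-norm constraints: beta_j - t_g <= 0 and -beta_j - t_g <= 0
    A2 = np.zeros((2 * p, nvar))
    for j, g in enumerate(groups):
        A2[2 * j, 1 + j] = 1.0
        A2[2 * j, 1 + p + n + g] = -1.0
        A2[2 * j + 1, 1 + j] = -1.0
        A2[2 * j + 1, 1 + p + n + g] = -1.0
    b2 = np.zeros(2 * p)

    # b and beta are free; slacks and factor bounds are nonnegative.
    bounds = [(None, None)] * (1 + p) + [(0, None)] * (n + G)
    res = linprog(c, A_ub=np.vstack([A1, A2]),
                  b_ub=np.concatenate([b1, b2]),
                  bounds=bounds, method="highs")
    return res.x[0], res.x[1:1 + p]

# Toy example: two factors of two features each; only factor 0 is informative.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 4))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
groups = [0, 0, 1, 1]
b0, beta = finf_svm_lp(X, y, groups, weights=[1.0, 1.0], lam=0.1)
acc = np.mean(np.sign(b0 + X @ beta) == y)
```

Because both the hinge loss and the sup-norm constraints are linear, any off-the-shelf LP solver applies; the adaptive weights change only the objective coefficients on t_g, not the constraint structure.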
