Abstract
This paper introduces two new approaches to building sparse least squares support vector machines (LSSVM) based on genetic algorithms (GAs) for classification tasks. LSSVM classifiers are an alternative to SVM classifiers because their training requires only the solution of a linear system of equations instead of a quadratic programming optimization problem. However, the absence of sparseness in the Lagrange multiplier vector (i.e. the solution) is a significant obstacle to the effective use of these classifiers. To overcome this lack of sparseness, we propose both single- and multi-objective GA approaches that leave some support vectors out of the solution without affecting the classifier's accuracy, and in some cases even improving it. The main idea is to discard outliers, non-relevant patterns, and patterns likely to be corrupted by noise, which otherwise prevent the classifiers from achieving higher accuracies with a reduced set of support vectors. Unlike previous works, the genetic algorithms are used here to obtain sparseness, not to find the optimal values of the LSSVM hyper-parameters.
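To make the two ideas in the abstract concrete, the following is a minimal sketch (not the authors' exact formulation): (i) LSSVM training reduces to solving a single linear system, and (ii) a simple single-objective GA searches for a sparse subset of support vectors. The RBF kernel, the hyper-parameter values, the fitness function, and the GA operators below are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    # LSSVM dual problem: one (n+1) x (n+1) linear system,
    # instead of the QP required by the standard SVM.
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, Lagrange multipliers alpha

def lssvm_predict(X_sv, y_sv, alpha, b, X_new, sigma=1.0):
    K = rbf_kernel(X_new, X_sv, sigma)
    return np.sign(K @ (alpha * y_sv) + b)

def ga_sparse_lssvm(X, y, Xval, yval, pop=20, gens=30, rng=np.random.default_rng(0)):
    # Each chromosome is a binary mask over the training points; a bit set to 1
    # keeps that point as a support vector. Fitness rewards validation accuracy
    # and lightly penalizes the number of retained vectors (illustrative choice).
    n = len(y)
    def fitness(mask):
        if mask.sum() < 2:
            return -np.inf
        b, a = lssvm_train(X[mask], y[mask])
        acc = (lssvm_predict(X[mask], y[mask], a, b, Xval) == yval).mean()
        return acc - 0.001 * mask.sum()
    P = rng.random((pop, n)) < 0.5
    for _ in range(gens):
        f = np.array([fitness(m) for m in P])
        P = P[np.argsort(f)[::-1]]                # elitist sort by fitness
        children = []
        while len(children) < pop // 2:
            p1, p2 = P[rng.integers(0, pop // 2, size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([p1[:cut], p2[cut:]])   # one-point crossover
            flip = rng.random(n) < 0.01                    # bit-flip mutation
            children.append(np.logical_xor(child, flip))
        P = np.vstack([P[: pop - len(children)], children])
    return P[0]                                   # best support-vector mask found
```

The returned mask plays the role of the sparse solution: only the selected training points are used in the final classifier, while the discarded ones (intended to be outliers or noisy patterns) no longer contribute Lagrange multipliers. A multi-objective variant would instead keep a Pareto front over (accuracy, number of support vectors) rather than collapsing both criteria into one penalized fitness value.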