Abstract

The multi-view learning mechanism, which improves learning performance by training on multi-view data sets, has become a popular field in recent years. The multi-view generalized eigenvalue proximal support vector machine (MvGSVM), a recently proposed classifier that incorporates multi-view learning into the classical GEPSVM, has been shown to be successful in multi-view classification. However, this method is still based on the squared L2-norm distance measure, so its robustness is not guaranteed in the presence of outliers. To address this problem, we propose a robust multi-view GEPSVM based on Lp-norm minimization and Ls-norm maximization. However, introducing the Lp-norm and Ls-norm makes the problem differ from a generalized eigenvalue problem, so we design an efficient iterative algorithm to solve it and prove the algorithm's convergence. Extensive experiments demonstrate the effectiveness and robustness of the algorithm.
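The concrete update rules of the proposed solver are not reproduced in this summary. As a rough illustration only, ratio objectives of the form min ||E z||_p^p / ||F z||_s^s are often handled with an iteratively reweighted generalized-eigenvalue scheme, where each sweep solves a weighted L2 surrogate. The Python sketch below shows that generic idea; the function name lp_ls_plane, the regularizer delta, the initialization, and the stopping rule are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.linalg import eigh

def lp_ls_plane(A, B, p=1.0, s=1.0, iters=50, eps=1e-8, delta=1e-4):
    """Illustrative iteratively reweighted solver for a plane z = [w; b]
    minimizing ||E z||_p^p / ||F z||_s^s; NOT the paper's exact algorithm."""
    E = np.hstack([A, np.ones((A.shape[0], 1))])  # class-1 samples, augmented with 1s
    F = np.hstack([B, np.ones((B.shape[0], 1))])  # class-2 samples, augmented with 1s
    d = E.shape[1]
    z = np.linalg.svd(E)[2][-1]  # init: right singular vector of smallest singular value
    for _ in range(iters):
        r1 = np.abs(E @ z) + eps  # numerator residuals (eps avoids division by zero)
        r2 = np.abs(F @ z) + eps  # denominator residuals
        # Reweighting turns the Lp / Ls terms into weighted L2 terms
        G = E.T @ ((r1 ** (p - 2))[:, None] * E) + delta * np.eye(d)
        H = F.T @ ((r2 ** (s - 2))[:, None] * F) + delta * np.eye(d)
        # Weighted L2 surrogate: smallest generalized eigenvector of G z = lam H z
        z_new = eigh(G, H)[1][:, 0]
        if np.linalg.norm(z_new - np.sign(z_new @ z) * z) < 1e-6:  # sign-invariant check
            z = z_new
            break
        z = z_new
    return z[:-1], z[-1]  # (w, b) defining the plane w^T x + b = 0
```

For p = s = 2 the weights are constant and a single sweep reduces to the (regularized) GEPSVM eigenproblem, which is one way to sanity-check the loop.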

Highlights

  • Support vector machine (SVM) [1]–[3], as a supervised learning tool [4], has proved powerful in pattern recognition and data mining over the past decades

  • Two main constraints limit the original SVM's reach to a wider range of applications: the complexity of its Quadratic Programming Problems (QPPs) [11] and its inability to handle Exclusive-Or (XOR) problems

  • By introducing a multi-view co-regularization term to associate the two views, Sun [35] proposed an improved version of the generalized eigenvalue proximal support vector machine (GEPSVM), termed multi-view learning with GEPSVM (MvGSVM), which converts a complex optimization problem into a generalized eigenvalue problem


Summary

INTRODUCTION

Support vector machine (SVM) [1]–[3], as a supervised learning tool [4], has proved powerful in pattern recognition and data mining over the past decades. Inspired by the optimization objective of GEPSVM, Jayadeva et al. proposed the twin support vector machine (TWSVM) [13], an important branch of SVM that seeks two nonparallel hyperplanes by solving two small-scale QPPs instead of generalized eigenvalue problems. By introducing a multi-view co-regularization term to associate the two views, Sun [35] proposed an improved version of GEPSVM, termed multi-view learning with GEPSVM (MvGSVM), which converts a complex optimization problem into a generalized eigenvalue problem. Yan and Yan [45] reconstructed the ratio terms of GEPSVM with the L1-norm metric, seeking two nonparallel planes by solving a pair of QPPs. Yan et al. [46] proposed the L1-norm projection twin SVM (L1-TWSVM), which formulates an unconstrained convex programming problem and generates multiple projection axes for each class through a recursive algorithm, and has been shown to deliver more stable performance.
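For concreteness: GEPSVM obtains the plane for class 1 by minimizing the Rayleigh quotient ||A w + e b||^2 / ||B w + e b||^2 (A holds class-1 samples, B class-2 samples, e a vector of ones), which amounts to taking the eigenvector of the smallest generalized eigenvalue. Below is a minimal sketch of that step, assuming a Tikhonov regularizer delta and illustrative names throughout:

```python
import numpy as np
from scipy.linalg import eigh

def gepsvm_planes(A, B, delta=1e-4):
    """Sketch of GEPSVM: one proximal plane per class via generalized eigenproblems.
    A, B: samples of class 1 and class 2; names and delta are illustrative."""
    def plane(close, far):
        E = np.hstack([close, np.ones((close.shape[0], 1))])  # stay near these points
        F = np.hstack([far, np.ones((far.shape[0], 1))])      # stay far from these
        d = E.shape[1]
        G = E.T @ E + delta * np.eye(d)  # Tikhonov term keeps the problem well posed
        H = F.T @ F + delta * np.eye(d)
        z = eigh(G, H)[1][:, 0]          # eigenvector of the smallest eigenvalue
        return z[:-1], z[-1]             # (w, b): plane w^T x + b = 0
    return plane(A, B), plane(B, A)

def predict(x, planes):
    """Assign x to the class whose plane is nearer (distance |w^T x + b| / ||w||)."""
    dists = [abs(w @ x + b) / np.linalg.norm(w) for w, b in planes]
    return int(np.argmin(dists))
```

A new point is then assigned to the class whose proximal plane lies nearer, as in predict above.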

RELATED WORK
GEPSVM
MvGSVM
ALGORITHMIC ANALYSIS
Findings
EXPERIMENT