Abstract

The solution of least squares support vector machines (LS-SVMs) is characterized by a specific linear system, namely a saddle point system. Approaches to its numerical solution, such as conjugate gradient methods (Suykens and Vandewalle, 1999) and null space methods (Chu et al., 2005), have been proposed. To speed up the solution of LS-SVM, this paper employs the minimal residual (MINRES) method to solve the above saddle point system directly. Theoretical analysis indicates that the MINRES method is more efficient than the conjugate gradient method and the null space method for solving the saddle point system. Experiments on benchmark data sets show that, compared with mainstream algorithms for LS-SVM, the proposed approach significantly reduces the training time while maintaining comparable accuracy. Finally, the MINRES-based LS-SVM is used to tackle a practical problem originating from the blast furnace iron-making process: predicting the changing trend of the silicon content in hot metal. The MINRES-based LS-SVM can effectively perform feature reduction and model selection simultaneously, so it is a practical tool for the silicon trend prediction task.

Highlights

  • As a kernel method, SVM works by embedding the input data x, z ∈ X into a Hilbert space H via a high-dimensional mapping Φ(·) and trying to find a linear relation among the high-dimensional embedded data points [1], [2]

  • The model training of LS-SVM is performed by solving a specific system of linear equations, that is, a saddle point system, which can be solved efficiently by iterative methods instead of a quadratic programming problem

  • Practical application to a typical real BF indicates that the established MINRES-based LS-SVM model is a good candidate for predicting the changing trend of the silicon content in BF hot metal at low time cost



Introduction

SVM works by embedding the input data x, z ∈ X into a Hilbert space H via a high-dimensional mapping Φ(·) and trying to find a linear relation among the high-dimensional embedded data points [1], [2]. This process is performed implicitly by specifying a kernel function satisfying k(x, z) = Φ(x)ᵀΦ(z), that is, the inner product of the embedded points. To speed up the training of LS-SVM, Chu et al. [8] employed the null space method to transform the saddle point system into a reduced symmetric positive definite system of order n − 1, which was then solved with the CG algorithm.
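To make the setup above concrete, the following is a minimal sketch of training an LS-SVM by solving its saddle point system with MINRES, using SciPy's `minres` solver. The RBF kernel, the hyperparameter values, and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.sparse.linalg import minres


def rbf_kernel(X, Z, sigma=1.0):
    # k(x, z) = exp(-||x - z||^2 / (2 * sigma^2)), an example kernel choice
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))


def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM saddle point system
        [ 0    1^T          ] [b]       [0]
        [ 1    K + I/gamma  ] [alpha] = [y]
    directly with MINRES (the matrix is symmetric but indefinite)."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol, info = minres(A, rhs)          # info == 0 means it converged
    return sol[1:], sol[0]              # alpha, bias b


def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    # f(x) = sum_i alpha_i * k(x, x_i) + b
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

A toy usage: training on a few linearly separable points and evaluating the decision function at the training inputs should reproduce the label signs, since the fitting errors are shrunk by the 1/gamma regularization term on the diagonal.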

