Abstract

Multi-output regression maps a multivariate input feature space to a multivariate output space. A natural approach is to extend the traditional support vector regression (SVR) mechanism to the multi-output case. However, methods that combine single-output SVR models suffer from the severe drawback of ignoring possible correlations between the outputs, while existing multi-output SVRs exhibit high computational complexity and are typically sensitive to parameters under the influence of noise. To address these problems, in this study, we determine the multi-output regression function through a pair of nonparallel up- and down-bound functions obtained by solving two smaller-sized quadratic programming problems, which yields a fast learning speed. We name this method multi-output twin support vector regression (M-TSVR). Moreover, for the case of heteroscedastic noise, we build on M-TSVR and introduce a pair of multi-input/multi-output nonparallel parameter-insensitive up- and down-bound functions, yielding a regression model named multi-output parameter-insensitive twin support vector regression (M-PITSVR). To handle the nonlinear case, we derive kernelized extensions of both M-TSVR and M-PITSVR. Finally, comparative experiments against several other multi-output methods are performed on twelve multi-output datasets. The experimental results indicate that the proposed multi-output regressors achieve fast learning speed together with better and more stable prediction performance.
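The twin-bound idea described above can be illustrated with a heavily simplified sketch: fit a linear down-bound function to the outputs shifted down by an insensitivity margin and an up-bound function to the outputs shifted up, then average the two bounds to form the final multi-output regressor. This sketch replaces the paper's two inequality-constrained quadratic programs with ridge-regularized least squares for brevity, so it is an illustration of the bound-averaging structure only, not the authors' M-TSVR formulation; all function names, the margin parameters `eps1`/`eps2`, and the regularization constant `c` are illustrative assumptions.

```python
import numpy as np

def fit_twin_bounds(X, Y, eps1=0.1, eps2=0.1, c=1e-3):
    """Fit linear down- and up-bound functions to a multi-output target.

    Least-squares simplification of the twin-bound scheme: the down bound
    tracks Y - eps1 and the up bound tracks Y + eps2. Returns two augmented
    weight matrices (last row holds the bias terms).
    """
    n = X.shape[0]
    G = np.hstack([X, np.ones((n, 1))])        # augmented design [X | 1]
    H = G.T @ G + c * np.eye(G.shape[1])       # ridge-regularized normal matrix
    U_down = np.linalg.solve(H, G.T @ (Y - eps1))
    U_up = np.linalg.solve(H, G.T @ (Y + eps2))
    return U_down, U_up

def predict(X, U_down, U_up):
    """Final regressor: average of the down- and up-bound functions."""
    G = np.hstack([X, np.ones((X.shape[0], 1))])
    return 0.5 * (G @ U_down + G @ U_up)

# Synthetic multi-output data: 3 input features, 2 correlated outputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
W_true = rng.normal(size=(3, 2))
Y = X @ W_true + 0.01 * rng.normal(size=(200, 2))

U_down, U_up = fit_twin_bounds(X, Y)
Y_hat = predict(X, U_down, U_up)
```

With symmetric margins (`eps1 == eps2`) the two shifts cancel in the average, so the sketch reduces to a single ridge fit; the twin structure matters in the paper's QP setting, where the two bounds are nonparallel and each problem is smaller than a single monolithic SVR problem.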
