Abstract

Regularized multi-task learning (RMTL) has shown good performance on multi-task binary classification problems. Although RMTL can be extended to multi-class problems via the “one-versus-one” and “one-versus-rest” techniques, these decompositions do not fully exploit the sample information and suffer from class imbalance. Motivated by the regularization technique in RMTL, we propose a novel multi-task multi-class model, termed MTKSVCR, based on the “one-versus-one-versus-rest” strategy to achieve better testing accuracy. Following the idea of RMTL, the information shared among multiple tasks is mined by assigning different penalty parameters to the task-common and task-specific regularization terms. However, MTKSVCR is time-consuming because it employs all samples in each optimization problem. We therefore present a multi-parameter safe acceleration rule, termed SA, to reduce the training time. Before solving, SA identifies and deletes most of the superfluous samples, namely those corresponding to zero entries in the dual optimal solution, so that only a reduced dual problem needs to be solved and computational efficiency improves accordingly. The key advantage of SA is its safety: it yields exactly the same optimal solution as solving the original problem without screening. In addition, the rule remains effective when multiple parameters change simultaneously. Experiments on artificial and benchmark datasets verify the validity of the proposed methods.
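For context, the regularization structure referred to above can be sketched in the classical RMTL form, where each task's weight vector splits into a shared part and a task-specific part. The symbols below ($w_0$, $v_t$, $\lambda_1$, $\lambda_2$, $C$, slacks $\xi_{ti}$) are illustrative assumptions; the exact loss and constraints of MTKSVCR are given in the paper.

```latex
% RMTL-style split (assumed): task-t weight = shared part + task-specific part
\[
  w_t = w_0 + v_t, \qquad t = 1, \dots, T,
\]
% Different penalties \lambda_1, \lambda_2 on the two parts control
% how strongly the tasks are coupled; C weights the empirical loss.
\[
  \min_{w_0,\; \{v_t\},\; \xi \ge 0} \quad
    \underbrace{\lambda_2 \lVert w_0 \rVert^2}_{\text{task-common}}
    \;+\;
    \underbrace{\frac{\lambda_1}{T} \sum_{t=1}^{T} \lVert v_t \rVert^2}_{\text{task-specific}}
    \;+\; C \sum_{t=1}^{T} \sum_{i=1}^{m_t} \xi_{ti}.
\]
```

A large $\lambda_1$ forces all tasks toward the common $w_0$ (near single-task pooling), while a large $\lambda_2$ pushes tasks apart toward independent models.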
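The workflow of the safe acceleration step can likewise be illustrated with a short sketch. The bound that certifies which dual variables are zero is the paper's actual contribution, so `certified_zero_bound` and `qp_solver` below are hypothetical placeholders standing in for the SA rule and for any QP solver of the reduced dual.

```python
import numpy as np

def safe_screen_and_solve(K, y, C, certified_zero_bound, qp_solver):
    """Generic safe-screening workflow (a sketch, not the paper's exact rule).

    certified_zero_bound: hypothetical callable returning, per sample, an
        upper bound on its optimal dual variable; samples whose bound is
        zero are provably inactive and can be deleted before solving.
    qp_solver: any solver for the reduced dual quadratic program.
    """
    n = len(y)
    upper = certified_zero_bound(K, y, C)   # screening bounds for all samples
    keep = np.flatnonzero(upper > 0)        # samples that may still be active
    # Solve only the reduced dual over the surviving samples.
    alpha = np.zeros(n)
    alpha[keep] = qp_solver(K[np.ix_(keep, keep)], y[keep], C)
    return alpha
```

Because the deleted coordinates are provably zero at the optimum, padding the reduced solution with zeros recovers the full solution exactly, which is what “safe” means here.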
