Abstract

Multilinear multitask learning (MLMTL) considers an MTL problem in which tasks are arranged by multiple indices. By exploiting the higher-order correlations among tasks, MLMTL is expected to improve on traditional MTL, which considers only first-order correlation across all tasks, e.g., the low-rank structure of the coefficient matrix. The key to MLMTL is designing a rational regularization term that represents the latent correlation structure underlying the coefficient tensor rather than a coefficient matrix. In this paper, we propose a new MLMTL model that employs a rank-product regularization term in the objective, which on one hand automatically rectifies the weights along all tensor modes and on the other hand has an explicit physical meaning. With this regularization, the intrinsic high-order correlations among tasks can be described more precisely, and thus the overall performance on all tasks can be improved. To solve the resulting optimization problem, we design an efficient algorithm based on the alternating direction method of multipliers (ADMM). We also analyze its convergence and show that the proposed algorithm, under certain restrictions, is asymptotically regular. Experiments on both synthetic and real data sets substantiate the superiority of the proposed method over existing MLMTL methods in terms of accuracy and efficiency.
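The abstract does not spell out the rank-product regularizer or the paper's ADMM updates, so the following is only a minimal illustrative sketch of the general approach: an ADMM loop that encourages low rank along every mode of a coefficient tensor, here using the standard overlapped trace-norm surrogate (sum of nuclear norms of the mode unfoldings) in place of the paper's rank-product term. All function names and the denoising-style objective are assumptions for illustration, not the authors' method.

```python
import numpy as np

def unfold(T, mode):
    # Mode-k unfolding: move axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    # Inverse of unfold: reshape and move the axis back into place.
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def svt(M, tau):
    # Singular-value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def admm_lowrank_tensor(Y, lam=0.1, rho=1.0, n_iter=100):
    """ADMM sketch for  min_W  0.5*||W - Y||_F^2 + lam * sum_k ||W_(k)||_*,
    a simplified stand-in for the coefficient-tensor update in MLMTL
    (overlapped trace norm, NOT the paper's rank-product regularizer)."""
    shape, K = Y.shape, Y.ndim
    Z = [np.zeros(shape) for _ in range(K)]  # one auxiliary tensor per mode
    U = [np.zeros(shape) for _ in range(K)]  # scaled dual variables
    W = Y.copy()
    for _ in range(n_iter):
        # W-update: quadratic subproblem, closed form.
        W = (Y + rho * sum(Z[k] - U[k] for k in range(K))) / (1.0 + rho * K)
        for k in range(K):
            # Z_k-update: SVT applied to the mode-k unfolding.
            Z[k] = fold(svt(unfold(W + U[k], k), lam / rho), k, shape)
            # Dual ascent step.
            U[k] += W - Z[k]
    return W
```

The mode-wise splitting is what makes ADMM attractive here: each Z_k subproblem reduces to an SVD of one unfolding, so higher-order structure is handled without ever forming a single huge matrix problem.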
