Abstract

Multitask learning is the joint learning of related tasks with shared representations, so that each task can help the others perform better. One common multitask learning formulation is a regularized convex minimization problem, for which many optimization techniques are available in the literature. In this paper, we consider the non-smooth convex minimization problem with sparsity-inducing regularizers arising in the multitask learning framework, which can be solved efficiently with proximal algorithms. Because traditional proximal gradient methods converge slowly, a recent trend is to accelerate them. We present a new accelerated gradient method for the multitask regression framework that not only outperforms its non-accelerated counterpart and the traditional accelerated proximal gradient method but also improves prediction accuracy. We also prove the convergence and stability of the algorithm under a few specific conditions. To demonstrate the applicability of our method, we performed experiments on several real multitask learning benchmark datasets. Empirical results show that our method outperforms previous methods in terms of convergence, accuracy, and computational time.
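For context, the following is a minimal sketch of the standard accelerated proximal gradient (FISTA) scheme applied to a multitask regression objective with an L2,1 sparsity-inducing regularizer, min_W 1/2 ||XW - Y||_F^2 + lam * ||W||_{2,1}. It illustrates the class of methods the paper builds on, not the proposed algorithm itself; the function names and the choice of regularizer are illustrative assumptions.

```python
import numpy as np

def prox_l21(W, t):
    """Proximal operator of t * ||W||_{2,1}: row-wise group soft-thresholding."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * W

def fista_multitask(X, Y, lam, n_iter=500):
    """Generic FISTA sketch for min_W 0.5*||XW - Y||_F^2 + lam*||W||_{2,1}.

    X: (n, d) shared design matrix, Y: (n, k) responses for k tasks.
    This is the classical Beck-Teboulle scheme, not the paper's method.
    """
    d, k = X.shape[1], Y.shape[1]
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    W = np.zeros((d, k))                   # current iterate
    Z = W.copy()                           # extrapolated (momentum) point
    t = 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ Z - Y)           # gradient of the smooth loss at Z
        W_next = prox_l21(Z - grad / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        Z = W_next + ((t - 1.0) / t_next) * (W_next - W)   # Nesterov extrapolation
        W, t = W_next, t_next
    return W
```

A typical call would be `W = fista_multitask(X, Y, lam=0.1)`, where each column of the returned W holds the regression weights for one task and the L2,1 penalty encourages rows (features) to be zeroed out jointly across tasks.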
