Abstract

Multi-task learning (MTL) improves the generalization performance of all tasks by learning them simultaneously and exploiting the relationships among them. Multi-task sparse feature learning, formulated under the regularization framework, is one of the main approaches to MTL; the regularization term is therefore crucial for multi-task sparse feature models. While most existing models use convex sparse regularization, the non-convex capped-ℓ1 regularization has been extended to MTL and shown to be a powerful sparsity-inducing term. In this paper, we propose a novel regularization term for MTL by extending the non-convex ℓ1−2 regularization to the multi-task setting. The proposed term not only induces group sparsity to extract the common features shared by all tasks, but also learns task-specific features through the relaxation provided by its second term. Although the model formulation is similar to one proposed for multi-class problems, we are the first to extend ℓ1−2 regularization to multi-task learning so that both common and task-specific features can be extracted. A classical multi-task learning model, Multi-task Feature Selection (MTFS), can be viewed as a special case of our model. Because of the complexity of the regularization term, we approximate the original problem by a locally linear subproblem and solve it with the Alternating Direction Method of Multipliers (ADMM). Theoretical analysis establishes the convergence of the proposed algorithm, and its time complexity is given. Experimental results demonstrate the effectiveness of the proposed method.
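
To make the construction concrete (this is an illustration under our own assumptions, not the paper's exact notation), consider a weight matrix W in R^{d x T} with one row w^j per feature and one column per task. The natural extension of ℓ1−2 regularization to MTL applies the ℓ1−2 penalty to the vector of row norms, which can plausibly be written as

    \Omega(W) = \|W\|_{2,1} - \alpha \|W\|_F = \sum_{j=1}^{d} \|w^j\|_2 - \alpha \Big( \sum_{j=1}^{d} \|w^j\|_2^2 \Big)^{1/2},

where the ℓ2,1 term drives whole rows to zero (group sparsity, i.e., common feature selection across tasks), the subtracted Frobenius term relaxes that pressure so task-specific features can survive, and alpha = 0 recovers the ℓ2,1 regularizer of MTFS as the special case mentioned above. The algorithmic scheme the abstract describes (linearizing around the current iterate and solving the resulting convex subproblem with ADMM) might then be sketched as follows under a squared loss; the splitting W = Z, the penalty parameter rho, and all function names here are assumptions for illustration, not the paper's code:

    # Minimal sketch of a linearize-then-ADMM scheme for the objective
    #   sum_t 0.5*||X_t w_t - y_t||^2 + lam*(||W||_{2,1} - alpha*||W||_F).
    # The concave part -alpha*||W||_F is linearized at the current iterate,
    # and the resulting convex subproblem is solved by ADMM with W = Z.
    import numpy as np

    def row_soft_threshold(M, tau):
        """Proximal operator of tau*||.||_{2,1}: shrink each row's l2 norm."""
        norms = np.linalg.norm(M, axis=1, keepdims=True)
        return np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0) * M

    def mtl_l1_minus_l2(Xs, ys, lam=0.1, alpha=1.0, rho=1.0,
                        outer_iters=20, admm_iters=50):
        """Xs/ys: per-task design matrices and targets; returns W (d x T)."""
        d, T = Xs[0].shape[1], len(Xs)
        W = np.zeros((d, T))
        # The per-task ridge-like W-updates reuse these factorizations.
        invs = [np.linalg.inv(X.T @ X + rho * np.eye(d)) for X in Xs]
        Xty = [X.T @ y for X, y in zip(Xs, ys)]
        for _ in range(outer_iters):
            # Subgradient of alpha*||W||_F at the current point (0 guarded).
            G = alpha * W / max(np.linalg.norm(W), 1e-12)
            Z, U = W.copy(), np.zeros_like(W)
            for _ in range(admm_iters):
                for t in range(T):  # W-update: closed form per task
                    rhs = Xty[t] + lam * G[:, t] + rho * (Z[:, t] - U[:, t])
                    W[:, t] = invs[t] @ rhs
                Z = row_soft_threshold(W + U, lam / rho)  # prox of l2,1
                U += W - Z  # scaled dual update
        return Z

Each outer iteration fixes the subgradient G of the concave term (a standard difference-of-convex linearization), so every inner ADMM subproblem is convex, with a closed-form W-update and a row-wise soft-thresholding Z-update; this matches the abstract's description of approximating the original problem by a locally linear subproblem solved with ADMM.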
