Multitask learning (MTL) is a joint learning paradigm that learns multiple related tasks together to achieve better performance than single-task learning methods. Many researchers have observed that different tasks with certain similarities share a common low-dimensional latent subspace. To capture this low-rank structure shared across tasks, the trace norm has been used as a convex relaxation of the rank minimization problem. However, the trace norm is not a tight approximation of the rank function. To address this issue, we propose two novel regularization-based models that approximate the rank minimization problem by minimizing the k smallest singular values. In our new models, if the smallest singular values are suppressed to zero, the rank is reduced accordingly. Compared with the standard trace norm, our new regularizers are tighter approximations, which helps our models better capture the low-dimensional subspace shared among multiple tasks. Moreover, directly solving the exact rank minimization problem for our models is NP-hard. In this article, we propose two simple but effective optimization strategies that tackle the exact rank minimization problem by setting a large penalty parameter. Experimental results on synthetic and real-world benchmark datasets demonstrate that the proposed models can learn the low-rank structure shared across tasks and achieve better performance than classical MTL methods.
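To make the contrast concrete, the following sketch writes out the two penalties under a standard MTL convention that is assumed here rather than taken from the abstract: $W \in \mathbb{R}^{d \times m}$ stacks the parameter vectors of the $m$ tasks, $\sigma_1(W) \ge \cdots \ge \sigma_{r}(W)$ with $r = \min(d, m)$ denote its singular values, and $\mathcal{R}_k$ is a hypothetical name for the proposed regularizer:
$$
\|W\|_* \;=\; \sum_{i=1}^{r} \sigma_i(W),
\qquad
\mathcal{R}_k(W) \;=\; \sum_{i=r-k+1}^{r} \sigma_i(W).
$$
Driving the $k$ terms in $\mathcal{R}_k(W)$ to zero forces $\mathrm{rank}(W) \le r - k$, whereas the trace norm $\|W\|_*$ also shrinks the large singular values that carry the shared structure, which is why minimizing only the smallest singular values can yield a tighter surrogate for the rank.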