Cloth-changing person re-identification (CC-ReID) is more challenging than standard person re-identification (ReID) because cloth-relevant features are unreliable. Existing CC-ReID methods produce a single latent vector feature for each pedestrian image. However, on the one hand, the latent vector generated by the pooling layer loses substantial spatial information. On the other hand, the features attended to for a given pedestrian image remain the same regardless of which pedestrian image it is compared against, which ignores the relational information between pedestrian pairs. To address this, we propose a Multitask Tensor-based Relation Network (MTTRN) for CC-ReID. In MTTRN, we use augmented cloth-changing identity images, human parsing images, and head images to guide the model to learn more fine-grained cloth-irrelevant feature cues. We propose a novel channel subspace generator and use the tensor mode-n product to generate subspace tensor features in place of the vector feature produced by the pooling layer, thereby retaining more spatial information. Furthermore, we assume that the model should attend to different features for the same pedestrian image when it is compared against different pedestrian images in the same feature subspace, or for the same pedestrian image pair in different feature subspaces; in this way, the tensor feature pair relation is fully exploited to mine more robust features. Extensive experiments on three CC-ReID benchmark datasets show that our method achieves state-of-the-art or competitive performance and demonstrate the robustness of our model.
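To illustrate why the mode-n product preserves spatial layout where global pooling does not, here is a minimal NumPy sketch. The function name, the feature-map shape, and the random projection matrix are all hypothetical, chosen only to show the contraction; they are not the paper's actual channel subspace generator.

```python
import numpy as np

def mode_n_product(tensor, matrix, n):
    """Mode-n product: contract mode n of `tensor` (size I_n) with the
    columns of `matrix` (shape J x I_n), replacing that mode's size
    I_n with J while leaving every other mode untouched."""
    # tensordot contracts tensor axis n with matrix axis 1 and appends
    # the new axis (size J) at the end; move it back to position n.
    out = np.tensordot(tensor, matrix, axes=([n], [1]))
    return np.moveaxis(out, -1, n)

# Hypothetical feature map of shape (C, H, W) = (4, 3, 3), projected
# along the channel mode (n = 0) into a 2-dimensional subspace.
feat = np.random.rand(4, 3, 3)
proj = np.random.rand(2, 4)        # subspace projection matrix (J x C)
sub = mode_n_product(feat, proj, 0)
print(sub.shape)                   # (2, 3, 3)
```

Unlike a pooling layer, which would collapse the feature map to a length-C vector, the resulting subspace tensor keeps the full H x W spatial grid, only the channel mode is compressed.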