Semantic segmentation methods for remote-sensing images based on deep learning frameworks have achieved significant performance improvements. However, most existing work is fully supervised and relies on large numbers of manually annotated pixel-level labels, and for remote-sensing images, labeling the ground truth takes considerable time and effort. To reduce this reliance on manual annotation, in this study, we propose a semisupervised convolutional neural network based on contrastive loss for segmenting partially unlabeled remote-sensing images. To capture the semantic relationships between pixels and improve the separability of different categories, we design two contrastive loss functions: a pixel-level contrastive loss that learns correlations across different images, and a region-level contrastive loss that improves the quality of the generated pseudo-labels. In addition, we design a propagated self-training method that further guarantees pseudo-label quality and enriches the labeled data. Experiments on the Potsdam and Vaihingen datasets demonstrate that the proposed method achieves the highest mean Intersection over Union (mIoU) among the compared methods and significantly outperforms previous approaches.
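To make the pixel-level contrastive idea concrete, the sketch below implements a supervised InfoNCE-style contrastive loss over pixel embeddings: pixels sharing a class label (ground truth or pseudo-label) act as positives, all others as negatives. The function name, the temperature value, and the use of NumPy are illustrative assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def pixel_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised InfoNCE-style contrastive loss over pixel embeddings.

    embeddings: (N, D) array of L2-normalised pixel feature vectors
    labels:     (N,) array of class ids (ground-truth or pseudo-labels)
    """
    n = embeddings.shape[0]
    sim = embeddings @ embeddings.T / temperature        # (N, N) cosine similarities
    logits_mask = ~np.eye(n, dtype=bool)                 # exclude self-similarity
    # Numerically stabilised log-softmax over all other pixels.
    sim_max = sim.max(axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * logits_mask
    log_prob = (sim - sim_max) - np.log(exp_sim.sum(axis=1, keepdims=True))
    # Positives: other pixels carrying the same label as the anchor.
    pos_mask = (labels[:, None] == labels[None, :]) & logits_mask
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                               # anchors with >= 1 positive
    loss_per_anchor = -(pos_mask * log_prob).sum(axis=1)[valid] / pos_counts[valid]
    return loss_per_anchor.mean()
```

Minimising this loss pulls same-class pixel embeddings together and pushes different-class embeddings apart, which is what improves inter-category separability; a region-level variant would pool embeddings over pseudo-labeled regions before applying the same objective.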