Semi-supervised semantic segmentation in remote sensing is critical for urban planning, environmental monitoring, and disaster response, yet the high cost and time required for high-quality annotation limit its wider application. Traditional semi-supervised deep learning methods apply perturbations in only a single dimension, which limits model robustness and generalization. To address this, we propose an effective semi-supervised learning method, Multi-Dimensional Manifolds Consistency Regularization (MDMCR), which applies perturbations to both input images and intermediate features, expanding the effective sample space and improving learning efficiency as well as robustness and generalization in remote sensing semantic segmentation. Our method was rigorously evaluated on multiple datasets. With only 1/8 of the data labeled, it achieved mean Intersection over Union (mIoU) scores of 74.48% on ISPRS Vaihingen and 78.80% on Potsdam; with only 5% labeled data, it reached 49.93% mIoU on DeepGlobe Roads and 57.90% on Massachusetts Roads. These results demonstrate the superiority of our method over existing techniques.
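The underlying principle of consistency regularization under multi-dimensional perturbations, namely that a model's predictions on unlabeled data should remain stable when the input image and its intermediate features are both perturbed, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the `encode`/`decode` split, the Gaussian input noise, and the feature-dropout perturbation are all hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # toy "encoder": linear map followed by ReLU (assumption, for illustration)
    return np.maximum(x @ W, 0.0)

def decode(h, V):
    # toy "decoder": linear map to per-pixel class logits
    return h @ V

def softmax(z):
    # numerically stable softmax over the class axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# toy unlabeled batch: 16 "pixels" with 8 input channels, 3 classes
x = rng.normal(size=(16, 8))
W = rng.normal(size=(8, 32))
V = rng.normal(size=(32, 3))

# input-level perturbation: additive noise on the image (one dimension)
x_pert = x + 0.1 * rng.normal(size=x.shape)

# feature-level perturbation: dropout on intermediate features (another dimension)
h = encode(x_pert, W)
mask = rng.random(h.shape) > 0.3      # randomly drop ~30% of features
h_pert = h * mask / 0.7               # inverted-dropout rescaling

p_clean = softmax(decode(encode(x, W), V))  # prediction on the clean input
p_pert = softmax(decode(h_pert, V))         # prediction under both perturbations

# unsupervised consistency loss: penalize disagreement between the two predictions
consistency_loss = float(np.mean((p_clean - p_pert) ** 2))
```

In training, this consistency term would be added to the supervised loss on the labeled subset, so the unlabeled images contribute a gradient that pushes predictions toward invariance under the perturbations.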