Semantic segmentation is a fundamental task in remote sensing image understanding. Recently, deep convolutional neural networks (DCNNs) have considerably improved the semantic segmentation of natural scenes, but the task remains challenging for very high-resolution remote sensing images. Because the scenes are large and complex and are affected by illumination and imaging angle, existing methods struggle to assign accurate categories to pixels at object boundaries, a problem known as boundary blur. We propose a framework called the Boundary-Aware Semi-Supervised Semantic Segmentation Network (BAS4Net), which obtains more accurate segmentation results without additional annotation effort, especially at object boundaries. Its Channel-weighted Multi-scale Feature (CMF) module balances semantic and spatial information, and its Boundary Attention Module (BAM) weights features with rich semantic boundary information to alleviate boundary blur. Additionally, to reduce the difficult and tedious manual labeling of remote sensing images, a discriminator network infers pseudolabels from unlabeled images to support semi-supervised learning and further improve the segmentation network. To validate the effectiveness of the proposed framework, extensive experiments were performed on the ISPRS Vaihingen dataset and on a novel remote sensing dataset, AIR-SEG, which has more categories and more complex boundaries. The results demonstrate a significant improvement in accuracy, especially at object boundaries and for small objects.
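The semi-supervised step described above can be illustrated with a minimal sketch. It assumes the common pattern of discriminator-gated pseudolabeling: the discriminator emits a per-pixel confidence map, and only pixels above a threshold contribute pseudolabels. The function name `make_pseudolabels`, the threshold `tau`, and the array interfaces are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def make_pseudolabels(seg_probs, disc_conf, tau=0.8, ignore_index=255):
    """Turn segmentation softmax output into pseudolabels for unlabeled images.

    Hypothetical sketch of discriminator-gated pseudolabeling (not BAS4Net's
    exact procedure): keep only pixels the discriminator deems reliable.

    seg_probs:  (C, H, W) class probabilities from the segmentation network.
    disc_conf:  (H, W) discriminator confidence map in [0, 1].
    Returns an (H, W) integer label map; unreliable pixels get ignore_index
    so they are excluded from the semi-supervised loss.
    """
    labels = seg_probs.argmax(axis=0)          # most likely class per pixel
    labels[disc_conf < tau] = ignore_index     # mask out low-confidence pixels
    return labels
```

In training, the resulting map would be fed to a cross-entropy loss configured to skip `ignore_index`, so only discriminator-trusted pixels supervise the segmentation network.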