ABSTRACT

Land cover and land use classification from remote sensing imagery has advanced significantly in recent years. However, it remains challenging to strengthen the semantic representation of high-resolution networks when land categories are imbalanced and multi-scale features must be fused without degrading segmentation accuracy. To address this challenge, this paper presents a deep-neural-network method for classifying high-resolution remote sensing images that segments urban construction land into five categories: vegetation, water, buildings, roads, and bare soil. The network combines a U-shaped high-resolution architecture with the high-resolution network (HRNet) framework; feature maps at different resolutions are maintained in parallel, enabling information exchange between them. A data pre-processing module mitigates class imbalance in the semantic segmentation of urban construction land, raising the Intersection over Union (IoU) of individual land types by 3.75% to 12.01%. An object-contextual representation (OCR) module is further introduced to enhance per-pixel features by modelling the relationship between each pixel and multiple object regions. In addition, a polarized self-attention (PSA) mechanism is proposed to extract the characteristics of geographic objects in all directions and achieve a stronger semantic representation. The method offers an accurate and efficient way to extract construction-land information and supports the development of monitoring algorithms for urban construction land. To validate the proposed U-HRNet-OCR+PSA network, it was compared against six classical networks (DeepLabv3+, PSPNet, U-Net, U-Net++, HRNet, and HRNet-OCR) as well as the more recent ViT-Adapter-L, OneFormer, and InternImage-H. The experiments show that U-HRNet-OCR+PSA achieves higher accuracy than all of these networks: on the multi-scale dataset, the IoU values for buildings, roads, vegetation, bare soil, and water are 89.79%, 90.05%, 94.89%, 85.91%, and 88.36%, respectively.
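
The abstract does not spell out how the pixel-to-region relations of the OCR module are computed. The following is a minimal sketch in PyTorch of a simplified object-contextual representation step, in the spirit of Yuan et al.'s OCR: soft region features are pooled from a coarse segmentation head, a pixel-region affinity is computed, and each pixel is augmented with its object context. The class `ObjectContextModule`, the layer shapes, and the class count are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ObjectContextModule(nn.Module):
    """Simplified sketch of OCR-style pixel-region relation modelling."""

    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        # Coarse per-pixel class scores used to pool soft region features.
        self.soft_regions = nn.Conv2d(channels, num_classes, kernel_size=1)
        self.out = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feats.shape
        pixels = feats.view(b, c, h * w)                      # (B, C, HW)
        # Soft region masks: spatial weights for each of the K regions.
        masks = self.soft_regions(feats).view(b, -1, h * w)   # (B, K, HW)
        masks = F.softmax(masks, dim=-1)
        # Region representations: weighted sums of pixel features.
        regions = torch.bmm(masks, pixels.transpose(1, 2))    # (B, K, C)
        # Pixel-region affinity, scaled dot product over channels.
        scores = torch.bmm(pixels.transpose(1, 2),
                           regions.transpose(1, 2)) / c ** 0.5  # (B, HW, K)
        affinity = F.softmax(scores, dim=-1)
        # Contextual feature for each pixel from its related regions.
        context = torch.bmm(affinity, regions)                # (B, HW, C)
        context = context.transpose(1, 2).view(b, c, h, w)
        # Augment each pixel with its object context.
        return self.out(torch.cat([feats, context], dim=1))
```

In the full OCR formulation, queries, keys, and region features pass through additional learned transforms before the affinity is computed; those are omitted here for brevity.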
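Likewise, the "+PSA" component is only named in the abstract. Below is a minimal sketch assuming the mechanism follows the channel/spatial factorization of polarized self-attention (Liu et al., 2021): a channel-only branch collapses the spatial dimensions into per-channel weights, and a spatial-only branch collapses the channels into per-pixel weights, with the two branches composed in parallel. All layer sizes are illustrative.

```python
import torch
import torch.nn as nn


class PolarizedSelfAttention(nn.Module):
    """Sketch of a PSA block with parallel channel and spatial branches."""

    def __init__(self, channels: int):
        super().__init__()
        mid = channels // 2
        # Channel-only branch: one attention weight per channel.
        self.ch_q = nn.Conv2d(channels, 1, kernel_size=1)
        self.ch_v = nn.Conv2d(channels, mid, kernel_size=1)
        self.ch_up = nn.Conv2d(mid, channels, kernel_size=1)
        self.ch_norm = nn.LayerNorm([channels, 1, 1])
        # Spatial-only branch: one attention weight per pixel.
        self.sp_q = nn.Conv2d(channels, mid, kernel_size=1)
        self.sp_v = nn.Conv2d(channels, mid, kernel_size=1)
        self.softmax = nn.Softmax(dim=-1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # --- channel-only attention ---
        q = self.softmax(self.ch_q(x).view(b, 1, h * w))   # (B, 1, HW)
        v = self.ch_v(x).view(b, -1, h * w)                # (B, C/2, HW)
        z = torch.bmm(v, q.transpose(1, 2))                # (B, C/2, 1)
        z = self.ch_up(z.unsqueeze(-1))                    # (B, C, 1, 1)
        ch_out = x * self.sigmoid(self.ch_norm(z))
        # --- spatial-only attention ---
        q = self.sp_q(x).mean(dim=(2, 3))                  # global pool
        q = self.softmax(q).unsqueeze(1)                   # (B, 1, C/2)
        v = self.sp_v(x).view(b, -1, h * w)                # (B, C/2, HW)
        sp_att = self.sigmoid(torch.bmm(q, v).view(b, 1, h, w))
        sp_out = x * sp_att
        # Parallel composition of the two polarized branches.
        return ch_out + sp_out


if __name__ == "__main__":
    block = PolarizedSelfAttention(64)
    feats = torch.randn(2, 64, 32, 32)
    print(block(feats).shape)  # torch.Size([2, 64, 32, 32])
```

Keeping full resolution along one axis while collapsing the other is what lets such a block sharpen object features in both the channel and spatial "directions" at modest cost, which is consistent with the abstract's stated motivation.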