Abstract

This study investigates the role of semantic segmentation of remote sensing images in urban planning and land use. We introduce a deep learning model that leverages the principle of band combination in remote sensing images to improve both the efficiency and accuracy of semantic segmentation. Our research focuses not only on advancing segmentation capabilities for remote sensing images but also on applying this technology in urban planning and land use to foster sustainable development in smart cities. By integrating the band combination principle into the convolution operation, our approach improves feature extraction and thus the quality of semantic segmentation. This method outperforms traditional remote sensing image analysis techniques by combining the automatic feature learning and generalization capabilities of deep learning. A distinctive aspect of this study is the direct application of remote sensing image segmentation to urban planning and land use: our model accurately identifies land uses such as residential, commercial, and industrial areas, and tracks land-use change trends, aiding urban planners in future development planning. Compared with conventional methods, our model significantly reduces training time and increases computational efficiency under identical training conditions. Experimental comparisons show that, within the same training duration, our model's accuracy surpasses that of similar models by 10%–15%. On the ISPRS dataset, our model achieved a segmentation accuracy of 82.43% for building surfaces and 76.54% for trees. In scenarios with relatively uniform reflective surfaces, our model outperforms similar models by approximately 10%.
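The abstract does not specify how band combination is wired into the convolution operation, so the following is only a minimal NumPy sketch of the general idea: derive combined-band channels (here an NDVI-style vegetation index and a crude built-up proxy) from the raw spectral bands, stack them with the original bands, and feed the enriched input to a convolution layer. The band ordering, the particular index formulas, and the kernel shapes are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def band_combinations(image):
    """Derive combined-band channels from a multispectral image.

    image: array of shape (H, W, 4), bands assumed ordered as
    (red, green, blue, near-infrared) -- an assumption, not taken
    from the abstract.
    """
    red, nir = image[..., 0], image[..., 3]
    ndvi = (nir - red) / (nir + red + 1e-8)        # vegetation index (trees)
    builtup = (red - nir) / (red + nir + 1e-8)     # crude built-up-area proxy
    return np.concatenate(
        [image, ndvi[..., None], builtup[..., None]], axis=-1
    )

def conv2d(x, kernels):
    """Naive valid-mode 2D convolution over all input channels.

    x: (H, W, C_in), kernels: (k, k, C_in, C_out)
    returns: (H - k + 1, W - k + 1, C_out)
    """
    k = kernels.shape[0]
    H, W, _ = x.shape
    out = np.zeros((H - k + 1, W - k + 1, kernels.shape[-1]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k, :]  # (k, k, C_in) window
            out[i, j] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return out

# Enriched 6-channel input (4 raw bands + 2 combinations) fed to the
# first convolution layer with random illustrative kernels.
rng = np.random.default_rng(0)
img = rng.random((16, 16, 4))
features = conv2d(band_combinations(img), rng.standard_normal((3, 3, 6, 8)))
print(features.shape)  # (14, 14, 8)
```

In a real network the combined channels would feed a trained CNN backbone rather than random kernels; the point of the sketch is only that spectral band combinations can be computed once and treated as extra input channels to the convolution.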
