Abstract
Road information from high-resolution remote-sensing images is widely used in various fields, and deep-learning-based methods have shown high road-extraction performance. However, when roads are sealed with tarmac or covered by trees in high-resolution remote-sensing images, several challenges still limit extraction accuracy: 1) large intraclass differences among roads and small interclass differences between urban objects, especially roads and buildings; 2) roads occluded by trees, shadows, and buildings are difficult to extract; and 3) a lack of high-precision remote-sensing road datasets. To increase the accuracy of road extraction from high-resolution remote-sensing images, we propose a split depth-wise (DW) separable graph convolutional network (SGCN). First, we split the DW-separable convolution into its channel and spatial components to strengthen the representation of road features. We then apply a graph convolutional network to capture global contextual road information from the channel and spatial features, using the Sobel gradient operator to construct the adjacency matrix of the feature graph. We compared the proposed SGCN against 13 deep-learning networks on the Massachusetts roads dataset and nine on our self-constructed mountain road dataset. Our model achieved a mean intersection over union (mIOU) of 81.65% with an F1-score of 78.99% on the Massachusetts roads dataset, and an mIOU of 62.45% with an F1-score of 45.06% on our proposed dataset. Visualization results show that the SGCN performs better in extracting occluded and narrow roads and can effectively extract roads from high-resolution remote-sensing images.
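The abstract names two mechanisms: a DW-separable convolution split into spatial (depth-wise) and channel (point-wise) branches, and a graph convolution whose adjacency matrix is derived from Sobel gradients of the feature map. The sketch below, in PyTorch, is a minimal illustration of those two ideas only; the class names, the gradient-similarity adjacency construction, and the toy sizes are all assumptions for illustration, not the authors' SGCN implementation.

```python
# Minimal sketch of the two mechanisms named in the abstract (NOT the
# authors' code): a split DW-separable convolution and a graph convolution
# with a Sobel-gradient-based adjacency matrix.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SplitDWSeparableConv(nn.Module):
    """Split a DW-separable conv into spatial and channel branches."""

    def __init__(self, channels: int):
        super().__init__()
        # Depth-wise 3x3 conv: per-channel spatial filtering.
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=1, groups=channels, bias=False)
        # Point-wise 1x1 conv: cross-channel mixing.
        self.channel = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor):
        # Return both branches so each can feed its own graph convolution.
        return self.spatial(x), self.channel(x)


def sobel_adjacency(feat: torch.Tensor) -> torch.Tensor:
    """Build a dense adjacency over the H*W spatial positions from the
    Sobel gradient magnitude of a (B, C, H, W) feature map. Assumed
    construction: positions with similar gradient magnitude are linked."""
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                           device=feat.device).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)
    g = feat.mean(dim=1, keepdim=True)                # collapse channels
    gx = F.conv2d(g, sobel_x, padding=1)
    gy = F.conv2d(g, sobel_y, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2).flatten(1)    # (B, H*W)
    # Gradient-magnitude similarity -> soft adjacency, row-normalized.
    adj = torch.exp(-(mag.unsqueeze(2) - mag.unsqueeze(1)) ** 2)
    return adj / adj.sum(dim=-1, keepdim=True)


class GraphConv(nn.Module):
    """One graph-convolution layer over the H*W spatial nodes."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        b, c, h, w = x.shape
        nodes = x.flatten(2).transpose(1, 2)          # (B, H*W, C)
        out = torch.bmm(adj, self.proj(nodes))        # propagate along edges
        return out.transpose(1, 2).view(b, c, h, w)


if __name__ == "__main__":
    x = torch.randn(1, 16, 32, 32)                    # toy feature map
    spatial, channel = SplitDWSeparableConv(16)(x)
    out = GraphConv(16)(spatial, sobel_adjacency(spatial))
    print(out.shape)                                  # torch.Size([1, 16, 32, 32])
```

How the two branches are fused into the final segmentation head is defined in the paper itself and is not reproduced here.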