Abstract

Automatic road extraction from satellite images is a popular research topic in the field of remote sensing. However, the complexity of road features and structures hinders their precise extraction. Road features vary in saliency because of occlusion by other objects and the diversity of road types, and road structures are scattered across locations shaped by geographical conditions, population distribution, and transportation demands. These issues make current methods based on convolutional neural networks prone to fragmented and incomplete extraction results. To address them, a Road Extraction Network with Dual-View Information Perception Based on GCN (RDPGNet) is proposed, in which the GCN-based dual-view perceptron (GDVP) exploits the strong information-interaction capability of the GCN to explore road information under a dual view. Within the GDVP, we first design the road feature saliency graph reasoning module (RFSG) and the road structure homogeneous space graph reasoning module (RSHS) based on the road features and structures, respectively. The RFSG measures the similarity of road feature information across regions and uses it as a similarity matrix in the graph reasoning process, ensuring that regions with different saliency levels are treated equally. The RSHS projects the road structure onto a corresponding homogeneous space while aggregating and exchanging information through graph convolution, thereby enhancing the network’s perception of roads in diverse locations. Second, because multi-view information usually shares a latent common representation, a multi-view information fusion and alignment strategy (MVFA) is designed to model road information comprehensively. Experimental results on two public datasets show that RDPGNet outperforms other state-of-the-art networks.
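The similarity-matrix graph reasoning attributed to the RFSG can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, cosine-similarity adjacency, and random projection weight are all assumptions chosen to show the general pattern of building an adjacency from pairwise feature similarity and propagating features through it, as in a graph convolution.

```python
import numpy as np

def similarity_graph_reasoning(x, w, eps=1e-8):
    """One illustrative graph-reasoning step over region features.

    x : (n, d) array of n region feature vectors.
    w : (d, d) projection matrix (random here; learnable in a real network).

    Builds a similarity-based adjacency so low-saliency regions still
    exchange information with similar high-saliency ones, then propagates
    features as A_norm @ x @ w.
    """
    # Cosine similarity between every pair of regions -> (n, n) matrix.
    xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)
    adj = np.maximum(xn @ xn.T, 0.0)  # keep non-negative affinities
    # Row-normalize so each region aggregates a weighted average of others.
    adj = adj / (adj.sum(axis=1, keepdims=True) + eps)
    # Aggregate neighbor information, then apply the feature transform.
    return adj @ x @ w

rng = np.random.default_rng(0)
regions = rng.normal(size=(6, 4))   # 6 regions, 4-dim features
weight = rng.normal(size=(4, 4))
out = similarity_graph_reasoning(regions, weight)
print(out.shape)  # (6, 4)
```

In this sketch every region attends to every other region in proportion to feature similarity, which is one way a graph could give equal treatment to regions of varying saliency; the paper's actual graph construction may differ.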
