Abstract
Road extraction is an active research topic in remote sensing image analysis. Extracting an accurate road network from remote sensing images remains challenging because some objects in the images resemble roads, and extraction results are often discontinuous due to occlusion. Recently, convolutional neural networks (CNNs) have shown their power in the road extraction process. However, these CNNs cannot capture contextual information effectively. Building on CNNs and combining high-level semantic features with foreground contextual information, a novel road extraction method for remote sensing images is proposed in this paper. First, a position attention (PA) mechanism is designed to enhance the expression ability of road features. Then, a contextual information extraction module (CIEM) is constructed to capture road contextual information in the images. Finally, a foreground contextual information supplement module (FCISM) is proposed to provide foreground contextual information at different stages of the decoder, which improves the inference ability for occluded areas. Extensive experiments on the DeepGlobe road dataset show that the proposed method outperforms existing methods in accuracy, IoU, precision, and F1-score, and yields competitive recall, demonstrating the effectiveness of the new model.
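As a rough illustration of the kind of position attention the abstract refers to, the sketch below implements a generic self-attention-style spatial attention block in PyTorch (similar in spirit to DANet's position attention module). The class name, reduction ratio, and residual weighting are assumptions for illustration, not the authors' exact PA design.

import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Minimal position-attention sketch: every spatial location is re-weighted
    by its similarity to all other locations, so long, thin road structures can
    aggregate context along their full extent (hypothetical design, not the
    paper's exact module)."""

    def __init__(self, in_channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight of the attention branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)        # B x HW x C'
        k = self.key(x).flatten(2)                           # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)                  # B x HW x HW affinity map
        v = self.value(x).flatten(2)                         # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)    # context-aggregated features
        return self.gamma * out + x                          # residual connection

if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)            # dummy encoder feature map
    print(PositionAttention(64)(feats).shape)     # torch.Size([1, 64, 32, 32])

Such a block would typically sit between the encoder and the contextual modules (CIEM, FCISM) described above, but the exact placement in the proposed network is not specified in the abstract.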