Abstract

The past several years have witnessed rapid progress on road extraction from high-resolution remote sensing images. However, because of complex backgrounds and varied road distributions, road extraction remains a challenging task. Among convolutional neural networks (CNNs), U-shaped architectures have proven effective, but CNNs cannot capture global representations effectively. In contrast, the self-attention (SA) module of the transformer can capture long-distance feature dependencies. This letter proposes a hybrid encoder-decoder method called BDTNet, which enhances the extraction of both global and local information in remote sensing images. First, feature maps of different scales are obtained from the backbone network. Then, building on a reduced-cost self-attention formulation, the Bi-Direction Transformer Module (BDTM) is constructed to capture contextual road information in these multi-scale feature maps. Finally, the Feature Refinement Module (FRM) integrates the features extracted by the backbone network and the BDTM, enriching the semantic information of the feature maps and yielding more detailed segmentation results. The proposed method achieves a high IoU of 67.09% on the DeepGlobe dataset, and extensive experiments on three public remote sensing road datasets further verify its effectiveness.
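To make the described pipeline concrete, the sketch below wires together the three components named in the abstract: a backbone producing feature maps, a BDTM applying self-attention to them, and an FRM fusing backbone and BDTM features before a segmentation head. The module names come from the abstract, but every internal design choice here is an assumption made for illustration (a placeholder convolutional backbone at a single scale, "bi-direction" read as attention over forward- and reverse-ordered token sequences, and a 1x1-convolution fusion); it is not the authors' actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BDTM(nn.Module):
    """Hypothetical Bi-Direction Transformer Module: self-attention over the
    flattened feature map in both the forward and reversed token order."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.attn_fwd = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.attn_bwd = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C)
        fwd, _ = self.attn_fwd(tokens, tokens, tokens)
        rev = torch.flip(tokens, dims=[1])      # reversed spatial ordering
        bwd, _ = self.attn_bwd(rev, rev, rev)
        bwd = torch.flip(bwd, dims=[1])
        out = torch.cat([fwd, bwd], dim=-1).transpose(1, 2).reshape(b, 2 * c, h, w)
        return self.proj(out)


class FRM(nn.Module):
    """Hypothetical Feature Refinement Module: fuse backbone and BDTM features
    with channel concatenation followed by a 1x1 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, backbone_feat, bdtm_feat):
        return self.fuse(torch.cat([backbone_feat, bdtm_feat], dim=1))


class BDTNetSketch(nn.Module):
    """Toy encoder-decoder wiring: backbone -> BDTM -> FRM -> upsampled road mask."""
    def __init__(self, channels=64):
        super().__init__()
        self.backbone = nn.Sequential(           # placeholder single-stage backbone
            nn.Conv2d(3, channels, kernel_size=3, stride=8, padding=1),
            nn.ReLU(inplace=True),
        )
        self.bdtm = BDTM(channels)
        self.frm = FRM(channels)
        self.head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, img):
        feat = self.backbone(img)                # one scale only in this sketch
        refined = self.frm(feat, self.bdtm(feat))
        mask = self.head(refined)
        return F.interpolate(mask, size=img.shape[-2:], mode="bilinear", align_corners=False)


if __name__ == "__main__":
    net = BDTNetSketch()
    print(net(torch.randn(1, 3, 256, 256)).shape)   # torch.Size([1, 1, 256, 256])
```

Note that this sketch operates on a single feature scale, whereas the abstract states that BDTM is applied to feature maps of different scales produced by the backbone; extending the sketch would mean instantiating one BDTM per scale and letting the FRM merge the multi-scale outputs.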
