Abstract

The acquisition of road information from remote sensing images is of significant value for intelligent transportation research. This study focuses on enhancing contour-learning ability to mitigate fragmented road segments and missing connections in road extraction. A novel Deep Feature-Review (FR) Transmit Network (TransNet) is proposed to review and facilitate the flow of contour features into an encoder network, while multiscale features are linked via a bridge between the encoder and the decoder. Compared with state-of-the-art models such as the fully convolutional network (FCN), SegNet, DeepLabv3, D-LinkNet, spatial consistency-FCN, and the generative adversarial network (GAN), the proposed network achieves better overall performance on the Massachusetts Roads data set, with accuracy, precision, recall, and mean intersection-over-union (IoU) scores of 97.48%, 83.72%, 78.13%, and 0.6286, respectively. On the DeepGlobe Road Extraction data set, the proposed network outperforms FCN, SegNet, DeepLabv3, D-LinkNet, and Deep TransNet, achieving accuracy, precision, recall, and mean IoU scores of 98.70%, 87.30%, 81.15%, and 0.7244, respectively. Overall, these experiments indicate that the proposed network can effectively address fragmented road segments and poor connectivity in remote sensing images, demonstrating its potential for practical intelligent transportation scenarios.
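The mean IoU reported above is the class-averaged intersection-over-union between predicted and ground-truth segmentation masks. As an illustration only (not the authors' evaluation code), a minimal NumPy sketch of this metric for binary road masks might look like:

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """Mean intersection-over-union averaged over classes present in the union."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        inter = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 masks (1 = road, 0 = background); values are hypothetical
pred = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1]])
target = np.array([[0, 0, 0, 1],
                   [0, 0, 0, 1],
                   [0, 0, 1, 1],
                   [0, 0, 1, 1]])
print(mean_iou(pred, target))  # → 0.775
```

In this toy case the background IoU is 8/10 and the road IoU is 6/8, giving a mean of 0.775; the paper's scores of 0.6286 and 0.7244 are this quantity computed over the full test sets.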

