Abstract

Automatic extraction of road information with data-driven methods is important for various practical applications. Remote sensing (RS) images and GPS trajectories are two available data sources that describe roads from complementary perspectives, and fusing them can improve road detection performance. However, existing studies that combine RS images and GPS trajectories do not fully utilize the enhanced road information the two sources provide and suffer from road information loss. Moreover, roads and intersections are two crucial, closely related elements of road network generation. Therefore, we propose a multitask and multisource adaptive fusion (MTMSAF) network, which leverages RS images and trajectory data to perform road extraction and intersection detection simultaneously. The MTMSAF network is built on three components. First, two encoder streams are designed to capture road features from RS images and trajectories. Then, an adaptive fusion module fuses the individual road features at each level in a guiding fashion. Finally, two task-specific decoders are proposed: one performs road region detection by considering multidirectional geometric road information, and the other performs intersection extraction by recovering intersection information from the road region features. Four state-of-the-art deep learning methods, including general segmentation networks and dedicated road extraction networks, are compared with the proposed approach. The results show the superiority of our approach on both the road detection and intersection extraction tasks.
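
The architecture described above is given only at a high level. The sketch below illustrates, in PyTorch, one way a two-stream multitask encoder-decoder with per-level gated fusion could be laid out; the class and module names, channel widths, and the sigmoid-gated fusion rule are illustrative assumptions and do not reproduce the authors' implementation.

import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions followed by 2x downsampling.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class TwoStreamMultitaskNet(nn.Module):
    def __init__(self, img_ch=3, traj_ch=1, widths=(32, 64, 128)):
        super().__init__()
        # Separate encoder streams for RS image tiles and rasterized trajectories.
        self.img_enc = nn.ModuleList()
        self.traj_enc = nn.ModuleList()
        self.gates = nn.ModuleList()
        c_img, c_traj = img_ch, traj_ch
        for w in widths:
            self.img_enc.append(conv_block(c_img, w))
            self.traj_enc.append(conv_block(c_traj, w))
            # Per-level 1x1 gate that decides how much of each source to keep.
            self.gates.append(nn.Conv2d(2 * w, w, kernel_size=1))
            c_img, c_traj = w, w
        # Two task-specific decoders: road regions and intersections.
        self.road_head = self._decoder(widths[-1], len(widths))
        self.inter_head = self._decoder(widths[-1], len(widths))

    @staticmethod
    def _decoder(in_ch, num_ups):
        layers, c = [], in_ch
        for _ in range(num_ups):
            layers += [
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(c, max(c // 2, 16), 3, padding=1),
                nn.ReLU(inplace=True),
            ]
            c = max(c // 2, 16)
        layers.append(nn.Conv2d(c, 1, kernel_size=1))  # per-pixel logits
        return nn.Sequential(*layers)

    def forward(self, image, trajectory):
        x_img, x_traj, fused = image, trajectory, None
        for img_block, traj_block, gate in zip(self.img_enc, self.traj_enc, self.gates):
            x_img, x_traj = img_block(x_img), traj_block(x_traj)
            # Gated fusion at every encoder level; the fused map also guides
            # the image stream at the next level.
            g = torch.sigmoid(gate(torch.cat([x_img, x_traj], dim=1)))
            fused = g * x_img + (1.0 - g) * x_traj
            x_img = fused
        return self.road_head(fused), self.inter_head(fused)


if __name__ == "__main__":
    net = TwoStreamMultitaskNet()
    rs = torch.randn(1, 3, 256, 256)    # RS image tile
    traj = torch.randn(1, 1, 256, 256)  # rasterized GPS trajectory heat map
    road_logits, inter_logits = net(rs, traj)
    print(road_logits.shape, inter_logits.shape)  # both (1, 1, 256, 256)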

Highlights

  • Our multitask and multisource adaptive fusion (MTMSAF) network obtains the best results in the completeness of the small road shown in the red box and in the segmentation of the detailed information shown in the yellow box. This is attributed to our proposed adaptive fusion module, which better captures the enhanced information of RS images and trajectories

  • We propose a road network generation framework based on RS images and trajectories


Summary

INTRODUCTION

Complete and accurate road networks, which consist of road segments and intersections connected in series, are essential for practical applications such as intelligent transportation [1], geographic information updating, and urban planning [2]. There are some preliminary studies on road extraction based on fused RS images and trajectories, but their fusion strategies are relatively simple, such as directly concatenating the two data sources as the input of a deep learning network [14, 15]. Given the problems in the existing works, we propose a multitask deep learning approach, named the multitask and multisource adaptive fusion (MTMSAF) network, that performs road detection and road intersection extraction simultaneously by fusing RS images and taxi GPS trajectories. The main contributions include the following: 1) a novel encoder-decoder-based deep learning network is designed with an integrated multitask learning scheme to solve the road detection and intersection extraction tasks simultaneously based on the fusion of RS image features and trajectory features.
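
The introduction mentions an integrated multitask learning scheme, and the outline that follows lists an adaptive loss function. A common way to balance two pixel-wise tasks adaptively is to weight each task loss with a learnable uncertainty term (Kendall et al., 2018); the sketch below, using the hypothetical class name UncertaintyWeightedLoss, illustrates that idea only and is not necessarily the formulation used in MTMSAF.

import torch
import torch.nn as nn


class UncertaintyWeightedLoss(nn.Module):
    # Balances the road and intersection segmentation losses with learnable
    # log-variance weights; a stand-in for an adaptive multitask loss, not
    # the specific formulation used in the MTMSAF paper.
    def __init__(self):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.log_var_road = nn.Parameter(torch.zeros(()))
        self.log_var_inter = nn.Parameter(torch.zeros(()))

    def forward(self, road_logits, inter_logits, road_gt, inter_gt):
        l_road = self.bce(road_logits, road_gt)
        l_inter = self.bce(inter_logits, inter_gt)
        # exp(-log_var) scales each task; the +log_var terms keep the
        # learned weights from collapsing to zero.
        return (torch.exp(-self.log_var_road) * l_road + self.log_var_road
                + torch.exp(-self.log_var_inter) * l_inter + self.log_var_inter)


# Example usage with the two logit maps produced by a two-headed network:
#   criterion = UncertaintyWeightedLoss()
#   loss = criterion(road_logits, inter_logits, road_gt, inter_gt)
#   loss.backward()

During training, the returned value would be backpropagated jointly through both decoder heads and the shared fused encoder features, letting the gradients adjust the balance between the two tasks.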

RELATED WORKS
METHOD
Multisource Stream Encoders
Multisource Feature Adaptive Fusion Module
Multitask Decoder Learning
Adaptive Loss Function
EXPERIMENTS
Implementation details
Metrics
Method
CONCLUSION