Abstract

Road extraction has attracted considerable attention due to its critical role in urban development and up-to-date map maintenance, with widespread applications such as navigation and autonomous driving. Existing solutions either rely on a single source of data for road graph extraction or fuse multimodal information in a sub-optimal way. In this paper, we present DuARE, an automatic road extraction solution designed to exploit multimodal knowledge for road extraction in a fully automatic manner. Specifically, we collect a large-scale real-world dataset of paired aerial imagery and trajectory data, covering over 33,000 km² across more than 80 cities. First, road extraction is performed on the abundant spatio-temporal trajectory data, adapting to the local density distribution. Then, a coarse-to-fine road graph learner for aerial images is proposed to take advantage of both local and global context. Finally, a cross-check-based fusion approach preserves the strengths of each modality while revisiting the original trajectory map under the guidance of the aerial predictions to further improve performance. Extensive experiments on large-scale real-world datasets demonstrate the superiority and effectiveness of DuARE. In addition, DuARE has been deployed in production at Baidu Maps since June 2021 and updates the road network by 100,000 km per month, confirming that it is a practical, industrial-grade solution for large-scale, cost-effective road extraction from multimodal data.
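
The abstract describes a three-stage pipeline: trajectory-based extraction, aerial-image-based extraction, and cross-check fusion. The sketch below is a minimal, hypothetical outline of how such a pipeline could be wired together; all names (`RoadGraph`, `extract_from_trajectories`, `extract_from_aerial`, `cross_check_fuse`) are illustrative assumptions and do not correspond to the authors' actual implementation.

```python
# Hypothetical sketch of the three-stage pipeline described in the abstract.
# All function and class names are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class RoadGraph:
    """A road network: nodes are (lon, lat) points, edges index into nodes."""
    nodes: list
    edges: list


def extract_from_trajectories(trajectories) -> RoadGraph:
    """Stage 1: infer road centerlines from GPS trajectories,
    adapting the extraction to the local density distribution."""
    ...


def extract_from_aerial(image_tiles) -> RoadGraph:
    """Stage 2: coarse-to-fine road graph learning from aerial imagery,
    combining local (tile-level) and global context."""
    ...


def cross_check_fuse(traj_graph: RoadGraph, aerial_graph: RoadGraph) -> RoadGraph:
    """Stage 3: fuse the two graphs, keeping each modality's confident
    predictions and revisiting the trajectory map where aerial
    predictions indicate missed or spurious roads."""
    ...


def road_extraction_pipeline(trajectories, image_tiles) -> RoadGraph:
    traj_graph = extract_from_trajectories(trajectories)
    aerial_graph = extract_from_aerial(image_tiles)
    return cross_check_fuse(traj_graph, aerial_graph)
```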
