Abstract
Road networks provide key information for a broad range of applications such as urban planning, urban management, and navigation. The fast-developing technology of remote sensing, which acquires high-resolution observational data of the land surface, offers opportunities for the automatic extraction of road networks. However, road networks extracted from remote sensing images are often affected by shadows and trees, making the resulting road maps irregular and inaccurate. This research aims to improve the extraction of road centerlines by using both very-high-resolution (VHR) aerial images and light detection and ranging (LiDAR) data and by accounting for road connectivity. The proposed method first applies the fractal net evolution approach (FNEA) to segment remote sensing images into image objects and then classifies the image objects with a random forest machine learning classifier. A post-processing approach based on the minimum area bounding rectangle (MABR) is proposed, and a structure feature index is adopted to obtain complete road networks. Finally, a multistep approach combining morphology thinning, Harris corner detection, and least-squares fitting (MHL) is designed to accurately extract road centerlines from the complex road networks. The proposed method is applied to three datasets: the New York dataset obtained from the object identification dataset, the Vaihingen dataset obtained from the International Society for Photogrammetry and Remote Sensing (ISPRS) 2D semantic labelling benchmark, and the Guangzhou dataset. Compared with two state-of-the-art methods, the proposed method obtains the highest completeness, correctness, and quality on all three datasets. The experimental results show that the proposed method is an efficient solution for extracting road centerlines in complex scenes from VHR aerial images and LiDAR data.
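The abstract mentions MABR-based post-processing. The paper's exact procedure is not reproduced here, but the core primitive, a minimum area bounding rectangle, can be sketched with a simple brute-force angle sweep: rotate the object's points and keep the orientation whose axis-aligned bounding box is smallest. The function name and the sweep resolution are illustrative choices, not the authors' implementation.

```python
import numpy as np

def min_area_bounding_rect(points, n_angles=180):
    """Brute-force minimum area bounding rectangle (MABR) sketch:
    rotate the point set through [0, pi/2) and take the rotation
    whose axis-aligned bounding box has the smallest area."""
    pts = np.asarray(points, dtype=float)
    best_area, best_angle = np.inf, 0.0
    for theta in np.linspace(0.0, np.pi / 2, n_angles, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        rot = pts @ np.array([[c, -s], [s, c]])  # rotate points by theta
        w = rot[:, 0].max() - rot[:, 0].min()
        h = rot[:, 1].max() - rot[:, 1].min()
        if w * h < best_area:
            best_area, best_angle = w * h, theta
    return best_area, best_angle

# Example: corner points of an axis-aligned 4 x 2 rectangle;
# its MABR is the rectangle itself, so the area is 8.0 at angle 0.
area, angle = min_area_bounding_rect([(0, 0), (4, 0), (4, 2), (0, 2)])
```

In practice, rotating-calipers over the convex hull gives the exact MABR more efficiently; the sweep above trades speed for clarity. A common road-likeness cue derived from the MABR is rectangularity, i.e. the ratio of object area to MABR area, though the paper's specific use of the MABR is not detailed in this summary.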
Highlights
The road network is the backbone of the city [1] and plays an essential role in many application fields, such as city planning, navigation, and transportation [2,3,4].
In order to improve the accuracy of road centerline extraction in complex scenes, this paper proposes a novel road centerline extraction method combining VHR images with light detection and ranging (LiDAR) data.
Many false road segments remain after post-processing (Figure 3c); the shape filter based on the skeleton-based object linearity index (SOLI) removes these false road segments, so that the road networks are well superimposed on the VHR image (Figure 3d).
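The SOLI shape filter keeps elongated, road-like segments and discards compact blobs. The paper's exact formula is not given in this summary; the sketch below assumes one plausible form of a skeleton-based linearity measure (skeleton length squared over object area), with skeleton length and area supplied as precomputed per-object values, and an illustrative threshold.

```python
import numpy as np

def soli(skeleton_length, object_area):
    """Illustrative skeleton-based object linearity index:
    elongated objects have long skeletons relative to their area,
    so skeleton_length**2 / object_area grows with linearity.
    (Assumed form, not necessarily the paper's exact definition.)"""
    return skeleton_length ** 2 / object_area

def keep_road_segments(segments, threshold=5.0):
    """segments: list of (skeleton_length_px, area_px) tuples.
    Keep only segments linear enough to be plausible roads."""
    return [s for s in segments if soli(*s) >= threshold]

# A long thin strip (100 px skeleton, 400 px area) scores 25.0,
# while a compact blob (20 px skeleton, 400 px area) scores 1.0.
segments = [(100.0, 400.0), (20.0, 400.0)]
roads = keep_road_segments(segments)  # keeps only the thin strip
```

In a full pipeline, skeleton length would come from morphological thinning of each binary object (e.g. `skimage.morphology.skeletonize`), and the threshold would be tuned on validation data.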
Summary
The road network is the backbone of the city [1] and plays an essential role in many application fields, such as city planning, navigation, and transportation [2,3,4]. Conventional methods to obtain the road network require extensive surveying fieldwork and are often time-consuming and costly [5]. Extensive efforts have been made to extract information on the road network from optical remote sensing images [7,8]. Some studies designed detectors of points and lines to extract road networks. Liu et al. [10] first detected road edges from remote sensing data to extract the road networks. The basic idea of the solution is first to classify remote sensing images into binary road and non-road groups and then to post-process the road groups based on structural characteristics and contextual features to obtain the road network [15,17]. Although various methods for road network extraction have been proposed, it is still a challenging task to extract complete and accurate road networks from VHR images in complex scenes because of the interference of trees, shadows, and non-road impervious surfaces (such as buildings and parking lots) [4].
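The classify-then-post-process idea above can be sketched at the per-object level. The paper uses a random forest on segmented image objects; the stand-in rule below is a deliberately simplified assumption for illustration only: with VHR + LiDAR inputs, road objects tend to be non-vegetated (low NDVI) and near ground level (low normalized DSM height). The feature choices and thresholds are hypothetical, not the paper's classifier.

```python
import numpy as np

def classify_objects(features, ndvi_max=0.2, height_max=0.5):
    """Toy binary road / non-road rule (NOT the paper's random forest):
    features is an (n, 2) array of per-object [mean NDVI, mean nDSM height in m].
    Roads are assumed non-vegetated and near ground level."""
    f = np.asarray(features, dtype=float)
    return (f[:, 0] < ndvi_max) & (f[:, 1] < height_max)

objects = [
    [0.05, 0.1],   # asphalt road: low NDVI, low height -> road
    [0.60, 0.2],   # grass: high NDVI -> non-road
    [0.10, 8.0],   # building roof: low NDVI but elevated -> non-road
]
is_road = classify_objects(objects)
```

Replacing this rule with `sklearn.ensemble.RandomForestClassifier` trained on labeled objects recovers the paper's classification stage; the post-processing stage then cleans the resulting binary road map using structural cues such as shape and connectivity.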