The automatic detection and extraction of road pothole distress is an important issue for road structure health, monitoring, and maintenance. In this paper, a new algorithm that integrates a mobile point cloud and images is proposed for the detection of road potholes. The algorithm comprises three steps: 2D candidate pothole extraction from the images using a deep learning method, 3D candidate pothole extraction from the point cloud, and pothole determination by depth analysis. Because the texture features of potholes and of asphalt or concrete patches differ greatly from those of a normal road surface, images of pothole and patch distress are used to build a training set and to train and test the deep learning system. The 2D candidate potholes are then extracted from the images and labeled by the trained DeepLabv3+, a state-of-the-art pixel-wise classification (semantic segmentation) network. The edge of each candidate pothole in the image is used to establish the relationship between the mobile point cloud and the images. Based on this relationship, the original road points around the candidate pothole edge are divided into two groups, interior and exterior points. The exterior points are used to fit the road plane and compute the accurate 3D shape of the candidate pothole. Finally, the depth distribution of the interior points is analyzed to determine whether the candidate is a true pothole or a patch. To verify the proposed method, two cases are selected: a real case and a simulated case. The real case is a 26.4 km expressway in Shanghai; with the proposed method, 77 candidate potholes are extracted by the DeepLabv3+ system and then filtered into 49 potholes and 28 patches, and the affected lanes and pothole locations are analyzed. The simulated case is used to verify the geometric accuracy of the detected potholes; the results show that the mean accuracy of the detected potholes is ∼1.5–2.8 cm.
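The core geometric step described above, fitting the road plane to the exterior points and thresholding the depth distribution of the interior points, can be sketched as follows. This is a minimal illustration and not the authors' implementation; the SVD plane fit, the 2 cm depth threshold, and the 50% below-plane ratio used for the pothole/patch decision are assumptions chosen only for the example.

```python
# Illustrative sketch (not the paper's code): given road points already split
# into exterior (surrounding road surface) and interior (inside the candidate
# edge) groups, fit a plane to the exterior points and use the signed depth of
# the interior points to separate potholes from patches.
import numpy as np

def fit_plane(points):
    """Least-squares plane fit; returns unit normal n and offset d with n.p + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    if normal[2] < 0:                    # keep the normal pointing upward
        normal = -normal
    return normal, -normal.dot(centroid)

def classify_candidate(exterior_pts, interior_pts, depth_threshold=0.02):
    """Hypothetical decision rule: a candidate whose interior points lie mostly
    below the fitted road plane by more than depth_threshold (m) is labeled a
    pothole; otherwise it is treated as a patch."""
    normal, d = fit_plane(exterior_pts)
    depths = -(interior_pts @ normal + d)          # positive = below the road plane
    max_depth = float(np.percentile(depths, 95))   # robust maximum depth
    below_ratio = float(np.mean(depths > depth_threshold))
    label = "pothole" if below_ratio > 0.5 and max_depth > depth_threshold else "patch"
    return label, max_depth

# Example with synthetic data: a ~3 cm deep depression inside a flat road plane.
rng = np.random.default_rng(0)
exterior = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0.0, 0.002, 200)]
interior = np.c_[rng.uniform(-0.3, 0.3, (100, 2)), rng.normal(-0.03, 0.002, 100)]
print(classify_candidate(exterior, interior))      # -> ('pothole', ~0.03)
```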