Abstract
Pavement distress measurement is crucial for ensuring the safety of transportation infrastructure. We introduce a novel, cost-effective multi-sensor approach for pavement distress segmentation under low-light nighttime conditions. Using the low-cost Azure Kinect multi-sensor system, we built a dataset of aligned IR, RGB, and depth images, and annotated the RGB images: 11,343 manual annotations were made on 791 images randomly selected from 96,891 frames. Four deep learning-based image segmentation models were then evaluated quantitatively and qualitatively. Segmentation performance on the IR dataset exceeded that on the RGB dataset; ConvNeXt trained on the IR data achieved the highest mean Intersection over Union (mIoU) of 0.7169. We further proposed using relative height to evaluate the severity of pavement distress, computing it on the aligned depth map from the depth data within each distress region. A quantitative comparison between manual annotations and deep learning predictions showed that the latter more effectively identified severe forms of distress. This study demonstrates the feasibility of collecting pavement distress data at night with a low-cost multi-sensor system.
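The abstract states only that relative height is computed from the depth values inside a distress region on the aligned depth map. One plausible reading, sketched below, compares depth inside the segmented region against a reference pavement level estimated from a ring of surrounding pixels; the function name, the ring-based reference, and the use of the median are illustrative assumptions, not the paper's documented method.

```python
import numpy as np

def relative_height(depth, mask, border=5):
    """Estimate distress severity as the relative height of a distress
    region versus the surrounding pavement surface.

    depth : 2-D array of per-pixel depth values (e.g. Azure Kinect, in mm)
    mask  : boolean array, True inside the segmented distress region
    border: ring width (pixels) around the region taken as reference surface
    (reference-ring strategy is an assumption for illustration)
    """
    ys, xs = np.nonzero(mask)
    # Bounding window expanded by `border` pixels, clipped to the image.
    y0, y1 = max(ys.min() - border, 0), ys.max() + border + 1
    x0, x1 = max(xs.min() - border, 0), xs.max() + border + 1

    window = depth[y0:y1, x0:x1]
    win_mask = mask[y0:y1, x0:x1]

    # Reference pavement level: median depth of pixels around the region.
    surface = np.median(window[~win_mask])
    # Relative height: largest deviation of distress pixels from that level.
    return float(np.max(np.abs(window[win_mask] - surface)))
```

For example, a pothole whose pixels read 20 mm deeper than the surrounding surface would yield a relative height of 20.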
International Journal of Applied Earth Observation and Geoinformation