Abstract

Condition monitoring of road surfaces has attracted considerable attention in the field of computer vision over the years, for two main reasons: first, it improves safety and comfort for the community, and second, it reduces damage to vehicles, which benefits advanced driver assistance systems (ADAS). To this end, this article presents a real-time vision-based approach that automatically segments road anomalies from the drivable area. An Intel RealSense D435 depth camera is employed to capture RGB and depth (RGB-D) images of the road surface. An unsupervised learning method based on a diffusion process learns the affinity matrix of the RGB-D data, and spectral clustering is applied to the updated affinity matrix to cluster the road images. Image multiplex visibility graphs of the input sensor data are diffused by a regularized diffusion process (RDP) to update the affinity matrix, followed by generation of a saliency map of the road surface. The prime motive for employing RDP is to use the graph Laplacian as a similarity measure that preserves the manifold structure. Qualitative and quantitative comparisons with state-of-the-art methods on our RGB-D dataset reveal the efficacy of the proposed system. Benchmark datasets (KITTI and Cityscapes) are also used to validate the proposed method for drivable-area segmentation in intelligent transportation systems.
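
As a rough illustration of the pipeline's core steps, the sketch below applies a generic diffusion update to an affinity matrix and then clusters the diffused affinities with standard normalized-Laplacian spectral clustering. The update rule, the regularization weight `alpha`, the iteration count, and the function names are illustrative assumptions, not the paper's exact RDP formulation; the matrix `W` stands in for an affinity built from image multiplex visibility graphs over the RGB and depth channels.

```python
import numpy as np
from sklearn.cluster import KMeans

def diffuse_affinity(W, alpha=0.9, iters=20):
    # Generic diffusion update (an assumption, not the paper's exact RDP):
    # smooth affinities over the graph while retaining a (1 - alpha)
    # fraction of the original matrix as a regularizer.
    d = W.sum(axis=1, keepdims=True)
    S = W / np.maximum(d, 1e-12)          # row-stochastic transition matrix
    A = W.copy()
    for _ in range(iters):
        A = alpha * S @ A @ S.T + (1.0 - alpha) * W
    return A

def spectral_segments(A, k=2):
    # Standard spectral clustering on the symmetric normalized Laplacian;
    # with k=2 this separates road anomalies from the drivable area.
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    # Eigenvectors of the k smallest eigenvalues give the spectral embedding.
    _, eigvecs = np.linalg.eigh(L_sym)
    U = eigvecs[:, :k]
    U /= np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

# Usage: W is a symmetric, nonnegative affinity over superpixels of an
# RGB-D road image; the returned labels assign each superpixel to the
# drivable area or to an anomaly cluster.
# labels = spectral_segments(diffuse_affinity(W), k=2)
```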
