Abstract

Automated detection of road damage (ADRD) is a challenging topic in road maintenance: it aims to automatically detect road damage and assess its severity with deep learning. Because the characteristic pixels of damage are sparsely distributed, ADRD is more challenging than generic object detection. Although several public datasets provide a basis for developing ADRD, neither their data volume nor their classification standards are sufficient for network training and feature learning. To address this problem, this work releases a new road damage dataset named CNRDD, labeled according to the latest Chinese evaluation standard for highway technical conditions (JTG 5210-2018). The dataset was collected with professional onboard cameras and manually annotated with eight damage categories, each at three severity levels (mild, moderate, and severe), which can effectively promote ADRD research. In addition, a novel baseline with attention fusion and normalization is proposed to evaluate and analyze the published dataset. It explicitly leverages edge-detection cues to guide attention toward salient regions and suppresses the weights of non-salient features through attention normalization, alleviating the interference of sparse pixel distributions with damage detection. Experimental results demonstrate that the proposed baseline significantly outperforms most existing methods on both the existing RDD2020 dataset and the newly released CNRDD dataset. Furthermore, CNRDD proves the more robust dataset: its high damage density and professional classification standard make it better suited to advancing ADRD.
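To make the baseline's mechanism more concrete, below is a minimal PyTorch sketch of edge-guided attention with attention normalization. It assumes a single-channel edge map and uses a spatial softmax as the normalization; the module name EdgeGuidedAttention, the fusion convolution, and the rescaling are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeGuidedAttention(nn.Module):
    """Hypothetical sketch: fuse edge cues into a spatial attention map,
    then normalize the map so non-salient weights are suppressed."""

    def __init__(self, channels: int):
        super().__init__()
        # Project the concatenated backbone features and single-channel
        # edge map down to one attention logit per spatial location.
        self.fuse = nn.Conv2d(channels + 1, 1, kernel_size=3, padding=1)

    def forward(self, features: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # features: (B, C, H, W) backbone feature map
        # edges:    (B, 1, H, W) edge map, e.g. from a Sobel filter branch
        logits = self.fuse(torch.cat([features, edges], dim=1))
        b, _, h, w = logits.shape
        # Attention normalization (assumed form): a spatial softmax
        # concentrates weight on salient regions and drives the weights of
        # non-salient locations toward zero.
        attn = F.softmax(logits.view(b, 1, -1), dim=-1).view(b, 1, h, w)
        # Rescale so the mean weight is ~1, keeping feature magnitudes stable.
        return features * attn * (h * w)

# Example usage with arbitrary shapes:
module = EdgeGuidedAttention(channels=256)
feats = torch.randn(2, 256, 64, 64)   # backbone features
edges = torch.rand(2, 1, 64, 64)      # precomputed edge cues
out = module(feats, edges)            # (2, 256, 64, 64)
```

Under this reading, the edge map steers the attention logits toward damage boundaries, while the softmax normalization enforces the suppression of non-salient features that the abstract describes.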
