Abstract
Although Convolutional Neural Networks (CNNs) have advanced crack detection, they still act largely as black boxes, and interpreting a CNN remains a difficult task. This paper presents a more accurate, efficient, and 'transparent' UNet-based model for crack detection that integrates visual explanations to interpret the model. The model replaces the encoder of UNet with typical CNN backbones of differing computational and model complexity. Rather than accuracy alone, three evaluation metrics, namely accuracy, time (computational complexity), and memory (model complexity), are used to assess the overall performance of CNNs for crack detection. An open benchmark dataset for crack detection is used to train and evaluate the models. The results show that the type and depth of the encoder significantly influence the accuracy of UNet-based models. UNet-VGG19, UNet-InceptionResNetv2, and UNet-EfficientNetb3 rank as the top three when all evaluation metrics are considered, and they are therefore selected to examine predictions and visual explanations under different conditions and at different training stages. The image background, especially a rough or dark background, significantly affects the performance of the UNet-based models, whereas the crack type (single versus multiple, thin versus thick) does not. All three UNet-based models perform well on large-scale images, but they do not fully capture the shape of cracks in small-scale images. Within the scope of this limited study, UNet-VGG19 performs best across all these conditions in terms of both prediction and visual explanation.
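The paper does not specify its implementation, but a minimal sketch of the encoder-swapping idea, and of how the time and memory metrics might be recorded, could use the open-source segmentation_models_pytorch package (an assumption for illustration; the library, encoder names, and input size below are not confirmed by the paper, and the accuracy metric is omitted as it depends on the dataset):

    # Hypothetical sketch: UNet variants with swappable pretrained encoders,
    # reporting model complexity (parameters) and computational complexity
    # (average forward-pass latency). segmentation_models_pytorch is assumed.
    import time
    import torch
    import segmentation_models_pytorch as smp

    # The paper's top-3 backbones, as named in segmentation_models_pytorch
    ENCODERS = ["vgg19", "inceptionresnetv2", "efficientnet-b3"]

    for name in ENCODERS:
        # UNet decoder on a pretrained CNN encoder; 1 output channel = crack mask
        model = smp.Unet(
            encoder_name=name,
            encoder_weights="imagenet",
            in_channels=3,
            classes=1,
        ).eval()

        # Memory (model complexity): number of trainable parameters
        n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)

        # Time (computational complexity): mean inference latency per image
        x = torch.randn(1, 3, 256, 256)  # illustrative input size
        with torch.no_grad():
            model(x)  # warm-up pass
            start = time.perf_counter()
            for _ in range(10):
                model(x)
            latency = (time.perf_counter() - start) / 10

        print(f"UNet-{name}: {n_params / 1e6:.1f}M params, {latency * 1e3:.1f} ms/image")

Because only the encoder name changes, the same training loop and evaluation code can be reused across all UNet variants, which is what makes a fair accuracy-time-memory comparison straightforward.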