Abstract
This paper presents a novel approach to the challenges of large parameter counts and high computational complexity in existing deep learning models for crack detection. The method trains a student model under the guidance of a pretrained teacher model. Its novelty lies in channel-wise knowledge distillation: activation maps of the teacher and student models are normalized channel by channel, and the asymmetric Kullback–Leibler divergence between them is minimized. Because this objective emphasizes imitating regions with prominent activation values, the student model achieves accurate crack localization. Test results show that the method improves crack segmentation, raising the F1 score and intersection over union by 2.17% and 3.55%, respectively, and outperforms the other knowledge distillation methods compared. This study establishes a lightweight crack segmentation model that balances accuracy and efficiency, providing a practical solution for crack segmentation in real-world scenarios.
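The channel-wise distillation objective described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function names, the temperature value, and the toy activation maps are all assumptions. Each channel's activation map is flattened and normalized into a spatial probability distribution with a softmax, and the asymmetric KL divergence KL(teacher || student) is averaged over channels, so regions where the teacher activates strongly dominate the loss and the student is pushed hardest to imitate those salient (crack) regions.

```python
import math

def softmax(xs, temperature=1.0):
    """Turn a flattened activation map (one channel) into a probability
    distribution over spatial locations."""
    exps = [math.exp(x / temperature) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def channel_wise_kd_loss(student_maps, teacher_maps, temperature=4.0):
    """Channel-wise KD loss (illustrative sketch, names assumed).

    Each element of *_maps is one channel's flattened activation map.
    The asymmetric KL(teacher || student) is computed per channel and
    averaged; high-activation teacher regions receive large probability
    mass and therefore dominate the gradient."""
    loss = 0.0
    for s_map, t_map in zip(student_maps, teacher_maps):
        s = softmax(s_map, temperature)
        t = softmax(t_map, temperature)
        # KL(t || s) = sum_i t_i * log(t_i / s_i)  (asymmetric in t and s)
        loss += sum(ti * math.log(ti / si) for ti, si in zip(t, s))
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return (temperature ** 2) * loss / len(student_maps)

# Toy example: two channels, each a flattened 2x2 activation map.
teacher = [[2.0, 0.1, 0.1, 0.1], [0.1, 3.0, 0.1, 0.1]]
student = [[1.0, 0.5, 0.2, 0.1], [0.2, 1.5, 0.3, 0.1]]
print(channel_wise_kd_loss(student, teacher))
```

Note the asymmetry: KL(teacher || student) penalizes the student most where the teacher's distribution places high probability, which is what biases learning toward prominently activated regions.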