Abstract
Owing to extensive research on deep learning, significant progress has recently been made in trackless surface defect detection (SDD). Nevertheless, existing algorithms face two main challenges. First, although depth maps contain rich spatial structure information, most models accept only red-green-blue (RGB) features as input, which severely constrains performance. Thus, this study proposes a dual-stream teacher model, termed the asymmetrical contrastive learning network (ACLNet-T), which extracts both RGB and depth features to achieve high performance. Second, the dual-stream design substantially increases the number of parameters. As a solution, we designed a single-stream student model (ACLNet-S) that extracts only RGB features. Using knowledge distillation (KD), we leveraged a contrastive distillation loss to transfer the rich multimodal features of ACLNet-T to ACLNet-S pixel by pixel and channel by channel. Furthermore, to compensate for the contrastive distillation loss's exclusive focus on local features, we employed multiscale graph mapping to establish long-range dependencies, transferring global features to ACLNet-S via a multiscale graph mapping distillation loss. Finally, an attentional distillation loss based on an adaptive attention decoder (AAD) was designed to further improve the performance of ACLNet-S. Consequently, we obtained ACLNet-S*, which achieves performance similar to that of ACLNet-T despite having nearly eight times fewer parameters. Through comprehensive experimentation on the industrial RGB-D dataset NEU RSDDS-AUG, ACLNet-S* (ACLNet-S with KD) was confirmed to outperform 16 state-of-the-art methods. Moreover, to showcase the generalization capacity of ACLNet-S*, the proposed network was evaluated on three additional public datasets, where it achieved comparable results. The code is available at https://github.com/Yuride0404127/ACLNet-KD.
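To make the pixel-by-pixel distillation idea concrete, the following is a minimal NumPy sketch, not the authors' implementation: it aligns L2-normalized student and teacher feature vectors at each spatial position via cosine similarity, a common form for feature-level distillation terms. All function names, shapes, and the exact loss form are assumptions for illustration only.

```python
import numpy as np

def pixelwise_distill_loss(student_feat, teacher_feat, eps=1e-8):
    """Illustrative pixel-wise distillation term (hypothetical, not ACLNet's
    exact loss): 1 - cosine similarity between L2-normalized student and
    teacher feature vectors at each pixel, averaged over all positions.
    Both inputs have shape (C, H, W)."""
    c, h, w = student_feat.shape
    s = student_feat.reshape(c, -1)   # (C, H*W): one C-dim vector per pixel
    t = teacher_feat.reshape(c, -1)
    # normalize each pixel's feature vector along the channel axis
    s = s / (np.linalg.norm(s, axis=0, keepdims=True) + eps)
    t = t / (np.linalg.norm(t, axis=0, keepdims=True) + eps)
    cos = (s * t).sum(axis=0)         # cosine similarity per pixel
    return float(np.mean(1.0 - cos))

# A perfectly imitating student incurs (near-)zero loss:
feat = np.random.rand(8, 4, 4)
print(pixelwise_distill_loss(feat, feat))
```

A channel-by-channel variant would normalize along the spatial axis instead, encouraging the student to match the teacher's per-channel activation patterns.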
Published in: IEEE Transactions on Neural Networks and Learning Systems