Abstract

• Image fusion based on the deflectometry principle can enhance the contrast between defective and non-defective regions.
• A benchmark dataset of specular surface defects was constructed for algorithm evaluation and comparison.
• An end-to-end attention-based network framework for automatic surface inspection is proposed.
• The proposed framework maintains performance when dealing with extremely unbalanced pixel classes.
• Quantitative and qualitative tests show superior performance compared to state-of-the-art methods.

Automated defect detection on specular vehicle surfaces with limited features (up to 0.7 mm in diameter or width) and extremely unbalanced pixel classes is still a challenge of product quality control in the automotive industry. Traditional defect inspection on specular surfaces is usually performed by human inspectors, which is subjective, unstable, and unquantified. Moreover, because isolated defect regions offer only limited features and hand-crafted feature extraction models may not coordinate well with each other, it is difficult for traditional methods to reach learning performance comparable to deep networks. To alleviate these problems, a novel end-to-end attention-based fully convolutional neural network framework, DeepInspection, is proposed for automated defect inspection on specular surfaces. Specifically, a sequence fusion algorithm based on the principle of deflectometry is introduced to enhance the contrast between defective regions (pixels with lower intensity) and non-defective regions (pixels with higher intensity). Then, an attention-based deep convolutional neural network (DCNN) with Atrous Spatial Pyramid Pooling (ASPP) is proposed to capture local-to-global feature representations from the pre-processed fused images. To verify the robustness and effectiveness of the proposed method, a benchmark dataset called DeepInspection160 with 160 manually labeled images is established. Although defective pixels account for only 0.561% of the DeepInspection160 dataset, the proposed DeepInspection framework still surpasses several state-of-the-art specular surface inspection methods, achieving F1 scores over 0.7513 (pixel level) and 0.8055 (individual connected components) on this challenging dataset.
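The abstract does not specify the exact fusion rule, only that it builds on the deflectometry principle and darkens defective regions relative to intact surface. The sketch below is a minimal, illustrative interpretation under that assumption: given a stack of phase-shifted fringe images of the same surface patch, defects scatter the reflected pattern and suppress the local intensity swing, so a per-pixel modulation map yields low values at candidate defects. The function name and the modulation-based rule are assumptions, not the paper's algorithm.

```python
import numpy as np

def fuse_deflectometry_sequence(frames: np.ndarray) -> np.ndarray:
    """Fuse a deflectometry image sequence into one contrast-enhanced image.

    frames: array of shape (N, H, W), N phase-shifted fringe images of the
            same specular surface patch.
    Returns a fused image in [0, 1] where low values mark candidate defects
    (reduced fringe modulation) and high values mark intact, mirror-like
    surface. Illustrative only; the paper's fusion rule may differ.
    """
    frames = frames.astype(np.float64)
    # Per-pixel modulation: defects scatter the reflected fringe pattern,
    # shrinking the max-min intensity swing across the sequence.
    modulation = frames.max(axis=0) - frames.min(axis=0)
    # Normalize so defective pixels end up darker than non-defective ones.
    lo, hi = modulation.min(), modulation.max()
    return (modulation - lo) / (hi - lo + 1e-8)
```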

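The abstract names Atrous Spatial Pyramid Pooling but gives no configuration details. A minimal PyTorch sketch of a generic ASPP block is shown below to illustrate how parallel dilated convolutions gather local-to-global context around small defects; the dilation rates, channel widths, and the image-level pooling branch are illustrative defaults, not the values used in DeepInspection.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Generic Atrous Spatial Pyramid Pooling block (illustrative).

    Parallel 3x3 convolutions with increasing dilation rates view the same
    feature map at different receptive-field sizes, which is how local and
    global context around small defect regions can be captured.
    """

    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # Image-level pooling branch adds global context.
        self.global_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.ReLU(inplace=True),
        )
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.global_pool(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))
```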
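The reported F1 scores are given both per pixel and per individual connected component. A sketch of how such scores could be computed is shown below; the overlap-based matching rule for components and the use of scikit-image labeling are assumptions for illustration, not the paper's evaluation protocol.

```python
import numpy as np
from skimage.measure import label

def pixel_f1(pred: np.ndarray, gt: np.ndarray) -> float:
    """F1 over individual pixels of boolean defect masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return 2 * tp / (2 * tp + fp + fn + 1e-8)

def component_f1(pred: np.ndarray, gt: np.ndarray) -> float:
    """F1 over connected components; a component counts as matched if it
    overlaps the other mask by at least one pixel (illustrative rule)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    gt_lbl, n_gt = label(gt, return_num=True)
    pred_lbl, n_pred = label(pred, return_num=True)
    recall_hits = sum(1 for i in range(1, n_gt + 1) if pred[gt_lbl == i].any())
    prec_hits = sum(1 for j in range(1, n_pred + 1) if gt[pred_lbl == j].any())
    precision = prec_hits / max(n_pred, 1)
    recall = recall_hits / max(n_gt, 1)
    return 2 * precision * recall / (precision + recall + 1e-8)
```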