Abstract

Online, real-time defect detection of display panels after the array process is of paramount importance for quality control and yield improvement in the display industry. However, owing to limitations in feature representation, the performance of traditional defect detection methods is unsatisfactory. This paper develops a novel element-wise feature fusion network (EFFNet) to address this issue and achieve high-accuracy, real-time defect detection of display panels. The method adopts a transfer learning and fine-tuning strategy for the feature extraction layers and a decoder with relatively low computational complexity. In particular, a feature fusion module based on element-wise addition of pyramid features is introduced in the skip connections to improve detection efficiency and accuracy. Our method is compared with many state-of-the-art CNN-based models. In addition, the effects of training dataset size, motion blur, and different backgrounds on the performance of the proposed method are investigated. Extensive experiments, including an ablation study, demonstrate that the developed network can accurately detect defects with complex textures, ambiguous boundaries, and low contrast, and that it is robust to motion blur. It outperforms state-of-the-art methods in terms of mIoU, mPA, and F1-Measure. Moreover, it can detect defects at speeds of up to 159 fps with input images of 256 × 256 pixels.
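
The abstract does not give implementation details, so the following is only a minimal sketch of how element-wise addition of pyramid features in a skip connection might be realized. The channel widths, number of pyramid levels, 1×1 projections, and bilinear resizing are assumptions for illustration, not details taken from EFFNet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ElementWiseFusion(nn.Module):
    """Illustrative fusion of encoder pyramid features by element-wise addition.

    Each pyramid level is projected to a common channel width with a 1x1
    convolution, resized to the skip connection's spatial resolution, and
    summed element-wise. All hyperparameters here are assumptions.
    """

    def __init__(self, in_channels, out_channels):
        super().__init__()
        # One 1x1 projection per pyramid level so the feature maps can be added.
        self.projections = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )

    def forward(self, features, target_size):
        fused = None
        for proj, feat in zip(self.projections, features):
            x = proj(feat)
            # Resize every level to the resolution expected by the decoder stage.
            x = F.interpolate(x, size=target_size, mode="bilinear", align_corners=False)
            fused = x if fused is None else fused + x  # element-wise addition
        return fused


if __name__ == "__main__":
    # Hypothetical backbone features at strides 4, 8, and 16 for a 256x256 input
    # (sizes and channel counts chosen only for illustration).
    feats = [
        torch.randn(1, 64, 64, 64),
        torch.randn(1, 128, 32, 32),
        torch.randn(1, 256, 16, 16),
    ]
    fusion = ElementWiseFusion(in_channels=[64, 128, 256], out_channels=64)
    skip = fusion(feats, target_size=(64, 64))
    print(skip.shape)  # torch.Size([1, 64, 64, 64])
```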
