Abstract
Product defect inspection on Indonesian industrial production lines is generally performed by the human eye, resulting in low efficiency and a high margin of error due to eye fatigue. Automated quality assessment systems for mass production can use deep learning connected to cameras for more efficient defect detection. However, employing deep learning with multiple high frame rate cameras (HFRC) demands substantial computation and degrades deep learning performance, especially in the real-time inspection of moving objects. This paper proposes optimizing computational resources for real-time product quality assessment on moving cylindrical shell objects using deep learning with multiple HFRC sensors. Two application frameworks, each embedding several deep learning models, were compared and tested to build a robust and powerful application for assessing production quality on rotating objects. Based on experimental results using three HFRC sensors, a web-based application built on the TensorFlow.js framework outperformed desktop applications in computational efficiency. Moreover, MobileNet v1 delivered the highest performance among the tested models. These results reveal an opportunity for web-based applications as a lightweight framework for quality assessment using multiple HFRC sensors and deep learning.
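To illustrate the kind of in-browser pipeline the abstract describes, the following is a minimal sketch of classifying frames from a single camera stream with TensorFlow.js and MobileNet v1. It is not the authors' implementation; the video element id and the classification loop are illustrative assumptions, and a production setup would instead run a defect-classification model over multiple HFRC streams.

```typescript
// Minimal sketch (not the paper's code): per-frame classification in the
// browser with TensorFlow.js and MobileNet v1.
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

async function runInspection(): Promise<void> {
  // Load MobileNet v1, the best-performing model reported in the abstract.
  const model = await mobilenet.load({ version: 1, alpha: 1.0 });

  // Assumption: a <video id="camera"> element already bound to a camera stream.
  const video = document.getElementById('camera') as HTMLVideoElement;

  // Classify each frame; requestAnimationFrame keeps the loop in step with
  // the browser's rendering, approximating real-time inspection.
  async function classifyFrame(): Promise<void> {
    const predictions = await model.classify(video);
    console.log(predictions[0]?.className, predictions[0]?.probability);
    requestAnimationFrame(classifyFrame);
  }
  requestAnimationFrame(classifyFrame);
}

runInspection();
```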