Abstract
Deep learning-based predictive quality enables manufacturing companies to make data-driven predictions of the quality of a produced product based on process data. A central challenge is that production processes are subject to continuous changes, such as the manufacturing of new products, with the result that previously trained models may no longer perform well. In this paper, we address this problem and propose a method for continual learning in such predictive quality scenarios. To this end, we adapt and extend the memory-aware synapses approach to train an artificial neural network across different product variations. Our evaluation on a real-world regression problem in injection molding shows that the approach successfully prevents the neural network from forgetting previous tasks and improves training efficiency for new tasks. Moreover, by extending the approach with the transfer of network weights from similar previous tasks, we significantly improve its data efficiency and performance on sparse data. Our code is publicly available to reproduce our results and build upon them.
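The memory-aware synapses approach mentioned in the abstract (Aljundi et al. 2018) regularizes training on a new task by penalizing changes to parameters that were important for earlier tasks, where importance is estimated from the gradient magnitude of the squared L2 norm of the network output. The following is a minimal PyTorch sketch of that idea, not the authors' released implementation; the model, data loader, and regularization strength `lam` are placeholders, and the importance estimate uses a common batch-wise approximation.

```python
import torch

def estimate_importance(model, data_loader):
    """MAS importance per parameter: average gradient magnitude of the squared
    L2 norm of the network output over the task's data (batch-wise approximation)."""
    omega = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_samples = 0
    for x, _ in data_loader:                    # labels are not needed for the importances
        model.zero_grad()
        out = model(x)
        # squared L2 norm of the output, averaged over the batch
        sq_norm = out.pow(2).reshape(out.size(0), -1).sum(dim=1).mean()
        sq_norm.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                omega[n] += p.grad.abs() * x.size(0)
        n_samples += x.size(0)
    return {n: w / max(n_samples, 1) for n, w in omega.items()}

def mas_penalty(model, omega, theta_star, lam=1.0):
    """Quadratic penalty that anchors parameters important for earlier tasks."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (omega[n] * (p - theta_star[n]) ** 2).sum()
    return lam * penalty

# After finishing a task: store importances and a snapshot of the weights.
#   omega      = estimate_importance(model, old_task_loader)
#   theta_star = {n: p.detach().clone() for n, p in model.named_parameters()}
# When training on the next product variant (quality regression):
#   loss = torch.nn.functional.mse_loss(model(x), y) + mas_penalty(model, omega, theta_star)
```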
Highlights
Predictive quality enables manufacturing companies to make data-driven in-process predictions of product quality based on process data
The common goal in continual learning is to keep the training effort low and to prevent so-called catastrophic forgetting of the network with each new task. We address this issue and demonstrate the successful application of continual learning to a real use case in injection molding, where we train a neural network for the numerical prediction of product quality based on machine parameters
We investigate a deep learning-based continual learning method for quality prediction across several different product variations in an injection molding use case
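The abstract also mentions extending the approach with a transfer of network weights from similar previous tasks, which supports exactly this kind of training across product variations. The sketch below shows one straightforward way such a transfer could look, under the assumption that each task (product variant) is described by a descriptor vector (e.g., its machine-parameter set points) and that the weights trained per task have been stored; the Euclidean similarity measure is an illustrative assumption, not necessarily the measure used in the paper.

```python
import copy
import torch

def transfer_from_most_similar(model, new_descriptor, task_descriptors, stored_weights):
    """Initialize the network for a new product variant from the stored weights
    of the most similar previous task. Similarity here is the Euclidean distance
    between task descriptors (an illustrative choice)."""
    best_task = min(
        task_descriptors,
        key=lambda t: torch.dist(task_descriptors[t], new_descriptor).item(),
    )
    model.load_state_dict(copy.deepcopy(stored_weights[best_task]))
    return best_task

# Hypothetical bookkeeping kept while training the task sequence:
#   task_descriptors[t] = machine-parameter vector describing task t
#   stored_weights[t]   = copy.deepcopy(model.state_dict()) after training on task t
# source = transfer_from_most_similar(model, new_descriptor, task_descriptors, stored_weights)
```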
Summary
Predictive quality enables manufacturing companies to make data-driven in-process predictions of the quality of a produced product based on process data. The aforementioned examples mainly focus on a particular learning problem, where the training of a neural network happens under the assumption that enough data is available for the respective problem. This assumption is often not met in production: a lot of new process data would have to be collected each time to train another completely new model (Escobar et al. 2021). This strongly limits the sustainable use of deep learning in the production context, especially since the collection of representative process data is costly and time-consuming. Other common problems in the production domain are that, due to limited hardware capacities or corporate policies, long-term process data cannot be stored or accessed and model training must be carried out in a