Abstract

With the ongoing digitization of the manufacturing industry and the ability to bring together data from specific manufacturing processes, there is enormous potential to use machine learning (ML) techniques to improve such processes. In this context, the competitive automotive industry can leverage ML to predict defects before they occur, reducing scrap rates and increasing the robustness and reliability of production processes. In real-world scenarios, however, small and medium-sized companies do not have the volume of data that large companies have, which can prevent the adoption of ML models in this vital niche of the industry. Although these companies face similar problems, collaboration on shared data to develop powerful, general industry solutions is hindered by data privacy concerns. This paper addresses these concerns with a framework that combines Federated Learning (FL) with Digital Envelopes (DE), allowing ML models to be trained while keeping the partners' data and the model parameters private and protected against external cyber-attacks, which remains one of the weaknesses of FL. A case study demonstrates the effectiveness of the proposed framework in handling poisoning attacks on both the training data and the models' weights.
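The abstract does not detail the mechanism, but as a rough illustration of the digital-envelope idea applied to federated model updates, the sketch below uses hybrid encryption: each client encrypts its local weights with a fresh symmetric key and wraps that key with the aggregation server's public key. The cryptography library, the Fernet scheme, the seal/open_envelope helpers, and the plain FedAvg step are assumptions for illustration, not the paper's implementation.

    # Illustrative sketch only: a digital envelope protecting federated model
    # updates in transit; names and libraries here are assumptions, not the
    # authors' code.
    import pickle

    import numpy as np
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def seal(weights, server_public_key):
        """Client side: wrap local model weights in a digital envelope."""
        payload = pickle.dumps(weights)              # serialize model update
        session_key = Fernet.generate_key()          # fresh symmetric key
        ciphertext = Fernet(session_key).encrypt(payload)
        wrapped_key = server_public_key.encrypt(session_key, OAEP)
        return wrapped_key, ciphertext

    def open_envelope(wrapped_key, ciphertext, server_private_key):
        """Server side: recover the symmetric key, then the weights."""
        session_key = server_private_key.decrypt(wrapped_key, OAEP)
        return pickle.loads(Fernet(session_key).decrypt(ciphertext))

    # Minimal end-to-end check with dummy weights from two hypothetical clients.
    server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    updates = [seal([np.ones(3), np.zeros(2)], server_key.public_key())
               for _ in range(2)]
    received = [open_envelope(k, c, server_key) for k, c in updates]
    # Plain FedAvg over the decrypted updates (equal client weighting assumed).
    global_weights = [np.mean(layer, axis=0) for layer in zip(*received)]
    print(global_weights)

In such a scheme, only the aggregation server holding the private key can open the envelopes, so model parameters are never exposed in plaintext on the network, which is the FL weakness the abstract refers to.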
