Abstract

With the rise of social networks and the introduction of data protection laws, companies increasingly train machine learning models on data generated locally on their users' or customers' devices. These data may include sensitive information such as family details, medical records, personal habits, or financial records that, if leaked, could cause serious harm. For this reason, this paper introduces a protocol for training Multi-Layer Perceptron (MLP) neural networks by combining federated learning and homomorphic encryption, so that the data remain distributed across multiple clients and their privacy is preserved. The proposal was validated through several simulations using a dataset for a multi-class classification problem, different MLP architectures, and varying numbers of participating clients. Results are reported for several metrics in both the local and federated settings, and a comparative analysis is carried out. Additionally, the privacy guarantees of the proposal are formally analyzed under a set of stated assumptions, and its added value relative to previous work in the same area is identified.
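The core idea of combining federated learning with homomorphic encryption can be sketched with a toy additively homomorphic scheme. The following is a minimal illustration, not the paper's implementation: it uses a tiny Paillier-style cryptosystem with small, insecure demonstration primes, and the client updates, quantization factor, and helper names are all illustrative assumptions. Each client encrypts a quantized model update, the aggregator multiplies ciphertexts (which corresponds to adding plaintexts) without ever seeing any individual update, and only the key holder decrypts the aggregate.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# WARNING: illustrative only -- tiny fixed primes, NOT secure.
P, Q = 1_000_003, 1_000_033          # small demonstration primes
N = P * Q
N2 = N * N
LAM = math.lcm(P - 1, Q - 1)         # Carmichael's lambda(N)
MU = pow(LAM, -1, N)                 # valid because the generator is g = N + 1

def encrypt(m: int) -> int:
    """Encrypt an integer message m (mod N) with random blinding."""
    r = random.randrange(2, N)
    return (pow(1 + N, m, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    """Recover the plaintext from a ciphertext."""
    l = (pow(c, LAM, N2) - 1) // N   # Paillier's L function
    return (l * MU) % N

def aggregate(ciphertexts):
    """Multiply ciphertexts mod N^2: the product decrypts to the plaintext sum."""
    out = 1
    for c in ciphertexts:
        out = (out * c) % N2
    return out

# Federated setting (hypothetical values): each client quantizes one
# local weight update to a fixed-point integer and encrypts it.
SCALE = 10_000
client_updates = [0.1234, -0.0567, 0.0421]
enc = [encrypt(round(u * SCALE) % N) for u in client_updates]

# The server aggregates ciphertexts without seeing any plaintext update.
agg = aggregate(enc)

# Only the decryption-key holder recovers the (summed) plaintext.
total = decrypt(agg)
if total > N // 2:                   # map from modular back to signed
    total -= N
avg_update = total / SCALE / len(client_updates)
```

In a full protocol the same encrypt-aggregate-decrypt round would be applied coordinate-wise to every weight of the MLP on each federated round, so the server learns only the averaged model update, never any single client's contribution.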
