Abstract

In recent years, fields such as Computer Vision, Natural Language Processing (NLP), and Speech Recognition have been revolutionized by Machine Learning (ML). Much of this success can be attributed to big data, yet the process of collecting such large amounts of data has often been privacy-invasive. Traditional ML approaches require gathering the training data in a central store to train and deploy a model. Federated Learning (FL) is a recent technique that allows machine learning models to be trained without submitting the collected data to a server. Instead of sharing data with the server, the model parameters are sent to the clients, which collaboratively train the model. The data used to train the global model never leaves the devices, strictly enforcing privacy; clients and the server exchange only model parameters such as weights and biases. While this approach respects privacy and is flexible, it comes at the cost of additional challenges, such as communication overhead and model bias due to data heterogeneity, that need to be tackled. To improve FL's communication efficiency and accuracy, in this work we propose a scheme called the Learned Gradient Compression Method (LGCM). The scheme addresses both the accuracy degradation caused by non-IID (non-independent and identically distributed) data and the communication overhead. To mitigate the accuracy degradation caused by non-IID data, clients with a lower degree of non-IID data are selected for training the ML model. To improve communication efficiency, the weight updates are compressed using an autoencoder (AE). The proposed scheme has been applied to a typical ML task to study its performance and effectiveness and has been compared with baseline algorithms. Using a CNN to classify handwritten digits from the MNIST dataset, the achieved accuracy of 96% outperforms the baseline FedAvg algorithm, and the compression ratio achieved with the AE is approximately 5000x.
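The FL round described above (server broadcasts parameters, clients train locally, server averages the returned weights) can be sketched in a minimal, illustrative form. This is a hedged toy of FedAvg-style aggregation only, not the paper's LGCM scheme: the function names (`local_update`, `fed_avg`), the least-squares "local training" stand-in, and all hyperparameters are illustrative assumptions, and the AE-based compression and non-IID client selection steps are omitted.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """Toy 'local training': one gradient step of least-squares on the
    client's (X, y) data. Stands in for CNN training on local MNIST shards."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(global_weights, client_datasets, num_rounds=200):
    """Each round, every client trains from the current global weights;
    the server then averages the returned weights (FedAvg-style)."""
    w = global_weights
    for _ in range(num_rounds):
        client_ws = [local_update(w, d) for d in client_datasets]
        w = np.mean(client_ws, axis=0)  # server-side parameter averaging
    return w

# Synthetic example: four clients sharing one underlying linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(30, 2))
    clients.append((X, X @ true_w))

w = fed_avg(np.zeros(2), clients)
print(np.round(w, 2))  # converges toward true_w; raw data never left the clients
```

Note that only the parameter vectors cross the client/server boundary here; in LGCM these vectors would additionally be compressed by the autoencoder before transmission.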

