Abstract

Federated Learning (FL) over Internet of Medical Things (IoMT) devices has become a research hotspot. As a new architecture, FL protects the data privacy of IoMT devices well, but the security of neural network model transmission cannot be guaranteed. Moreover, popular neural network models are usually large, and deploying them on IoMT devices is a challenge. One promising approach to both problems is to reduce the network scale by quantizing the parameters of the neural network, which greatly improves the security of data transmission and reduces the transmission cost. In the previous literature, the fixed-point quantizer with stochastic rounding has been shown to outperform other quantization methods. However, how to design such a quantizer to achieve the minimum square quantization error remains unknown, and how to apply it in the FL framework also needs investigation. To address these questions, we propose FEDMSQE (Federated Learning with Minimum Square Quantization Error), which achieves the smallest quantization error for each individual client in the FL setting. Through numerical experiments in both single-node and FL scenarios, we show that the proposed algorithm achieves higher accuracy and lower quantization error than other quantization methods.
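To make the quantization scheme mentioned above concrete, the following is a minimal sketch of a fixed-point quantizer with stochastic rounding. It is not the paper's FEDMSQE algorithm; the function name and the scale choice (spreading the observed weight range over the signed integer grid) are illustrative assumptions. Stochastic rounding rounds each scaled value up or down with probability proportional to its fractional part, which makes the quantizer unbiased in expectation.

```python
import numpy as np

def stochastic_round_quantize(x, num_bits=8, scale=None, seed=0):
    """Fixed-point quantization with stochastic rounding (illustrative sketch).

    Each value is mapped onto a signed integer grid, then rounded down or up
    with probability equal to one minus / the fractional part, so the
    expected dequantized value equals the input.
    """
    rng = np.random.default_rng(seed)
    if scale is None:
        # Illustrative scale: spread the observed range over the grid.
        scale = np.max(np.abs(x)) / (2 ** (num_bits - 1) - 1)
    scaled = x / scale
    floor = np.floor(scaled)
    frac = scaled - floor
    # Round up with probability equal to the fractional part.
    q = floor + (rng.random(x.shape) < frac)
    # Clip to the representable signed fixed-point range.
    q = np.clip(q, -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1)
    return q * scale

w = np.array([0.013, -0.42, 0.37, 0.25])
wq = stochastic_round_quantize(w, num_bits=8)
```

Because each value moves to one of its two nearest grid points, the per-element quantization error is bounded by one quantization step (the scale), and only the small integer codes need to be transmitted, which is the source of the communication savings described in the abstract.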
