Federated learning (FL) is an emerging subdomain of machine learning (ML) for distributed and heterogeneous setups. It provides an efficient training architecture, access to sufficient data, and privacy-preserving communication, boosting the performance and feasibility of ML algorithms. In this environment, the global model produced by averaging the trained client models is vital. During each round of FL, model parameters are transferred from each client device to the server, which waits for all models before it can average them. In a realistic scenario where client models are trained on low-power Internet of Things (IoT) devices, waiting for all clients to communicate their model parameters can result in a deadlock. In this paper, a novel temporal model averaging algorithm is proposed for asynchronous federated learning (AFL). Our approach uses a dynamic expectation function that computes the number of client models expected in each round, together with a weighted averaging algorithm that continuously updates the global model. This ensures that the federated architecture does not stall in a deadlock while also increasing the throughput of the server and clients. To demonstrate the importance of asynchronicity in cybersecurity, the proposed algorithm is tested on the NSL-KDD intrusion detection dataset. The global model reaches an accuracy of about 99.5% on this dataset, outperforming traditional FL models in anomaly detection. In terms of asynchronicity, throughput increases by approximately 10.17% for every 30 timesteps.
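To make the mechanism concrete, below is a minimal Python sketch of the two components the abstract names: a dynamic expectation function and staleness-weighted averaging. The function names (expected_clients, weighted_average), the exponential relaxation schedule, the staleness discount, and the 50/50 mixing factor are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def expected_clients(round_idx, total_clients, arrival_rate=0.3, decay=0.05):
    """Dynamic expectation function (illustrative): estimate how many
    client updates to wait for in this round, relaxing the target over
    time so that slow IoT clients cannot deadlock the server."""
    expected = total_clients * arrival_rate * np.exp(-decay * round_idx)
    return max(1, int(round(expected)))

def weighted_average(global_weights, client_updates, staleness):
    """Weighted averaging (illustrative): fold the updates that arrived
    in time into the global model, down-weighting stale updates."""
    mix = np.zeros_like(global_weights)
    total = 0.0
    for update, s in zip(client_updates, staleness):
        w = 1.0 / (1.0 + s)          # assumed staleness discount
        mix += w * update
        total += w
    mix /= total
    # Continuously modify the global model instead of replacing it.
    return 0.5 * global_weights + 0.5 * mix

# One asynchronous round: average as soon as the expected number of
# updates has arrived, rather than blocking on all 10 clients.
global_w = np.zeros(10)
arrived = [np.random.randn(10) for _ in range(3)]   # 3 of 10 reported
if len(arrived) >= expected_clients(round_idx=5, total_clients=10):
    global_w = weighted_average(global_w, arrived, staleness=[0, 1, 2])
```

The key design point this sketch illustrates is that the server's progress condition depends on an expectation rather than on unanimity, so a single slow IoT client can delay at most its own contribution, not the round.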