With the rapid spread of the Internet of Things (IoT), smart applications and services have become increasingly crucial, and they are an easily accessible source of personally identifiable information. Over the last few years, machine learning has become a fundamental tool for securing the routing layer, particularly the routing protocol for low-power and lossy networks (RPL), where both reliable routing and privacy preservation are crucial considerations for edge nodes. In recent works, training models on data collected at a central server has raised data-privacy concerns. Decentralized learning has consequently emerged as a privacy-preserving alternative and has gained popularity in IoT networks: models are trained on heterogeneous data residing at edge nodes and support global decision-making without sharing raw data, but the frequent exchange of weight updates incurs high communication costs. We propose Fed-RPL, a federated learning framework for RPL that trains a gated recurrent unit (GRU) model over decentralized training rounds and applies an 8-bit quantization method (Q-8bit) to compress weight updates, significantly mitigating the communication overhead while maintaining high local-model accuracy. An ensemble unit aggregates the updates and selects the best local model to enhance global model accuracy. Our experiments show that Fed-RPL outperforms classical machine learning (ML) methods in preserving the privacy of edge data, significantly reduces the communication cost in non-IID scenarios, and achieves higher detection accuracy than recent FL approaches.
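The abstract does not specify the exact Q-8bit scheme, so the following is only a minimal sketch of how a client could compress a local weight update to 8 bits before sending it to the aggregator; the function names and the symmetric per-tensor scaling are illustrative assumptions, not Fed-RPL's stated implementation.

```python
import numpy as np

def quantize_update_8bit(update: np.ndarray):
    """Map a float32 weight update to int8 values plus a per-tensor scale.

    NOTE: this is a generic uniform symmetric 8-bit scheme, assumed for
    illustration; the paper's Q-8bit method may differ in detail.
    """
    scale = float(np.max(np.abs(update))) / 127.0 if update.size else 1.0
    scale = scale or 1.0  # avoid division by zero for an all-zero update
    q = np.clip(np.round(update / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_update_8bit(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 update on the aggregation side."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for a local GRU weight delta produced during one training round.
    delta = rng.normal(scale=0.01, size=1000).astype(np.float32)
    q, s = quantize_update_8bit(delta)
    approx = dequantize_update_8bit(q, s)
    # The int8 payload is ~4x smaller than float32, at a small reconstruction error.
    print(q.nbytes, delta.nbytes, float(np.abs(delta - approx).max()))
```

Under these assumptions, each round's uplink traffic shrinks by roughly a factor of four relative to sending float32 updates, which is the kind of communication saving the abstract attributes to Q-8bit.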