Abstract

Recently, smart cities, smart homes, and smart medical systems have challenged the functionality and connectivity of large-scale Internet of Things (IoT) deployments. Edge computing has therefore emerged to supplement these resource-limited devices by offloading computation-intensive tasks to edge nodes (ENs). Benefiting from this capability, IoT devices can save energy while still maintaining the quality of the services they provide. However, computation offloading decisions involve complex, coordinated resource management and must be made in real time in the face of dynamic workloads and radio environments. Therefore, in this work, we deploy multiple deep reinforcement learning (DRL) agents on edge nodes to determine the offloading decisions of the IoT devices. Furthermore, to make DRL-based decision making feasible and to further reduce the transmission costs between the IoT devices and edge nodes, federated learning (FL) is used to train the DRL agents in a distributed fashion. The experimental results demonstrate the effectiveness of the proposed decision scheme and of federated learning in the dynamic IoT system.
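To make the described architecture concrete, the sketch below is a minimal, illustrative simulation of the training loop implied by the abstract: each edge node hosts a DRL agent (reduced here to a linear Q-function over a toy offloading state) that learns from its own local transitions, and a federated-averaging step periodically combines the agents' parameters so that only model weights, not raw device data, are exchanged. All names, dimensions, and the randomly generated transitions (EdgeAgent, fed_avg, STATE_DIM, N_ACTIONS) are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

STATE_DIM = 4   # e.g. queue length, channel gain, CPU load, task size (assumed features)
N_ACTIONS = 2   # 0 = execute locally, 1 = offload to the edge node
LR = 0.01       # Q-learning step size
GAMMA = 0.9     # discount factor


class EdgeAgent:
    """Hypothetical DRL agent at one edge node, using a linear Q-function approximator."""

    def __init__(self, rng):
        self.w = rng.normal(scale=0.1, size=(STATE_DIM, N_ACTIONS))
        self.rng = rng

    def act(self, state, eps=0.1):
        # epsilon-greedy choice between local execution and offloading
        if self.rng.random() < eps:
            return int(self.rng.integers(N_ACTIONS))
        return int(np.argmax(state @ self.w))

    def local_update(self, transitions):
        # one pass of tabular-style Q-learning on locally collected transitions
        for s, a, r, s_next in transitions:
            target = r + GAMMA * np.max(s_next @ self.w)
            td_error = target - (s @ self.w)[a]
            self.w[:, a] += LR * td_error * s


def fed_avg(agents):
    # federated averaging: the aggregator only touches model weights,
    # never the raw IoT/edge transition data
    mean_w = np.mean([ag.w for ag in agents], axis=0)
    for ag in agents:
        ag.w = mean_w.copy()


# toy simulation of a few federated training rounds across three edge nodes
rng = np.random.default_rng(0)
agents = [EdgeAgent(rng) for _ in range(3)]

for round_idx in range(5):
    for ag in agents:
        transitions = []
        for _ in range(32):
            s = rng.random(STATE_DIM)        # stand-in for an observed device state
            a = ag.act(s)
            r = rng.random()                 # stand-in reward (e.g. energy/latency trade-off)
            s_next = rng.random(STATE_DIM)
            transitions.append((s, a, r, s_next))
        ag.local_update(transitions)
    fed_avg(agents)                          # synchronize the edge agents' models
```

In this sketch the reward and state transitions are random placeholders; in the system described by the abstract they would come from the actual workload, energy, and radio-channel dynamics observed at each edge node.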
