Abstract

Federated learning (FL) leverages distributed on-device computation to improve model performance through the interaction of local model updates and global model distribution in an aggregation-averaging process. However, in large-scale heterogeneous Internet of Things (IoT) cellular networks, massive multi-dimensional model update iterations and resource-constrained computation remain significant challenges. This paper introduces a system model that converges software-defined networking (SDN) and network functions virtualization (NFV) to enable device/resource abstraction and to provide NFV-enabled edge FL (eFL) aggregation servers for improved automation and controllability. Multi-agent deep Q-networks (MADQNs) are employed to enforce self-learning softwarization, optimize resource allocation policies, and guide computation offloading decisions. Using gathered network conditions and resource states, the proposed agent explores various actions to estimate the expected long-term reward for a particular state observation. In the exploration phase, optimal actions for joint resource allocation and offloading decisions in different possible states are obtained by maximum Q-value selection. An action-based virtual network function (VNF) forwarding graph (VNFFG) is orchestrated to map VNFs to an eFL aggregation server with sufficient communication and computation resources in the NFV infrastructure (NFVI). The proposed scheme detects deficient allocation actions, modifies VNF backup instances, and reallocates virtual resources for the exploitation phase. A deep neural network (DNN) serves as the value function approximator, and an epsilon-greedy algorithm balances exploration and exploitation. The scheme primarily considers the criticality of FL model services and congestion states to optimize the long-term policy.
Simulation results show that the proposed scheme outperforms the reference schemes in terms of Quality of Service (QoS) metrics, including packet drop ratio, packet drop count, packet delivery ratio, delay, and throughput.
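As a minimal illustration of the epsilon-greedy balance between exploration and exploitation mentioned above (function and variable names are illustrative, not taken from the paper), an agent can pick a random action with probability epsilon and otherwise exploit the maximum Q-value:

```python
import random

def epsilon_greedy_action(q_values, epsilon):
    """With probability epsilon, explore a random action;
    otherwise exploit the action with the maximum Q-value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# With epsilon = 0.0 the agent always exploits the max-Q action.
print(epsilon_greedy_action([0.1, 0.9, 0.3], 0.0))  # -> 1
```

In practice, epsilon is typically decayed over training so the agent shifts from the exploration phase toward exploitation as Q-value estimates mature.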

Highlights

  • The fast-growing deployment of the Internet of Things (IoT) in cellular networks has exponentially increased massive data volumes and heterogeneous service types, with the requirement of ultra-reliable low-latency communication (URLLC)

  • Edge federated learning (FL) partitions each communication round into two preeminent steps: (1) the local models w_n^k, trained on n data batches from k selected participants, are aggregated at an optimally selected edge server, and (2) global communications between edge servers and a central parameter server are orchestrated at an appropriate interval [9,10,11]

  • This paper proposes a multi-agent approach, comprising a proposed adaptive resource allocation agent (PARAA) for optimizing virtual resource allocation and a proposed intelligent computation offloading agent (PICOA) for recommending edge FL (eFL) aggregation server offloading, in order to meet the URLLC requirements of mission-critical IoT model services


Summary

Introduction

The fast-growing deployment of the Internet of Things (IoT) in cellular networks has exponentially increased massive data volumes and heterogeneous service types, with the requirement of ultra-reliable low-latency communication (URLLC). Edge FL (eFL) partitions each communication round into two preeminent steps: (1) the local models w_n^k, trained on n data batches from k selected participants, are aggregated at an optimally selected edge server, and (2) global communications between edge servers and a central parameter server are orchestrated at an appropriate interval [9,10,11]. This technique reduces cloud-centric communications and improves learning precision. A model-free multi-agent approach feasibly tackles the heterogeneity of the core backbone network for efficient traffic control and channel reassignment in SDN-based IoT networks [20]
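The two-step edge aggregation described above can be sketched as a FedAvg-style weighted average (a minimal example under simplifying assumptions; the function and variable names are illustrative, not from the paper):

```python
def weighted_average(models, batch_sizes):
    """FedAvg-style aggregation: average parameter vectors weighted
    by each participant's local data batch size."""
    total = sum(batch_sizes)
    dim = len(models[0])
    return [sum(m[i] * n for m, n in zip(models, batch_sizes)) / total
            for i in range(dim)]

# Step 1: an edge server aggregates its selected participants' local models.
edge_model = weighted_average([[1.0, 2.0], [3.0, 4.0]], [10, 30])
print(edge_model)  # -> [2.5, 3.5]

# Step 2: the central parameter server periodically aggregates edge models
# in the same way, weighted by the data volume behind each edge server.
global_model = weighted_average([edge_model, [2.0, 2.0]], [40, 20])
```

Performing step 1 at the edge is what reduces cloud-centric communication: only the aggregated edge models, not every participant's update, traverse the backbone to the central parameter server.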

Paper Contributions
Paper Organizations
Architectural Framework
Proposed DQN Components
Algorithm Flow for MADQNs in Proposed Environment
Self-Organizing Agent Controllers for Optimal Edge Aggregation Decisions
Simulation Setup
Reference Schemes and Performance Metrics
Results and Discussions
Conclusion

