Abstract

As an emerging paradigm that addresses data privacy and transmission efficiency, decentralized learning aims to acquire a global model using the training data distributed over many user devices. It is a challenging problem, since link loss, partial device participation, and non-independent and identically distributed (non-iid) data would all deteriorate the performance of decentralized learning algorithms. Existing work is often restricted to linear models or performs poorly on non-iid data. Therefore, in this paper, we propose a decentralized learning scheme based on distributed parallel stochastic gradient descent (DPSGD) and graph neural networks (GNNs) to address these challenges. Specifically, each user device participating in the learning task uses its local training data to compute local stochastic gradients and update its own local model. Then, each device uses the GNN model, exchanging model parameters with its neighbors, to approximate the average of all local models as the resultant global model. The iteration repeats until the algorithm converges. Extensive simulation results over both iid and non-iid data validate the algorithm's convergence to near-optimal results and its robustness to both link loss and partial device participation.
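To make the training loop concrete, below is a minimal sketch of one such scheme in Python/NumPy. It is an illustration under assumptions rather than the paper's implementation: the device count, learning rate, synthetic least-squares data, and the fixed ring-topology mixing matrix MIX are invented for the example, and the plain mixing step stands in for the learned GNN aggregation (a separate sketch of that appears under Highlights).

    # Sketch of DPSGD-style decentralized learning: local SGD steps
    # interleaved with neighbor averaging. All sizes are toy assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    num_devices, dim = 8, 5          # assumed number of devices / model size
    lr = 0.1                          # assumed learning rate

    # Doubly stochastic mixing matrix for a ring topology: each device
    # averages with its two ring neighbors (stand-in for GNN aggregation).
    MIX = np.zeros((num_devices, num_devices))
    for i in range(num_devices):
        MIX[i, i] = 0.5
        MIX[i, (i - 1) % num_devices] = 0.25
        MIX[i, (i + 1) % num_devices] = 0.25

    # Synthetic local least-squares problems standing in for each device's data.
    A = rng.normal(size=(num_devices, 20, dim))
    b = rng.normal(size=(num_devices, 20))

    def local_grad(w_i, i):
        """Mini-batch stochastic gradient of device i's local loss."""
        idx = rng.choice(20, size=4, replace=False)
        Ai, bi = A[i][idx], b[i][idx]
        return Ai.T @ (Ai @ w_i - bi) / len(idx)

    w = np.zeros((num_devices, dim))  # one local model per device
    for step in range(200):
        # 1) each device takes a local SGD step on its own data
        w = np.stack([w[i] - lr * local_grad(w[i], i) for i in range(num_devices)])
        # 2) each device averages with its neighbors (consensus step)
        w = MIX @ w

    print("disagreement across devices:", np.linalg.norm(w - w.mean(axis=0)))

With a doubly stochastic mixing matrix such as MIX, the per-device models contract toward their network-wide average between gradient steps; approximating that average more flexibly is the role the GNN aggregation plays in the proposed scheme.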

Highlights

  • We mainly investigate a decentralized learning architecture, which avoids the possible congestion at the central server that arises in a centralized architecture

  • We propose a new decentralized learning scheme utilizing graph neural network (GNN) aggregation for training generalized models in networks that can be modeled as undirected or balanced directed graphs (see the sketch after this list)

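To illustrate what GNN-based aggregation looks like, the sketch below implements a polynomial graph filter, y = sum_k h_k S^k x, followed by a pointwise nonlinearity, which is the standard building block of such GNNs. Each multiplication by the shift operator S requires only one round of neighbor-to-neighbor exchanges, so the whole layer can run in a decentralized manner. The ring topology, the degree-normalized choice of S, and the placeholder filter taps are assumptions for illustration; in the proposed scheme the taps would be trained so that the output approximates the network-wide average of the inputs.

    # Sketch of a GNN layer built from a finite-order graph filter.
    import numpy as np

    def graph_filter(S, x, taps):
        """Apply y = sum_k taps[k] * S^k @ x; each power of S is one hop."""
        y = np.zeros_like(x)
        z = x.copy()                 # z holds S^k x, starting at k = 0
        for h_k in taps:
            y += h_k * z
            z = S @ z                # one more round of local exchanges
        return y

    def gnn_layer(S, x, taps, use_relu=True):
        """One GNN layer: graph filter followed by a pointwise nonlinearity."""
        y = graph_filter(S, x, taps)
        return np.maximum(y, 0.0) if use_relu else y

    # Toy undirected graph: ring of 8 nodes, degree-normalized shift operator.
    n = 8
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0
    S = A / A.sum(axis=1, keepdims=True)

    x = np.random.default_rng(1).normal(size=n)   # local values to average
    taps = np.full(4, 0.25)                       # untrained placeholder taps
    y = gnn_layer(S, x, taps, use_relu=False)
    print("target mean:", x.mean(), " filter output:", y[:3])

Because every operation is either a local computation or a one-hop exchange, no device ever needs the global topology or a central coordinator, which is what makes this aggregation suitable for the decentralized setting above.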

Summary

A Graph Neural Network Based Decentralized Learning Scheme

With the immense growth of data and the exponential increase in computation power, great attention has been given to machine learning techniques, which show superior performance in classification, regression, anomaly detection, denoising, and translation tasks. The long runtime of training models on a single machine has become the main bottleneck for large-scale applications, which motivates the use of distributed systems and their increasing parallel computation power.

Literature Review
Methods
(A noted limitation of prior schemes: they cannot flexibly accommodate the communication-computation tradeoff)
Main Contribution
Organization
System Model
Decentralized Learning Model
GNN Aggregation Based Average Consensus
Overview of GNN
GNN for Consensus
Training Process
Decentralized GNN Aggregation
Simulation Settings
Performance of GNN with Different Filter Orders
Performance Comparison with FIR Graph Filters
Performance Evaluation over Decentralized Learning
Performance Comparison with CPSGD and DPSGD
Performance Comparison between Different Sampling Strategies
Scalability to Scenarios with Different Topologies
Non-IID Scenario
Findings
Conclusions and Future Directions

