Abstract

Distributed machine learning algorithms are increasingly used in multirobot systems and are prone to Byzantine attacks. In this article, we consider a distributed implementation of the stochastic gradient descent (SGD) algorithm in a cooperative network, where networked agents optimize a global loss function by performing SGD on their local data and aggregating the estimates of their immediate neighbors. Byzantine agents can send arbitrary estimates to their neighbors, which may disrupt the convergence of normal agents to the optimum state. We show that if every normal agent combines its neighbors' estimates (states) such that the aggregated state lies in the convex hull of its normal neighbors' states, then resilient convergence is guaranteed. To ensure this sufficient condition, we propose a resilient aggregation rule based on the notion of centerpoint, which is a generalization of the median to higher-dimensional Euclidean space. We evaluate our results using examples of target pursuit and pattern recognition in multirobot systems. The evaluation results demonstrate that distributed learning with average, coordinate-wise median, and geometric median-based aggregation rules fails to converge to the optimum state, whereas the centerpoint-based aggregation rule is resilient in the same scenario.
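The following is a minimal sketch, not the authors' implementation, of the distributed SGD structure the abstract describes: in each synchronous round, every agent aggregates the states received from its neighbors and then takes a local gradient step on its own data. The least-squares local loss, the dictionary-based graph representation, and the function names (local_gradient, aggregate, distributed_sgd_step) are illustrative assumptions; the aggregator shown is a coordinate-wise median used only as a placeholder, since the paper's rule is centerpoint-based and is specifically designed so the aggregated state stays in the convex hull of the normal neighbors' states.

```python
import numpy as np

def local_gradient(w, X, y):
    """Gradient of an assumed least-squares loss on one agent's local data."""
    return X.T @ (X @ w - y) / len(y)

def aggregate(states):
    """Placeholder aggregator: coordinate-wise median of the collected states.
    The paper instead uses a centerpoint-based rule, which guarantees the
    aggregate lies in the convex hull of the normal neighbors' states."""
    return np.median(np.stack(states), axis=0)

def distributed_sgd_step(weights, neighbors, data, lr=0.01):
    """One synchronous round: each agent aggregates neighbor states, then
    takes a local SGD step on its own data.

    weights:   dict mapping agent id -> current parameter vector (np.ndarray)
    neighbors: dict mapping agent id -> list of neighboring agent ids
    data:      dict mapping agent id -> (X_i, y_i) local dataset
    """
    new_weights = {}
    for i, w_i in weights.items():
        # Collect the agent's own state plus the states received from neighbors.
        # Byzantine neighbors may report arbitrary vectors here.
        received = [weights[j] for j in neighbors[i]] + [w_i]
        w_agg = aggregate(received)
        X_i, y_i = data[i]
        new_weights[i] = w_agg - lr * local_gradient(w_agg, X_i, y_i)
    return new_weights
```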
