Abstract

The increasing amount of data generated at edge nodes and the quest for privacy have resulted in learning at the edge. Computations are performed at edge devices, and the outputs are communicated to a central node for updating the model. The edge nodes are available intermittently and are connected via low-bandwidth links. The edge nodes communicate local gradients to helper nodes, and these helpers forward messages to the central node after possible aggregation. Recently, schemes using repetition codes and maximum-distance-separable (MDS) codes, known respectively as the aligned repetition coding (ARC) and aligned MDS coding (AMC) schemes, were proposed. It was observed that the AMC scheme achieves the optimal communication cost at edge nodes, at the expense of an increased communication cost incurred by helpers. An upper bound on the communication cost at helpers for the AMC scheme was known in the literature. In this paper, a tradeoff between the communication costs at edge nodes and at helper nodes is established with the help of a newly proposed pyramid scheme. The scheme makes use of the well-known class of pyramid codes, thus expanding the realm of application of locally repairable codes to distributed learning. The communication costs at both helper nodes and edge nodes are exactly characterized. Using the developed technique, the exact communication cost at helper nodes can be computed for the AMC scheme as well. Next, we develop a technique that improves the aggregation strategy of both the pyramid and AMC schemes, yielding a significant reduction in the communication cost at helpers without changing the parameters of the code used by the edges. Finally, we present a greedy algorithm that improves the aggregation strategy of the ARC scheme, achieving a significantly reduced communication cost at helpers.
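To make the coded-aggregation model concrete, the following is a minimal sketch of MDS-coded gradient aggregation through helpers in the spirit of the AMC scheme. It is not the paper's exact construction: the real-valued Vandermonde generator, the split-and-encode layout, and the parameters n, k, d, E are all assumptions chosen for illustration.

```python
# Illustrative toy: edge nodes MDS-encode split gradients, helpers aggregate,
# and the central node decodes the gradient sum from any k available helpers.
import numpy as np

n, k = 5, 3          # n helpers, any k suffice to decode (assumed parameters)
d = 6                # gradient length, divisible by k (assumed)
E = 4                # number of edge nodes (assumed)

rng = np.random.default_rng(0)
grads = [rng.standard_normal(d) for _ in range(E)]   # local gradients

# Vandermonde generator: every k x k submatrix is invertible (MDS over R).
G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)  # n x k

def encode(g):
    # Edge node: split the gradient into k sub-vectors and encode them
    # into n coded pieces; piece i is sent to helper i.
    S = g.reshape(k, d // k)          # k x (d/k)
    return G @ S                      # n x (d/k), row i -> helper i

coded = [encode(g) for g in grads]

# Helper i: sum the coded pieces received from all edges. By linearity,
# the sums form a codeword of the aggregate gradient.
helper_msgs = sum(coded)              # n x (d/k)

# Central node: decode from ANY k helpers (the rest model intermittently
# unavailable nodes).
avail = [0, 2, 4]
S_sum = np.linalg.solve(G[avail], helper_msgs[avail])   # k x (d/k)
recovered = S_sum.reshape(d)

assert np.allclose(recovered, sum(grads))
```

The linearity of the code is what lets helpers aggregate before forwarding, and decodability from any k of the n helper messages is what tolerates intermittent node availability; the per-edge cost shrinks by the factor k at the price of n helper-to-center messages, which is the edge/helper cost tension the abstract describes.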
