Abstract

In order to mitigate the high communication cost in distributed and federated learning, various vector compression schemes, such as quantization, sparsification and dithering, have become very popular. In designing a compression method, one aims to communicate as few bits as possible, which minimizes the cost per communication round, while at the same time attempting to impart as little distortion (variance) to the communicated messages as possible, which minimizes the adverse effect of the compression on the overall number of communication rounds. However, intuitively, these two goals are fundamentally in conflict: the more compression we allow, the more distorted the messages become. We formalize this intuition and prove an uncertainty principle for randomized compression operators, thus quantifying this limitation mathematically, and effectively providing asymptotically tight lower bounds on what might be achievable with communication compression. Motivated by these developments, we call for the search for the optimal compression operator. In an attempt to take a first step in this direction, we consider an unbiased compression method inspired by the Kashin representation of vectors, which we call Kashin compression (KC). In contrast to all previously proposed compression mechanisms, KC enjoys a dimension-independent variance bound, for which we derive an explicit formula even in the regime when only a few bits need to be communicated per vector entry.
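As a point of reference (and not the paper's KC operator), the sketch below shows a standard unbiased compressor, random-k sparsification: it satisfies E[C(x)] = x, but its variance factor ω = d/k − 1 grows with the dimension d, which is exactly the kind of dimension dependence that KC is designed to avoid. The function name rand_k and all parameter values are illustrative assumptions, not part of the paper.

```python
import numpy as np

def rand_k(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Unbiased random-k sparsification: keep k coordinates chosen uniformly
    at random and rescale them by d/k so that E[C(x)] = x."""
    d = x.shape[0]
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = (d / k) * x[idx]
    return out

# Empirical check of unbiasedness and of the variance bound
# E||C(x) - x||^2 <= omega * ||x||^2 with omega = d/k - 1 (dimension dependent).
rng = np.random.default_rng(0)
d, k = 1024, 32
x = rng.standard_normal(d)
samples = np.stack([rand_k(x, k, rng) for _ in range(5000)])
print("bias norm:", np.linalg.norm(samples.mean(axis=0) - x))
print("empirical variance / ||x||^2:", ((samples - x) ** 2).sum(axis=1).mean() / (x @ x))
print("theoretical omega = d/k - 1:", d / k - 1)
```

With d = 1024 and k = 32, only about 3% of the coordinates are transmitted, but the variance factor is ω = 31; shrinking the number of communicated entries further inflates the variance, which is the bit/variance tension the abstract describes.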

Highlights

  • In the quest for high-accuracy machine learning models, both the size of the model and the amount of data necessary to train the model have grown enormously over time (Schmidhuber, 2015; Vaswani et al., 2019)

  • A setting most common to federated learning (Konecny et al., 2016; McMahan et al., 2017; Karimireddy et al., 2019a) is one where training data is inherently distributed across a large number of mobile edge devices due to data privacy concerns

  • In all cases of distributed and federated learning, communication between computing nodes is inevitable and forms the primary bottleneck of such systems (Zhang et al., 2017; Lin et al., 2018)


Summary

Introduction

In the quest for high-accuracy machine learning models, both the size of the model and the amount of data necessary to train the model have grown enormously over time (Schmidhuber, 2015; Vaswani et al., 2019). Because of this, performing the learning process on a single machine is often infeasible. In a typical scenario of distributed learning, the training data (and possibly the model as well) is spread across different machines, and training is performed in a distributed manner. Most common to federated learning (Konecny et al., 2016; McMahan et al., 2017; Karimireddy et al., 2019a) is the setting where training data is inherently distributed across a large number of mobile edge devices due to data privacy concerns.

Communication bottleneck
Compressed learning
Contributions
Uncertainty principle for compression operators
UP for biased compressions
UP for unbiased compressions
Compression with polytopes
Representation systems
Computing Kashin’s representation
Quantizing Kashin’s representation
Measure concentration and orthogonal matrices
Concentration on the sphere for Lipschitz functions
Random orthogonal matrices
Implementation details of KC
Empirical variance comparison
Minimizing quadratics with CGD
Minimizing quadratics with distributed CGD
Proof of Theorem 1
Proof of Theorem 3
Proof of Theorem 6
Proof of Theorem 11
Proof of Theorem 7