Abstract

Distributed stochastic gradient descent (SGD) algorithms are widely used to train large-scale deep learning models, but the communication overhead among workers has become the new system bottleneck. Recently, two major categories of gradient compression techniques have been proposed: gradient quantization and gradient sparsification. Gradient quantization can achieve a compression ratio of at most 32 with little impact on model convergence accuracy, whereas gradient sparsification can reach a much higher compression ratio at the cost of some loss in model accuracy. To obtain a higher communication compression ratio with minimal loss of model accuracy, we propose a mixed compression strategy named Hybrid Gradient Compression (HGC), which combines the merits of both quantization and sparsification. We validate the effectiveness of HGC by training complex models with millions of parameters (e.g., ResNet, VGG, and LSTM) on the CIFAR-10, CIFAR-100, and Penn Treebank datasets on a GPU cluster. Our experiments show that HGC achieves a much higher gradient compression ratio at the cost of a small accuracy loss.
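
To make the combination concrete, the following is a minimal sketch, not the paper's exact algorithm, of one way sparsification and quantization can be composed: keep only the largest-magnitude gradient entries and transmit each kept entry as a sign bit plus a single shared scale. The function names (`hybrid_compress`, `hybrid_decompress`) and the `k_ratio` parameter are illustrative assumptions.

```python
import numpy as np

def hybrid_compress(grad, k_ratio=0.01):
    """Sketch of a hybrid compressor: keep the top-k largest-magnitude
    gradient entries (sparsification), then encode the kept values as
    a 1-bit sign plus one per-tensor scale (quantization)."""
    flat = grad.ravel()
    k = max(1, int(k_ratio * flat.size))
    # Indices of the k entries with the largest magnitude.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    values = flat[idx]
    scale = np.abs(values).mean()             # shared magnitude for the kept entries
    signs = np.sign(values).astype(np.int8)   # 1-bit payload per kept entry
    return idx.astype(np.int32), signs, scale, grad.shape

def hybrid_decompress(idx, signs, scale, shape):
    """Reconstruct a dense gradient approximation from the sparse, quantized payload."""
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[idx] = signs.astype(np.float32) * scale
    return flat.reshape(shape)

# Usage: compress a gradient tensor, then reconstruct an approximation.
g = np.random.randn(1024, 256).astype(np.float32)
payload = hybrid_compress(g, k_ratio=0.01)
g_hat = hybrid_decompress(*payload)
```

Under this hypothetical scheme, each kept entry costs one sign bit plus an index, rather than a full 32-bit float, which is how combining the two techniques can push the compression ratio beyond what quantization alone allows.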
