Abstract

Large-scale distributed convolutional neural network (CNN) training brings two performance challenges: model performance and system performance. A large batch size usually leads to a loss in model test accuracy, which counteracts the benefits of parallel SGD. Existing solutions require massive hand-tuning of hyperparameters. To overcome this difficulty, we analyze the training process and find that earlier training stages are more sensitive to batch size. Accordingly, we assert that different stages should use different batch sizes, and we propose a variable batch size strategy. To maintain high test accuracy at larger batch sizes, we design an auto-tuning engine that automatically tunes the parameters of the proposed variable batch size strategy. Furthermore, we develop a dataflow implementation approach to achieve high-throughput CNN training on a supercomputer system. Our approach achieves high generalization performance on state-of-the-art CNN networks. For ShuffleNet, ResNet-50, and ResNet-101 training on the ImageNet-1K dataset, we scale the batch size to 120 K without accuracy loss and to 128 K with only a slight loss, and the dataflow implementation approach achieves 93.5% scaling efficiency on 1024 GPUs compared with the state-of-the-art.
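
To make the stage-wise idea concrete, the minimal Python sketch below shows one way a variable batch size schedule could be expressed: smaller batches in the early, batch-size-sensitive epochs, growing in later stages. The abstract does not specify the stage boundaries, batch sizes, or the auto-tuning logic, so every value and name here is an illustrative assumption, not the paper's actual method.

# Hypothetical sketch of a stage-wise (variable) batch size schedule.
# The actual stage boundaries, batch sizes, and auto-tuning rules are not
# given in the abstract; every value below is a placeholder.

def batch_size_for_epoch(epoch: int) -> int:
    """Return a batch size that grows as training progresses.

    Earlier epochs, which are more sensitive to batch size, use a smaller
    batch; later epochs scale up to exploit more data parallelism.
    """
    # (start_epoch, batch_size) stage table -- placeholder values only.
    stages = [(0, 8_192), (30, 32_768), (60, 120_000)]
    size = stages[0][1]
    for start, bs in stages:
        if epoch >= start:
            size = bs
    return size

if __name__ == "__main__":
    for epoch in (0, 29, 30, 60, 89):
        print(f"epoch {epoch:2d} -> batch size {batch_size_for_epoch(epoch):,}")

In an actual run, an auto-tuning engine would choose these stage parameters rather than hard-coding them, which is the role the abstract assigns to its tuning component.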
