Abstract

Modern deep neural networks often cannot be trained on a single GPU because of their large model and data sizes. Model parallelism splits a model across multiple GPUs, but making it scalable and seamless is challenging because the GPUs must share different kinds of information, which incurs communication overhead. Specifically, we identify two key issues that can make the parallelism inefficient and inaccurate: an efficient pipelining technique is crucial to maximize GPU utilization, and the normalization layers in deep neural networks may affect performance because mini-batch statistics are shared differently across GPUs. In this work, we address these issues by investigating efficient pipelining for model parallelism and effective normalization for model and data parallelism when training a model with a large mini-batch on multiple GPUs, so that model accuracy is not compromised. First, we propose a novel method to search for the optimal micro-batch size for model parallelism, considering the number of GPUs and their memory size. For efficient pipelining, a mini-batch is usually divided into smaller batches (called micro-batches), and training should be performed with the optimal micro-batch size to maximize the utilization of GPU computing resources. Our proposed micro-batch size search algorithm increased image throughput by up to 12% and the trainable mini-batch size by 25% compared with the conventional model parallelism method. Second, we investigate normalization methods in distributed deep learning training under different parallelism schemes. Our experiments with different normalization methods suggest that, for data parallelism, the performance of batch normalization can be improved by sharing batch statistics among GPUs. We also confirmed that group normalization helped minimize accuracy degradation in model parallelism with pipelining and yielded consistent accuracy across diverse mini-batch sizes.
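To make the two ideas summarized above more concrete, the sketch below illustrates (in PyTorch) how a mini-batch can be split into micro-batches for pipelined model parallelism, how candidate micro-batch sizes might be filtered by the number of GPUs and a memory bound, and how the normalization layer can be chosen per parallelism scheme. This is a minimal illustration under assumed names and policies (`candidate_micro_batch_sizes`, `max_per_gpu`, the `max(...)` selection rule), not the paper's actual search algorithm or pipeline implementation.

```python
# Illustrative sketch only: the real micro-batch size search in the paper
# profiles GPU memory and throughput; here we use a simple divisibility and
# memory-bound filter as a stand-in.
import torch
import torch.nn as nn


def candidate_micro_batch_sizes(mini_batch: int, num_gpus: int, max_per_gpu: int):
    """Micro-batch sizes that divide the mini-batch evenly, fit the assumed
    per-GPU bound, and produce at least `num_gpus` micro-batches so the
    pipeline stages can all stay busy."""
    return [m for m in range(1, mini_batch + 1)
            if mini_batch % m == 0
            and m <= max_per_gpu
            and mini_batch // m >= num_gpus]


def split_into_micro_batches(batch: torch.Tensor, micro_batch: int):
    """GPipe-style splitting: the mini-batch is chunked into micro-batches
    that are fed through the pipeline stages one after another."""
    return list(torch.split(batch, micro_batch, dim=0))


def make_norm(num_channels: int, scheme: str) -> nn.Module:
    """Pick a normalization layer for a given parallelism scheme.

    - "data": batch norm, whose statistics can later be synchronized across
      GPUs with nn.SyncBatchNorm.convert_sync_batchnorm(model).
    - "model": group norm, whose statistics do not depend on the
      (micro-)batch size, so pipelined micro-batches do not change them.
    """
    if scheme == "model":
        return nn.GroupNorm(num_groups=32, num_channels=num_channels)
    return nn.BatchNorm2d(num_channels)


if __name__ == "__main__":
    mini_batch = 64
    candidates = candidate_micro_batch_sizes(mini_batch, num_gpus=4, max_per_gpu=16)
    micro = max(candidates)  # placeholder policy; the paper searches for an optimum
    chunks = split_into_micro_batches(torch.randn(mini_batch, 3, 224, 224), micro)
    print(micro, len(chunks), type(make_norm(64, "model")).__name__)
```

The key point the sketch tries to convey is the trade-off the abstract describes: larger micro-batches use each GPU more efficiently per step but leave fewer micro-batches to fill the pipeline, while batch-dependent normalization behaves differently once the mini-batch is split or distributed.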
