Abstract
When training a deep learning model with distributed training, the hardware resource utilization of each device depends on the model structure and the number of devices used for training. Distributed training has recently been applied to edge computing. Since edge devices have hardware resource limitations such as memory, there is a need for training methods that use hardware resources efficiently. Previous research focused on reducing training time by optimizing the synchronization process between edge devices or by compressing the models. In this paper, we monitored hardware resource usage as a function of the number of layers and the batch size of the model during distributed training on edge devices. We analyzed how memory usage and training time varied as the batch size and the number of layers increased. Experimental results demonstrated that the larger the batch size, the fewer the synchronizations between devices, resulting in less accurate training. For the shallow model, training time increased as the number of devices used for training increased, because synchronization between devices took longer than the training computation itself. This paper finds that efficient use of hardware resources for distributed training requires selecting devices according to model complexity, and that fewer layers and smaller batches are needed for efficient hardware use.
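The link between batch size and synchronization frequency follows from simple arithmetic: in synchronous data-parallel training, one gradient synchronization (all-reduce) occurs per training step, and larger batches mean fewer steps per epoch. The following is a minimal sketch of that calculation; the function name, dataset size, and device counts are illustrative assumptions, not the paper's actual experimental setup.

```python
import math

def syncs_per_epoch(dataset_size, batch_size, num_devices):
    """Count gradient synchronizations per epoch in synchronous
    data-parallel training: each device processes `batch_size`
    samples per step, and one all-reduce happens per step."""
    samples_per_step = batch_size * num_devices
    return math.ceil(dataset_size / samples_per_step)

# Hypothetical example: 60,000 training samples on 4 edge devices.
# Doubling the batch size halves the number of synchronization
# points per epoch, i.e. gradients are averaged less often.
for bs in (16, 32, 64, 128):
    print(f"batch={bs:3d} -> {syncs_per_epoch(60000, bs, 4)} syncs/epoch")
```

Fewer synchronizations per epoch reduce communication overhead but also mean less frequent gradient averaging, which is consistent with the accuracy degradation the abstract reports for large batches.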
Highlights
The number of Internet of Things (IoT) devices connected to cloud servers is growing, which increases the amount of data that needs to be processed by those servers [1]
We demonstrated that making efficient use of the hardware resources of edge devices requires small batches and models with fewer layers
We demonstrated a hardware resource-efficient distributed training model configuration for resource-constrained edge devices
Summary
The number of Internet of Things (IoT) devices connected to cloud servers is growing, which increases the amount of data that needs to be processed by those servers [1]. Network response latency between cloud servers and IoT devices is increasing. Edge computing [2] can be applied to perform real-time computation on the device that generates and collects the data. Intelligent environments such as smart homes and smart factories that combine deep learning (DL) with edge devices are becoming more common [3,4,5,6]. Offloading computation from the edge device to a server reduces the execution time of a DL application. Typically, only the inference phase is executed locally, while the training phase is executed on the server [7].