Abstract

We empirically determine the optimal batch size for training a Deep Q-Network (DQN) on the cart-pole system. Training efficiency is evaluated by the network's post-training performance on the task and by the total time and number of steps required to train. The network is trained for 10 different batch sizes, with all other hyperparameter values fixed. The network carries out the cart-pole task with probability 0.99 or higher for batch sizes from 8 to 2048. As the batch size increases, the training time per step tends to increase linearly, while the total number of training steps decreases faster than exponentially. From these tendencies we empirically observe a convex, quadratic relationship between the total training time and the logarithm of the batch size, from which the batch size that minimizes training time can be found. Both the total steps and the total time for training are minimized at a batch size of 64. This result may extend to other learning algorithms and tasks, and further motivates theoretical analysis, from an optimization point of view, of the relationship between batch size (and other hyperparameters) and training efficiency.
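The convex quadratic relationship described above suggests a simple procedure for locating the time-minimizing batch size: fit a parabola to total training time as a function of log2(batch size) and take its vertex. The sketch below illustrates this on synthetic data (not the paper's measurements); the synthetic curve is constructed, as an assumption for illustration, so that its minimum falls at batch size 64, matching the reported optimum.

```python
import numpy as np

# Batch sizes comparable to the range studied (8 to 2048).
batch_sizes = np.array([8, 16, 32, 64, 128, 256, 512, 1024, 2048])
log_b = np.log2(batch_sizes)

# Synthetic "total training time" values following t = (log2 B - 6)^2 + 10,
# a convex quadratic in log2(B) minimized at B = 2**6 = 64.
# These numbers are illustrative, not the paper's data.
total_time = (log_b - 6.0) ** 2 + 10.0

# Fit a quadratic t ~ a*x^2 + b*x + c with x = log2(B).
a, b, c = np.polyfit(log_b, total_time, deg=2)

# The vertex -b/(2a) of the fitted parabola gives the
# log2 batch size that minimizes total training time.
x_opt = -b / (2 * a)
optimal_batch = 2 ** round(x_opt)
print(optimal_batch)  # 64
```

In practice the measured times would be noisy, but as long as the convex trend holds, the vertex of the least-squares quadratic fit identifies the batch size minimizing total training time.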
