Abstract

This paper proposes a hardware-oriented dropout algorithm for efficient field-programmable gate array (FPGA) implementation. Dropout is a regularization technique commonly used in neural networks such as multilayer perceptrons (MLPs) and convolutional neural networks (CNNs). To generate a dropout mask that randomly drops neurons during the training phase, random number generators (RNGs) are usually used in software implementations. However, RNGs consume considerable FPGA resources in hardware implementations. The proposed method minimizes the resources required for an FPGA implementation of dropout by applying a simple rotation operation to a predefined dropout mask. We apply the proposed method to MLPs and CNNs and evaluate them on MNIST and CIFAR-10 classification. In addition, we employ the proposed method in GoogLeNet training on our own dataset to develop a vision system for home service robots. The experimental results demonstrate that the proposed method achieves the same regularization effect as the ordinary dropout algorithm. Logic synthesis results show that the proposed method significantly reduces the consumption of FPGA resources in comparison to ordinary RNG-based approaches.
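The core idea, generating each iteration's dropout mask by rotating a single predefined mask rather than drawing fresh random bits, can be illustrated with a short sketch. The snippet below is a minimal software model of that idea, not the authors' hardware design: the function name `rotated_dropout_mask`, the base mask pattern, and the per-iteration rotation step are all illustrative assumptions. In hardware, the rotation would correspond to a simple barrel shift of a stored bit vector, which is why it avoids the cost of an RNG.

```python
import numpy as np

def rotated_dropout_mask(base_mask, step):
    """Sketch of rotation-based mask generation (hypothetical interface).

    base_mask : 1-D binary array whose fraction of zeros matches the
                desired dropout rate (0 = dropped neuron, 1 = kept).
    step      : rotation offset for the current training iteration.

    A circular rotation of a fixed mask replaces per-iteration random
    number generation; in an FPGA this is just a shift of a register.
    """
    return np.roll(base_mask, step)

# Illustrative usage: 8 neurons, 50% dropout rate.
base = np.array([1, 0, 1, 0, 1, 1, 0, 0], dtype=np.uint8)
for iteration in range(3):
    mask = rotated_dropout_mask(base, iteration)
    print(iteration, mask)
```

Under this sketch, each neuron is still dropped roughly at the target rate over many iterations, since the zeros of the base mask sweep across all positions as the rotation offset advances.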
