Abstract

Convolutional neural networks are widely used in reinforcement learning. Capsule networks are gaining popularity over traditional convolutional neural networks in many classification tasks. A capsule is a multidimensional activity vector of neurons that represents the features of a specific type of entity, such as an object or a part of an object. In this paper, we explore the capability of a capsule network for deep reinforcement learning-based applications. Our proposed capsule network architecture, with the same number of parameters as a convolutional neural network used for reinforcement learning, requires on average nine times fewer network update iterations than the convolutional neural network. We also propose a hardware accelerator for deep Q-learning that uses the capsule network as the deep Q-network in place of a convolutional neural network. We have implemented the capsule network-based deep Q-learning architecture for inference on a Xilinx Kintex UltraScale field-programmable gate array and tested the network on PyGame-based environments. Our hardware implementation achieves an overall speedup of 77.45x over the software implementation of the capsule network for deep reinforcement learning on an Intel Xeon E5-1607 CPU (4 cores, 3.1 GHz) and 10.86x over the implementation on an Nvidia GeForce GTX 1080 GPU.
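To make the idea of using a capsule network as the deep Q-network concrete, the sketch below shows one way such a model can be wired up in PyTorch: a convolutional layer feeds primary capsules, dynamic routing-by-agreement produces output capsules, and a linear head maps the flattened capsule outputs to one Q-value per action. This is a minimal, illustrative sketch; the layer sizes, capsule dimensions, number of routing iterations, and the 84x84 stacked-frame input are our own assumptions, not the exact architecture reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    """Capsule non-linearity: keeps the vector's direction, maps its length into (0, 1)."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)


class CapsuleQNetwork(nn.Module):
    """Illustrative capsule-based Q-network for 84x84 stacked-frame observations."""

    def __init__(self, in_channels=4, num_actions=3,
                 prim_types=8, prim_dim=8, out_caps=16, out_dim=16, routing_iters=3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 64, kernel_size=9, stride=4)            # 84 -> 19
        self.primary = nn.Conv2d(64, prim_types * prim_dim, kernel_size=9, stride=2)  # 19 -> 6
        num_primary = prim_types * 6 * 6
        self.prim_dim, self.out_caps, self.iters = prim_dim, out_caps, routing_iters
        # Transformation matrices W_ij mapping each primary capsule i to each output capsule j.
        self.W = nn.Parameter(0.01 * torch.randn(1, num_primary, out_caps, out_dim, prim_dim))
        # Linear head: flattened output capsules -> one Q-value per action.
        self.head = nn.Linear(out_caps * out_dim, num_actions)

    def forward(self, x):
        b = x.size(0)
        u = self.primary(F.relu(self.conv(x)))                 # (B, types*dim, 6, 6)
        u = u.permute(0, 2, 3, 1).reshape(b, -1, self.prim_dim)
        u = squash(u)                                          # (B, num_primary, prim_dim)
        # Prediction vectors u_hat[j|i] = W_ij u_i
        u_hat = (self.W @ u[:, :, None, :, None]).squeeze(-1)  # (B, num_primary, out_caps, out_dim)
        # Dynamic routing-by-agreement
        logits = torch.zeros(b, u_hat.size(1), self.out_caps, 1, device=u.device)
        for _ in range(self.iters):
            c = F.softmax(logits, dim=2)                       # coupling coefficients
            v = squash((c * u_hat).sum(dim=1))                 # (B, out_caps, out_dim)
            logits = logits + (u_hat * v[:, None]).sum(-1, keepdim=True)
        return self.head(v.flatten(1))                         # (B, num_actions)


# Example usage with illustrative shapes: a batch of 2 observations,
# each a stack of 4 grayscale 84x84 frames, for an environment with 3 actions.
net = CapsuleQNetwork(in_channels=4, num_actions=3)
q_values = net(torch.zeros(2, 4, 84, 84))                      # -> shape (2, 3)
```

In a deep Q-learning loop, this module would simply replace the convolutional Q-network: the same temporal-difference target and replay-buffer training apply, with the capsule routing acting as the feature-aggregation stage.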
