Abstract

In image recognition, techniques using convolutional neural networks (CNNs) have been studied extensively and are widely used in applications such as handwritten character recognition, face recognition, scene classification, and object recognition. A CNN has an enormous number of internal parameters and a high computational complexity, and it is therefore often implemented on high-performance GPUs. Embedded systems, however, require real-time image recognition with low power consumption. For such systems, the binarized CNN has been proposed; it achieves an efficient implementation by restricting the internal parameters of the CNN to the values -1 and +1 and by using low-bit-precision operations and memory. In this paper, we extend this approach to a ternary-weight, binary-input CNN to further increase performance on a low-performance embedded processor. In the ternarized CNN, the internal weights can take the values -1, +1, and 0, where a zero weight is realized by skipping the computation. Since the number of possible states of the ternarized CNN is larger than that of the binarized CNN, higher recognition accuracy can be obtained. Furthermore, we study an optimal training algorithm for the ternarized CNN and report results from computer experiments. Compared with the binarized CNN on an ARM processor, the ternary-weight CNN was 8.13 times faster than the binary-weight one.
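
To make the "skip computation" idea concrete, the following is a minimal illustrative sketch (not taken from the paper) of a ternary-weight dot product over binarized inputs: weights in {-1, 0, +1}, inputs binarized to {-1, +1}, with zero-weight terms skipped entirely so they cost neither a multiply nor an add. The function and variable names are hypothetical.

```c
/* Illustrative sketch (assumption, not the paper's code):
 * ternary-weight dot product over binarized inputs.
 * Weights take values -1, 0, +1; inputs are binarized to -1 or +1.
 * A zero weight is "skipped", which is the skip computation the
 * abstract refers to. */
#include <stdio.h>

int ternary_dot(const signed char *w, const signed char *x, int n)
{
    int acc = 0;
    for (int i = 0; i < n; i++) {
        if (w[i] == 0)
            continue;                       /* zero weight: skip entirely */
        acc += (w[i] == 1) ? x[i] : -x[i];  /* +1 or -1 weight: add or subtract */
    }
    return acc;
}

int main(void)
{
    /* hypothetical example values */
    const signed char w[] = { 1, 0, -1, 0, 1 };   /* ternary weights  */
    const signed char x[] = { 1, -1, 1, 1, -1 };  /* binarized inputs */
    printf("dot = %d\n", ternary_dot(w, x, 5));   /* prints dot = -1  */
    return 0;
}
```

Because only the nonzero weights contribute work, a sparser ternary kernel does proportionally less computation than a fully binary one, which is consistent with the speedup reported on the ARM processor.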
