Abstract

Always-on, self-powered intelligent visual perception systems have attracted considerable attention and are being deployed widely. However, capturing data and analyzing it on a backend/cloud processor is energy-intensive and incurs long latency, creating a memory bottleneck and slowing feature extraction at the edge. This paper presents AppCiP, a sensing-and-computing integration architecture that efficiently enables Artificial Intelligence (AI) on resource-limited sensing devices. AppCiP provides several unique capabilities, including instant and reconfigurable RGB-to-grayscale conversion, highly parallel analog convolution-in-pixel, and support for low-precision quinary-weight neural networks. These features significantly mitigate the overhead of analog-to-digital converters and analog buffers, leading to a considerable reduction in power consumption and area. Our circuit-to-application co-simulation results demonstrate that AppCiP achieves ~3 orders of magnitude higher power efficiency than the fastest existing designs across different CNN workloads, reaching a frame rate of 3000 frames per second and an efficiency of ~4.12 TOp/s/W. The accuracy of the AppCiP architecture is evaluated on several datasets, including SVHN, Pest, CIFAR-10, MHIST, and CBL Face Detection, and compared with state-of-the-art designs. AppCiP achieves the best accuracy among processing-in/near-pixel architectures, with less than 1% degradation on average relative to the floating-point baseline.
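
The sketch below illustrates, at a purely algorithmic level, two of the operations the abstract mentions: RGB-to-grayscale conversion and quinary weight quantization. It is a minimal reference in Python/NumPy, not the paper's analog in-pixel implementation; the luminance coefficients (ITU-R BT.601) and the five-level set {-2, -1, 0, 1, 2} with a per-tensor scale are illustrative assumptions, since the abstract does not specify the exact coefficients or quantization levels used by AppCiP.

```python
import numpy as np

def rgb_to_gray(rgb, coeffs=(0.299, 0.587, 0.114)):
    """Weighted-sum grayscale conversion of an HxWx3 image.

    The ITU-R BT.601 coefficients are an assumption; AppCiP's
    reconfigurable analog conversion may use different weights.
    """
    return rgb @ np.asarray(coeffs)

def quantize_quinary(weights):
    """Map floating-point weights to five levels {-2, -1, 0, 1, 2} * scale.

    The level set and per-tensor scaling rule are illustrative; the
    abstract only states that AppCiP realizes quinary-weight networks.
    """
    m = float(np.max(np.abs(weights)))
    scale = m / 2.0 if m > 0 else 1.0
    levels = np.clip(np.round(weights / scale), -2, 2)  # nearest of 5 levels
    return levels * scale                               # dequantized weights

# Example: grayscale a random image and quantize a 3x3 convolution kernel.
img = np.random.rand(4, 4, 3)
gray = rgb_to_gray(img)
kernel = np.array([[0.31, -0.02, 0.77],
                   [-0.55, 0.12, -0.90],
                   [0.05, 0.48, -0.21]])
print(gray.shape)              # (4, 4)
print(quantize_quinary(kernel))
```

In a processing-in-pixel design, the multiply-accumulate work implied by these functions would be carried out in the analog domain inside the pixel array, which is what allows the reported reduction in ADC and buffer overhead.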
