Abstract

In intelligent IoT edge devices, power consumption is rising with the deployment of high-precision algorithms, which greatly limits device operating time. The power of A/D conversion and data transmission has become the bottleneck of traditional vision systems. In this paper, a new sensing-with-computing (Senputing) architecture is proposed to relieve this power bottleneck by combining imaging with the computation of the BNN 1st-layer feature map. The Senputing architecture has two working modes, a Normal-Sensor mode and a Direct-Photocurrent-Computation mode, with different resolutions (128×128 and 32×32). An ultra-low-power CMOS image sensor (CIS) chip with the Senputing architecture is presented to verify its feasibility. The CIS chip is simulated in a 180 nm CMOS technology; the power of the feature-map computation is 5.9 μW at a frame rate of 208 fps. The computation efficiency reaches 8.23 TOPS/W, which is 10.1× higher than previous works.
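
The following is a minimal functional sketch, in Python, of the computation the abstract describes: a BNN 1st-layer feature map obtained by summing pixel photocurrents under ±1 weights and binarizing the result. The kernel size, stride, single filter, and the assumption that the 32×32 map is produced from non-overlapping windows of the 128×128 array are illustrative assumptions, not details taken from the paper; the chip performs this summation in the analog domain rather than digitally.

```python
import numpy as np

# Assumed parameters (illustrative only, not from the paper):
KERNEL = 4   # assumed receptive-field size of the BNN 1st layer
STRIDE = 4   # assumed non-overlapping windows: 128 / 4 = 32 -> 32x32 map

def bnn_first_layer(pixels: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Functional model of the 1st-layer BNN feature map: each output is the
    sign of a weighted sum of photocurrents, with weights constrained to +1/-1."""
    h, w = pixels.shape
    out = np.empty((h // STRIDE, w // STRIDE), dtype=np.int8)
    for i in range(0, h - KERNEL + 1, STRIDE):
        for j in range(0, w - KERNEL + 1, STRIDE):
            window = pixels[i:i + KERNEL, j:j + KERNEL]
            # Weighted photocurrent summation followed by 1-bit quantization,
            # standing in for the comparator that binarizes the analog sum.
            out[i // STRIDE, j // STRIDE] = 1 if np.sum(window * weights) >= 0 else -1
    return out

# Example: a 128x128 "photocurrent" frame reduced to a 32x32 binary feature map.
rng = np.random.default_rng(0)
frame = rng.random((128, 128))                                    # normalized photocurrents
binary_weights = rng.choice([-1, 1], size=(KERNEL, KERNEL)).astype(np.int8)
feature_map = bnn_first_layer(frame, binary_weights)
print(feature_map.shape)  # (32, 32)
```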
