Abstract

This paper presents the Eidetic architecture, an SRAM-based ASIC neural network accelerator that eliminates the need to continuously load weights from off-chip memory while also minimizing off-chip traffic for intermediate results. Using in-situ arithmetic in the SRAM arrays, the architecture supports a variety of precision types, enabling efficient inference. We also present different data mapping policies for matrix-vector based networks (RNNs and MLPs) on the Eidetic architecture and describe the tradeoffs involved. With this architecture, multiple layers of a network can be mapped concurrently, storing both the layer weights and intermediate results on-chip and removing the energy and latency penalties of off-chip memory accesses. We evaluate Eidetic on the encoder of Google's Neural Machine Translation System (GNMT) and demonstrate a 17.20× increase in throughput and a 7.77× reduction in average latency over a single TPUv2 chip.
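
To make the in-situ arithmetic idea concrete, the following is a minimal, hypothetical Python sketch (not the paper's implementation) of bit-serial in-SRAM multiply-accumulate: weights are stored as bit planes across SRAM rows, and a matrix-vector product is formed by ANDing one weight bit plane with one input bit per cycle, popcounting along the bitlines, and shift-accumulating. The function name, bit widths, and data layout are illustrative assumptions only.

```python
# Hypothetical sketch of bit-serial in-situ SRAM arithmetic; the real
# Eidetic datapath, timing, and precision support are described in the paper.
import numpy as np

def in_situ_dot(weights, x, w_bits=8, x_bits=8):
    """Compute weights.T @ x bit-serially, as an in-SRAM array might.

    weights: (rows, cols) signed ints, each column holding one neuron's weights.
    x:       (rows,) signed int input vector.
    Assumes two's-complement operands representable in w_bits / x_bits.
    """
    rows, cols = weights.shape
    # Each weight bit plane occupies one group of SRAM rows (0/1 values).
    w_planes = [((weights >> b) & 1) for b in range(w_bits)]
    acc = np.zeros(cols, dtype=np.int64)
    for xb in range(x_bits):
        x_bit = (x >> xb) & 1                       # stream one input bit
        for wb in range(w_bits):
            # Bitline AND + popcount: one "cycle" of in-situ compute.
            partial = (w_planes[wb] * x_bit[:, None]).sum(axis=0)
            # Two's-complement sign handling: MSB planes carry negative weight.
            sign_w = -1 if wb == w_bits - 1 else 1
            sign_x = -1 if xb == x_bits - 1 else 1
            acc += sign_w * sign_x * (partial.astype(np.int64) << (wb + xb))
    return acc

# Usage: result matches a conventional integer matrix-vector product.
rng = np.random.default_rng(0)
W = rng.integers(-128, 128, size=(64, 16))
v = rng.integers(-128, 128, size=64)
assert np.array_equal(in_situ_dot(W, v), W.T @ v)
```

Because the weight bit planes never leave the array, varying w_bits and x_bits in a scheme like this is one plausible way an accelerator could trade precision for latency without changing the stored data layout.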

