Abstract

Sparse convolutional neural networks (CNNs) are promising for reducing both memory usage and computational complexity while preserving high inference accuracy. State-of-the-art sparse CNN accelerators deliver high throughput by skipping zero weights and/or activations. To operate only on nonzero operands, sparse accelerators typically search for pairs of nonzero weights and activations to feed their multiply-accumulate (MAC) units. However, this search operation severely limits the scale of the processing-element (PE) array because of its enormous demands on internal interconnection and memory bandwidth. In this article, we first present a design principle that eliminates the search process from sparse CNN acceleration: the indexes of the statically compressed weights address the dynamic activations directly, so no search is needed before each MAC operation. We then develop two search-free inference accelerators, called Swan and Swan-flexible, for sparse CNN acceleration. Swan supports search-free sparse convolution to save interconnection and memory bandwidth. Swan-flexible not only retains the search-free capability but also comprises a configurable architecture for optimal throughput. We formulate a mathematical optimization problem that combines this configurable characterization with the compressive dataflow to optimize overall throughput.
Evaluations based on a place-and-route process show that the proposed designs, in a compact footprint of 4096 PEs, achieve 1.5–2.7× higher speedup and 6.0–13.6× better energy efficiency than representative accelerator baselines with the same PE array scale.
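The search-free principle can be illustrated with a minimal sketch. The code below is a hypothetical software analogue, not the paper's hardware design: it contrasts the conventional search (intersecting two sparse index lists) with the search-free scheme, where the static compressed-weight indexes gather directly from a dense activation buffer. All function and variable names are illustrative assumptions.

```python
def search_based_mac(w_idx, w_val, a_idx, a_val):
    """Conventional scheme: search for matching nonzero weight/activation
    pairs by intersecting two sparse index lists (the costly step the
    search-free design avoids)."""
    acts = dict(zip(a_idx, a_val))  # stand-in for the hardware matching logic
    return sum(v * acts[i] for i, v in zip(w_idx, w_val) if i in acts)

def search_free_mac(w_idx, w_val, activations):
    """Search-free scheme: the indexes of the statically compressed weights
    address the dynamic activation buffer directly, so every nonzero weight
    finds its operand without any pair search."""
    return sum(v * activations[i] for i, v in zip(w_idx, w_val))

# Static compressed weights: (offset, value) pairs for nonzero entries only.
weights_idx = [1, 4, 6]
weights_val = [2.0, -1.0, 3.0]

# Dynamic activations, held dense so weight indexes can address them directly.
activations = [0.0, 5.0, 0.0, 0.0, 2.0, 0.0, 1.0, 0.0]

# 2.0*5.0 + (-1.0)*2.0 + 3.0*1.0 = 11.0
result = search_free_mac(weights_idx, weights_val, activations)
```

Both routines compute the same dot product; the difference is that the search-free version replaces per-pair matching with a direct gather, which is what removes the interconnection and bandwidth pressure described above.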
