Abstract

The next significant step in the evolution and proliferation of artificial intelligence technology will be the integration of neural network (NN) models within embedded and mobile systems. This calls for the design of compact, energy-efficient NN models in silicon. In this article, we present a scalable application-specific integrated circuit (ASIC) design of an energy-efficient Long Short-Term Memory (LSTM) accelerator, named ELSA, which is suitable for energy-constrained devices. It includes several architectural innovations to achieve small area and high energy efficiency. To reduce the area and power consumption of the overall design, the compute-intensive units of ELSA employ approximate multiplications while still achieving high performance and accuracy. Performance is further improved through efficient synchronization of the elastic pipeline stages to maximize utilization. The article also includes a performance model of ELSA, expressed as a function of the number of hidden nodes and timesteps, permitting its use for the evaluation of any LSTM application. ELSA was implemented at the register-transfer level (RTL) and was synthesized and placed and routed in 65-nm technology. Its functionality is demonstrated for language modeling, a common application of LSTMs. ELSA is compared against a baseline implementation of an LSTM accelerator with standard functional units and without any of ELSA's architectural innovations. The article demonstrates that ELSA achieves significant improvements in power, area, and energy efficiency over the baseline design and several ASIC implementations reported in the literature, making it suitable for embedded systems and real-time applications.
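To make the performance-model claim concrete, the sketch below gives the conventional first-order operation count for an LSTM layer as a function of hidden nodes and timesteps, which is the kind of estimate such a model builds on. The abstract does not state ELSA's closed-form model, so this is only an illustrative approximation; the function names and the macs_per_cycle parameter are hypothetical, not taken from the paper.

    # First-order LSTM workload model (illustrative sketch, not ELSA's
    # published model). Per timestep, each of the four gates performs a
    # matrix-vector product against the input (hidden * inputs MACs) and
    # against the previous hidden state (hidden * hidden MACs); the
    # elementwise gate products add roughly 3 * hidden multiplications.
    def lstm_macs(hidden: int, inputs: int, timesteps: int) -> int:
        gate_macs = 4 * hidden * (inputs + hidden)  # W_x @ x_t and W_h @ h_{t-1}
        elementwise = 3 * hidden                    # f*c, i*g, o*tanh(c)
        return timesteps * (gate_macs + elementwise)

    # Compute-bound cycle estimate under an assumed number of parallel
    # MAC units ("macs_per_cycle" is a hypothetical parameter).
    def lstm_cycles(hidden: int, inputs: int, timesteps: int,
                    macs_per_cycle: int) -> float:
        return lstm_macs(hidden, inputs, timesteps) / macs_per_cycle

For example, a 256-node layer with 128 inputs requires about 4 x 256 x (128 + 256) = 393K MACs per timestep, or roughly 7.9M MACs over 20 timesteps, which is why approximate multipliers in the gate datapath can dominate the area and energy savings.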
