Abstract
Embedded Field-Programmable Gate Arrays (FPGAs) provide an efficient and flexible hardware platform for deploying highly optimised Deep Neural Network (DNN) accelerators. However, the limited area of embedded FPGAs restricts the complexity of the DNN accelerators that can be deployed on them. Commonly, an accelerator’s complexity is reduced to fit a smaller FPGA, often at the cost of significant redesign overhead. In this paper we present an alternative, which we call Temporal Accelerators. The main idea is to split an accelerator into smaller components, which the FPGA then executes sequentially; to do so, the FPGA is reconfigured multiple times during the execution of the accelerator. In this way, we increase the available area of the FPGA ‘over time’. We show that modern FPGAs can reconfigure efficiently enough to achieve equally fast and energy-efficient accelerators while using more cost-efficient FPGAs. We develop and evaluate a Temporal Accelerator implementing a 1D Convolutional Neural Network for detecting anomalies in ECG heart data. Our accelerator is deployed on a Xilinx Spartan-7 XC7S15 and compared to a conventional implementation on the larger Xilinx Spartan-7 XC7S25. Our solution requires 9.06% less time to execute and uses 12.81% less energy while using an FPGA that is 35% cheaper.