Abstract
In recent years, deep neural networks (DNNs) have shown outstanding performance in various tasks. Training DNNs on resource-constrained edge platforms is required for online learning and for privacy reasons. However, the training process of DNNs demands enormous computation and memory resources. Therefore, hardware implementations of the training process with low-precision integer arithmetic are attracting extensive attention, considering their advantages in computation, storage, and energy consumption. In this paper, we propose an FPGA-based reconfigurable accelerator for DNN training with full 8-bit integer arithmetic. First, a reconfigurable processing element in a unified architecture is designed, which is flexible enough to support the various computation patterns that arise during training. Second, a two-stage scaling and rounding scheme is introduced to scale intermediate results down to low-bit data for minimal memory usage, while preserving data accuracy as much as possible. Finally, an optimized architecture is developed to compute the widely used softmax classification function and the cross-entropy loss on-chip, initiating the backward propagation. Experimental results show that our design achieves a performance of 771 GOPS and an energy efficiency of 47.38 GOPS/W. The comparison results demonstrate that our work significantly outperforms prior works.
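For illustration, the following Python sketch outlines the general idea behind two of the abstract's components: requantizing int32 accumulator results from int8 multiply-accumulates back to 8-bit values in two stages (a coarse shift followed by a fine fixed-point multiply with rounding), and evaluating the softmax cross-entropy loss and its gradient to start backward propagation. The function names, the scale and shift parameters, and the ordering of the two stages are assumptions made here for clarity; the abstract does not specify the paper's exact scaling and rounding scheme, so this is a conceptual sketch rather than the authors' design.

```python
import numpy as np

def requantize_two_stage(acc_int32, scale, shift):
    """Hypothetical two-stage requantization of int32 accumulators to int8:
    a coarse power-of-two right shift with round-to-nearest, then a fine
    fixed-point multiply and clamp. Illustration only; the paper's actual
    scheme may differ."""
    # Stage 1: coarse range reduction by an arithmetic right shift,
    # adding half of the discarded weight to round to nearest.
    half = 1 << (shift - 1) if shift > 0 else 0
    coarse = (acc_int32 + half) >> shift
    # Stage 2: fine scaling by a fixed-point factor, then clamp to int8.
    fine = np.rint(coarse * scale)
    return np.clip(fine, -128, 127).astype(np.int8)

def softmax_cross_entropy(logits, label):
    """Floating-point reference for the softmax + cross-entropy loss that
    the accelerator evaluates on-chip to initiate backward propagation."""
    z = logits - np.max(logits)               # subtract max for stability
    probs = np.exp(z) / np.sum(np.exp(z))     # softmax probabilities
    loss = -np.log(probs[label])              # cross-entropy of the true class
    grad = probs.copy()
    grad[label] -= 1.0                        # gradient w.r.t. the logits
    return loss, grad

# Example: int8 activations and weights accumulate into int32, and the
# result is requantized back to int8 before being written to memory.
rng = np.random.default_rng(0)
a = rng.integers(-128, 128, size=(64,), dtype=np.int8)
w = rng.integers(-128, 128, size=(64,), dtype=np.int8)
acc = np.dot(a.astype(np.int32), w.astype(np.int32))
q = requantize_two_stage(np.array([acc], dtype=np.int32), scale=0.75, shift=8)
loss, grad = softmax_cross_entropy(rng.standard_normal(10), label=3)
```

In hardware, the coarse shift is cheap to implement and handles most of the dynamic-range reduction, while the fine multiplier recovers precision that a shift alone would lose; this split is a common motivation for two-stage requantization in integer training and inference pipelines.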