Abstract

Deep Belief Networks (DBNs) are state-of-the-art machine learning models and among the most important unsupervised learning algorithms. Training DBNs is computationally intensive, which naturally motivates investigating FPGA acceleration. Fixed-point arithmetic can strongly influence both the execution time and the prediction accuracy of a DBN. Previous studies have focused only on customized DBN accelerators with a fixed data-width. Our experiments demonstrate that supporting various data-widths across different DBN configurations and application environments is necessary for achieving acceptable performance. We therefore conclude that a DBN accelerator should support multiple data-widths rather than the single fixed width used in previous work. The processing performance of FPGA-based DBN accelerators is almost always constrained not by the capacity of the processing units, but by the capacity and speed of their on-chip RAM. We propose an efficient memory controller for DBN accelerators which shows that supporting various data-widths is not as difficult as it may sound: the hardware cost is small and the critical path is unaffected. We have also designed a tool that helps users flexibly reconfigure the memory controller for arbitrary data-widths.
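The abstract's central trade-off is that the best fixed-point data-width depends on the DBN configuration. As a minimal illustrative sketch (not from the paper; the quantization function, Q-format, and bit-width choices below are assumptions), the following Python snippet shows how quantizing the same weights at different fractional widths changes the representation error, which is the trade-off a variable-width accelerator is meant to exploit:

```python
import numpy as np

def quantize_fixed_point(x, int_bits, frac_bits):
    """Quantize a float array to a signed fixed-point format with the
    given integer/fractional bit split (hypothetical helper; the
    paper's exact number format is not specified in the abstract)."""
    scale = 2.0 ** frac_bits
    total_bits = int_bits + frac_bits
    lo = -(2 ** (total_bits - 1)) / scale      # most negative representable value
    hi = (2 ** (total_bits - 1) - 1) / scale   # most positive representable value
    return np.clip(np.round(x * scale) / scale, lo, hi)

# Compare quantization error for several candidate data-widths: a
# wider fractional field lowers the error but raises on-chip RAM cost.
rng = np.random.default_rng(0)
weights = rng.standard_normal(1000) * 0.5
for frac_bits in (4, 8, 12):
    q = quantize_fixed_point(weights, int_bits=4, frac_bits=frac_bits)
    err = np.max(np.abs(q - weights))
    print(f"Q4.{frac_bits}: max abs error = {err:.6f}")
```

Since on-chip RAM, not compute, is the bottleneck the abstract identifies, a width chosen per configuration rather than fixed at design time lets the accelerator spend exactly as many bits as the accuracy target requires.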
