Abstract

Artificial Neural Networks (ANNs) have become an established approach for a wide range of challenges. Meanwhile, the advancement of chip manufacturing processes is approaching saturation, which calls for new computing solutions. This work presents a novel approach to the development of an FPGA-based accelerator for fully connected feed-forward neural networks (FFNNs). A specialized tool was developed to facilitate different implementations: it splits the FFNN into elementary layers, allocates computational resources, and generates a high-level C++ description for high-level synthesis (HLS) tools. Various topologies are implemented and benchmarked, and a comparison with related work is provided. The proposed methodology is applied to the implementation of a high-throughput virtual sensor.
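
The generated code itself is not shown in this excerpt; as a rough illustration, the following is a minimal HLS-style C++ sketch of one elementary fully connected layer with ReLU activation. The function name, layer sizes, data type, and pragmas are assumptions for illustration only and do not reproduce the tool's actual output.

```cpp
// Minimal sketch of one "elementary layer": a fully connected layer with
// ReLU activation in HLS-style C++. All names, sizes and pragmas are
// illustrative assumptions, not the tool's actual output.
constexpr int N_IN  = 64;   // assumed layer input width
constexpr int N_OUT = 32;   // assumed layer output width

void dense_relu_layer(const float in[N_IN], float out[N_OUT],
                      const float weights[N_OUT][N_IN],
                      const float bias[N_OUT]) {
#pragma HLS ARRAY_PARTITION variable=weights dim=2 complete
#pragma HLS ARRAY_PARTITION variable=in complete
    for (int o = 0; o < N_OUT; ++o) {
#pragma HLS PIPELINE II=1
        float acc = bias[o];
        for (int i = 0; i < N_IN; ++i) {
#pragma HLS UNROLL
            acc += weights[o][i] * in[i];   // multiply-accumulate
        }
        out[o] = (acc > 0.0f) ? acc : 0.0f; // ReLU activation
    }
}
```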

Highlights

  • Since the ImageNet image classification competition was decisively won by Krizhevsky, Sutskever, and Hinton with their deep-learning-based solution in 2012 [1], it became evident that Deep Learning (DL) algorithms bear potential for a variety of applications

  • This model steers the generation of a high-level C++ description of the topology, suitable for High-Level Synthesis (HLS) tools and IP core generation (a sketch of such generated code follows this list)

  • The developed workflow is designed for optimal throughput and can be used as part of a larger Artificial Neural Network (ANN) implementation workflow, e.g., implementing the classifier part of Convolutional Neural Networks (CNNs)
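
As referenced above, a rough sketch of how a generated top-level C++ description might chain the elementary layers is given below. All identifiers, layer sizes, and the dataflow pragma are assumptions for illustration, not the tool's actual output.

```cpp
// Hypothetical top-level inference function chaining elementary layers.
// All identifiers, layer sizes and pragmas are illustrative assumptions.
constexpr int L0_IN = 16, L0_OUT = 32, L1_OUT = 8;

// Per-layer kernels would be emitted separately by the generator
// (compare the dense-layer sketch above); only declarations shown here.
void dense_relu_layer_0(const float in[L0_IN],  float out[L0_OUT]);
void dense_relu_layer_1(const float in[L0_OUT], float out[L1_OUT]);

void ffnn_inference(const float input[L0_IN], float output[L1_OUT]) {
#pragma HLS DATAFLOW
    static float act0[L0_OUT];            // intermediate activations
    dense_relu_layer_0(input, act0);      // elementary layer 1
    dense_relu_layer_1(act0, output);     // elementary layer 2
}
```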

Introduction

Since the ImageNet image classification competition was decisively won by Krizhevsky, Sutskever, and Hinton with their deep-learning-based solution in 2012 [1], it became evident that Deep Learning (DL) algorithms bear potential for a variety of applications. Considerable effort has been devoted to improving computational efficiency by developing new Artificial Neural Network (ANN) architectures [3,4] and optimizing implementations for specific use cases [5,6,7]. Deep Neural Networks (DNNs) derive their power from massively parallel distributed structures and from their ability to learn and generalize [8]. These two information-processing capabilities give them the potential to solve complex problems.

Related Work
Approximation Method
Background
FPGA and Circuit Design
Feed-Forward Neural Networks
Consideration for the Design
The Proposed Approach
Comparison with Other Approaches
Virtual Sensor Use-Case
Conclusions
