Abstract

The acceleration of deep neural networks (DNNs) on edge devices is gaining significant importance in various application domains. General-purpose graphics processing units (GPGPUs) are typically used to explore, train, and evaluate DNNs because they offer far greater computational throughput than CPUs. However, this comes at the cost of high power consumption, which prevents efficient deployment of networks on edge devices. In the Internet of Things (IoT) domain, field-programmable gate arrays (FPGAs) are considered a powerful alternative, since their flexible architecture can run DNNs with much less energy. The enormous effort and time required for end-to-end, edge-aware deployment motivated us to develop DeepEdgeSoc, an integrated framework for deep learning (DL) design and acceleration. DeepEdgeSoc is an overarching framework under which DNNs can be built. DeepGUI, a visual drag-and-drop DNN design environment, plays an important role in accelerating the network design phase. In DeepEdgeSoc, networks can be quantized and compressed to suit the underlying edge devices in terms of size and energy. DeepEdgeSoc goes beyond the software level by converting the networks into FPGA implementations that can be directly synthesized and integrated within a System-on-Chip (SoC).
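To illustrate the kind of quantization the abstract refers to, below is a minimal sketch of symmetric 8-bit linear weight quantization. This is a generic, widely used technique shown for illustration only; it is an assumption, not necessarily the exact scheme DeepEdgeSoc applies.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric linear quantization of float32 weights to int8.

    Illustrative only; DeepEdgeSoc's actual quantization scheme may differ.
    """
    scale = np.max(np.abs(weights)) / 127.0   # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Example: a toy weight tensor shrinks 4x in memory (float32 -> int8),
# with reconstruction error bounded by half a quantization step.
w = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = np.max(np.abs(w - w_hat))
```

The 4x size reduction (and the cheaper integer arithmetic it enables) is the kind of saving that makes such networks fit the size and energy budgets of edge FPGAs.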
