Abstract

Designing efficient machine learning algorithms for near-sensor data processing on the edge has been at the forefront of research in recent years. To meet edge processing constraints, massively parallel binary neural networks (BNNs) have been developed. BNNs implemented as purely combinational circuits offer efficient resource utilization and high performance. This paper proposes novel resource-optimized architectures of hard-wired combinational BNNs targeting field-programmable gate array (FPGA) edge deployment. The proposed architectures require fewer adders and fewer overall operations for parallel neuron activation computations. A high-level synthesis toolchain is designed that enables users to produce hardware description language models of optimized combinational BNN circuits directly from application datasets. Standard and optimized combinational architectures are built for different edge processing applications using this toolchain. The proposed optimized architecture achieves a 39.9% improvement in FPGA slice usage, a 28.2% improvement in nets used, and a 51.9% reduction in power consumption, enabling deployment possibilities for state-of-the-art FPGA classification systems.
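To make the adder-count motivation concrete, the following is a minimal illustrative sketch (not the paper's toolchain or architecture) of how a single binary neuron's activation reduces to an XNOR match, a popcount, and a threshold compare; in a combinational circuit, the popcount is realized as a tree of adders, which is why reducing adders directly cuts resource usage. The function name and encoding are hypothetical choices for this example.

```python
def bnn_neuron(inputs, weights, threshold):
    """Binary neuron sketch: inputs/weights in {0, 1} encode {-1, +1}.

    XNOR compares each input bit against its weight bit; the popcount
    of matches (an adder tree in hardware) is compared against a
    threshold to produce the 1-bit activation.
    """
    assert len(inputs) == len(weights)
    # XNOR + popcount: count positions where input and weight agree.
    matches = sum(1 for x, w in zip(inputs, weights) if x == w)
    # Threshold compare produces the binary activation.
    return 1 if matches >= threshold else 0

# Example: a 4-input neuron that fires when at least 3 of 4 bits match.
print(bnn_neuron([1, 0, 1, 1], [1, 0, 0, 1], 3))  # -> 1
```

Because every quantity is a small integer, the whole computation maps to fixed combinational logic with no multipliers, which is the property the proposed architectures exploit.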
