Abstract
Deep neural network (DNN)-based object detection has been investigated and applied to various real-time applications. However, it is difficult to deploy DNNs in embedded systems due to their high computational complexity and deep-layered structure. Although several field-programmable gate array (FPGA) implementations have been presented recently for real-time object detection, they suffer from either low throughput or low detection accuracy. In this article, we propose an efficient computing system for real-time SSDLite object detection on FPGA devices, which includes a novel hardware architecture and system optimization techniques. In the proposed hardware architecture, a neural processing unit (NPU) consisting of heterogeneous units, such as band processing, scaling, accumulating, and data fetching and formatting units, is designed to accelerate the DNNs efficiently. In addition, system optimization techniques are presented to improve the throughput further. A task control unit is employed to balance the workload and increase the utilization of the heterogeneous units in the NPU, and the object detection algorithm is refined accordingly. The proposed architecture is realized on an Intel Arria 10 FPGA and enhances the throughput by up to 13.6× compared to the state-of-the-art FPGA implementation.
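To illustrate the idea of a task control unit balancing workload across heterogeneous units, the sketch below models greedy, load-aware dispatch in Python. All names here (`FunctionalUnit`, `TaskControlUnit`, `dispatch`, the unit names and throughput figures) are hypothetical and not taken from the paper; this is a conceptual illustration of the scheduling principle under stated assumptions, not the authors' hardware implementation.

```python
from dataclasses import dataclass


@dataclass
class FunctionalUnit:
    """A hypothetical heterogeneous unit inside the NPU (e.g., band processing, scaling)."""
    name: str
    ops_per_cycle: float       # assumed throughput of this unit
    busy_cycles: float = 0.0   # accumulated load so far

    def assign(self, workload_ops: float) -> None:
        # Executing a task occupies the unit for workload / throughput cycles.
        self.busy_cycles += workload_ops / self.ops_per_cycle


class TaskControlUnit:
    """Greedy load balancer: send each task to the least-loaded capable unit."""

    def __init__(self, units):
        self.units = units

    def dispatch(self, task_kind: str, workload_ops: float) -> FunctionalUnit:
        # Pick among units whose name matches the task kind, choosing the one
        # with the smallest accumulated load to keep utilization balanced.
        capable = [u for u in self.units if task_kind in u.name]
        target = min(capable, key=lambda u: u.busy_cycles)
        target.assign(workload_ops)
        return target


if __name__ == "__main__":
    npu = TaskControlUnit([
        FunctionalUnit("band_processing_0", ops_per_cycle=256),
        FunctionalUnit("band_processing_1", ops_per_cycle=256),
        FunctionalUnit("scaling_0", ops_per_cycle=64),
        FunctionalUnit("accumulating_0", ops_per_cycle=64),
    ])
    # Dispatch a few SSDLite-style layer tasks and report the resulting load.
    for kind, ops in [("band_processing", 1e6), ("scaling", 2e5),
                      ("band_processing", 8e5), ("accumulating", 1e5)]:
        unit = npu.dispatch(kind, ops)
        print(f"{kind:>16} -> {unit.name} (busy {unit.busy_cycles:.0f} cycles)")
```

In this toy model, the two band-processing tasks land on different units because the controller always picks the least-loaded capable unit; the paper's task control unit pursues the same goal of raising utilization of the heterogeneous units, though by hardware means rather than a software scheduler.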