Abstract

Embedded systems are increasingly used as platforms for smart surveillance. Background subtraction using a Gaussian Mixture Model (GMM) is a common preprocessing step in video analysis. This work discusses the implementation of an embedded vision system on a system-on-a-chip (SoC) device that integrates a processor with FPGA (Field Programmable Gate Array) fabric. Conventional Register Transfer Level (RTL) design, typically used for FPGA programming, is slow, and implementing floating-point arithmetic at that level is complex. High-Level Synthesis (HLS) tools, by contrast, allow algorithms to be described in high-level programming languages. Three background subtraction algorithms with floating-point arithmetic were developed using a hardware-software co-design methodology. The paper presents the details of the implementation on a ZedBoard Zynq Evaluation and Development Kit, considering requirements such as hardware resource usage and power consumption. Performance comparisons among PC, ARM, FPGA, and SoC-FPGA implementations are also presented. The results show that the frame rates required for real-time video processing were achieved.
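The abstract does not include code, but the per-pixel GMM update at the heart of background subtraction can be sketched in C++. This is a simplified, illustrative version in the spirit of Stauffer-Grimson mixture models, not the paper's implementation; the parameter values (`K`, `alpha`, the initial variance, the 2.5-sigma match test, and the 0.5 weight threshold) are assumptions chosen for the example.

```cpp
#include <array>

// One Gaussian component of a per-pixel mixture.
// Initial variance and zero weight are illustrative defaults.
struct Gaussian {
    float mean = 0.0f;
    float var = 900.0f;
    float weight = 0.0f;
};

constexpr int K = 3;           // Gaussians per pixel (assumed)
constexpr float alpha = 0.05f; // learning rate (assumed)

// Update the mixture for one pixel sample; return true if the
// pixel is classified as foreground.
bool processPixel(std::array<Gaussian, K>& mix, float value) {
    // 1. Find the first component matching within 2.5 sigma.
    int matched = -1;
    for (int k = 0; k < K; ++k) {
        float d = value - mix[k].mean;
        if (d * d < 6.25f * mix[k].var) { // 2.5^2 = 6.25
            matched = k;
            break;
        }
    }
    if (matched >= 0) {
        // 2. Update the matched component's mean and variance,
        //    and all component weights (simplified: rho = alpha).
        Gaussian& g = mix[matched];
        float d = value - g.mean;
        g.mean += alpha * d;
        g.var += alpha * (d * d - g.var);
        for (int k = 0; k < K; ++k)
            mix[k].weight +=
                alpha * ((k == matched ? 1.0f : 0.0f) - mix[k].weight);
        // 3. A well-supported component is background
        //    (simplified single-component decision).
        return g.weight < 0.5f;
    }
    // 4. No match: replace the least-weighted component with a
    //    new Gaussian centered on this sample; flag as foreground.
    int worst = 0;
    for (int k = 1; k < K; ++k)
        if (mix[k].weight < mix[worst].weight) worst = k;
    mix[worst] = {value, 900.0f, alpha};
    return true;
}
```

In a hardware-software co-design such as the one described, a loop like this over every pixel is the part typically moved into the FPGA fabric via HLS, since each pixel's mixture updates independently and the floating-point arithmetic pipelines well.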
