Abstract

Background modeling techniques for embedded computer vision applications must balance accuracy, speed, and power. Basic background modeling techniques run quickly, but their accuracy is insufficient for computer vision problems involving dynamic backgrounds. In contrast, adaptive background modeling techniques are more robust but run more slowly. Owing to its high inherent fine-grain parallelism, robust adaptive background modeling has been implemented on GPUs with significant performance improvements over CPUs. However, these implementations are infeasible in embedded applications because of the high power ratings of the targeted general-purpose GPU platforms. This paper focuses on exploiting fine-grain data parallelism and optimizing memory access patterns to map a low-cost adaptive background modeling algorithm, multimodal mean (MMM), onto a low-power GPU with a thermal design power (TDP) of only 12 watts. The algorithm achieves accuracy comparable to the Gaussian mixture model (GMM) algorithm at lower computational and memory cost. It achieves a frame rate of 392 fps at full VGA resolution (640x480) on the low-power integrated NVIDIA ION GPU, a 20x speedup of the MMM algorithm over the Intel Atom, an embedded CPU platform of comparable TDP. In addition, the MMM algorithm attains a 5-6x speedup over the GMM implementation on the same ION GPU platform.
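
The paper's CUDA source is not reproduced here, but a minimal sketch can illustrate the kind of fine-grain, one-thread-per-pixel parallelism and coalesced memory layout the abstract refers to. The following is an assumption-laden simplification of a multimodal-mean-style update: the mode count K, the match threshold, the seeding policy, and the omission of MMM's periodic decimation of running sums are all illustrative choices, not the paper's exact method.

// Illustrative CUDA sketch (not the paper's code): one thread per pixel,
// K candidate background modes stored structure-of-arrays so that
// consecutive threads touch consecutive addresses (coalesced access).
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

#define K 4                 // modes per pixel -- assumed, not from the paper
#define MATCH_THRESH 30.0f  // per-channel match threshold -- assumed

__global__ void mmm_update(const uchar3* frame, float3* sums,
                           unsigned* counts, uint8_t* fgMask, int numPixels)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= numPixels) return;

    uchar3 pix = frame[p];
    bool matched = false;

    // Try to fold the sample into an existing mode's running sums.
    for (int k = 0; k < K && !matched; ++k) {
        int idx = k * numPixels + p;      // mode-major (SoA) index
        unsigned n = counts[idx];
        if (n == 0) continue;             // empty mode slot
        float3 s = sums[idx];
        if (fabsf(s.x / n - pix.x) < MATCH_THRESH &&
            fabsf(s.y / n - pix.y) < MATCH_THRESH &&
            fabsf(s.z / n - pix.z) < MATCH_THRESH) {
            sums[idx] = make_float3(s.x + pix.x, s.y + pix.y, s.z + pix.z);
            counts[idx] = n + 1;
            matched = true;
        }
    }

    // No match: seed the first empty slot (eviction/decimation omitted).
    if (!matched) {
        for (int k = 0; k < K; ++k) {
            int idx = k * numPixels + p;
            if (counts[idx] == 0) {
                sums[idx] = make_float3(pix.x, pix.y, pix.z);
                counts[idx] = 1;
                break;
            }
        }
    }

    fgMask[p] = matched ? 0 : 255;  // unmatched pixels flagged as foreground
}

int main()
{
    const int n = 640 * 480;  // one full VGA frame, as in the abstract
    uchar3* dFrame; float3* dSums; unsigned* dCounts; uint8_t* dMask;
    cudaMalloc(&dFrame, n * sizeof(uchar3));
    cudaMalloc(&dSums, K * n * sizeof(float3));
    cudaMalloc(&dCounts, K * n * sizeof(unsigned));
    cudaMalloc(&dMask, n * sizeof(uint8_t));
    cudaMemset(dFrame, 0, n * sizeof(uchar3));       // stand-in for a real frame
    cudaMemset(dCounts, 0, K * n * sizeof(unsigned));
    mmm_update<<<(n + 255) / 256, 256>>>(dFrame, dSums, dCounts, dMask, n);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
    return 0;
}

The mode-major indexing (k * numPixels + p) is one plausible reading of the memory-access-pattern optimization the abstract mentions: threads in a warp then read adjacent elements, so loads coalesce into few memory transactions, which matters especially on a small integrated GPU such as the ION. The actual implementation may differ.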
