Abstract

The Gaussian mixture model (GMM) for background subtraction (BGS) is widely used for detecting and tracking objects in video sequences. Although the GMM can provide good results, its low processing speed has become a bottleneck for real-time applications. We propose a novel method to accelerate the GMM algorithm on the graphics processing unit (GPU). Because the GPU excels at massively parallel operations, the novelty lies in adopting optimization strategies that fully exploit the GPU's resources. The parallel design consists of three levels. Building on the first-level implementation, we apply techniques such as memory-access coalescing and memory-address saving in the second-level optimization and the third-level modification, which greatly reduce the time cost and increase the effective bandwidth. Experimental results demonstrate that the proposed method achieves 145 frames per second (fps) for VGA (640×480) video and 505 fps for QVGA (320×240) video, outperforming the CPU counterparts by 24× and 23× respectively. The resulting surveillance system can process five VGA videos simultaneously with strong robustness and high efficiency.
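To make the parallelization idea concrete, below is a minimal CUDA sketch of a per-pixel GMM update in the classic Stauffer-Grimson style, which is the usual basis for GMM-based BGS. It is not the paper's implementation: all names and parameter values (gmmUpdate, K, ALPHA, the thresholds) are illustrative. The per-Gaussian state is stored as K planes of numPixels floats so that consecutive threads read consecutive addresses, which is one plausible way to realize the memory-access coalescing the abstract mentions.

```cuda
#include <cuda_runtime.h>

#define K            3      // Gaussians per pixel (illustrative)
#define ALPHA        0.01f  // learning rate (illustrative)
#define INIT_VAR     36.0f  // variance assigned to a new Gaussian
#define MATCH_SIGMA2 6.25f  // (2.5 sigma)^2 match threshold
#define BG_FRACTION  0.25f  // simplified per-component background test

// One thread per grayscale pixel. State arrays (weight, mean, var) hold
// K planes of numPixels floats each and must be initialized on the host
// to positive weights before the first frame.
// Launch sketch: gmmUpdate<<<(numPixels + 255) / 256, 256>>>(...);
__global__ void gmmUpdate(const unsigned char* __restrict__ frame,
                          unsigned char* __restrict__ fgMask,
                          float* weight, float* mean, float* var,
                          int numPixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;

    float pix = (float)frame[i];
    int matched = -1;

    // 1. Find the first Gaussian within 2.5 sigma of the pixel value.
    for (int k = 0; k < K && matched < 0; ++k) {
        float d = pix - mean[k * numPixels + i];
        if (d * d < MATCH_SIGMA2 * var[k * numPixels + i]) matched = k;
    }

    // 2. Stauffer-Grimson update of weights, mean, and variance.
    for (int k = 0; k < K; ++k) {
        int g = k * numPixels + i;   // coalesced: neighbors differ by 1
        if (k == matched) {
            float d = pix - mean[g];
            weight[g] += ALPHA * (1.0f - weight[g]);
            mean[g]   += ALPHA * d;
            var[g]    += ALPHA * (d * d - var[g]);
        } else {
            weight[g] *= 1.0f - ALPHA;
        }
    }

    // 3. No match: replace the lowest-weight Gaussian with a new one.
    if (matched < 0) {
        int worst = 0;
        for (int k = 1; k < K; ++k)
            if (weight[k * numPixels + i] < weight[worst * numPixels + i])
                worst = k;
        int g = worst * numPixels + i;
        weight[g] = ALPHA;
        mean[g]   = pix;
        var[g]    = INIT_VAR;
    }

    // 4. Renormalize the weights so they sum to one.
    float wsum = 0.0f;
    for (int k = 0; k < K; ++k) wsum += weight[k * numPixels + i];
    for (int k = 0; k < K; ++k) weight[k * numPixels + i] /= wsum;

    // 5. Simplified classification: foreground if nothing matched or the
    //    matched Gaussian carries too little weight to be background.
    //    (The full algorithm ranks components by weight/sigma instead.)
    int fg = (matched < 0) ||
             (weight[matched * numPixels + i] < BG_FRACTION);
    fgMask[i] = fg ? 255 : 0;
}
```

Because each pixel's update is independent, one thread per pixel maps naturally onto the GPU; the remaining performance headroom then comes from memory-layout choices like the plane-per-Gaussian arrangement above.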
