Abstract
Background subtraction is a technique in which a background model is built and compared with the current frame to distinguish the foreground from the background. The technique is extensively used to facilitate automatic detection, segmentation, and tracking of objects in videos. However, conventional background subtraction methods have disadvantages such as slow model-updating speeds, an inability to leverage edge information, and poor noise robustness under illumination variations. We therefore propose a ViBe-based method that employs simplified Gabor wavelets to compute image edge information. The method randomly selects relevant pixels to initialize or update the background model and accounts for the degree of variation dispersion during segmentation. Experimental results indicate that the proposed method performs well in foreground–background segmentation and in color-shift situations caused by illumination changes or aperture adjustments. Moreover, the processing speed of the proposed approach is accelerated by exploiting the parallel computing capacity of graphics processing units (GPUs).
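For readers unfamiliar with the sample-based modeling and random update strategy that ViBe relies on, the following is a minimal grayscale sketch in NumPy. The parameter names (N, R, MIN_MATCHES, SUBSAMPLE) are illustrative assumptions rather than the paper's settings, and the simplified Gabor edge term, dispersion-aware segmentation, and GPU parallelization described in the abstract are omitted.

```python
# Minimal sketch of ViBe-style background subtraction (grayscale, NumPy).
# Parameters below are illustrative assumptions, not the paper's values;
# the Gabor edge information and GPU acceleration are not included here.
import numpy as np

N = 20            # samples kept per pixel
R = 20            # matching radius in intensity space
MIN_MATCHES = 2   # samples within R needed to label a pixel as background
SUBSAMPLE = 16    # a background pixel updates its model with probability 1/SUBSAMPLE

def init_model(first_frame):
    """Initialize each pixel's sample set from its spatial neighborhood."""
    h, w = first_frame.shape
    padded = np.pad(first_frame, 1, mode='edge')
    model = np.empty((N, h, w), dtype=np.uint8)
    for k in range(N):
        dy, dx = np.random.randint(0, 3, size=2)   # random neighbor offset
        model[k] = padded[dy:dy + h, dx:dx + w]
    return model

def segment_and_update(frame, model):
    """Return a binary foreground mask and randomly update the model in place."""
    dist = np.abs(model.astype(np.int16) - frame.astype(np.int16))
    matches = (dist < R).sum(axis=0)
    background = matches >= MIN_MATCHES
    # Conservative, random-in-time update: each background pixel overwrites
    # one randomly chosen sample with probability 1/SUBSAMPLE.
    update = background & (np.random.randint(0, SUBSAMPLE, frame.shape) == 0)
    slots = np.random.randint(0, N, frame.shape)
    ys, xs = np.nonzero(update)
    model[slots[ys, xs], ys, xs] = frame[ys, xs]
    return (~background).astype(np.uint8) * 255
```

The random sample replacement gives the model a smoothly decaying memory, which is what allows faster adaptation than frame-ordered update schemes; the proposed method builds on this mechanism and additionally incorporates edge information from simplified Gabor wavelets.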