In many computer vision systems, background subtraction algorithms play a crucial role in extracting information about moving objects. Although color features have been used extensively in background subtraction algorithms and have demonstrated high efficiency, accuracy in real applications remains a challenge due to dynamic, diverse, and complex background types. In this paper, a novel background subtraction method is proposed to achieve low computational cost and high accuracy in real-time applications. The proposed approach computes the background model from a limited number of historical frames, making it suitable for real-time embedded implementation. To compute the background model, pixel grayscale information and the color invariant H are jointly exploited. Unlike state-of-the-art competitors, the background model is updated by analyzing the percentage change of each current pixel with respect to the corresponding pixels in the modeled background and in the historical frames. A comparison with several traditional and real-time state-of-the-art background subtraction algorithms demonstrates that the proposed approach manages several challenges, such as dynamic backgrounds and the absence of frames free from foreground objects, without compromising accuracy. Several hardware designs, for different image resolutions, have been implemented on an Avnet ZedBoard containing an xc7z020 Zynq FPGA device. Post-place-and-route characterization results demonstrate that the proposed approach is suitable for integration in low-cost, high-definition embedded video systems and smart cameras: the presented system uses 32 MB of external memory and 6 internal Block RAMs, occupies fewer than 16,000 slice flip-flops and slightly more than 20,000 slice LUTs, and processes Full HD RGB video sequences at about 74 fps.
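The percentage-change update rule mentioned above can be illustrated with a minimal sketch. The threshold, the running-average update rate, and the function names below are illustrative assumptions, not the paper's actual parameters or implementation:

```python
import numpy as np

def percent_change_mask(frame, background, threshold=0.2, eps=1e-6):
    """Mark a pixel as foreground when its grayscale value differs from the
    modeled background by more than `threshold` in relative terms.
    `threshold` is an assumed parameter for illustration only."""
    rel = np.abs(frame.astype(np.float32) - background) / np.maximum(background, eps)
    return rel > threshold

def update_background(frame, background, mask, alpha=0.05):
    """Blend the current frame into the background only where no foreground
    was detected (a simple running average, assumed for this sketch)."""
    bg = background.astype(np.float32).copy()
    bg[~mask] = (1 - alpha) * bg[~mask] + alpha * frame.astype(np.float32)[~mask]
    return bg
```

A pixel whose value jumps from 100 to 200 has a 100% relative change and is classified as foreground, so it is excluded from the background update; unchanged pixels are blended in, keeping the model current without absorbing moving objects.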