Abstract

Problem statement: In many vision-based surveillance systems, the first step is detecting moving objects by subtracting the current captured frame from the extracted background, so the results of these systems depend mainly on the accuracy of the background image. Approach: In this study, a background extraction system is proposed that models the background with a simple method, initializes the model, extracts the moving objects and constructs the final background. Our model saves the history of each pixel separately and uses the saved information to extract the background with a probability-based method; it then updates the history of each pixel according to the value of that pixel in the current captured image. Results: The experiments show that the quality of the final extracted background is the best among four recently re-implemented methods, and that the time consumed by the extraction is acceptable. Conclusion: Since history-based methods use temporal information extracted from several previous frames, they are less sensitive to noise and sudden changes when extracting the background image.
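The per-pixel history model described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes 8-bit grayscale frames, and the class and method names (`PixelHistoryBackground`, `update`, `background`) are illustrative, with the "highest probability" rule taken as the most frequently observed gray level per pixel.

```python
import numpy as np

class PixelHistoryBackground:
    """Sketch of a per-pixel history model (illustrative, not the paper's code)."""

    def __init__(self, shape, levels=256):
        # hist[i, j, v] counts how often gray level v was observed at pixel (i, j).
        self.hist = np.zeros(shape + (levels,), dtype=np.int32)

    def update(self, frame):
        # Record the current value of every pixel in its history.
        h, w = frame.shape
        rows, cols = np.indices((h, w))
        self.hist[rows, cols, frame] += 1

    def background(self):
        # Background estimate: the most probable gray level at each pixel.
        return np.argmax(self.hist, axis=2).astype(np.uint8)
```

Because the histogram is updated incrementally from each captured frame, occasional moving objects contribute only a few counts per pixel and are outvoted by the stable background value, which matches the abstract's claim of robustness to noise.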

Highlights

  • Many vision-based surveillance systems such as Intelligent Transportation Systems start by detecting changes in an image sequence

  • In the second group of background extraction methods, some researchers use algorithms based on the subtraction of two images: the input frame and a reference image

  • We examined the systems using Campus (CM), Fountain (FT), Shopping Mall (SM), Bootstrap (BS), Escalator (ES), Hall Monitor (HM) and Lobby (LB)


Summary

INTRODUCTION

Many vision-based surveillance systems such as Intelligent Transportation Systems start by detecting changes in an image sequence. The method uses the history of each pixel to estimate the current value of the corresponding pixel in the background image; various combinations of the parameters kept for each Gaussian (mean, variance, weight and learning rate) have been presented. In Chiu et al. (2010), if Pmax(i, j) > θ, pixel (i, j) of the temporary background image (Temp_BG) is assigned the mean value (μ) of the gray-scale slice with the highest probability; the binary image marks the pixels for which Diff(i, j) ≤ ∂ or Diff(i, j) ≤ φ. The final background image results from combining the current frame and the temporary extracted background, according to the values of the binary-image pixels.

RESULTS
DISCUSSION
CONCLUSION