Abstract

Background subtraction is commonly employed for foreground object detection in urban traffic scenes. Most current color- or texture-feature-based background subtraction models are easily contaminated by the sudden and gradual illumination variations that occur in such scenes. To resolve this deficiency, an adaptive local median texture (ALMT) feature is introduced, which derives an adaptive distance threshold from the median intensity in a predefined local region around a pixel together with Weber's law. In addition, a sample-consensus-based model, evolved from the portable visual background extractor, is proposed on top of the ALMT feature. The foreground is then labeled by comparing the features of the input video frames with the model. Moreover, to adapt to dynamic backgrounds, a random update scheme is used to refresh the model. Extensive experiments on the public Change Detection 2014 data set (CDnet2014) and on real-world urban traffic videos demonstrate that the proposed background subtraction method is superior to other state-of-the-art texture-feature-based methods. The qualitative and quantitative results show the encouraging efficiency of the proposed technique in dealing with sudden and gradual illumination variations in real-world urban traffic scenes.
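The abstract describes the ALMT feature only at a high level, so the following is a minimal sketch of one plausible formulation: the comparison threshold for a pixel is derived from the median intensity of its local neighbourhood scaled by a Weber-law constant, and an LBP-style binary code records which neighbours exceed that adaptive threshold. The function name, window size, and the constant `weber_k` are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def almt_feature(gray, x, y, radius=1, weber_k=0.05):
    """Sketch of an adaptive local median texture (ALMT) descriptor.

    Assumed formulation: the distance threshold adapts to local brightness
    via Weber's law (a just-noticeable intensity difference grows in
    proportion to the background intensity), using the neighbourhood
    median as the brightness estimate.
    """
    # Median of the (2*radius+1)^2 neighbourhood around (x, y)
    patch = gray[y - radius:y + radius + 1, x - radius:x + radius + 1]
    med = np.median(patch)

    # Adaptive distance threshold (assumed form: T = k * median)
    thresh = weber_k * med

    # LBP-style binary pattern: a neighbour that exceeds the median by
    # more than the adaptive threshold contributes a 1-bit to the code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if gray[y + dy, x + dx] - med > thresh:
            code |= 1 << bit
    return code
```

Because the threshold scales with local brightness, a global illumination change that multiplies all intensities leaves the binary code largely unchanged, which is the intuition behind the claimed illumination invariance.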

Highlights

  • Reliably segmenting foreground objects from the increasing number of video-based urban traffic scenes is the first key step for surveillance applications, the development of intelligent transportation systems (ITS), and high-level vision understanding

  • Two strategies are typically used for foreground object recognition [1]: bottom-up approaches, which first detect and classify parts of an object using features such as Histogram of Oriented Gradients (HOG), Haar-like features, and Local Binary Patterns (LBP); and top-down approaches, in which pixels are grouped into objects early in processing using background subtraction

  • As the sun moves across the sky, it provides a light source that varies during the day, which may lead to incorrect foreground detection in scenes with gradual illumination changes; frequent sudden changes, such as the … (VOLUME 8, 2020)


Summary

INTRODUCTION

Reliably segmenting foreground objects from the increasing number of video-based urban traffic scenes is the first key step for surveillance applications, the development of intelligent transportation systems (ITS), and high-level vision understanding. To efficiently address the deficiency of background subtraction (BS) methods that are contaminated by sudden and gradual illumination changes in urban traffic scenes, the illumination-invariant adaptive local median texture (ALMT) feature is combined with a non-parametric sample-consensus technique to build an adaptive local median texture feature background model (ALMTFM) that manages illumination changes.

TEXTURE-BASED BACKGROUND SUBTRACTION MODELING

The ALMT feature and a sample-consensus scheme are employed to construct a novel vehicle-detection background subtraction model, the ALMTFM, for urban traffic scenes with illumination changes.

A. BACKGROUND MODELING AND INITIALIZATION

Background subtraction is the first step in vehicle detection, and an ideal algorithm should perform well in complex environments and under sudden or gradual illumination changes in urban traffic scenes. Initializing the model with non-sequential frames, based on equation (9), distinctly decreases the probability that slow-moving or temporarily stopped vehicles blend into the initial background model, ensuring an accurate initial background model.
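The summary names a sample-consensus model with random updates but gives no detail, so here is a minimal per-pixel sketch in the spirit of sample-consensus methods such as ViBe: a pixel is background if enough of its stored texture samples match the current feature, and a matching pixel's sample set is refreshed at random. All parameter names and values (`N_SAMPLES`, `MATCH_THRESH`, `MIN_MATCHES`, `UPDATE_PROB`) are illustrative assumptions, not the paper's settings.

```python
import random

# Illustrative sample-consensus background model; parameter values are
# assumptions chosen for the sketch, not the paper's exact settings.
N_SAMPLES = 20        # background samples kept per pixel
MATCH_THRESH = 4      # max feature distance to count as a match
MIN_MATCHES = 2       # matches required to label a pixel background
UPDATE_PROB = 1 / 16  # random (conservative) update rate

def hamming(a, b):
    """Distance between two 8-bit texture codes (bit-wise disagreement)."""
    return bin(a ^ b).count("1")

def classify_and_update(samples, feature, rng=random):
    """Label one pixel and stochastically refresh its sample set.

    `samples` is the list of stored texture codes for this pixel;
    `feature` is the code extracted from the current frame.
    Returns True if the pixel is labeled foreground.
    """
    matches = sum(1 for s in samples if hamming(s, feature) <= MATCH_THRESH)
    is_background = matches >= MIN_MATCHES
    if is_background and rng.random() < UPDATE_PROB:
        # Random update: overwrite a randomly chosen sample so the model
        # adapts to gradual background change without a fixed decay rate.
        samples[rng.randrange(len(samples))] = feature
    return not is_background
```

The random, background-only update is what lets such models absorb gradual illumination drift while keeping true foreground objects from being learned into the background.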

FOREGROUND DETECTION
Findings
CONCLUSION
