Abstract

Video background modeling is an important preprocessing stage for various applications, and principal component pursuit (PCP) is among the state-of-the-art algorithms for this task. One of the main drawbacks of PCP is its sensitivity to jitter and camera movement. This problem has only been partially solved by a few methods devised for jitter or small transformations. However, such methods cannot handle the case of moving or panning cameras in an incremental fashion. In this paper, we greatly expand the results of our earlier work, in which we presented a novel, fully incremental PCP algorithm, named incPCP-PTI, which was able to cope with panning scenarios and jitter by continuously aligning the low-rank component to the current reference frame of the camera. To the best of our knowledge, incPCP-PTI is the first low-rank plus additive incremental matrix method capable of handling these scenarios in an incremental way. Results on synthetic videos and on the Moseg, DAVIS, and CDnet2014 datasets show that incPCP-PTI maintains good performance in the detection of moving objects even when panning and jitter are present in a video. Additionally, in most videos, incPCP-PTI obtains competitive or superior results compared to state-of-the-art batch methods.
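The central idea summarized above, re-registering the background (low-rank) estimate to the current camera viewpoint before computing the sparse foreground, can be illustrated with a minimal sketch. The sketch below is an assumption-laden simplification: it handles only purely translational motion via FFT-based phase correlation, and the function names are illustrative; it does not reproduce the paper's incPCP-PTI implementation.

```python
# Illustrative sketch only (NOT the incPCP-PTI implementation): align a
# background estimate to the current camera reference frame, assuming a
# purely translational (jitter/pan) shift, then compute the residual
# foreground. Inputs are 2-D float grayscale arrays of the same shape.
import numpy as np

def estimate_translation(background, frame):
    """Estimate the integer (dy, dx) shift that maps `background` onto
    `frame`, using FFT-based phase correlation."""
    B = np.fft.fft2(background)
    F = np.fft.fft2(frame)
    R = np.conj(B) * F
    R /= np.abs(R) + 1e-12                     # normalized cross-power spectrum
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    if dy > background.shape[0] // 2:
        dy -= background.shape[0]
    if dx > background.shape[1] // 2:
        dx -= background.shape[1]
    return dy, dx

def align_background(background, frame):
    """Shift the background model so it matches the frame's viewpoint,
    then return the aligned background and the residual foreground."""
    dy, dx = estimate_translation(background, frame)
    aligned = np.roll(background, shift=(dy, dx), axis=(0, 1))
    foreground = frame - aligned
    return aligned, foreground
```

In this simplified view, the background model is warped toward each incoming frame before subtraction, so camera motion is absorbed by the alignment step rather than leaking into the sparse component.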

Highlights

  • Video background modeling consists of segmenting the “foreground” or moving objects from the static “background.” It is an important first step in various computer vision applications [1] such as abnormal event identification [2] and surveillance [3]. Several video background modeling methods, using different approaches such as Gaussian mixture models [4], kernel density estimations [5], or neural networks [6], exist in the literature.

  • Representative frames of the SPJ video with changing velocity and the segmented sparse components with incPCP-PTI and stab + incPCP-PTI are shown in Figure 3. The distance M(sk) computed for each frame of the different videos of the SPJ dataset is shown in Figures 4 and 5 for incPCP-PTI and stab + incPCP-PTI.

  • Detecting contiguous outliers in the low-rank representation (DECOLOR) has problems working on these sequences because its prealignment phase fails to find a suitable unique reference frame. The low performance of incPCP-PTI in some of the Motion Segmentation (Moseg) sequences might stem from the small number of video frames, which causes the initial low-rank estimation of principal component pursuit (PCP) to be less precise.


Summary

Introduction

Video background modeling consists of segmenting the “foreground” or moving objects from the static “background.” It is an important first step in various computer vision applications [1] such as abnormal event identification [2] and surveillance [3]. Several video background modeling methods, using different approaches such as Gaussian mixture models [4], kernel density estimations [5], or neural networks [6], exist in the literature. Principal component pursuit (PCP) is currently considered to be one of the leading algorithms for video background modeling [8]. PCP decomposes the observed video data as the sum of a low-rank and a sparse component, D = L + S, by solving

min_{L,S} rank(L) + λ‖S‖₀   subject to   D = L + S,

where the matrix D ∈ R^{m×n} is formed by the n observed frames, each of size m = Nr × Nc × Nd (rows, columns, and number of channels, respectively); L ∈ R^{m×n} is a low-rank matrix representing the background; S ∈ R^{m×n} is a sparse matrix representing the foreground; λ is a fixed global regularization parameter; rank(L) is the rank of L; and ‖S‖₀ is the number of nonzero entries of S.
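As a point of reference for this decomposition (and not a description of the incremental incPCP-PTI algorithm itself), the sketch below solves the convex, batch PCP problem with a simple inexact augmented Lagrangian scheme. The function names, the default λ = 1/√max(m, n), and the step-size heuristic are illustrative assumptions taken from common robust PCA practice.

```python
# Minimal batch principal component pursuit (robust PCA) sketch via an
# inexact augmented Lagrangian method -- illustrative only.
import numpy as np

def soft_threshold(X, tau):
    """Entrywise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

def pcp(D, lam=None, mu=None, max_iter=200, tol=1e-7):
    """Split D into a low-rank background L and a sparse foreground S by
    (approximately) solving  min ||L||_* + lam*||S||_1  s.t.  D = L + S."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # common PCP regularizer choice
    if mu is None:
        mu = 0.25 * m * n / np.abs(D).sum()     # common step-size heuristic
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                        # Lagrange multipliers
    norm_D = np.linalg.norm(D, 'fro')
    for _ in range(max_iter):
        L = svd_threshold(D - S + Y / mu, 1.0 / mu)
        S = soft_threshold(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y = Y + mu * residual
        if np.linalg.norm(residual, 'fro') / norm_D < tol:
            break
    return L, S

# Usage: stack n grayscale frames of size Nr x Nc as columns of D
# (m = Nr * Nc); reshaping each column of S back to Nr x Nc gives the
# per-frame foreground estimate.
```

This batch formulation requires all frames at once and assumes a fixed camera; the paper's contribution is precisely to make such a decomposition incremental and robust to panning and jitter.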

Methods
Results
Discussion
Conclusion