Surveillance video coding is crucial for improving compression efficiency in intelligent video surveillance systems and applications. Extensive work has been done, which can be roughly divided into two categories: the first mainly focuses on low-complexity background modeling to obtain a clean background, while the second focuses on appropriate coding strategies to generate a high-quality background reference picture for effective background prediction. However, almost all existing works address only stationary-camera scenes, while the coding of surveillance video captured by moving cameras remains largely untouched and is still an open problem. In this paper, a background modeling and referencing scheme is proposed for coding surveillance video captured by moving cameras within High Efficiency Video Coding (HEVC). First, this paper proposes a low-complexity moving background modeling algorithm for surveillance video coding that applies a running average to globally motion-compensated frames. To obtain the global motion vector, we propose a global motion detection method based on character blocks, which establishes a low-rank singular value decomposition model to cluster and estimate the motion vectors of background character blocks under camera movement. Second, we propose a background referencing coding strategy, in which motion background coding tree units (MBCTUs) are selected by anchoring the input video frame on the modeled background frame and are coded with an optimized quantization parameter. The reconstructed MBCTU is then used to update the previous coding tree unit at the globally compensated location in the background reference picture. Extensive experimental results show that the proposed scheme achieves significant bitrate savings of up to 26.6%, and 6.7% on average, with similar subjective quality and negligible encoding complexity overhead compared with HM12.0. Moreover, the proposed scheme consistently outperforms two state-of-the-art surveillance video coding schemes with remarkable bitrate savings.
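The core background-update step described above (a running average applied after global motion compensation) can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes integer-pixel global motion, a per-pixel running average on the luma channel, and illustrative names (`update_background`, `gmv`, `alpha`) that do not come from the paper.

```python
import numpy as np

def update_background(background, frame, gmv, alpha=0.05):
    """Running-average background update after global motion compensation.

    background : float32 array (H, W), current background model
    frame      : float32 array (H, W), current luma frame
    gmv        : (dx, dy) integer global motion vector of the camera
    alpha      : learning rate of the running average (assumed value)
    """
    dx, dy = gmv
    # Compensate camera motion: shift the background model so that it is
    # spatially aligned with the current frame before blending.
    compensated = np.roll(background, shift=(dy, dx), axis=(0, 1))
    # Running-average update on the aligned model.
    return (1.0 - alpha) * compensated + alpha * frame.astype(np.float32)
```

In the actual scheme, the global motion vector would come from the proposed character-block-based detection (low-rank SVD clustering of block motion vectors), and the updated model would serve as the background reference picture for MBCTU coding; the sketch only shows the alignment-then-average idea.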