Abstract

The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We propose to exploit both inter- and intraview video correlations to enhance the side information (SI) and improve MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) is proposed to yield a high-quality SI frame for better DVC reconstructed images; (2) the properties of the block transform coefficients, i.e., the DC and AC coefficients, are exploited to design a priority rate control for the turbo code, such that DVC decoding can be carried out with the fewest parity bits. The proposed COMPETE method demonstrates lower time complexity while delivering better reconstructed video quality. Simulations show that COMPETE reduces the time complexity of MVME by a factor of 1.29 to 2.56 compared with previous hybrid MVME methods, while the peak signal-to-noise ratio (PSNR) of the decoded video improves by 0.2 to 3.5 dB compared with H.264/AVC intracoding.
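The sketch below is a minimal, hedged illustration of the fidelity-weighted block blending idea behind a COMPETE-style SI frame: candidate predictor blocks (e.g., one temporally interpolated and one disparity-compensated) are weighted by their inverse block-matching error and blended. The helper names (`sad`, `fuse_si_block`) and the use of a co-located key-frame block as the fidelity reference are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch (not the exact COMPETE algorithm): blend candidate
# predictor blocks for one side-information (SI) block using fidelity weights
# derived from their block-matching errors.  All names are hypothetical.
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.sum(np.abs(a.astype(np.int32) - b.astype(np.int32))))

def fuse_si_block(candidates, reference):
    """Blend candidate predictor blocks with inverse-SAD fidelity weights.

    candidates : list of HxW arrays (e.g., temporal / interview predictions)
    reference  : HxW array used only to score each candidate's fidelity
                 (here, a co-located block of an already decoded key frame).
    """
    errors = np.array([sad(c, reference) for c in candidates])
    weights = 1.0 / (errors + 1e-6)      # lower SAD (higher fidelity) -> larger weight
    weights /= weights.sum()
    fused = np.zeros_like(candidates[0], dtype=np.float64)
    for w, c in zip(weights, candidates):
        fused += w * c
    return np.clip(np.rint(fused), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    key_block = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
    # Hypothetical candidates: a close temporal prediction, a noisier interview one.
    temporal_pred = np.clip(key_block + rng.integers(-3, 4, (8, 8)), 0, 255)
    interview_pred = np.clip(key_block + rng.integers(-12, 13, (8, 8)), 0, 255)
    si_block = fuse_si_block([temporal_pred, interview_pred], key_block)
    print("fused SI block SAD vs. key block:", sad(si_block, key_block))
```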

Highlights

  • Multiview video codec (MVC) design has become popular,[1] based on which widespread applications, such as three-dimensional (3-D) video, free-viewpoint television (FTV), and video surveillance networks, can be developed

  • For an MVC that adopts distributed video coder (DVC) coding, i.e., multiview distributed video coding (MDVC), we propose to utilize interview video correlations and to exploit the bit-value probability distribution of transform coefficients under the block-DCT video codec framework, improving the side information frame (SIF) confidence and the accuracy of decoded bits while speeding up the decoder rate control process (see the rate-control sketch after this list)

  • Contributions of this paper include: (1) for specific multiview video applications, such as wireless video sensor and wireless video surveillance networks, the proposed MDVC exploits the advantages of the DVC and multiview video frameworks to enable efficient, low-complexity video encoding
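As a hedged illustration of the rate-control idea referenced above (and in item (2) of the abstract), the sketch below orders Wyner–Ziv bit requests so that bitplanes of the DC and low-frequency AC coefficients of each block DCT are served first. It is a simplified stand-in for the paper's priority rate control, and the names `dct2_8x8`, `LOWFREQ_ORDER`, and `priority_bit_requests` are hypothetical.

```python
# Simplified priority ordering for Wyner-Ziv rate control: DC and low-frequency
# AC bitplanes come first because they dominate reconstruction quality.
import numpy as np

def dct2_8x8(block):
    """Orthonormal 2-D DCT-II of an 8x8 block via the DCT basis matrix."""
    n = 8
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block.astype(np.float64) @ C.T

# Scan order that visits the DC coefficient first, then low-frequency ACs
# (grouped by anti-diagonal, i.e., increasing spatial frequency).
LOWFREQ_ORDER = sorted(((r, c) for r in range(8) for c in range(8)),
                       key=lambda rc: (rc[0] + rc[1], rc[0]))

def priority_bit_requests(block, n_coeffs=10):
    """Yield (coefficient_rank, bitplane, bit) in decreasing decoding priority."""
    coeffs = dct2_8x8(block)
    mags = [int(abs(coeffs[rc])) for rc in LOWFREQ_ORDER[:n_coeffs]]
    n_planes = max(mags).bit_length() or 1
    for plane in range(n_planes - 1, -1, -1):     # most significant bitplane first
        for rank, mag in enumerate(mags):         # DC and low-frequency ACs first
            yield rank, plane, (mag >> plane) & 1

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    blk = rng.integers(0, 256, size=(8, 8))
    for rank, plane, bit in list(priority_bit_requests(blk))[:8]:
        print(f"coefficient {rank}, bitplane {plane}: bit {bit}")
```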


Summary

Introduction

Multiview video codec (MVC) design has become popular,[1] based on which widespread applications, such as three-dimensional (3-D) video, free-viewpoint television (FTV), and video surveillance networks, can be developed. Under the MDVC framework, intra- and interview correlations are utilized by assigning weights to the different estimated motion vectors (MVs); this decoder-driven fusion method is adopted to improve codec performance, e.g., peak signal-to-noise ratio (PSNR) and time complexity. Three fusion techniques that exploit the signal properties of neighboring residual frames along the intra- and interview directions were proposed to improve robustness and side information frame (SIF) quality.[17] The fusion can also adopt a support vector machine to identify a set of features for classifying pixels into either the temporal or the disparity class, by which the fusion yields a better SIF;[18] this provides a good solution for fusing intra- and interview predictions. However, these fusion methods suffer from performance degradation when the temporally predicted quality is low and the video motion is irregular.
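To make the decoder-driven fusion idea concrete, the toy sketch below selects, per pixel, between a temporally interpolated prediction and a disparity-compensated (interview) prediction by comparing their residuals against a decoded key frame. The cited approaches use richer features (e.g., an SVM classifier over temporal/disparity classes); this threshold rule and the names `fuse_predictions`, `temporal_pred`, and `interview_pred` are illustrative assumptions only.

```python
# Toy per-pixel fusion of temporal and interview (disparity) predictions,
# standing in for the feature-based classification described in the text.
import numpy as np

def fuse_predictions(temporal_pred, interview_pred, key_frame):
    """Keep, per pixel, whichever prediction is closer to the decoded key frame."""
    err_t = np.abs(temporal_pred.astype(np.int32) - key_frame.astype(np.int32))
    err_d = np.abs(interview_pred.astype(np.int32) - key_frame.astype(np.int32))
    use_temporal = err_t <= err_d   # boolean "class" map: temporal vs. disparity
    return np.where(use_temporal, temporal_pred, interview_pred), use_temporal

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    key = rng.integers(0, 256, (16, 16), dtype=np.uint8)
    # Hypothetical predictions: temporal close to the key frame, interview noisier.
    temp = np.clip(key + rng.integers(-5, 6, key.shape), 0, 255).astype(np.uint8)
    disp = np.clip(key + rng.integers(-20, 21, key.shape), 0, 255).astype(np.uint8)
    fused, cls = fuse_predictions(temp, disp, key)
    print("fraction of pixels taken from the temporal prediction:", cls.mean())
```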

Multiview Distributed Video Coding Side Information
Side Information Reconstruction
COMPETE Side Information Reconstruction
Homography
Scale-invariant frame transform
Interpolation and homography
Block Matching Prediction
Multiview Distributed Video Coding Rate Control Algorithm
Wyner–Ziv Coding
Rate Control Mechanism
Simulation Study
Performance Analysis
Error analysis
Side information confidence
Objective performance evaluation
COMPETE
Motion Compensated Temporal Interpolation
Fusion-based homography
Hybrid Multiview Motion Estimation
COMPETE
Practical execution time evaluation
Reconstructed Side Information Frames
Reconstructed Wyner–Ziv Frame
Practical Applications
Findings
Conclusions
