Abstract

Due to limited storage capacity and network bandwidth, an efficient rate control (RC) algorithm is increasingly critical for multi-view video coding (MVC). Based on inter-view dependency and spatio-temporal correlation, a novel bit allocation method for multi-view texture video coding is proposed in this article. First, considering that the distortion in the base view (BV) is directly transmitted to the dependent view (DV) by the inter-view skip mode, a joint multi-view RD model is built on the inter-view dependency. From this joint multi-view RD model, a precise power model is derived to represent the target-bitrate relationship between the BV and the DV. Second, since the P frame in the DV (P-DV) is mainly predicted from the corresponding I frame in the BV (I-BV) by disparity compensated prediction (DCP), a constant proportional relationship is observed between the ratio of the average bitrate of the P-DV to that of the corresponding I-BV and the ratio of the total bitrate of the DV to that of the BV. Based on this observation, a novel linear model is developed to assign the target bitrate of the P-DV. Finally, exploiting the spatio-temporal correlation, a new parameter prediction method is proposed for the R-$\lambda$ model at the coding tree unit (CTU) level. Extensive experimental results show that the proposed overall method outperforms other state-of-the-art algorithms in terms of RD performance.
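The models named in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the standard HEVC R-$\lambda$ form $\lambda = \alpha \cdot \text{bpp}^{\beta}$ and its LMS-style parameter update are well known, but the power-model constants (`a`, `b`), the proportionality constant `k` of the linear P-DV model, and all default parameter values here are purely illustrative assumptions.

```python
import math

def rlambda(bpp, alpha=3.2003, beta=-1.367):
    """Standard HEVC R-lambda model: lambda = alpha * bpp^beta."""
    return alpha * bpp ** beta

def update_rlambda(alpha, beta, bpp_real, lam_used, d_alpha=0.1, d_beta=0.05):
    """LMS-style update of (alpha, beta) after coding a CTU, as in HEVC RC.
    The paper's spatio-temporal parameter prediction is not reproduced here."""
    lam_comp = alpha * bpp_real ** beta
    err = math.log(lam_used) - math.log(lam_comp)
    return alpha + d_alpha * err * alpha, beta + d_beta * err * math.log(bpp_real)

def dv_target_bits(r_bv, a=0.9, b=1.05):
    """Hypothetical power model R_DV = a * R_BV^b between view-level targets."""
    return a * r_bv ** b

def pdv_target_bits(r_ibv_avg, r_dv_total, r_bv_total, k=1.0):
    """Hypothetical linear model: R_PDV / R_IBV ~= k * (R_DV / R_BV),
    i.e. the frame-level ratio scales with the view-level bitrate ratio."""
    return k * (r_dv_total / r_bv_total) * r_ibv_avg
```

With `beta < 0`, a larger bits-per-pixel target yields a smaller $\lambda$ (finer quantization), and the update step nudges the parameters toward the $\lambda$ actually used, which is the usual feedback loop in R-$\lambda$ rate control.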
