Abstract

This paper presents an adaptive texture-depth target bit rate allocation estimation technique for low-latency multi-view video plus depth transmission using a multi-regression model. The proposed technique uses the prediction mode distribution of the macroblocks at the discontinuity regions of the depth map video to estimate the optimal texture-depth target bit rate allocation for the total available bit rate. The technique was tested on various standard test sequences and proved effective: the model estimates the optimal texture-depth rate allocation in real time with a mean absolute estimation error of 2.5% and a standard deviation of 2.2%. Moreover, it adapts the texture-depth rate allocation to the video sequence with good tracking performance, allowing scene changes to be handled correctly.
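The sketch below illustrates, under stated assumptions, how such a multi-regression estimator could be structured: the feature layout (fractions of intra/inter/skip macroblocks at depth discontinuity regions plus the total available bit rate), the function names, and the synthetic training data are all illustrative assumptions, not the paper's fitted model or coefficients.

```python
import numpy as np

# Minimal sketch of a multi-linear regression estimator for the texture-depth
# bit rate split. Feature layout and training data are illustrative assumptions.

def fit_allocation_model(features, texture_ratios):
    """Least-squares multi-linear regression: texture_ratio ~ features.

    features       : (N, F) array, e.g. columns [intra_frac, inter_frac,
                     skip_frac, total_rate_kbps] per frame or GOP
    texture_ratios : (N,) optimal texture/(texture+depth) ratios found offline
    """
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias term
    coeffs, *_ = np.linalg.lstsq(X, texture_ratios, rcond=None)
    return coeffs

def estimate_texture_rate(coeffs, mode_fractions, total_rate_kbps):
    """Real-time estimate of the texture bit rate; the remainder goes to depth."""
    x = np.concatenate([mode_fractions, [total_rate_kbps, 1.0]])
    ratio = float(np.clip(x @ coeffs, 0.0, 1.0))  # texture share of the budget
    return ratio * total_rate_kbps

# Illustrative usage with synthetic training data.
rng = np.random.default_rng(0)
mode_fracs = rng.dirichlet([2.0, 2.0, 2.0], size=200)   # intra/inter/skip shares
rates = rng.uniform(500.0, 4000.0, size=200)             # total bit rate, kbps
feats = np.column_stack([mode_fracs, rates])
targets = 0.85 - 0.3 * mode_fracs[:, 0] + 0.01 * rng.standard_normal(200)
coeffs = fit_allocation_model(feats, targets)
print(estimate_texture_rate(coeffs, [0.25, 0.55, 0.20], 2000.0))
```

In practice the regression targets would come from an offline search for the rate split that maximizes synthesized-view quality, and the mode fractions would be gathered from the encoder while coding the depth map, so the estimate is available with no added latency.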
