Abstract

Rendering virtual views during interactive streaming of compressed image-based scene representations requires random access to arbitrary parts of the reference image data. The degree of interframe dependency exploited during encoding affects the transmission and decoding time and, at the same time, delimits the achievable (storage) rate-distortion (RD) tradeoff. In this work, we extend the classical RD optimization approach from hybrid video coding to a tradeoff between storage rate (R), distortion (D), transmission data rate (T), and decoding complexity (C). We present a theoretical model for this RDTC space with a focus on decoding complexity, and we additionally consider and evaluate the impact of client-side caching on the RDTC measures. Experimental results qualitatively match the predictions of our theoretical models and show that adapting the encoding process to scenario-specific parameters, such as the computational power of the receiver and the channel throughput, can significantly reduce the user-perceived delay or the required storage of RDTC-optimized streams compared to RD-optimized or independently encoded scene representations.
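The extension described above can be read as a generalization of the classical Lagrangian rate-distortion cost J = D + λR. A plausible form of the RDTC cost, sketched here in our own notation (the paper's exact weighting and mode-decision procedure may differ), attaches a Lagrange multiplier to each additional measure:

\[
J(m) \;=\; D(m) \;+\; \lambda_R\, R(m) \;+\; \lambda_T\, T(m) \;+\; \lambda_C\, C(m),
\]

where m denotes a candidate coding mode for a block, and \lambda_R, \lambda_T, \lambda_C weight the storage rate, transmission data rate, and decoding complexity against distortion. Under this reading, the encoder would select, per block, the mode m minimizing J(m); choosing larger \lambda_T or \lambda_C then steers the encoder toward fewer interframe dependencies, trading storage rate for faster transmission and decoding.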
