Abstract

Remote navigation in image-based scene representations requires random access to the compressed reference image data in order to compose virtual views. When block-based hybrid video coding concepts are used, the degree of inter-frame dependency introduced during compression affects the effort required to access reference image data and, at the same time, constrains the achievable rate-distortion trade-off. If, additionally, a maximum available channel bitrate is taken into account, the traditional rate-distortion (RD) trade-off can be extended to a trade-off between storage rate (R), distortion (D), transmission data rate (T), and decoding complexity (C). In this work, we present a theoretical analysis of this RDTC space. Experimental results qualitatively match those predicted by theory and show that adapting the encoding process to scenario-specific parameters, such as the computational power of the receiver and the channel throughput, can significantly reduce the user-perceived delay or the required storage of RDTC-optimized streams compared to RD-optimized or independently encoded scene representations.
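
As a minimal sketch of the extended trade-off described above, the encoding decision can be viewed as minimizing a weighted Lagrangian cost over candidate encoding configurations p; the weights and notation below are illustrative assumptions, not the paper's own formulation:

% Hypothetical Lagrangian form of the RDTC trade-off (illustrative only):
% D = distortion, R = storage rate, T = transmission data rate, C = decoding complexity,
% all evaluated for a candidate encoding configuration p (e.g., a prediction structure).
\begin{equation*}
  p^{*} \;=\; \arg\min_{p}\; D(p) \;+\; \lambda_{R}\,R(p) \;+\; \lambda_{T}\,T(p) \;+\; \lambda_{C}\,C(p),
  \qquad \lambda_{R},\,\lambda_{T},\,\lambda_{C} \ge 0 .
\end{equation*}

Under this assumed formulation, scenario constraints such as a limited channel bitrate or limited receiver computing power would correspond to placing larger weights on the T or C terms, respectively.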
