Abstract
To compress multi-view video efficiently, both the temporal redundancy within each view sequence and the spatial (inter-view) redundancy between adjacent view sequences must be removed. View-temporal prediction structures that can be adapted to the varying characteristics of multi-view videos are proposed. The proposed prediction structures achieve better coding performance than the reference prediction structure used in the standardisation of multi-view video coding.
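As a rough illustration of the idea, the sketch below builds a simple view-temporal prediction grid in which each non-anchor frame references the previous frame of its own view (temporal prediction) and the co-located frame of the neighbouring view (inter-view prediction). The frame types, reference rules and GOP length here are illustrative assumptions only, not the structure actually proposed in the paper.

```python
# Minimal illustrative sketch of a view-temporal prediction structure.
# This is NOT the paper's proposed structure; the frame types, reference
# rules and GOP length below are assumptions chosen for illustration.

from dataclasses import dataclass, field

@dataclass
class Frame:
    view: int                                   # camera/view index
    time: int                                   # temporal index
    refs: list = field(default_factory=list)    # (view, time) of reference frames

def build_prediction_structure(num_views: int, gop_len: int):
    """Assign temporal and inter-view references on a view/time grid."""
    frames = {}
    for v in range(num_views):
        for t in range(gop_len):
            f = Frame(view=v, time=t)
            if t == 0:
                # Anchor picture: intra-coded in the base view, inter-view
                # predicted from the neighbouring view otherwise (assumption).
                if v > 0:
                    f.refs.append((v - 1, 0))
            else:
                # Non-anchor picture: temporal prediction from the previous
                # frame of the same view ...
                f.refs.append((v, t - 1))
                # ... plus an inter-view reference from the adjacent view,
                # exploiting spatial redundancy between neighbouring cameras.
                if v > 0:
                    f.refs.append((v - 1, t))
            frames[(v, t)] = f
    return frames

if __name__ == "__main__":
    structure = build_prediction_structure(num_views=3, gop_len=4)
    for (v, t), f in sorted(structure.items()):
        kind = "I" if not f.refs else "P/B"
        print(f"view {v}, time {t}: {kind:>3}  refs={f.refs}")
```

In practice, an adaptive scheme would vary these reference choices (e.g. how many inter-view references to use, and at which temporal levels) according to the camera spacing and motion characteristics of the content; the fixed rule above is only a stand-in for such a structure.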