Abstract
Immersive video places the user inside the video scene and allows the user to control the viewing direction. To achieve this, the view in every direction must be recorded, either with a panoramic camera or with multiple cameras placed at different positions and angles. The captured video can be quite large because it consists of multiple video streams, one from each camera. Even with compression standards such as Multiview Video Coding (MVC), transmitting the whole MVC video remains bandwidth-costly, especially for heterogeneous users whose bandwidths vary. In this paper, we present a new approach to immersive video streaming that uses Scalable Multiview Video Coding (SMVC) to create multiple layers of the immersive video, supporting heterogeneous receivers more efficiently. Our method limits the number of views in the base layer, while the additional layers use view scalability and free-viewpoint scalability to synthesize more views at the receiver and provide high-quality free-viewpoint viewing to the user. Performance evaluations demonstrate that our method: 1) synthesizes missing views more accurately, as shown by subjective assessment, and 2) achieves average and maximum gains of 0.75 and 1.4 on the Bjontegaard BD-Bitrate scale, respectively, compared to existing work that simply groups adjacent views in the same layer.