Abstract
Recent technological advancements have allowed videos to evolve from simple sequences of 2D images shown on flat screens into spherical representations of one's surroundings, capable of creating a realistic immersive experience when paired with head-mounted displays. To leverage the existing video coding infrastructure, 360-degree videos are pre-processed (projected onto a plane) and then encoded with conventional video coding standards. However, the flattened versions of 360-degree videos present peculiarities that are not found in conventional videos and, therefore, may not be properly exploited by conventional video encoders. Aiming to find evidence that conventional video encoders can be adapted to perform better on 360-degree videos, this work evaluates the intra-frame prediction performed by the High Efficiency Video Coding (HEVC) standard on 360-degree videos in the equirectangular projection. Experimental results indicate that 360-degree videos present spatial properties that make some regions of the frame likely to be encoded with a reduced set of prediction modes and block sizes. This behavior could be exploited in the development of fast decision and energy-saving algorithms that evaluate a reduced set of prediction modes and block sizes depending on the region of the frame being encoded.
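As a minimal sketch of the kind of fast-decision algorithm the abstract alludes to, the snippet below restricts the intra-mode and block-size search space according to the vertical position of a coding tree unit (CTU) in an equirectangular frame. The region boundaries, mode subsets, and block sizes are hypothetical illustrations and are not values reported by the paper or taken from the HEVC reference software.

```python
# Illustrative sketch only: polar (top/bottom) regions of equirectangular frames are
# horizontally stretched and tend to be homogeneous, so fewer candidates are tested there.
# The 20% polar split and the chosen mode/size subsets are assumptions for illustration.

def candidate_modes_and_blocks(ctu_row, total_ctu_rows):
    """Return (intra_modes, block_sizes) to evaluate for a CTU row of an ERP frame."""
    # Normalised vertical position of the CTU row: 0.0 = top of frame, 1.0 = bottom.
    v = ctu_row / max(total_ctu_rows - 1, 1)

    # Hypothetical split: treat the top and bottom 20% of the frame as "polar" regions.
    is_polar = v < 0.2 or v > 0.8

    if is_polar:
        # Stretched, smooth content: test only planar, DC and a few modes around
        # the horizontal direction, and favour larger prediction blocks.
        modes = [0, 1] + list(range(8, 13))   # planar, DC, angular modes near horizontal (10)
        block_sizes = [64, 32]
    else:
        # Equatorial region behaves like conventional video: full HEVC search.
        modes = list(range(35))               # all 35 HEVC intra prediction modes
        block_sizes = [64, 32, 16, 8, 4]

    return modes, block_sizes


if __name__ == "__main__":
    total_rows = 10
    for row in range(total_rows):
        modes, sizes = candidate_modes_and_blocks(row, total_rows)
        print(f"CTU row {row}: {len(modes)} candidate modes, block sizes {sizes}")
```

In a real encoder this region-dependent pruning would replace the exhaustive rate-distortion search over all modes and partition depths, trading a small coding-efficiency loss for reduced complexity and energy consumption.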