Abstract
In 360-degree video streaming, most solutions are tile-based: the video is divided into tiles, and high-quality tiles are streamed for the user's viewport area. However, these methods cannot efficiently encode and transmit arbitrary combinations of tiles. In this paper, we experimented with streaming 360-degree videos using the motion-constrained tile set (MCTS) technique, which constrains motion vectors during encoding so that each tile can be decoded and transmitted independently. Moreover, we used a tile-based approach with a saliency map that integrates human visual attention with the video content to deliver high-quality tiles to the region of interest (ROI). We encoded the 360-degree videos at various quality representations with the MCTS technique and assigned a quality representation to each tile using a saliency map predicted by an existing convolutional neural network (CNN) model. We also propose a novel heuristic algorithm to assign appropriate quality to the tiles on the centerline. Consequently, mixed-quality videos based on the saliency map enable efficient 360-degree video streaming. On the Salient360! dataset, the proposed method reduces bandwidth with little loss of viewport image quality.
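The saliency-driven quality assignment described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the tile grid, the number of quality levels, and the equal-width quantization thresholds are all assumptions made for the example.

```python
def assign_tile_qualities(saliency, num_levels=3):
    """Map per-tile saliency values in [0, 1] to quality levels.

    `saliency` is a 2-D list (tile rows x tile columns) of mean saliency
    per tile; higher saliency gets a higher-quality (higher-bitrate)
    MCTS-encoded representation. Level 0 is the lowest quality.
    Equal-width binning is an illustrative assumption.
    """
    return [[min(int(s * num_levels), num_levels - 1) for s in row]
            for row in saliency]

# Example: a 3x4 tile grid with a salient region near the center,
# as a saliency-prediction model might produce for a viewport ROI.
saliency = [
    [0.05, 0.10, 0.10, 0.05],
    [0.20, 0.90, 0.85, 0.15],
    [0.05, 0.30, 0.25, 0.05],
]
print(assign_tile_qualities(saliency))
# → [[0, 0, 0, 0], [0, 2, 2, 0], [0, 0, 0, 0]]
```

In a real pipeline, each tile's quality index would select among the pre-encoded MCTS representations before the tiles are merged into a single mixed-quality bitstream.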