Abstract

Limited network resources force cloud Virtual Reality service providers to transmit only low-resolution 360-degree images to Virtual Reality devices, degrading the user experience. Deep learning-based single image super-resolution approaches are commonly used to transform low-resolution images into high-resolution versions, but they cannot cope with datasets containing extremely few training samples. Moreover, current single-image training models cannot handle 360-degree images at very large resolutions. We therefore propose a 360-degree image super-resolution method that trains a super-resolution model on a single 360-degree image sample by combining image patching techniques with a generative adversarial network. We also propose an improved Generative Adversarial Network (GAN) model structure named Progressive Residual GAN (PRGAN), which learns the image in a coarse-to-fine manner using progressively growing residual blocks and preserves structural and textural information with multi-level skip connections. Experiments on a street view panorama image dataset show that our method outperforms several baseline methods on multiple image quality evaluation metrics while keeping the generator model computationally efficient.
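The patching step described above can be sketched as a sliding window over the single large panorama, turning one image into many training samples. The function below is a minimal illustration of this idea, not the authors' implementation; the patch size, stride, and NumPy-based layout are all assumptions chosen for clarity.

```python
import numpy as np

def extract_patches(image, patch_size=64, stride=32):
    """Slide a window over the image, collecting overlapping patches.

    image: H x W x C array (e.g. one large 360-degree panorama).
    Returns an N x patch_size x patch_size x C array.
    Patch size and stride here are illustrative defaults.
    """
    h, w = image.shape[:2]
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size,
                                 left:left + patch_size])
    return np.stack(patches)

# A single (downscaled) 256x512 "panorama" already yields
# over a hundred 64x64 training samples with a stride of 32.
pano = np.random.rand(256, 512, 3)
patches = extract_patches(pano)
print(patches.shape)  # → (105, 64, 64, 3)
```

Overlapping strides (stride smaller than the patch size) multiply the number of samples further, which is what makes training a super-resolution model on a single image feasible.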
