Abstract

The conventional warping method considers only translations of pixels to generate stereo images. In this paper, we propose a model that can generate stereo images from a single image, considering both the translation and the rotation of objects in the image. We modified the appearance flow network to make it more general and suitable for our model. We also used a reference image to improve the inpainting method. The quality of images produced by our model is better than that of images generated using conventional warping. Our model also better retained the structure of objects in the input image. In addition, our model does not limit the size of the input image. Most importantly, because our model considers the rotation of objects, the resulting images appear more stereoscopic when viewed with a stereoscopic device.
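To make the baseline concrete, the conventional warping mentioned above can be sketched as a purely horizontal, per-pixel shift driven by a disparity map. This is a minimal illustration, not the paper's implementation: the disparity map and the forward-warping scheme here are assumptions, and unfilled pixels are simply left as holes for a later inpainting step.

```python
import numpy as np

def warp_translation(image, disparity):
    """Conventional warping sketch: shift each pixel horizontally by its
    disparity to synthesize a right-eye view.

    image:     (H, W, 3) array, the left-eye (input) view.
    disparity: (H, W) array of horizontal shifts in pixels (illustrative;
               the paper's actual depth/disparity estimation is not
               reproduced here).
    Returns a right-eye view; pixels that receive no source remain zero,
    i.e. holes that inpainting would later fill.
    """
    h, w = disparity.shape
    right = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        # forward-warp: later writes overwrite earlier ones on collision
        tx = np.clip(xs - disparity[y].astype(int), 0, w - 1)
        right[y, tx] = image[y, xs]
    return right
```

Because every pixel moves only along the horizontal axis, this baseline cannot express any in-plane rotation of an object, which is precisely the limitation the proposed model addresses.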

Highlights

  • In recent years, because of the commercialization of wearable systems and the vigorous development of related technologies, research on virtual reality has become increasingly popular. Among many related topics, the concept of stereo images is basic and essential.

  • Many studies have been conducted on stereo images, addressing issues such as the design of special cameras or devices to capture stereoscopic panoramas [2] and the stitching of stereoscopic panoramas from stereo images captured with a stereo camera [3].

  • We eroded the objects to suppress noise produced by the view synthesis network along the edges of the rotated objects.

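The erosion step described above can be illustrated with a standard morphological erosion on each object's binary mask. This is only a sketch under assumptions: the structuring element and iteration count are illustrative, not values from the paper.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def erode_object_mask(mask, iterations=2):
    """Shrink a binary object mask so that the noisy boundary pixels
    produced by the view synthesis network along object edges are
    discarded. The iteration count (erosion depth) is illustrative.

    mask: (H, W) boolean array marking one object's pixels.
    Returns the eroded boolean mask.
    """
    return binary_erosion(mask, iterations=iterations)
```

Each erosion pass with the default cross-shaped structuring element removes a one-pixel rim from the object, so the kept pixels are those least affected by edge artifacts.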

Summary

Introduction

Because of the commercialization of wearable systems and the vigorous development of related technologies, research on virtual reality has become increasingly popular. After several representative networks were proposed, such as convolutional neural networks (CNNs) [4,5] and generative adversarial networks (GANs) [6], related studies on stereo images achieved breakthroughs, such as stereo matching and disparity estimation from a pair of stereo images [7,8], single-view depth estimation [9,10,11,12,13], predicting new views or constructing a complete three-dimensional scene from an image sequence [14,15], and view synthesis for an object from only a single view [16,17,18,19,20]. In our approach, the translation and rotation of each object in the scene are considered. This generates a right-eye view that can be combined with the input image to form a pair of stereo images, giving the user a stronger stereoscopic sense when viewing it.
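The per-object handling described above can be illustrated geometrically: each segmented object is rotated about its centroid and shifted horizontally, then composited into the right-eye view. This sketch is an assumption-laden stand-in — the paper predicts correspondences with a learned appearance-flow network, whereas here the masks, rotation angles, and disparities are all given as inputs.

```python
import numpy as np

def composite_right_eye(image, masks, transforms):
    """Geometric illustration of per-object translation plus rotation
    (the paper instead learns these correspondences with an
    appearance-flow network).

    image:      (H, W, 3) left-eye view.
    masks:      list of (H, W) boolean masks, one per segmented object.
    transforms: list of (theta, tx) pairs per object: in-plane rotation
                in radians about the object's centroid, plus a horizontal
                disparity shift. Both are assumed to be given here.
    Returns a right-eye view; uncovered pixels remain zero (holes for a
    later inpainting step).
    """
    h, w, _ = image.shape
    right = np.zeros_like(image)
    for mask, (theta, tx) in zip(masks, transforms):
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()
        c, s = np.cos(theta), np.sin(theta)
        # rotate each object pixel about the object's centroid, then shift
        nx = np.rint(c * (xs - cx) - s * (ys - cy) + cx + tx).astype(int)
        ny = np.rint(s * (xs - cx) + c * (ys - cy) + cy).astype(int)
        keep = (nx >= 0) & (nx < w) & (ny >= 0) & (ny < h)
        right[ny[keep], nx[keep]] = image[ys[keep], xs[keep]]
    return right
```

Setting every rotation angle to zero recovers the translation-only behavior of conventional warping; the rotation term is what lets each object present a genuinely different pose to the right eye.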

Related Works
The Proposed Approach
Depth Estimation
Semantic Segmentation
Translation
View Synthesis of Objects
Right-Eye View Generation and Inpainting
Results
Quantitative Evaluation
Qualitative Evaluation
Conclusions
