Abstract

Three-dimensional (3D) light-field display, a promising future display technology, has attracted considerable attention. However, several issues remain to be addressed, especially the capture of dense views of real 3D scenes. Combining sparse cameras with a view-synthesis algorithm has become a practical approach. Supervised convolutional neural networks (CNNs) can synthesize virtual views, but the large number of target views required for training is often difficult to obtain, and the training positions are relatively fixed. Novel views can also be synthesized by the unsupervised network MPVN, but that method strictly requires multiple uniformly spaced horizontal viewpoints, which is impractical. Here, a dense-view synthesis method based on unsupervised learning is presented, which can synthesize arbitrary virtual views from multiple free-posed views captured in a real 3D scene. The posed views are reprojected to the target position and fed into the neural network. The network outputs a color tower and a selection tower indicating the scene distribution along the depth direction, and a single image is produced by the weighted summation of the two towers. The network is trained end-to-end in an unsupervised manner by minimizing the reconstruction errors of the posed views. A high-quality virtual view can be predicted by reprojecting the posed views to the desired position, and a sequence of dense virtual views can be generated for 3D light-field display by repeated predictions. Experimental results demonstrate the validity of the proposed network: the PSNR of synthesized views is around 30 dB, and the SSIM exceeds 0.90. Since the cameras can be placed at freely chosen poses, the setup has no strict physical requirements, and the method can be flexibly used for real-scene capture. We believe this approach will contribute to the wide application of 3D light-field display in the future.
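To make the tower-combination step concrete, the following is a minimal sketch, not the authors' implementation: the tensor shapes, variable names, and the softmax normalization over depth are assumptions for illustration (the abstract does not specify how the selection tower is normalized). It shows how a weighted summation of a color tower and a selection tower yields a single synthesized view.

    # Minimal sketch of combining the two towers described in the abstract.
    # Shapes, names, and the softmax normalization are assumptions; the
    # paper's actual network is not reproduced here.
    import torch

    D, H, W = 32, 256, 256                     # depth planes, image size (assumed)
    color_tower = torch.rand(D, 3, H, W)       # per-plane RGB predictions
    selection_logits = torch.rand(D, 1, H, W)  # per-plane selection scores

    # Normalize selection scores across depth so that, per pixel, the
    # weights form a distribution over depth planes (assumed normalization).
    selection = torch.softmax(selection_logits, dim=0)

    # Weighted summation of the two towers: the synthesized view is the
    # per-pixel expectation of the color tower under the selection weights.
    view = (selection * color_tower).sum(dim=0)  # shape (3, H, W)

    # Unsupervised training signal (sketch): per the abstract, the network is
    # trained end-to-end by minimizing reconstruction errors of the posed
    # views, e.g. a photometric loss between a view re-rendered at a captured
    # pose and the corresponding captured image.

In this reading, the selection tower acts as a per-pixel depth-plane weighting, so no ground-truth target views are needed: the captured posed views themselves supervise the reconstruction.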
