Abstract

The three-dimensional light-field display is one of the most promising display technologies and has achieved impressive improvements in recent years. To generate dense real-world views efficiently, many virtual view synthesis approaches have been proposed that take sparse 2D images as input and synthesize dense virtual views. However, in occlusion areas the results of previous methods are unsatisfactory, typically containing errors and blur because the occlusion relations cannot be recovered correctly. Here, a dense view synthesis method for three-dimensional light-field display based on scene geometric reconstruction is presented, which can synthesize high-quality arbitrary virtual views with correct occlusions. The scene geometric model is first reconstructed from several depth maps captured by RGB-D cameras. The geometric model provides the correct occlusion relations and selects the non-occluded regions as the input data. To reduce the effect of depth-map noise and further improve quality, semantic feature maps of the input views are extracted and reprojected to the virtual view; these maps encode the captured multi-view information and represent the virtual view implicitly. The virtual view image is then generated by rendering the reprojected encoded feature map. Compared with other view synthesis approaches, the quality of the synthesized view is significantly improved with the proposed method, especially in occlusion areas. Experiments are carried out on a 27-inch 3D light-field display device, and the results demonstrate the effectiveness of the proposed method.
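The core geometric step described above is the depth-based reprojection of source-view feature maps into the virtual view. The following is a minimal sketch of that step, not the authors' implementation: it assumes pinhole cameras, a per-pixel depth map for the virtual view obtained from the reconstructed scene geometry, and nearest-neighbour sampling for brevity; all function and parameter names are hypothetical.

```python
# Minimal sketch (not the authors' code) of depth-based reprojection:
# features from a captured source view are gathered into the virtual view
# using the virtual view's depth (from the reconstructed geometry) and
# pinhole camera parameters. All names, shapes, and conventions are assumptions.

import numpy as np

def reproject_features(src_feat, virt_depth, K_virt, K_src, T_virt2src):
    """Backward-warp a source-view feature map into the virtual view.

    src_feat      : (H, W, C) feature map extracted from one captured view
    virt_depth    : (H, W)    virtual-view depth from the scene geometry
    K_virt, K_src : (3, 3)    pinhole intrinsics of the virtual / source camera
    T_virt2src    : (4, 4)    rigid transform from virtual to source camera frame
    returns       : (H, W, C) features warped into the virtual view, plus a
                    (H, W) mask that is True where the sample landed inside
                    the source image.
    """
    H, W, C = src_feat.shape

    # Pixel grid of the virtual view in homogeneous coordinates.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)

    # Unproject virtual-view pixels to 3D points using the reconstructed depth.
    pts_virt = np.linalg.inv(K_virt) @ pix * virt_depth.reshape(1, -1)
    pts_h = np.vstack([pts_virt, np.ones((1, H * W))])                 # (4, H*W)

    # Transform into the source camera frame and project with its intrinsics.
    pts_src = (T_virt2src @ pts_h)[:3]
    proj = K_src @ pts_src
    z = proj[2]
    us = proj[0] / z
    vs = proj[1] / z

    # Nearest-neighbour sampling of the source features (bilinear in practice).
    ui = np.round(us).astype(int)
    vi = np.round(vs).astype(int)
    valid = (z > 0) & (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
    warped = np.zeros((H * W, C), dtype=src_feat.dtype)
    warped[valid] = src_feat[vi[valid], ui[valid]]
    return warped.reshape(H, W, C), valid.reshape(H, W)
```

In the pipeline the abstract outlines, the mask from such a reprojection would indicate which source views contribute valid, non-occluded features at each virtual-view pixel, and the gathered feature maps would then be decoded by a rendering network into the final virtual view image.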
