Abstract
A depth-image-based rendering (DIBR) method with spatial and temporal texture synthesis is presented in this article. In theory, the DIBR algorithm can generate arbitrary virtual views of the same scene in a three-dimensional television system. However, the disoccluded areas, which are occluded in the original views but become visible in the virtual views, make it difficult to obtain high image quality in extrapolated views. The proposed view synthesis method combines temporally stationary scene information extracted from the input video with the spatial texture of the current frame to fill the disoccluded areas in the virtual views. First, the current texture image and a stationary scene image, extracted from the input video, are warped to the same virtual viewpoint by the DIBR method. Then, the two virtual images are merged to reduce the hole regions and maintain the temporal consistency of these areas. Finally, an oriented exemplar-based inpainting method eliminates the remaining holes. Experimental results demonstrate the performance and advantages of the proposed method compared with other view synthesis methods.
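The merging step described above can be sketched as a simple mask-based operation: wherever the warped current frame has a disocclusion hole but the warped stationary-scene image has valid data, the sprite pixel is copied in, and only the holes missing in both images are passed on to inpainting. The function below is an illustrative simplification (the hole encoding via a sentinel value and the function name are assumptions, not the paper's exact implementation):

```python
import numpy as np

def merge_virtual_views(warped_frame, warped_sprite, hole_value=0):
    """Fill disocclusion holes in the warped current frame with pixels
    from the warped stationary-scene image.

    Pixels equal to `hole_value` in every channel are treated as holes.
    Holes present in both inputs are returned as a mask for the later
    inpainting stage.
    """
    frame_holes = np.all(warped_frame == hole_value, axis=-1)
    sprite_valid = ~np.all(warped_sprite == hole_value, axis=-1)
    fill = frame_holes & sprite_valid
    merged = warped_frame.copy()
    merged[fill] = warped_sprite[fill]
    remaining = frame_holes & ~sprite_valid
    return merged, remaining
```

Because the sprite accumulates background observed over time, filling holes from it first also keeps those regions temporally consistent across frames, rather than re-synthesizing them independently each frame.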
Highlights
Year 2010 is considered to be the year of breakthrough for 3D video and the 3D industry [1]
The prosperity of the 3D industry provides an important opportunity for the three-dimensional television (3DTV) system, which is believed to be the next generation of television broadcasting after high-definition television
To restore the missing information in the remaining hole areas, we propose an oriented exemplar-based inpainting algorithm based on the previous work of Criminisi et al. [30]
Summary
Year 2010 is considered to be the year of breakthrough for 3D video and the 3D industry [1]. A sprite of the stationary scene is maintained throughout the view synthesis process, storing the temporally accumulated structure and depth information of the stationary image parts. The current frame and the stationary scene sprite are warped to the same virtual perspective view by a backward DIBR method to tackle the visibility and resampling problems. Temporary sprites of the stationary scene, denoted TCSS and TMSS, are obtained between each input image frame I_t and its previous frame I_{t-1} to extract useful information about the occluded background. Background information revealed in past frames is stored in the CSS and MSS, which can be used to partly solve the disocclusion problem of the virtual view synthesis algorithm.
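The sprite accumulation between consecutive frames can be illustrated with a simplified update rule: a pixel that barely changes between frame I_{t-1} and I_t is treated as stationary, and a stationary pixel enters the color/depth sprites (CSS/MSS) when the sprite is still empty there or when the pixel lies farther from the camera than the stored value. The threshold, the larger-depth-means-nearer convention, and the function name are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def update_sprites(css, mss, valid, frame, depth, prev_frame, diff_thresh=3):
    """Simplified stationary-scene sprite update (CSS = color, MSS = depth).

    `valid` marks sprite pixels that already hold accumulated background.
    Stationary pixels overwrite the sprite where it is empty or where the
    new observation is farther away (smaller depth value, assuming larger
    depth values mean nearer to the camera).
    """
    change = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)).max(axis=-1)
    stationary = change <= diff_thresh
    update = stationary & (~valid | (depth < mss))
    css[update] = frame[update]
    mss[update] = depth[update]
    valid |= update
    return css, mss, valid
```

Run over the whole sequence, this keeps the farthest stationary observation at each pixel, so background uncovered by moving foreground objects in earlier frames stays available for hole filling later.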