Abstract
Notice of Violation of IEEE Publication Principles

"A Novel View Multi-view Synthesis Approach for Free Viewpoint Video"
by Yuhua Zhu
in the Proceedings of the 2009 International Joint Conference on Artificial Intelligence, April 2009, pp. 88-91

After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

This paper contains significant portions of original text from the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.

Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following article:

"Multi-view Synthesis: A Novel View Creation Approach for Free Viewpoint Video"
by Eddie Cooke, Peter Kauff, and Thomas Sikora
in Signal Processing: Image Communication, Vol. 21, No. 6, Elsevier, July 2006, pp. 476-492

Interactive audio-visual applications such as free viewpoint video (FVV) endeavour to provide unrestricted spatiotemporal navigation within a multiple camera environment. Current novel view creation approaches for scene navigation within FVV applications are either purely image-based, implying large information redundancy and dense sampling of the scene, or involve reconstructing complex 3-D models of the scene. In this paper we present a new multiple image view synthesis algorithm for novel view creation that requires only implicit scene geometry information. The multi-view synthesis approach can be used in any multiple camera environment and is scalable, as virtual views can be created given 1 to N of the available video inputs, providing a means to gracefully handle scenarios where camera inputs decrease or increase over time. The algorithm identifies and selects only the best quality surface areas from the available reference images, thereby reducing perceptual errors in virtual view reconstruction. Experimental results are provided and verified using both objective (PSNR) and subjective comparisons, and the improvements over the traditional multiple image view synthesis approach of view-oriented weighting are also presented.
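The abstract contrasts the traditional view-oriented weighting baseline with the proposed selection of the best quality surface areas from the reference images, and reports PSNR for the objective comparison. The sketch below is only a minimal illustration of that contrast under assumed conventions (angle-based weights, per-pixel quality maps, reference images already warped to the virtual viewpoint); it is not the authors' implementation, and all function names and parameters are hypothetical.

```python
import numpy as np

def view_oriented_weights(ref_angles, virtual_angle):
    """Baseline view-oriented weighting: each reference view contributes in
    inverse proportion to its angular distance from the virtual viewpoint
    (an assumed, illustrative weighting scheme)."""
    d = np.abs(np.asarray(ref_angles, dtype=float) - virtual_angle)
    w = 1.0 / (d + 1e-6)
    return w / w.sum()

def blend_views(warped_refs, weights):
    """Weighted blend of reference images already warped to the virtual view."""
    stack = np.stack(warped_refs, axis=0).astype(float)
    return np.tensordot(weights, stack, axes=1)  # (H, W, C) weighted sum

def select_best_regions(warped_refs, quality_maps):
    """Per-pixel selection of the reference with the highest quality score,
    a stand-in for the paper's best-quality surface-area criterion."""
    stack = np.stack(warped_refs, axis=0).astype(float)   # (N, H, W, C)
    quality = np.stack(quality_maps, axis=0)               # (N, H, W)
    best = np.argmax(quality, axis=0)                      # best reference per pixel
    out = np.zeros_like(stack[0])
    for i in range(stack.shape[0]):
        mask = best == i
        out[mask] = stack[i][mask]
    return out

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio used for the objective comparison."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Under this sketch, the view-oriented baseline blends every warped reference regardless of local quality, while the selection step keeps only the best-scoring reference per region, which is the behaviour the abstract credits with reducing perceptual errors in the reconstructed virtual view.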