Abstract
Neural radiance fields (NeRF) have garnered significant attention for their exceptional performance in synthesizing high-quality novel view images. In this study, we propose a method that leverages the similarity between views to enhance the quality of novel view synthesis. First, a pre-trained NeRF model generates an initial novel view image; the reference view most similar to this initial image is then selected from the training dataset. We design a texture transfer module that follows a coarse-to-fine strategy, effectively integrating salient features from the reference view into the initial image to produce more realistic novel view images. By exploiting similar views, this approach not only improves the quality of novel view images but also incorporates the training dataset as a dynamic information pool in the synthesis process, allowing useful information to be continuously acquired from the training data throughout synthesis. Extensive experimental validation shows that using similar views to provide scene information significantly outperforms existing neural rendering techniques in the realism and accuracy of novel view images.
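The reference-view selection step described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual method: it compares flattened images with cosine similarity, whereas the paper may measure similarity in a learned feature space; the function name and signature are assumptions for illustration.

```python
import numpy as np

def select_reference_view(novel_view, train_views):
    """Pick the training view most similar to a rendered novel view.

    novel_view:  (H, W, C) array rendered by the pre-trained NeRF.
    train_views: list of (H, W, C) arrays from the training dataset.
    Returns the index of the most similar training view and its score.

    Note: cosine similarity of raw pixels is a stand-in metric here;
    the actual similarity measure could operate on deep features.
    """
    novel = novel_view.ravel().astype(np.float64)
    novel /= np.linalg.norm(novel) + 1e-8  # normalize to unit length
    best_idx, best_sim = -1, -np.inf
    for i, view in enumerate(train_views):
        v = view.ravel().astype(np.float64)
        v /= np.linalg.norm(v) + 1e-8
        sim = float(novel @ v)  # cosine similarity of unit vectors
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx, best_sim
```

The selected view would then be passed to the coarse-to-fine texture transfer module together with the initial rendering.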