Abstract

Neural Radiance Fields (NeRFs) offer state-of-the-art quality in synthesizing novel views of complex 3D scenes from a small subset of base images. For NeRFs to perform optimally, the registration of base images has to follow certain assumptions, including maintaining a constant distance between the camera and the object. We can address this limitation by training NeRFs with 3D point clouds instead of images, yet a straightforward substitution is impossible due to the sparsity of point clouds in under-sampled regions, which leads to incomplete reconstructions produced by NeRFs. To solve this problem, we propose an auto-encoder-based architecture that leverages a hypernetwork paradigm to transfer 3D points with their associated color values through a lower-dimensional latent space and generate the weights of a NeRF model. This way, we can accommodate the sparsity of 3D point clouds and fully exploit the potential of point cloud data. As a side benefit, our method offers an implicit way of representing 3D scenes and objects that can be employed to condition NeRFs and hence generalize the models beyond objects seen during training. The empirical evaluation confirms the advantages of our method over conventional NeRFs and demonstrates its superiority in practical applications.
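To make the hypernetwork paradigm described above concrete, the sketch below shows one way such a pipeline could be wired up in PyTorch: a permutation-invariant encoder maps a colored point cloud to a latent code, and a hypernetwork maps that code to the weights of a small scene-specific NeRF MLP. All class names, layer sizes, and the pooling choice are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of the abstract's idea (assumed names and sizes, not the paper's exact model).

class PointCloudEncoder(nn.Module):
    """Encodes a colored point cloud (N x 6: xyz + rgb) into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, points):              # points: (N, 6)
        feats = self.mlp(points)             # per-point features
        return feats.max(dim=0).values       # permutation-invariant pooling -> (latent_dim,)


class HyperNeRF(nn.Module):
    """Hypernetwork mapping the latent code to the weights of a small NeRF MLP."""
    def __init__(self, latent_dim=256, hidden=64):
        super().__init__()
        # Target NeRF MLP: 5D input (position + view direction) -> RGB + density.
        self.shapes = [(hidden, 5), (hidden,), (4, hidden), (4,)]
        n_params = sum(torch.Size(s).numel() for s in self.shapes)
        self.hyper = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_params),
        )

    def forward(self, z, query):             # query: (M, 5) ray samples
        flat = self.hyper(z)                  # predicted NeRF weights
        params, i = [], 0
        for s in self.shapes:
            n = torch.Size(s).numel()
            params.append(flat[i:i + n].view(*s))
            i += n
        w1, b1, w2, b2 = params
        h = torch.relu(query @ w1.T + b1)     # functional forward pass through
        return h @ w2.T + b2                  # the generated NeRF: RGB (3) + density (1)


# Usage: one colored point cloud conditions one scene-specific NeRF.
encoder, hyper = PointCloudEncoder(), HyperNeRF()
cloud = torch.rand(2048, 6)                   # dummy xyz + rgb point cloud
z = encoder(cloud)
rgb_sigma = hyper(z, torch.rand(1024, 5))     # (1024, 4) predictions for ray samples
```

Because the NeRF weights are produced from the latent code rather than optimized per scene, the same encoder and hypernetwork can, in principle, be reused across objects, which is the generalization benefit the abstract refers to.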
