Abstract

Recent breakthroughs in neural radiance fields have significantly advanced novel view synthesis and 3D reconstruction from multi-view images. However, prevalent neural volume rendering techniques often suffer from long rendering times and require extensive network training. To address these limitations, recent work has explored explicit voxel representations of scenes to expedite training. Yet such methods often fall short of accurate geometric reconstruction due to the lack of an effective 3D representation. In this paper, we propose an octree-based approach to reconstructing implicit surfaces from multi-view images. Leveraging an explicit, network-free data structure, our method substantially increases rendering speed, achieving real-time performance. Moreover, our reconstruction technique yields surfaces of quality comparable to state-of-the-art network-based learning methods. The source code and data can be downloaded from https://github.com/LaoChui999/Octree-VolSDF.
