Abstract

With the rise of the metaverse, XR technologies have attracted wide attention. However, traditional XR pipelines require high-precision explicit representations of 3D scene appearance and geometry to achieve realistic visual fusion, which demands substantial computational power and memory. Recent studies have shown that 3D scene appearance and geometry can be encoded implicitly by position-based MLPs, of which NeRF is a prominent representative. In this demo, we propose an XR tool based on NeRF that enables convenient and interactive creation of XR environments. Specifically, we first train the NeRF model of XR content using Instant-NGP to obtain an efficient implicit 3D representation. Second, we contribute a depth-aware scene understanding approach that automatically adapts to different planar surfaces for XR content placement and produces more realistic real-virtual occlusion. Finally, we propose a multi-NeRF joint rendering method to achieve natural mutual occlusion between XR contents. This demo shows the final result of our interactive XR tool.

Keywords: Mixed reality, Neural radiance fields
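Both the depth-aware real-virtual occlusion and the multi-NeRF joint rendering described above come down to per-pixel depth comparisons at composite time. The sketch below is a minimal illustration of that idea in NumPy, not the authors' implementation; the buffer names, the `composite_layers` helper, and the hard alpha threshold are all assumptions made for the example.

```python
import numpy as np

def composite_layers(real_rgb, real_depth, layers):
    """Hypothetical compositor: each layer is an (rgb, depth, alpha) render of
    one NeRF asset. The nearest sufficiently opaque sample wins at each pixel,
    which yields real-virtual occlusion against the camera's depth map and
    mutual occlusion between the NeRF contents."""
    out_rgb = real_rgb.copy()
    out_depth = real_depth.copy()
    for rgb, depth, alpha in layers:
        # A virtual sample is visible where it lies in front of the current
        # front surface and its accumulated opacity is high enough.
        visible = (depth < out_depth) & (alpha > 0.5)
        out_rgb[visible] = rgb[visible]
        out_depth[visible] = depth[visible]
    return out_rgb

# Toy usage: random buffers stand in for a camera frame, its estimated depth
# map (in metres), and two NeRF renders of placed XR content.
H, W = 480, 640
real_rgb = np.random.rand(H, W, 3)
real_depth = np.full((H, W), 3.0)
nerf_a = (np.random.rand(H, W, 3), np.full((H, W), 1.5), np.ones((H, W)))
nerf_b = (np.random.rand(H, W, 3), np.full((H, W), 2.0), np.ones((H, W)))
frame = composite_layers(real_rgb, real_depth, [nerf_a, nerf_b])
print(frame.shape)  # (480, 640, 3)
```

In this toy setup, both virtual layers sit closer than the real surface, so the nearer NeRF render (at 1.5 m) occludes the farther one (at 2.0 m), which in turn occludes the real background.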
