Abstract

Efficiently, accurately, and realistically reconstructing large-scale 3D orchard scenes in a virtual world is an immensely challenging task, owing to the intricate and expansive nature of real orchard scenes. Traditional 3D reconstruction and rendering methods are limited in modeling efficiency and computational cost, hindering their ability to provide users with immersive experiences. To address these challenges, this study introduces a 3D scene reconstruction and rendering strategy grounded in implicit neural representation: the NeRF-Ag model. Building upon the baseline NeRF, the model integrates a multi-resolution latent feature encoding technique that markedly improves training efficiency and modeling precision; environmental factor embedding further enhances the model's robustness and practical applicability. Experimental results show that NeRF-Ag attains photo-realistic rendering at small, medium, and large scales and surpasses NeRF on the PSNR, SSIM, and LPIPS evaluation metrics; notably, NeRF-Ag trains roughly 39 times faster than NeRF. In 3D reconstruction tasks, NeRF-Ag exhibits richer texture detail and higher modeling accuracy than the COLMAP-based 3D reconstruction method. This study also achieves free-viewpoint rendering of 3D scenes with NeRF-Ag and provides evidence of the relationship between the number of training images and the precision of 3D rendering. These conclusions will support and inform the implementation of immersive visual interaction features in agricultural digital twin systems.
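The abstract does not specify NeRF-Ag's exact formulation of the multi-resolution latent feature encoding. As a hedged illustration only, the sketch below shows one common realization of the idea, a multi-resolution hash-grid encoding in the spirit of Instant-NGP, where a 3D point is encoded by trilinearly interpolating learned feature vectors from several grids of increasing resolution; all function names, resolutions, and hash constants here are assumptions, not details from the paper.

```python
# Hypothetical sketch of multi-resolution latent feature encoding.
# NeRF-Ag's actual design may differ; this follows the widely used
# hash-grid scheme (Instant-NGP style) as a plausible stand-in.
import numpy as np

def hash_coords(ix, iy, iz, table_size):
    # Spatial hash of integer grid coordinates (large-prime XOR hash).
    return (ix * 1 ^ iy * 2654435761 ^ iz * 805459861) % table_size

def encode(xyz, tables, base_res=16, growth=1.5):
    """Concatenate trilinearly interpolated features from L hash grids.

    xyz    : point in [0, 1]^3
    tables : list of (table_size, n_features) learned feature arrays,
             one per resolution level
    """
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)   # grid resolution at this level
        g = xyz * res                           # scale point into the grid
        g0 = np.floor(g).astype(int)            # lower corner of the cell
        w = g - g0                              # trilinear weights
        acc = np.zeros(table.shape[1])
        for dx in (0, 1):                       # visit the 8 cell corners
            for dy in (0, 1):
                for dz in (0, 1):
                    idx = hash_coords(g0[0] + dx, g0[1] + dy, g0[2] + dz,
                                      len(table))
                    wt = ((w[0] if dx else 1 - w[0]) *
                          (w[1] if dy else 1 - w[1]) *
                          (w[2] if dz else 1 - w[2]))
                    acc += wt * table[idx]
        feats.append(acc)
    return np.concatenate(feats)                # fed to a small MLP in NeRF

# Usage: 4 levels, 2^14-entry tables, 2 features each -> 8-dim encoding
rng = np.random.default_rng(0)
tables = [rng.normal(size=(2**14, 2)) for _ in range(4)]
vec = encode(np.array([0.3, 0.7, 0.5]), tables)
print(vec.shape)  # (8,)
```

Because the per-level tables are small and the MLP that consumes the concatenated encoding can be correspondingly tiny, this family of encodings is what typically enables the order-of-magnitude training speedups the abstract reports over the original positional-encoding NeRF.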
