Abstract. Over the last few years, implicit 3D representations have attracted increasing research effort, typified by the so-called Neural Radiance Fields (NeRF). The original NeRF and most of its variants address small-scale scenes (e.g., indoor scenes or small objects), where they already achieve good novel-view rendering results. Handling a wide-coverage area captured by a large number of high-resolution images remains challenging, however, as both time efficiency and rendering quality are generally limited. To cope with large-scale scenarios, Mega-NeRF was recently proposed to divide the area into several overlapping sub-areas and train a corresponding sub-NeRF for each. Mega-NeRF trains its sub-modules in parallel and entirely independently of each other, which may in principle not be optimal: two adjacent sub-NeRFs trained in parallel are likely to produce different rendering results for the overlapping area, which is expected to negatively affect the final rendering. Therefore, we present Mega-NeRF++, which improves Mega-NeRF with an additional sub-model optimization stage that alleviates the rendering discrepancy between overlapping sub-NeRFs. More specifically, we fine-tune the original Mega-NeRF sub-models with training data drawn only from the overlapping regions, accounting for the consistency of adjacent sub-models there, and we propose a novel loss that considers not only the difference between each sub-model's prediction and the ground truth, but also the consistency of the predictions of adjacent sub-models in the overlapping region. The experimental results show that, for the overlapping areas, our Mega-NeRF++ qualitatively renders images with higher fidelity and quantitatively achieves higher PSNR and SSIM compared to the original Mega-NeRF.
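The abstract does not give the exact formulation of the proposed loss. A minimal sketch of one plausible form follows, assuming an MSE photometric term per sub-model plus a pairwise consistency term on rays from the overlap region; the function name and the weight lambda_consistency are hypothetical, not taken from the paper.

```python
import torch

def overlap_finetune_loss(pred_a: torch.Tensor,
                          pred_b: torch.Tensor,
                          target: torch.Tensor,
                          lambda_consistency: float = 0.1) -> torch.Tensor:
    """Hypothetical sketch of a Mega-NeRF++-style fine-tuning loss.

    pred_a, pred_b: RGB colors rendered for the same overlap-region rays
        by two adjacent sub-NeRFs, shape (num_rays, 3).
    target: ground-truth pixel colors for those rays, shape (num_rays, 3).
    """
    # Photometric term: each sub-model's prediction should match the ground truth.
    photometric = (torch.mean((pred_a - target) ** 2)
                   + torch.mean((pred_b - target) ** 2))
    # Consistency term: adjacent sub-models should agree on overlap-region rays.
    consistency = torch.mean((pred_a - pred_b) ** 2)
    return photometric + lambda_consistency * consistency
```

Under this reading, setting lambda_consistency to zero would recover ordinary per-sub-model fine-tuning, while larger values trade some per-model fidelity for agreement across the seam.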