Neural Radiance Fields (NeRF) have emerged as a powerful paradigm for scene representation, offering high-fidelity renderings and reconstructions from sparse, unstructured sensor data. In autonomous robotics, where perception and understanding of the environment are pivotal, NeRF holds immense promise for improving performance. However, few surveys have discussed this potential. To fill this gap, we have collected over 200 papers published since the original NeRF paper in 2020 and present a thorough analysis of how NeRF can enhance the capabilities of autonomous robots. We focus in particular on the perception, localization and navigation, and decision-making modules of autonomous robots, and delve into tasks crucial for autonomous operation, including 3D reconstruction, segmentation, pose estimation, simultaneous localization and mapping (SLAM), navigation and planning, and interaction. Our survey meticulously benchmarks existing NeRF-based methods, comparing their reported performance and providing insights into their strengths and limitations. Moreover, we examine the challenges of applying NeRF to autonomous robots, including real-time processing and sparse input views, and explore promising avenues for future research and development in this domain. In particular, we discuss the potential of integrating advanced deep learning techniques such as 3D Gaussian splatting, large language models, and generative artificial intelligence. This survey serves as a roadmap for researchers seeking to leverage NeRF to empower autonomous robots, paving the way for innovative solutions that can navigate and interact seamlessly in complex environments.