Traditional 3D reconstruction of unmanned aerial vehicle (UAV) images often relies on classical multi-view techniques, which follow a sequential pipeline of feature extraction, matching, depth fusion, point cloud integration, and mesh creation. These steps, particularly feature extraction and matching, are intricate and time-consuming, and as the number of steps grows, cumulative error is progressively amplified. In addition, these methods typically use explicit representations, which can produce discontinuous models and missing data during reconstruction. To address the long processing times, missing elements, and fragmented models inherent in 3D reconstruction from UAV imagery, an alternative approach is introduced: the neural radiance field. This method uses a neural network to fit the spatial information of the scene, streamlining the reconstruction steps and remedying model deficiencies. The neural radiance field employs a fully connected neural network to model object surfaces and directly generate the 3D object model, simplifying the conventional reconstruction pipeline. Because the scene is represented implicitly, the network parameters can be refined iteratively through volume rendering. Experimental results substantiate the efficacy of this approach: scene reconstruction completes within 5 min, reducing reconstruction time by 90% while markedly enhancing reconstruction quality.
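As a minimal sketch of the volume rendering step mentioned above, the standard neural-radiance-field quadrature composites the densities and colours that the fully connected network predicts along a camera ray into a single pixel colour; that rendered pixel is then compared against the photograph to refine the network parameters. The function name and the fixed sample spacing here are illustrative, not part of the original method description.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite samples along one ray (NeRF-style volume rendering).

    sigmas: (N,)   volume densities predicted by the MLP at N ray samples
    colors: (N, 3) RGB values predicted at those samples
    deltas: (N,)   distances between adjacent samples
    Returns the rendered pixel colour as a (3,) array.
    """
    # Opacity contributed by each ray segment.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: fraction of light surviving to each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Per-sample contribution weights, then weighted colour sum.
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Example: an effectively opaque red sample hides a green one behind it.
pixel = composite_ray(
    sigmas=np.array([1e9, 0.0]),
    colors=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
    deltas=np.array([1.0, 1.0]),
)
# pixel ≈ [1, 0, 0]
```

Because this rendering is differentiable in the predicted densities and colours, the photometric loss against the captured UAV image can be backpropagated to update the network, which is the iterative refinement the text refers to.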