Neural Radiance Fields (NeRF) has gained prominence in the domain of 3D reconstruction. Despite its popularity, the NeRF algorithm typically requires sharp, static images to function effectively, leading to reduced performance in real-world scenarios with non-ideal conditions such as complex reflections, low dynamic range, dark scenes, and blur caused by camera motion or defocus. The resilience of NeRF to such blurred inputs has not been examined sufficiently, leaving a gap in current research, which also tends to neglect the role of 3D scene context in image deblurring. To address these challenges, we introduce Multi-branch Fusion Network and Prior-based Learnable Weights NeRF (MP-NeRF), a novel approach designed for accurate reconstruction of scenes from blurred images. MP-NeRF applies distinct priors to different blur types, thereby better modeling the blur formation process. It integrates a Multi-branch Fusion Network (MBFNet) with Prior-based Learnable Weights (PLW), which together enhance the capture of complex scene details, including texture and pattern information. Our experimental findings show that MP-NeRF considerably improves the visual quality of NeRF reconstructions, yielding substantial gains in Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS) over state-of-the-art models. The source code for MP-NeRF is available at https://github.com/luckhui0505/MP-NeRF.
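As a rough illustration of the prior-based learnable weighting described above, the sketch below fuses the outputs of blur-specific branches using weights initialized from a blur-type prior. The abstract does not specify the implementation, so the module name `PriorWeightedFusion`, the feature shapes, and the softmax fusion rule are assumptions made solely for illustration and are not the authors' actual architecture.

```python
# Conceptual sketch only: module names, shapes, and the fusion rule are
# assumptions for illustration, not the MP-NeRF implementation.
import torch
import torch.nn as nn


class PriorWeightedFusion(nn.Module):
    """Fuses per-branch features with learnable, prior-initialized weights."""

    def __init__(self, num_branches: int, feat_dim: int, prior: torch.Tensor = None):
        super().__init__()
        # One branch per blur type (e.g. camera-motion blur, defocus blur).
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                          nn.Linear(feat_dim, feat_dim))
            for _ in range(num_branches)
        ])
        # Learnable fusion weights, optionally initialized from a blur-type prior.
        init = prior.clone().float() if prior is not None else torch.zeros(num_branches)
        self.logits = nn.Parameter(init)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feat_dim) ray/pixel features from the NeRF backbone.
        branch_outs = torch.stack([b(x) for b in self.branches], dim=0)  # (B, batch, feat)
        w = torch.softmax(self.logits, dim=0).view(-1, 1, 1)             # normalized weights
        return (w * branch_outs).sum(dim=0)                              # fused feature


# Usage: fuse two blur-specific branches with a prior favoring motion blur.
fusion = PriorWeightedFusion(num_branches=2, feat_dim=64, prior=torch.tensor([1.0, 0.2]))
fused = fusion(torch.randn(8, 64))
print(fused.shape)  # torch.Size([8, 64])
```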