Abstract

With the increasing popularity of wind turbines, the demand for integrity detection of wind turbines operating in natural or extreme environments is also increasing. To better assess the state of wind turbines, rendering and 3D reconstruction of realistic wind turbine models has become a crucial task. The neural radiance field has become a widely used method for novel view synthesis and 3D reconstruction. The original neural radiance field method requires the scene or object to have many features or complex textures; wind turbine surfaces, however, are smooth and texture-free, which produces blur and ghosting in the results. We therefore propose Wind Turbine Neural Radiance Fields (WTBNeRF), a network dedicated to wind turbine rendering and 3D reconstruction. Instead of single pixel-centered rays, we use conical truncated rays (cone frusta) to cover each pixel's footprint in greater detail, effectively reducing aliasing and blurring in smooth, low-texture wind turbine scenes. Obtaining accurate camera poses for low-texture objects and scenes is also challenging, so we use a pretrained camera-pose-estimating neural radiance field network to predict the camera poses of the wind turbines in the dataset, removing the need to know the true camera parameters of the data in advance. Moreover, we simplify the network structure, which greatly reduces training time: on our multi-scale wind turbine dataset, training is about 10 times faster than NeRF.

Keywords: Wind turbine blade, 3D reconstruction, Novel view synthesis
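The conical truncated rays the abstract describes are commonly implemented by approximating each pixel's cone frustum with a Gaussian and feeding its mean and covariance through an integrated positional encoding, as in mip-NeRF. The sketch below illustrates that idea under those assumptions; it is not the paper's actual code, and all function names and parameter values here are hypothetical.

```python
import numpy as np

def conical_frustum_gaussian(d, t0, t1, radius):
    """Approximate the cone frustum along ray direction d (unit norm)
    between depths t0 and t1 by a Gaussian: mean position plus variances
    along the ray (t_var) and perpendicular to it (r_var).
    mip-NeRF-style formulation; assumed, not taken from this paper."""
    mu = (t0 + t1) / 2.0   # frustum midpoint depth
    hw = (t1 - t0) / 2.0   # frustum half-width in depth
    t_mean = mu + (2.0 * mu * hw**2) / (3.0 * mu**2 + hw**2)
    t_var = (hw**2 / 3.0
             - (4.0 / 15.0) * (hw**4 * (12.0 * mu**2 - hw**2))
               / (3.0 * mu**2 + hw**2) ** 2)
    r_var = radius**2 * (mu**2 / 4.0 + (5.0 / 12.0) * hw**2
                         - (4.0 / 15.0) * hw**4 / (3.0 * mu**2 + hw**2))
    mean = d * t_mean
    return mean, t_var, r_var

def integrated_pos_enc(mean, diag_cov, num_freqs=4):
    """Integrated positional encoding: expected sin/cos features of a
    Gaussian-distributed position. The exp(-0.5 * var) factor damps
    high frequencies for wide frusta, which is what suppresses aliasing
    on smooth, low-texture surfaces such as turbine blades."""
    feats = []
    for l in range(num_freqs):
        scale = 2.0 ** l
        var = (scale ** 2) * diag_cov          # variance after scaling
        damp = np.exp(-0.5 * var)              # frequency damping term
        feats.append(damp * np.sin(scale * mean))
        feats.append(damp * np.cos(scale * mean))
    return np.concatenate(feats)
```

A narrow frustum (small `radius`, small `t1 - t0`) leaves the damping term near 1, so the encoding reduces to an ordinary positional encoding; wide frusta covering large pixel footprints suppress the high-frequency terms instead of aliasing them.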
