Abstract
The availability of high-quality datasets is increasingly critical in computer vision-based civil structural health monitoring, where deep learning approaches have gained prominence. However, the lack of specialized datasets for such tasks poses a significant challenge for training reliable models. To address this challenge, a framework, 3DGEN, is proposed to swiftly generate realistic synthetic 3D datasets tailored to specific tasks. The framework builds on diverse 3D civil structural models, rendering them from various angles and providing depth information and camera parameters for training neural networks. By employing mathematical methods, such as analytical solutions and/or numerical simulations, deformations of civil engineering structures can be generated, ensuring that the 3D datasets reliably represent their real-world shapes and characteristics. For texture generation, a generative 3D texturing method enables users to specify desired textures using plain English sentences. Two experiments are conducted to (1) assess the efficiency of generating 3D datasets for two distinct structures, and (2) train a monocular depth estimation network to perform 3D surface reconstruction with the generated dataset. Notably, 3DGEN is not limited to 3D surface reconstruction; it can also be used to train neural networks for various other tasks. The code and dataset are available at: https://github.com/YANDA-SHAO/Beam-Dataset-SE
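To illustrate the kind of analytically driven deformation the abstract describes, the sketch below displaces mesh vertices of a beam model using the closed-form Euler-Bernoulli deflection of a simply supported beam under a uniform load. This is a minimal illustration of the general idea, not the paper's actual pipeline; the function name and array layout are assumptions for this example.

```python
import numpy as np

def deflect_beam_vertices(vertices, length, q, E, I):
    """Displace mesh vertices by the analytical Euler-Bernoulli deflection
    of a simply supported beam under uniform load q (force per unit length).

    vertices: (N, 3) array with x along the span [0, length], y vertical.
    Returns a new (N, 3) array with the downward deflection applied to y.
    (Illustrative sketch only; not the 3DGEN implementation.)
    """
    x = vertices[:, 0]
    # Closed-form deflection curve: w(x) = q x (L^3 - 2 L x^2 + x^3) / (24 E I)
    w = q * x * (length**3 - 2.0 * length * x**2 + x**3) / (24.0 * E * I)
    out = vertices.copy()
    out[:, 1] -= w  # deflect downward
    return out
```

At midspan the deflection reduces to the familiar 5qL^4/(384EI), which gives a quick sanity check that the deformed geometry matches the analytical solution before rendering it into a dataset.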