Abstract

Computer vision-based displacement measurement methods have received increasing attention in the structural health monitoring of buildings and infrastructure owing to their advantages over traditional contact sensors. Meanwhile, the surveillance cameras widely deployed in urban areas record large numbers of images and videos of buildings and infrastructure, which have the potential to support structural analysis in structural health monitoring and engineering investigations. The three-dimensional (3D) displacement of a structure is important for structural analysis, but obtaining all three displacement components is challenging for existing vision-based methods, which require either multi-view camera systems or additional specially designed targets and therefore cannot easily satisfy measurement applications based on urban surveillance cameras. This study therefore proposes a 3D structural displacement measurement method using monocular vision and deep learning-based pose estimation. The method synthesizes a training set by virtual rendering from 3D models of the target objects, trains the deep learning model DPOD (Dense Pose Object Detector) to estimate the pose of the target object, and measures the 3D translation of the structure either from the original and destination poses or from the original pose combined with keypoint matching. The effectiveness of the proposed method was validated through static and dynamic experiments. The results showed that the method meets the need for 3D structural displacement measurement and identifies the principal frequencies of dynamic responses with good accuracy. The proposed method can support 3D displacement measurement of buildings and infrastructure based on urban surveillance cameras.
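
The core displacement computation described above can be sketched in a few lines. This is an illustrative example, not the authors' implementation: it assumes a fixed camera and a pose estimator (such as DPOD) that returns an object-to-camera translation vector for each frame, so the 3D structural displacement reduces to the difference between two estimated translations. The function name and the numeric poses are hypothetical.

```python
import numpy as np

def displacement_from_poses(t_original, t_destination):
    """3D displacement (in the camera frame) between two estimated
    object-to-camera translation vectors, assuming a fixed camera."""
    return np.asarray(t_destination, dtype=float) - np.asarray(t_original, dtype=float)

# Illustrative translations in metres: the structure shifts 5 mm along x
# and 2 mm along z between the original and destination poses.
t0 = [1.200, 0.350, 10.000]
t1 = [1.205, 0.350, 10.002]

d = displacement_from_poses(t0, t1)
print(d)
```

In a dynamic test, applying this per-frame difference to a sequence of estimated poses yields a displacement time history, from which principal frequencies can be identified by spectral analysis.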
