Abstract

Vibration measurement serves as the basis for structural damage detection, and vibration measurement and frequency estimation through image-sequence analysis continue to receive increasing attention. In this paper, we seek to learn and detect structural damage directly from videos using deep convolutional neural networks (CNNs). The key idea is to treat each pixel of an image taken from a digital camera as a sensor, extracting spatiotemporal information to capture the modal frequencies of a vibrating structure. We develop an attention-based architecture that detects subtle signals from a specific source and visualizes the dynamic properties of the structure at high resolution to infer existing structural damage. We first extract highly discriminative features from video frames using a CNN. We then feed the extracted features into a convolutional long short-term memory (ConvLSTM) network to capture the temporal dynamics of the videos. Attention mechanisms are embedded in the network so that the model learns to focus selectively on the dynamic frames across video clips. Our computer vision-based deep learning model takes the video of a vibrating structure as input and outputs a prediction of the structure's health. The proposed method is verified in several laboratory experiments, and the empirical results demonstrate that it is efficient, autonomous, and accurate, achieving acceptable prediction accuracy.
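To make the described pipeline concrete, the sketch below wires a per-frame CNN feature extractor into a ConvLSTM followed by a temporal attention layer, as the abstract outlines. This is a minimal illustration assuming Keras/TensorFlow; the clip length, frame resolution, layer sizes, and the simple dot-product attention are illustrative placeholders, not the authors' published configuration.

```python
# Minimal sketch of the described pipeline (CNN -> ConvLSTM -> attention),
# assuming Keras/TensorFlow. All sizes below are hypothetical placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

FRAMES, H, W, C = 16, 64, 64, 1  # hypothetical clip length and frame size

inputs = layers.Input(shape=(FRAMES, H, W, C))

# Per-frame CNN feature extraction (TimeDistributed applies it to each frame).
x = layers.TimeDistributed(
    layers.Conv2D(32, 3, activation="relu", padding="same"))(inputs)
x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)

# ConvLSTM captures the temporal dynamics across the frame sequence.
x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True)(x)

# Temporal attention: score each frame, softmax over time, weighted sum,
# so the model learns to focus selectively on the dynamic frames.
feats = layers.TimeDistributed(layers.GlobalAveragePooling2D())(x)  # (FRAMES, 32)
scores = layers.Dense(1)(feats)                                     # (FRAMES, 1)
weights = layers.Softmax(axis=1)(scores)                            # attention weights
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([feats, weights])

# Binary output: damaged vs. healthy structure.
outputs = layers.Dense(1, activation="sigmoid")(context)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The attention weights sum to one over the time axis, so the context vector is a convex combination of per-frame features; frames whose features score highly dominate the final prediction, which matches the abstract's goal of attending to the most dynamic frames in a clip.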
