Abstract
Vibration-based condition monitoring relies on capturing spatio-temporal characteristics (STCs), such as resonant frequencies and operational deflection shapes, to assess the health of a system. Optical techniques have gained popularity in this field, but noise in the optical data can affect the accuracy of the extracted STCs. To address this issue, ad hoc denoising methods such as total variation denoising (TVD) and deep learning-based algorithms have been used; however, these methods are often specific to particular applications. This study proposes a robust time-inferred autoencoder (TIA) framework that preserves a system’s STCs while denoising its optically collected response. The TIA model is trained on videos of an undamaged vibrating structure to learn its underlying STCs and is then used to reconstruct the dynamic response of damaged configurations of the same structure. The performance of TIA in reconstructing the dynamic response and denoising the data is compared with that of CNN-based autoencoders and TVD. In laboratory tests, TIA extracted the STCs with an accuracy of approximately 94%, outperforming CNN-based autoencoders by around 40%, while achieving denoising accuracy comparable to that of TVD. Unlike TVD, however, TIA offers greater flexibility and automation, making it case-independent: once the TIA model is trained, it does not require manual selection or updating of a regularization term when the input dataset changes. Further development of the TIA framework could enhance its capabilities and enable its broader application as a robust tool for condition monitoring, contributing to improved system health assessment.
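To make the train-on-undamaged, reconstruct-damaged workflow concrete, the sketch below shows a generic temporal (LSTM-based) denoising autoencoder in PyTorch. Everything here is an illustrative assumption: the architecture, the `TemporalAutoencoder` name, the tensor shapes, and the synthetic signal are not the authors' TIA model, only a minimal stand-in for the same idea of learning the response of the undamaged structure and then reusing the trained model on new measurements.

```python
# Minimal sketch (assumed, not the paper's TIA architecture): a temporal
# denoising autoencoder trained on responses of the undamaged structure,
# then applied unchanged to later (e.g. damaged-configuration) measurements.
import torch
import torch.nn as nn

class TemporalAutoencoder(nn.Module):
    """Encode a displacement time series and reconstruct a denoised version."""
    def __init__(self, n_points=64, latent=16):
        super().__init__()
        self.encoder = nn.LSTM(n_points, latent, batch_first=True)
        self.decoder = nn.LSTM(latent, n_points, batch_first=True)

    def forward(self, x):            # x: (batch, time, n_points)
        z, _ = self.encoder(x)       # latent sequence: (batch, time, latent)
        y, _ = self.decoder(z)       # reconstruction: (batch, time, n_points)
        return y

# Synthetic stand-in for optically measured responses of the undamaged
# structure: a sinusoidal response per measurement point plus sensor noise.
torch.manual_seed(0)
t = torch.linspace(0, 1, 200)
clean = torch.sin(2 * torch.pi * 12 * t)[None, :, None].repeat(32, 1, 64)
noisy = clean + 0.2 * torch.randn_like(clean)

model = TemporalAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):              # training loop, shortened for illustration
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()

# Once trained on the undamaged structure, the same model can be applied to
# noisy responses of other configurations without retraining or retuning.
with torch.no_grad():
    denoised = model(noisy)
```

For a TVD baseline, a library call such as `skimage.restoration.denoise_tv_chambolle(frame, weight=0.1)` makes the case-dependence explicit: the regularization weight must be chosen by hand for each dataset, which is exactly the manual tuning the trained autoencoder avoids.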