Abstract

This paper explores a new direction for full-field structural monitoring and vibration analysis, using an emerging class of neuro-inspired vision sensors, namely event cameras. Compared to traditional frame-based cameras, event cameras offer the salient benefits of resilience to motion blur, high dynamic range, and microsecond latency. Event cameras are herein exploited for structural monitoring, in order to extract measurements of structural response that are dense in both spatial and temporal resolution. The output of an event camera is a stream of so-called "events", which differs fundamentally from traditional snapshots. Owing to this different working principle, basic computer vision algorithms, such as optical flow or feature tracking, must be redesigned to process event-based measurements. In this work, we present a novel framework, termed physics-informed sparse identification, for full-field structural vibration tracking and analysis. The framework leverages sparse identification guided by assimilation of the underlying structural dynamics into the assembly of a library matrix, which is used to characterize the system's dynamics. The stream of event data generated by the camera is sparsely represented by means of well-chosen basis functions, allowing for a physical interpretation of the system's response. The proposed framework is further extended to boundary condition learning/classification by fusing characteristic basis functions, representing different classes of support conditions, into the library matrix. The results obtained from an illustrative numerical example, as well as from experimental tests on vibrating beams recorded by an event camera, demonstrate accurate tracking of structural vibration and of the developed strains, in the form of full-field measurements rather than point-wise tracking.
Moreover, the proposed sparse learning process enables identification of the boundary conditions of monitored structural elements, which offers key benefits for structural monitoring.
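The core sparse-identification idea summarized above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual pipeline: it assumes a library matrix of harmonic basis functions and a sequentially thresholded least-squares fit (in the spirit of SINDy-type methods), applied to a synthetic two-mode vibration signal; the signal parameters, candidate frequencies, and threshold are all illustrative choices.

```python
import numpy as np

# Illustrative sketch: sparse identification of a vibration signal over a
# library of candidate basis functions, via sequentially thresholded
# least squares. All parameters below are assumptions for demonstration.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)

# Synthetic "measurement": two modal harmonics plus mild noise.
y = 1.0 * np.sin(2 * np.pi * 5 * t) + 0.4 * np.sin(2 * np.pi * 12 * t)
y += 0.01 * rng.standard_normal(t.size)

# Library matrix Theta: candidate harmonic basis functions (1..20 Hz).
freqs = np.arange(1, 21)
Theta = np.stack([np.sin(2 * np.pi * f * t) for f in freqs], axis=1)

# Sequentially thresholded least squares promotes a sparse coefficient
# vector xi, so only the physically active basis functions survive.
xi, *_ = np.linalg.lstsq(Theta, y, rcond=None)
for _ in range(10):
    small = np.abs(xi) < 0.1      # sparsity threshold (illustrative)
    xi[small] = 0.0
    big = ~small
    if big.any():
        xi[big], *_ = np.linalg.lstsq(Theta[:, big], y, rcond=None)

active = freqs[np.abs(xi) > 0]
print(active)  # frequencies retained by the sparse fit
```

With this setup, the fit retains only the two frequencies actually present in the signal, which mirrors how a well-chosen library yields a sparse, physically interpretable representation of the measured response.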
