Abstract

Video cameras have become ubiquitous data-collection devices owing to their low cost, agility, high spatial sensing resolution, and non-contact nature. This dissertation broadens and develops vision-based approaches to scientific and engineering problems, including physical law (closed-form governing equation) discovery and full-field, high-precision structural displacement measurement.

For physical law discovery: in various science and engineering disciplines, distilling physical laws from collected data has the potential to advance our understanding, modeling, and prediction of dynamical systems. Recently, the increasing richness of sensing data and advances in machine learning have given rise to a new paradigm for understanding unknown physical systems: data-driven governing equation discovery. However, almost all existing data-driven methods rely on the physical states (e.g., time-series trajectories) being given; identifying closed-form governing equations directly from raw videos remains a grand challenge. To this end, this dissertation introduces a novel end-to-end unsupervised deep learning framework to uncover the closed-form governing equations of nonlinear dynamics exhibited by moving objects in videos. In this framework, the explicit physical law is uncovered from physical states that are learned by the network rather than given, and the equation form is not prescribed either; the discovery returns both the physical trajectory and the closed-form equation governing it. The efficacy of the proposed unsupervised learning method is tested by uncovering ordinary differential equation (ODE) systems from videos. The framework is then extended to discover governing equations for dynamical systems with unknown source inputs, in which case the discovery yields the physical trajectory, the governing equation, and the unknown source input. The proposed paradigm shows the potential to uncover closed-form physical laws when scientific data are collected as video and neither the physical states nor the form of the governing equations is known.
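The abstract does not detail how the closed-form equation is extracted from the learned states; in the broader literature this step is commonly posed as sparse regression over a library of candidate terms. The following is a minimal, self-contained sketch of that generic idea on a simulated damped oscillator; the candidate library, threshold value, integration scheme, and example system are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

# Stand-in for a trajectory that would be learned from video: simulate a
# damped oscillator dv/dt = -2.0*x - 0.1*v (assumed example system).
dt = 0.01
t = np.arange(0.0, 10.0, dt)
x = np.zeros_like(t)
v = np.zeros_like(t)
x[0] = 1.0
for k in range(len(t) - 1):                 # semi-implicit Euler integration
    v[k + 1] = v[k] + dt * (-2.0 * x[k] - 0.1 * v[k])
    x[k + 1] = x[k] + dt * v[k + 1]

dv = np.gradient(v, dt)                     # finite-difference estimate of dv/dt

# Library of candidate terms; sparse regression keeps only the few that matter.
library = np.column_stack([np.ones_like(x), x, v, x**2, x * v, v**2])
names = ["1", "x", "v", "x^2", "x*v", "v^2"]

# Sequentially thresholded least squares (threshold value is an assumption).
coef = np.linalg.lstsq(library, dv, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 0.05
    coef[small] = 0.0
    keep = ~small
    if not keep.any():
        break
    coef[keep] = np.linalg.lstsq(library[:, keep], dv, rcond=None)[0]

terms = " + ".join(f"{c:.3f}*{n}" for c, n in zip(coef, names) if c != 0.0)
print("discovered: dv/dt =", terms)         # expect roughly -2.000*x + -0.100*v
```

The sparsity threshold trades completeness against parsimony: too low and spurious terms survive; too high and genuine dynamics are pruned away.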
For displacement measurement: accurate sensing of full-field displacements plays a significant role in dynamic testing for structural health monitoring (SHM). As an optical flow method, the recently proposed phase-based method has achieved great success in handling small motions; however, its complex computational procedure and sensitivity to noise limit its real-time inference capacity, and the approach can fail once the motion in the phase domain exceeds a threshold value. To address these issues, this dissertation develops a deep learning framework and a variational approach for capturing full-field, subpixel-precision structural displacements from videos. Two new convolutional neural network (CNN) architectures, SubFlowNetC and SubFlowNetS, are designed and trained on a dataset generated from a single lab-recorded high-speed video. The CNN-based framework enables real-time extraction of the motion field without a complex computational procedure, and the performance of the trained networks is tested on various videos by extracting the full-field motion (e.g., displacement time histories). The developed variational approach, pixel matching and optical flow, combines a pixel-matching algorithm derived from the traditional block-matching algorithm with optical flow, and can extract displacements regardless of their amplitude. This variational approach proves efficient in extracting large motions from videos that the phase-based approach fails to process. Finally, these vision-based sensing techniques are applied to real-world projects, including rail vibration monitoring and displacement measurement of steel bridge vibration. --Author's abstract
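To make the pixel-matching idea above concrete, here is a minimal, self-contained sketch of block matching with parabolic sub-pixel refinement. The sum-of-squared-differences criterion, the parabolic fit, and all names are illustrative assumptions; the dissertation's actual approach additionally couples the matching with an optical-flow step, which is not shown here.

```python
import numpy as np

def match_subpixel(patch, frame, top, left):
    """Locate `patch` (cut from a reference frame at (top, left)) inside
    `frame` by sum-of-squared-differences, then refine the best match with
    a 1-D parabolic fit per axis to reach sub-pixel precision."""
    ph, pw = patch.shape
    rows = frame.shape[0] - ph + 1
    cols = frame.shape[1] - pw + 1
    ssd = np.empty((rows, cols))
    for i in range(rows):                   # brute-force search, clarity over speed
        for j in range(cols):
            d = frame[i:i + ph, j:j + pw] - patch
            ssd[i, j] = np.sum(d * d)
    i0, j0 = np.unravel_index(np.argmin(ssd), ssd.shape)

    def parabolic_offset(c_minus, c0, c_plus):
        # Vertex of the parabola through three neighboring cost samples.
        denom = c_minus - 2.0 * c0 + c_plus
        return 0.0 if denom == 0.0 else 0.5 * (c_minus - c_plus) / denom

    di = parabolic_offset(ssd[i0 - 1, j0], ssd[i0, j0], ssd[i0 + 1, j0]) if 0 < i0 < rows - 1 else 0.0
    dj = parabolic_offset(ssd[i0, j0 - 1], ssd[i0, j0], ssd[i0, j0 + 1]) if 0 < j0 < cols - 1 else 0.0
    return i0 + di - top, j0 + dj - left    # (dy, dx) displacement of the patch

# Synthetic check: a Gaussian blob shifted by (+1.3, -0.6) pixels between frames.
yy, xx = np.mgrid[0:64, 0:64]
frame0 = np.exp(-((yy - 32.0) ** 2 + (xx - 32.0) ** 2) / 20.0)
frame1 = np.exp(-((yy - 33.3) ** 2 + (xx - 31.4) ** 2) / 20.0)
dy, dx = match_subpixel(frame0[24:40, 24:40], frame1, top=24, left=24)
print(f"estimated displacement: ({dy:+.2f}, {dx:+.2f})")  # roughly (+1.30, -0.60)
```

Because the integer search has no motion-amplitude ceiling, a scheme of this kind can recover large displacements where small-motion methods such as the phase-based approach break down, while the parabolic refinement supplies the sub-pixel part.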
