Abstract

Accurate prediction of a building’s response to earthquakes makes it possible to evaluate building performance. To this end, we leverage recent advances in deep learning and develop a physics-guided convolutional neural network (PhyCNN) for data-driven structural seismic response modeling. The concept is to train a deep PhyCNN model on limited seismic input–output datasets (e.g., from simulation or sensing) together with physics constraints, thereby establishing a surrogate model for structural response prediction. Available physics (e.g., the laws of dynamics) can constrain the network outputs, alleviate overfitting, reduce the need for large training datasets, and thus improve the robustness of the trained model for more reliable prediction. The surrogate model is then used for fragility analysis given certain limit-state criteria. In addition, an unsupervised learning algorithm based on K-means clustering is proposed to partition the datasets into training, validation, and prediction categories, so as to maximize the use of limited data. The performance of PhyCNN is demonstrated through both numerical and experimental examples. Convincing results illustrate that PhyCNN accurately predicts a building’s seismic response in a data-driven fashion without the need for a physics-based analytical/numerical model. The PhyCNN paradigm also outperforms non-physics-guided neural networks.

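The abstract does not spell out how the physics constraints enter training, but the physics-guided idea can be sketched as a data-fit term plus a residual of the equation of motion penalizing predictions that violate the dynamics. The sketch below is illustrative only: the 1-D CNN architecture, the choice of outputs (displacement, velocity, and a normalized restoring force), the finite-difference derivatives, and names such as PhyCNNSketch and physics_guided_loss are assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PhyCNNSketch(nn.Module):
    """Hypothetical 1-D CNN surrogate: maps a ground-acceleration time series
    to predicted displacement, velocity, and normalized restoring force."""
    def __init__(self, channels=64, kernel=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel, padding=kernel // 2), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=kernel // 2), nn.ReLU(),
            nn.Conv1d(channels, 3, kernel, padding=kernel // 2),  # [u, u_dot, g]
        )

    def forward(self, ag):          # ag: (batch, 1, time)
        return self.net(ag)         # (batch, 3, time)

def physics_guided_loss(pred, target_u, ag, dt, lam=1.0):
    """Data loss on displacement plus a physics residual based on the
    equation of motion u_ddot + g = -a_g, with time derivatives taken by
    central finite differences (an illustrative choice)."""
    u, u_dot, g = pred[:, 0], pred[:, 1], pred[:, 2]
    data_loss = torch.mean((u - target_u) ** 2)

    # Finite-difference derivatives of the predicted responses.
    du = (u[:, 2:] - u[:, :-2]) / (2 * dt)            # should match u_dot
    du_dot = (u_dot[:, 2:] - u_dot[:, :-2]) / (2 * dt)  # approximates u_ddot

    phys_loss = torch.mean((du - u_dot[:, 1:-1]) ** 2) \
              + torch.mean((du_dot + g[:, 1:-1] + ag[:, 0, 1:-1]) ** 2)
    return data_loss + lam * phys_loss
```

In this sketch, lam weights the physics residual against the data-fit term; the paper's actual output variables, loss terms, and differentiation scheme may differ.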