Abstract
Over the last decades, geophysicists have developed numerical simulators to predict earthquakes and other natural catastrophes. However, the more precise the model, the higher the computational burden and the longer the time to results. In addition, even if we could reproduce the phenomenon with more complex and more representative models, the underlying uncertainty would remain significantly high, affecting the reliability of the final prediction. In response to this challenge, we adopted a hybrid strategy that combines physics-based numerical simulations with machine learning. The goal is to transform synthetic earthquake ground motion, obtained via physics-based simulation and accurate up to a frequency of 5 Hz, into a broader-band prediction that mimics recorded seismograms. In doing so, we factorize the latent representation of the seismic signal by forcing an encoding that splits features into two parts: a low-frequency one (0-1 Hz) and a high-frequency one (1-20 Hz). We then train a convolutional U-Net neural network and apply two different signal-to-signal translation techniques: pix2pix and BiCycleGAN. These strategies are compared with the prior work of Gatti et al. (2020) on the Stanford Earthquake Dataset (STEAD), showing their capability to mimic recorded seismograms. Finally, we test the two strategies on the synthetic time histories obtained for the 2019 Le Teil earthquake (France).
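To illustrate the factorized latent representation mentioned above, the following is a minimal sketch (not the authors' implementation) of a 1-D convolutional U-Net whose bottleneck is split into a "low-frequency" half and a "high-frequency" half. The class name, channel counts, kernel sizes, and trace length are assumptions made for illustration only.

```python
# Hypothetical sketch: a 1-D U-Net for signal-to-signal translation with a
# factorized bottleneck. All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two same-padding 1-D convolutions with ReLU activations."""
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=7, padding=3),
        nn.ReLU(inplace=True),
        nn.Conv1d(out_ch, out_ch, kernel_size=7, padding=3),
        nn.ReLU(inplace=True),
    )


class SplitLatentUNet1D(nn.Module):
    """U-Net whose latent code is split into low- and high-frequency parts."""

    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, 2 * base)
        self.pool = nn.MaxPool1d(2)
        self.bottleneck = conv_block(2 * base, 4 * base)
        self.up2 = nn.ConvTranspose1d(4 * base, 2 * base, kernel_size=2, stride=2)
        self.dec2 = conv_block(4 * base, 2 * base)
        self.up1 = nn.ConvTranspose1d(2 * base, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(2 * base, base)
        self.head = nn.Conv1d(base, in_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full-resolution features
        e2 = self.enc2(self.pool(e1))       # half resolution
        z = self.bottleneck(self.pool(e2))  # latent code

        # Factorize the latent: the first half of the channels is intended to
        # encode the 0-1 Hz content, the second half the 1-20 Hz content
        # (an assumption standing in for the encoding described in the abstract).
        z_low, z_high = torch.chunk(z, 2, dim=1)
        z = torch.cat([z_low, z_high], dim=1)

        d2 = self.dec2(torch.cat([self.up2(z), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1), z_low, z_high


if __name__ == "__main__":
    # Single-channel synthetic ground-motion trace; length divisible by 4.
    model = SplitLatentUNet1D()
    trace = torch.randn(8, 1, 6000)
    broadband, z_low, z_high = model(trace)
    print(broadband.shape, z_low.shape, z_high.shape)
```

In a pix2pix- or BiCycleGAN-style setup, such a generator would be trained adversarially against recorded broadband seismograms, with the two latent halves supervised or regularized separately; the sketch above only shows the factorized encoder-decoder backbone.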