Abstract

The misalignment between observed and predicted waveforms in regional moment tensor inversion is mainly due to seismic models' incomplete representation of the Earth's heterogeneities. Current moment tensor inversion techniques that allow station-specific time-shifts to account for this model error are computationally expensive. Here, we propose a gradient-based method to jointly invert for moment-tensor parameters, centroid depth and unknown station-specific time-shifts, utilizing modern functionalities of deep learning frameworks. An $L_2^2$ misfit function between predicted synthetic and time-shifted observed seismograms is defined in the spectral domain and is differentiable with respect to all unknowns. The inverse problem is solved by minimizing the misfit function with a gradient descent algorithm. The method's feasibility, robustness and scalability are demonstrated using synthetic experiments and real earthquake data from the Long Valley Caldera, California. This work presents an example of the fresh opportunities to apply advanced computational infrastructures developed for deep learning to geophysical problems.
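As a concrete illustration of the idea (a minimal sketch, not the authors' implementation): a station time-shift $\tau$ becomes multiplication by $e^{-2\pi i f \tau}$ in the frequency domain, which is smooth in $\tau$, so automatic differentiation can propagate gradients to the time-shifts alongside the moment-tensor parameters. The sketch below uses JAX; all names (`misfit`, `invert`, the dict keys `mt` and `shifts`) are assumptions, the synthetics are modeled as a linear combination of precomputed Green's-function spectra, and centroid depth is omitted for brevity (in practice it would enter through the Green's functions).

```python
# Minimal sketch of a spectral-domain, fully differentiable misfit -- an
# assumed formulation, not the paper's code.
import jax
import jax.numpy as jnp

def misfit(params, greens, obs_fft, freqs):
    """L2^2 misfit between synthetic and time-shifted observed spectra.

    params:  dict with
      'mt'     -- 6 moment-tensor components (hypothetical parameterization)
      'shifts' -- one time-shift per station, in seconds
    greens:  (n_sta, 6, n_freq) complex Green's-function spectra
    obs_fft: (n_sta, n_freq)    complex observed spectra
    freqs:   (n_freq,)          frequencies in Hz
    """
    # Synthetic spectrum: linear in the moment-tensor components.
    syn = jnp.einsum('k,skf->sf', params['mt'], greens)
    # A time-shift tau is multiplication by exp(-2*pi*i*f*tau) in the
    # frequency domain -- smooth, hence differentiable with respect to tau.
    phase = jnp.exp(-2j * jnp.pi * freqs[None, :] * params['shifts'][:, None])
    residual = syn - obs_fft * phase
    # Real-valued L2^2 misfit; JAX differentiates through the complex
    # intermediates to all entries of params.
    return jnp.sum(jnp.abs(residual) ** 2)

# Gradient of the misfit with respect to every unknown at once.
grad_fn = jax.jit(jax.grad(misfit))

def invert(params, greens, obs_fft, freqs, lr=1e-3, steps=500):
    """Plain gradient descent over moment tensor and time-shifts jointly."""
    for _ in range(steps):
        g = grad_fn(params, greens, obs_fft, freqs)
        params = jax.tree_util.tree_map(lambda p, gp: p - lr * gp, params, g)
    return params
```

In this formulation the time-shifts are ordinary free parameters updated by the same gradient step as the source parameters, which is one way to avoid an expensive search over per-station shifts.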
