Abstract

Double-difference (DD) seismic data are widely used to define elasticity distribution in the Earth's interior and its variation in time. DD data are often pre-processed from earthquake recordings through expert opinion, whereby pairs of earthquakes are selected based on some user-defined criteria and DD data are computed from the selected pairs. We develop a novel methodology for preparing DD seismic data based on a trans-dimensional algorithm, without imposing pre-defined criteria on the selection of event pairs. We apply it to a seismic database recorded on the flank of Katla volcano (Iceland), where elasticity variations in time have been indicated. Our approach quantitatively defines the presence of changepoints that separate the seismic events in time windows. Within each time window, the DD data are consistent with the hypothesis of time-invariant elasticity in the subsurface, and DD data can be safely used in subsequent analysis. Due to the parsimonious behaviour of the trans-dimensional algorithm, only changepoints supported by the data are retrieved. Our results indicate the following: (a) retrieved changepoints are consistent with first-order variations in the data (i.e. most striking changes in the amplitude of DD data are correctly reproduced in the changepoint distribution in time); (b) changepoint locations in time correlate neither with changes in seismicity rate nor with changes in waveform similarity (measured through the cross-correlation coefficients); and (c) the changepoint distribution in time seems to be insensitive to variations in the seismic network geometry during the experiment. Our results demonstrate that trans-dimensional algorithms can be effectively applied to pre-processing of geophysical data before the application of standard routines (e.g. before using them to solve standard geophysical inverse problems).
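
The windowing logic described in the abstract is straightforward to express in code. The sketch below is a minimal illustration, not the authors' implementation, and all names and the data layout are hypothetical: given a sorted list of changepoint times, it assigns each event to a time window and keeps only event pairs whose members fall in the same window, so that their DD datum is consistent with the time-invariant-elasticity hypothesis.

```python
import bisect

def window_index(changepoints, t):
    """Index of the time window containing origin time t,
    given changepoint times sorted in increasing order."""
    return bisect.bisect_right(changepoints, t)

def same_window_pairs(pairs, origin_times, changepoints):
    """Keep only event pairs whose origin times fall between the same
    two changepoints; DD data from such pairs are not biased by a
    change of elastic properties between the two events."""
    return [(a, b) for a, b in pairs
            if window_index(changepoints, origin_times[a])
            == window_index(changepoints, origin_times[b])]
```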

Highlights

  • Double-difference (DD) data are often pre-processed from earthquake recordings through expert opinion, whereby pairs of earthquakes are selected based on some user-defined criteria and DD data are computed from the selected pairs (see the code sketch after these highlights)

  • Our results indicate the following: (a) retrieved changepoints are consistent with first-order variations in the data; (b) changepoint locations in time correlate neither with changes in seismicity rate nor with changes in waveform similarity; and (c) notably, the changepoint distribution in time seems to be insensitive to variations in the seismic network geometry during the experiment

  • We remark that the increased location uncertainties due to the network geometry change in January 2012 may affect the quality of the event locations and the determination of their origin times (OTs), which is relevant for computing uncertainties in the DD data
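
As a concrete illustration of the first highlight, the sketch below computes DD residuals for all event pairs recorded at a minimum number of common stations. It is a hedged, minimal example: the data layout, the function names, and the `min_common` threshold (which stands in for the user-defined selection criteria mentioned above) are assumptions, not the paper's implementation.

```python
from itertools import combinations

def double_differences(obs, pred, min_common=4):
    """DD residuals for every event pair at their common stations.

    obs, pred: dict event_id -> dict station -> arrival time, observed
    and predicted from a reference velocity model (illustrative layout).
    Returns: dict (ev_a, ev_b) -> dict station -> DD residual.
    """
    dd = {}
    for a, b in combinations(sorted(obs), 2):
        common = obs[a].keys() & obs[b].keys()
        if len(common) < min_common:
            continue  # expert-style criterion: skip poorly linked pairs
        dd[(a, b)] = {s: (obs[a][s] - obs[b][s]) - (pred[a][s] - pred[b][s])
                      for s in common}
    return dd
```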



Introduction

Data preparation is a daily routine in the worklife of geoscientists. Before using data to gain insights into the Earth system, geoscientists try to understand their datasets in depth, to avoid introducing, e.g., instrumental issues, redundant data, or unwanted structures such as data density anomalies (Yin and Pillet, 2006; Berardino et al., 2002; Lohman and Simons, 2005). Another interesting, recent case of exploration of the data space is the work of Tilmann et al. (2020), where the authors used Bayesian inference to separate the data into two sets: “outliers” and “regular”.

We represent data structure as partitions of the covariance matrix of errors, i.e. changepoints that create sub-matrices of the covariance matrix with homogeneous characteristics. The number of partitions is not dictated by the user but is derived from the data themselves, in a Bayesian sense (i.e. we obtain a posterior probability distribution, PPD, of the number of partitions). Similarly, the number of hyper-parameters considered is not fixed, but can assume two different values (1 or 2), depending on the error model considered. In this way, similar to Tilmann et al. (2020), portions of the data can be classified and used differently in the subsequent steps of the analysis. We show how such a data-driven approach can obviate the need for expert-driven data selection and can serve as a preliminary tool for, e.g., time-lapse seismic tomography.
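
To make the trans-dimensional idea concrete, here is a minimal changepoint sampler in the spirit described above. It is a sketch under stated assumptions, not the paper's algorithm (the paper's one- or two-hyper-parameter error models are not reproduced): data errors are modelled as independent Normal segments with a conjugate Normal-Inverse-Gamma prior, so each segment's mean and variance are integrated out analytically; the prior on the number of changepoints is Poisson; and the sampler uses a symmetric "toggle" move over candidate positions, which changes model dimension without requiring a Jacobian term. The marginal likelihood acts as a natural Occam factor, which is what produces the parsimonious behaviour mentioned in the abstract.

```python
import math
import numpy as np

def log_marginal(x, mu0=0.0, kappa0=1.0, a0=1.0, b0=1.0):
    """Log marginal likelihood of one segment under a Normal model with a
    conjugate Normal-Inverse-Gamma prior (mean and variance integrated out)."""
    m = len(x)
    xbar = x.mean()
    kappa_n = kappa0 + m
    a_n = a0 + 0.5 * m
    b_n = (b0 + 0.5 * ((x - xbar) ** 2).sum()
           + 0.5 * kappa0 * m * (xbar - mu0) ** 2 / kappa_n)
    return (-0.5 * m * math.log(2.0 * math.pi)
            + 0.5 * math.log(kappa0 / kappa_n)
            + a0 * math.log(b0) - a_n * math.log(b_n)
            + math.lgamma(a_n) - math.lgamma(a0))

def log_posterior(y, cps, lam=2.0):
    """Unnormalised log posterior of a changepoint set `cps` (interior
    indices 1..n-1): Poisson(lam) prior on the number of changepoints,
    uniform prior on their positions, independent Normal segments."""
    n, k = len(y), len(cps)
    lp = k * math.log(lam) - math.lgamma(k + 1)                     # Poisson, up to a constant
    lp -= math.lgamma(n) - math.lgamma(k + 1) - math.lgamma(n - k)  # minus log C(n-1, k)
    bounds = [0] + sorted(cps) + [n]
    return lp + sum(log_marginal(y[lo:hi]) for lo, hi in zip(bounds[:-1], bounds[1:]))

def sample_changepoints(y, n_iter=20000, seed=0):
    """Metropolis sampler over changepoint configurations: each step toggles
    one candidate position on/off. The move is symmetric, so the acceptance
    probability is just the posterior ratio, even across dimensions."""
    rng = np.random.default_rng(seed)
    cps, lp = set(), log_posterior(y, set())
    trace = np.empty(n_iter, dtype=int)
    for i in range(n_iter):
        prop = cps ^ {int(rng.integers(1, len(y)))}   # add or remove one changepoint
        lp_prop = log_posterior(y, prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            cps, lp = prop, lp_prop
        trace[i] = len(cps)
    return trace, cps

# Toy usage: two regimes in synthetic "DD-like" data; the trace (after
# burn-in) approximates the PPD of the number of changepoints.
y = np.concatenate([np.random.default_rng(1).normal(0.0, 0.1, 200),
                    np.random.default_rng(2).normal(0.4, 0.1, 200)])
trace, cps = sample_changepoints(y)
print(np.bincount(trace[5000:]))   # posterior counts of k = 0, 1, 2, ...
```

Recomputing the full posterior at every step keeps the sketch short; an efficient implementation would update only the marginal likelihoods of the segments affected by the toggled position.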

Double difference data in seismology
Background on Bayesian inference, Markov chain Monte Carlo sampling and trans-dimensional algorithms
Data uncertainties from full-waveform investigation
Model parameterization
Recipe
Prior information
Discussion
Conclusions