Abstract

Double-difference (DD) seismic data are widely used to define elasticity distribution in the Earth's interior and its variation in time. DD data are often pre-processed from earthquake recordings through expert opinion, whereby pairs of earthquakes are selected based on some user-defined criteria and DD data are computed from the selected pairs. We develop a novel methodology for preparing DD seismic data based on a trans-dimensional algorithm, without imposing pre-defined criteria on the selection of event pairs. We apply it to a seismic database recorded on the flank of Katla volcano (Iceland), where elasticity variations in time have been indicated. Our approach quantitatively defines the presence of changepoints that separate the seismic events in time windows. Within each time window, the DD data are consistent with the hypothesis of time-invariant elasticity in the subsurface, and DD data can be safely used in subsequent analysis. Due to the parsimonious behaviour of the trans-dimensional algorithm, only changepoints supported by the data are retrieved. Our results indicate the following: (a) retrieved changepoints are consistent with first-order variations in the data (i.e. most striking changes in the amplitude of DD data are correctly reproduced in the changepoint distribution in time); (b) changepoint locations in time correlate neither with changes in seismicity rate nor with changes in waveform similarity (measured through the cross-correlation coefficients); and (c) the changepoint distribution in time seems to be insensitive to variations in the seismic network geometry during the experiment. Our results demonstrate that trans-dimensional algorithms can be effectively applied to pre-processing of geophysical data before the application of standard routines (e.g. before using them to solve standard geophysical inverse problems).
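For readers less familiar with DD observables, the sketch below illustrates one common way such data are measured: the differential travel time between two events recorded at the same station is estimated from the lag that maximizes the cross-correlation of their waveforms, which also yields the similarity (CC) coefficient mentioned above. This is a generic illustration, not the authors' processing pipeline; the helper name `dd_from_pair` and the synthetic wavelet are hypothetical.

```python
import numpy as np

def dd_from_pair(w1, w2, dt):
    """Estimate a differential travel time between two events recorded at the
    same station from the lag maximizing their cross-correlation.
    Hypothetical helper for illustration only (not the paper's code).
    Returns the relative delay (t1 - t2, in seconds) and the peak correlation."""
    a = w1 - w1.mean()
    b = w2 - w2.mean()
    a /= np.linalg.norm(a) + 1e-12            # unit-energy normalisation, so the
    b /= np.linalg.norm(b) + 1e-12            # peak approximates the CC coefficient
    cc = np.correlate(a, b, mode="full")      # cross-correlation at all lags
    k = int(np.argmax(cc))
    lag = k - (len(b) - 1)                    # signed lag in samples
    return lag * dt, cc[k]

# Example: two noisy copies of the same Gaussian pulse, the second delayed by 0.02 s
t = np.arange(0.0, 2.0, 0.01)
pulse = np.exp(-((t - 1.0) / 0.05) ** 2)
rng = np.random.default_rng(1)
w1 = pulse + 0.01 * rng.standard_normal(t.size)
w2 = np.roll(pulse, 2) + 0.01 * rng.standard_normal(t.size)
delay, cc = dd_from_pair(w1, w2, dt=0.01)     # delay ≈ -0.02 s, cc ≈ 1
```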

Highlights

  • Data preparation is a daily routine in the working life of geoscientists

  • In simple cases, our algorithm generally performs as well as expert opinion, which confirms the overall performance of our methodology

  • This observation is not totally unexpected, since the two time series are based on different seismological observables. It suggests that care should be taken when investigating time variations of elasticity retrieved from methodologies based on cross-correlation, and that approaches using variations in seismicity rate as a proxy for “rock instabilities” should be re-assessed (Dou et al, 2018)



Introduction

Data preparation is a daily routine in the working life of geoscientists. Before using data to gain insights into the Earth system, geoscientists try to deeply understand their data sets to avoid introducing, e.g., instrumental issues, redundant data, unwanted structures such as data-density anomalies, and many others (Yin and Pillet, 2006; Berardino et al, 2002; Lohman and Simons, 2005). Another interesting, recent case of exploration of the data space is the work of Tilmann et al (2020), in which the authors used Bayesian inference to separate the data into two sets, “outliers” and “regular”; there, the number of hyper-parameters considered is not fixed but can assume two different values (1 or 2) depending on the error model considered.

We represent data structure as partitions of the covariance matrix of uncertainties, i.e. changepoints that create sub-matrices of the covariance matrix with homogeneous characteristics, with the number of partitions not dictated by the user but derived from the data themselves in a Bayesian sense (i.e. we obtain a posterior probability distribution, PPD, of the number of partitions). In this way, similar to Tilmann et al (2020), portions of data can be classified and used differently in the subsequent steps of the analysis. We show how a more data-driven approach can obviate expert-driven data selection and can be used as a preliminary tool for, e.g., time-lapse seismic tomography.
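To make the partitioning idea concrete, the sketch below implements a minimal trans-dimensional (birth/death/shift) changepoint sampler for a piecewise-constant time series with a known noise level; segment means are integrated out analytically, and uniform priors on the number and positions of changepoints, together with uniform proposals, make the acceptance probability reduce to a ratio of marginal likelihoods. This is an illustration of the class of algorithm described here, not the authors' implementation; the function names (`rj_changepoints`, `log_marginal_segment`), the Gaussian error model, and the prior/proposal choices are assumptions.

```python
import numpy as np

def log_marginal_segment(y, sigma=1.0, tau=10.0):
    """Log marginal likelihood of one segment under y_i = mu + noise,
    noise ~ N(0, sigma^2), with the segment mean mu ~ N(0, tau^2)
    integrated out analytically (conjugate Gaussian prior)."""
    m, s1, s2 = y.size, y.sum(), np.sum(y ** 2)
    v = sigma ** 2 + m * tau ** 2
    return (-0.5 * m * np.log(2.0 * np.pi * sigma ** 2)
            - 0.5 * np.log(v / sigma ** 2)
            - 0.5 * s2 / sigma ** 2
            + 0.5 * tau ** 2 * s1 ** 2 / (sigma ** 2 * v))

def log_marginal(y, cps, **kw):
    """Sum of segment log marginals for a set of changepoint indices."""
    edges = [0] + sorted(cps) + [y.size]
    return sum(log_marginal_segment(y[a:b], **kw) for a, b in zip(edges[:-1], edges[1:]))

def rj_changepoints(y, n_iter=20000, k_max=20, seed=0, **kw):
    """Minimal birth/death/shift trans-dimensional sampler over the number and
    positions of changepoints.  With uniform priors on k and on changepoint
    positions, and uniform proposals, the prior and proposal ratios cancel, so
    the acceptance probability is just the marginal-likelihood ratio."""
    rng = np.random.default_rng(seed)
    n = y.size
    cps, logL = set(), log_marginal(y, set(), **kw)
    k_trace, cp_freq = [], np.zeros(n)
    for _ in range(n_iter):
        move = rng.integers(3)                              # 0: birth, 1: death, 2: shift
        free = [i for i in range(1, n) if i not in cps]
        prop = set(cps)
        if move == 0 and len(cps) < k_max and free:
            prop.add(int(rng.choice(free)))                 # birth: add a changepoint
        elif move == 1 and cps:
            prop.discard(int(rng.choice(sorted(cps))))      # death: delete one
        elif move == 2 and cps and free:
            prop.discard(int(rng.choice(sorted(cps))))      # shift: relocate one
            prop.add(int(rng.choice(free)))
        if prop != cps:
            logL_prop = log_marginal(y, prop, **kw)
            if np.log(rng.random()) < logL_prop - logL:     # Metropolis accept/reject
                cps, logL = prop, logL_prop
        k_trace.append(len(cps))
        for c in cps:
            cp_freq[c] += 1.0
    return np.array(k_trace), cp_freq / n_iter    # samples of k; P(changepoint at i)

# Example on hypothetical synthetic data: a series whose mean jumps at sample 120
rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(3.0, 1.0, 80)])
k_trace, cp_freq = rj_changepoints(y, n_iter=5000)
# np.bincount(k_trace) approximates the posterior on the number of changepoints,
# and cp_freq should peak near index 120.
```

Because the marginal likelihood penalizes segments that do not improve the fit, such a sampler retains only changepoints supported by the data, mirroring the parsimonious behaviour noted in the abstract; the trace of k approximates the PPD of the number of partitions.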

Double-difference data in seismology
An algorithm for exploration of double-difference data space
Model parameterization
Prior information
Candidate selection
Discussion
Conclusions
Data uncertainties from full waveform investigation