Abstract

Coulomb-stress theory has been used for years in seismology to understand how earthquakes trigger each other. Whenever an earthquake occurs, the stress field changes, and places with a positive Coulomb-stress increase are brought closer to failure. Models that relate earthquake rates to the Coulomb stress after a main event, such as the rate-and-state model, assume that the magnitude distribution of earthquakes is not affected by the change in Coulomb stress. Using different slip models, we calculate the change in Coulomb stress on the fault plane for every aftershock of the Landers event (California, USA, 1992, moment magnitude 7.3). Applying several statistical analyses to test whether the distribution of magnitudes is sensitive to the sign of the Coulomb-stress increase, we are not able to find any significant effect. Further, whereas the events with a positive stress increase are characterized by a much larger proportion of strike-slip events in comparison with the seismicity prior to the mainshock, the events happening despite a decrease in Coulomb stress show no relevant differences in focal-mechanism distribution with respect to the previous seismicity.
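The specific statistical analyses are not detailed in this excerpt. As a hedged illustration of the kind of comparison described, the sketch below contrasts the magnitude distributions of aftershocks with positive and negative ΔCFS using a two-sample Kolmogorov-Smirnov test and maximum-likelihood b-values; the arrays mags_pos and mags_neg are hypothetical inputs (magnitudes already split by the sign of the Coulomb-stress change), not the paper's data.

```python
import numpy as np
from scipy.stats import ks_2samp

def b_value(mags, m_min):
    """Aki maximum-likelihood b-value for magnitudes at or above m_min."""
    mags = np.asarray(mags)
    mags = mags[mags >= m_min]
    return np.log10(np.e) / (mags.mean() - m_min)

def compare_magnitude_distributions(mags_pos, mags_neg, m_min=2.0):
    """Test whether aftershock magnitudes differ by the sign of dCFS.

    Returns the two-sample KS statistic and p-value, plus the b-value
    estimated separately for each group.
    """
    stat, p = ks_2samp(mags_pos, mags_neg)
    return {
        "ks_statistic": stat,
        "ks_p_value": p,                 # large p: no detectable difference
        "b_positive": b_value(mags_pos, m_min),
        "b_negative": b_value(mags_neg, m_min),
    }
```

A result such as a large KS p-value together with statistically indistinguishable b-values would be consistent with the conclusion stated above, namely that the magnitude distribution is insensitive to the sign of the Coulomb-stress change.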

Highlights

  • Since the L’Aquila event in 2009, seismologists have advocated the modeling and testing of earthquakes within a rigorous statistical framework[1], following on the earlier work of the CSEP (Collaboratory for the Study of Earthquake Predictability)

  • After separating aftershocks by the sign of the Coulomb-stress change, the first result apparent from Table 1 is that the number of aftershocks with a positive increase is much larger than the number with a negative one[6,7], regardless of the slip model used to calculate ΔCFS (a sketch of the standard ΔCFS definition follows these highlights)

  • We have seen how the positive Coulomb-stress increase associated with the Landers mainshock triggered a very large number of strike-slip events and a large number of normal events, but far fewer thrust events. This result seems easy to establish, as it can be obtained without calculating ΔCFS; nevertheless, we have unambiguously associated these events with the positive ΔCFS
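The ΔCFS values referred to in these highlights are computed from slip models, a calculation not reproduced in this excerpt. As a minimal sketch, the function below applies the standard definition of the Coulomb failure stress change on a receiver fault, ΔCFS = Δτ + μ′ Δσn (shear-stress change resolved in the slip direction plus effective friction times the normal-stress change, with unclamping positive); the inputs are hypothetical resolved stress changes, not the paper's slip models.

```python
def coulomb_stress_change(delta_shear, delta_normal, mu_eff=0.4):
    """Standard Coulomb failure stress change on a receiver fault:

    dCFS = delta_shear + mu_eff * delta_normal

    delta_shear  : shear-stress change resolved in the slip direction (MPa)
    delta_normal : normal-stress change, positive for unclamping (MPa)
    mu_eff       : effective friction coefficient (illustrative value)

    Positive dCFS brings the receiver fault closer to failure.
    """
    return delta_shear + mu_eff * delta_normal

# Example: +0.05 MPa of shear loading and 0.02 MPa of clamping
# coulomb_stress_change(0.05, -0.02)  ->  0.042 MPa
```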



Introduction

Since the L’Aquila event in 2009, seismologists have advocated the modeling and testing of earthquakes within a rigorous statistical framework[1], following on the earlier work of the CSEP (Collaboratory for the Study of Earthquake Predictability). A hallmark of statistical seismology and of earthquake-hazard assessment is the well-known Gutenberg-Richter relation, or Gutenberg-Richter law[19,20,21]. This law states that earthquake magnitudes must be described in terms of a probability distribution and that, above a lower cut-off value, this distribution is exponential. In terms of the probability density f(m) one has f(m) = (b ln 10) 10^{−b(m − m_min)} ∝ 10^{−bm}, defined for m ≥ m_min (values below m_min are disregarded), with m the magnitude, m_min the lower cut-off in magnitude, b the so-called b-value (directly related to the exponent β of the power-law complementary cumulative distribution of seismic moment, β = 2b/3), and the symbol ∝ denoting proportionality. In the rate-and-state model, the rate R(t) of events (i.e., aftershocks) at any given time t after a mainshock is determined by the rate r of background seismicity, the increase ΔCFS in Coulomb stress induced by the mainshock, a constant B (for our purposes), and the characteristic relaxation time t_a[22].
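The rate-and-state equation itself does not appear in this excerpt. The sketch below is a minimal Python illustration under stated assumptions: it samples magnitudes from the Gutenberg-Richter density given above, recovers the b-value with the standard Aki maximum-likelihood estimator, and implements the usual Dieterich-type rate-and-state aftershock rate, R(t) = r / [1 + (e^{−ΔCFS/B} − 1) e^{−t/t_a}], taking B to play the role of the constitutive parameter Aσ. All function names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def sample_gr_magnitudes(n, b=1.0, m_min=2.0, seed=None):
    """Draw n magnitudes from the Gutenberg-Richter density
    f(m) = (b ln10) 10^{-b(m - m_min)}, m >= m_min,
    by inverting the complementary CDF 10^{-b(m - m_min)}."""
    rng = np.random.default_rng(seed)
    u = 1.0 - rng.uniform(size=n)        # in (0, 1], avoids log10(0)
    return m_min - np.log10(u) / b

def aki_b_value(mags, m_min):
    """Aki maximum-likelihood estimate of the b-value:
    b = log10(e) / (<m> - m_min)."""
    mags = np.asarray(mags)
    return np.log10(np.e) / (mags.mean() - m_min)

def rate_and_state_rate(t, dcfs, r=1.0, B=0.01, t_a=10.0):
    """Dieterich-type aftershock rate after a Coulomb-stress step dcfs (MPa):
    R(t) = r / (1 + (exp(-dcfs/B) - 1) * exp(-t/t_a)).
    Assumption: B stands in for the constitutive parameter A*sigma.
    R(0) = r * exp(dcfs/B), and R relaxes back to r for t >> t_a."""
    t = np.asarray(t, dtype=float)
    return r / (1.0 + (np.exp(-dcfs / B) - 1.0) * np.exp(-t / t_a))

if __name__ == "__main__":
    mags = sample_gr_magnitudes(50_000, b=1.0, m_min=2.0, seed=42)
    print("estimated b-value:", round(aki_b_value(mags, m_min=2.0), 3))
    t = np.logspace(-2, 2, 5)            # times after the mainshock
    print("R(t)/r for dCFS = +0.1 MPa:", rate_and_state_rate(t, dcfs=0.1))
```

Note that no magnitude enters rate_and_state_rate: in this model ΔCFS rescales the rate of events while leaving the Gutenberg-Richter distribution untouched, which is precisely the assumption the paper sets out to test.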

