Abstract

[1] Increased catalog incompleteness following main shocks can indeed result in artificially low b-value estimates for aftershock sequences in their early stages. Hainzl [2013] nicely documents how this effect could explain the low b-values reported in Shearer [2012a] for aftershocks of M 2.5 to 5.5 main shocks in southern California. However, catalog incompleteness is most clearly seen following large earthquakes, where it is generally attributed to network and/or analyst overload and to the overlapping codas of the main shock and its larger aftershocks [e.g., Kagan, 2004; Peng et al., 2007]. These effects should be less important for smaller main shocks, which have shorter codas and individually produce many fewer aftershocks above any threshold magnitude. Thus, it is not clear whether the relative lack of smaller aftershocks in the 12 hours following M 2.5 to 5.5 main shocks in southern California is an artifact of catalog incompleteness or a real property of the Earth. Resolving this issue may require new observational studies to search for the “missing” aftershocks that the catalog incompleteness model predicts should exist.

[2] One diagnostic property of catalog incompleteness is a flattening of the magnitude versus frequency curve at smaller magnitudes. This can be seen in the concave-down curvature of the synthetic aftershock dN/dM curves in Figures 2 and 3b of Hainzl [2013]. However, the curvature is fairly subtle, and comparisons with the corresponding data curves are inconclusive. Some curvature is visible in the data plotted in Hainzl's Figure 3b, but interestingly it does not appear in the corresponding data curves plotted in Figure 8 of Shearer [2012a]. Further study is warranted, but resolving the exact shape of the dN/dM curves may be difficult given the small differences that need to be resolved.

[3] I agree with Hainzl [2013] that Båth's Law is subject to some uncertainty with respect to the space/time windows used for its computation, and that models with higher triggering productivities than those nominally consistent with Båth's Law should be explored. Comparisons between the absolute numbers of aftershocks seen in the data and those predicted by triggering simulations should ideally use the same time-space windowing method for both data and synthetics. Thus, it is potentially misleading to compare synthetic aftershock sequences with spatially windowed catalog aftershocks, as is done in Figure 3 of Hainzl [2013]. I attempted to take these data-windowing effects into account in Shearer [2012a], which concludes that observed aftershock numbers for 2.5 ≤ M ≤ 5.5 main shocks are too large to be compatible with Båth's Law. In a follow-up paper, Shearer [2012b] more completely explores the space-time clustering of California earthquakes and reaches a similar conclusion: at least some of the temporal clustering of seismicity observed at short distance scales (0.5–5 km) does not appear to be caused by local earthquake triggering, but instead reflects an underlying physical process that temporarily increases the seismicity rate, such as is often hypothesized to drive earthquake swarms. This conclusion is based on comparisons between data and synthetic triggering models over a wide range of scales, which identify a number of differences between the models and the data, but perhaps the strongest evidence supporting the conclusion is the anomalously high foreshock-to-aftershock ratio seen for the smaller earthquakes, a point of agreement with Hainzl [2013].
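To make the incompleteness effect described in paragraphs [1] and [2] concrete, the following minimal Python sketch simulates a Gutenberg-Richter magnitude distribution, removes events below an assumed time-dependent completeness threshold Mc(t), and compares the standard maximum-likelihood b-value estimate, b = log10(e) / (⟨M⟩ − Mc), and binned dN/dM counts for the complete and incomplete catalogs. All parameter values and the Mc(t) functional form are illustrative assumptions, not quantities taken from Hainzl [2013] or Shearer [2012a].

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Illustrative parameters only; not values from Hainzl [2013] or Shearer [2012a] ---
b_true = 1.0        # true Gutenberg-Richter b-value of the simulated aftershocks
m_min = 0.0         # true lower magnitude cutoff of the simulation
mc_assumed = 0.5    # completeness magnitude assumed when estimating b
n_events = 50000    # number of simulated aftershocks
beta = b_true * np.log(10.0)

# Occurrence times (days): uniform in log time, equivalent to an Omori-law
# rate decay with p = 1 between 0.001 and 10 days after the main shock.
t = 10 ** rng.uniform(np.log10(1e-3), np.log10(10.0), n_events)

# Magnitudes: Gutenberg-Richter (exponential above m_min).
m = m_min + rng.exponential(1.0 / beta, n_events)

# Assumed time-dependent completeness threshold: high just after the main
# shock, decaying back to the background level. The functional form and
# constants are assumptions for illustration, not a fitted Mc(t) model.
mc_of_t = np.maximum(mc_assumed, 2.0 - 0.75 * np.log10(t / 1e-3))

detected = m >= mc_of_t       # incomplete catalog actually "recorded"
complete = m >= mc_assumed    # what a complete catalog above mc_assumed would hold

def b_ml(mags, mc):
    """Maximum-likelihood b-value for continuous magnitudes >= mc:
    b = log10(e) / (<M> - mc)."""
    mags = mags[mags >= mc]
    return np.log10(np.e) / (mags.mean() - mc)

print(f"b-value, complete catalog  : {b_ml(m[complete], mc_assumed):.2f}")
print(f"b-value, incomplete catalog: {b_ml(m[detected], mc_assumed):.2f}")

# Binned (non-cumulative) frequency-magnitude counts: the incomplete catalog
# flattens at small magnitudes relative to the complete one.
edges = np.arange(mc_assumed, 4.01, 0.25)
n_complete, _ = np.histogram(m[complete], edges)
n_detected, _ = np.histogram(m[detected], edges)
for lo, nc, nd in zip(edges[:-1], n_complete, n_detected):
    print(f"M {lo:4.2f}-{lo + 0.25:4.2f}: complete {nc:6d}   detected {nd:6d}")
```

With these assumed parameters, the incomplete catalog returns a noticeably lower b-value than the complete one, and its dN/dM counts roll over at small magnitudes, mimicking the concave-down curvature discussed above; an actual test would of course require an Mc(t) model fitted to the observed sequences.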
[4] Additional tests of earthquake-to-earthquake triggering models versus seismic observations are warranted at a range of magnitudes and time-distance scales. It is important to understand where current models work and where they fail, because their limitations provide clues about the underlying physical changes that drive earthquake activity.
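As a toy version of the Båth's Law consistency check raised in paragraph [3], the following Monte Carlo sketch draws direct aftershock counts from a Poisson distribution whose mean follows a standard productivity law, k · 10^(α(Mmain − Mcut)), draws their magnitudes from a Gutenberg-Richter distribution, and reports the average magnitude gap between the main shock and its largest simulated aftershock as the productivity constant k is varied. All parameter values are illustrative assumptions, and the sketch ignores secondary triggering and the space-time windowing issues emphasized above; it is a schematic of the kind of comparison meant, not the analysis of Shearer [2012a].

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative parameters only; not values fitted in Shearer [2012a] or Hainzl [2013] ---
b = 1.0            # Gutenberg-Richter b-value of triggered events
alpha = 1.0        # productivity exponent (alpha = b taken as a reference case)
m_main = 4.0       # main shock magnitude
m_cut = 0.0        # minimum magnitude of simulated direct aftershocks
beta = b * np.log(10.0)
n_trials = 20000   # Monte Carlo realizations per productivity value

def mean_bath_gap(k):
    """Average (main shock - largest direct aftershock) magnitude gap for
    productivity constant k, where the expected number of direct aftershocks
    with M >= m_cut is k * 10**(alpha * (m_main - m_cut))."""
    n_expected = k * 10 ** (alpha * (m_main - m_cut))
    gaps = []
    for _ in range(n_trials):
        n = rng.poisson(n_expected)
        if n == 0:
            continue  # sequences with no aftershock above m_cut are skipped
        mags = m_cut + rng.exponential(1.0 / beta, n)
        gaps.append(m_main - mags.max())
    return float(np.mean(gaps))

# Larger productivity constants mean more aftershocks per main shock, which
# pushes the expected largest aftershock upward and shrinks the gap.
# (For simplicity, rare realizations where a simulated "aftershock" exceeds
# the main shock are kept; they slightly reduce the average gap.)
for k in [1e-3, 1e-2, 3e-2, 1e-1]:
    print(f"k = {k:7.1e}  ->  mean magnitude gap ~ {mean_bath_gap(k):.2f}")
```

In such a sketch, raising the productivity constant shrinks the average gap below the roughly 1.2 magnitude units conventionally associated with Båth's Law, which is one way to see why relatively high observed aftershock counts can be in tension with that law.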
