Abstract

High rate Global Navigation Satellite System (GNSS) processed time series capture a broad spectrum of earthquake strong motion signals, but experience regular sporadic noise that can be difficult to distinguish from true seismic signals. The range of possible seismic signal frequencies amidst a high, location‐varying noise floor makes filtering difficult to generalize. Existing methods for automatic detection rely on external inputs to mitigate false alerts, which limits their usefulness. For these reasons, geodetic seismic signal detection is a compelling candidate for data‐driven machine learning classification. In this study we generated high rate GNSS time differenced carrier phase (TDCP) velocity time series concurrent in space and time with expected signals from 77 earthquakes occurring over nearly 20 years. TDCP velocity processing has increased sensitivity relative to traditional geodetic displacement processing without requiring sophisticated corrections. We trained, validated, and tested a random forest classifier to differentiate seismic events from noise. We find our supervised random forest classifier outperforms existing detection methods in stand‐alone mode by combining frequency and time domain features into decision criteria. The classifier achieves a 90% true positive rate of seismic event detection within the data set of events ranging from MW 4.8–8.2, with typical detection latencies of seconds behind S‐wave arrivals. We conclude the performance of this model provides sufficient confidence to enable these valuable ground motion measurements to run in stand‐alone mode for development of edge processing, geodetic infrastructure monitoring, and inclusion in operational ground motion observations and models.
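The abstract's key technical idea is classifying GNSS velocity windows by combining time‐domain and frequency‐domain features. The sketch below illustrates that idea only: the feature names, thresholds, and the simple rule standing in for the paper's trained random forest are all hypothetical, and the naive DFT is used purely so the example is self‐contained.

```python
import math

def features(window, dt):
    """Illustrative time- and frequency-domain features for a velocity window.

    Hypothetical feature set (not the paper's): peak velocity and RMS in the
    time domain, dominant frequency via a naive DFT in the frequency domain.
    """
    n = len(window)
    peak = max(abs(v) for v in window)               # time domain: peak velocity
    rms = math.sqrt(sum(v * v for v in window) / n)  # time domain: RMS level
    # Frequency domain: dominant frequency from DFT power (O(n^2), demo only).
    best_k, best_power = 0, 0.0
    for k in range(1, n // 2):
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(window))
        im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(window))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    dom_freq = best_k / (n * dt)                     # Hz
    return peak, rms, dom_freq

def looks_seismic(window, dt, peak_thresh=0.05, fmax=2.0):
    """Toy stand-in for the trained classifier: a fixed threshold rule
    (the study instead learns decision criteria with a random forest)."""
    peak, _rms, dom_freq = features(window, dt)
    return peak > peak_thresh and dom_freq < fmax

# Synthetic 20 s windows at 5 Hz sampling: a 1 Hz "seismic" pulse vs low noise.
dt, n = 0.2, 100
signal = [0.1 * math.sin(2 * math.pi * 1.0 * i * dt) for i in range(n)]
noise = [0.01 * math.sin(2 * math.pi * 2.3 * i * dt + 0.7 * i) for i in range(n)]
```

In the study this threshold rule is replaced by a supervised random forest, which lets the decision boundary adapt to the location‐varying noise floor instead of relying on fixed cutoffs.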

