Abstract
Recent observations from Global Navigation Satellite Systems (GNSS) displacement time-series have demonstrated that the motion of tectonic plates is ubiquitously non-steady-state. GNSS displacement time-series can be described as the sum of expected motions, such as long-term tectonic interseismic motion and annual and semi-annual seasonal oscillations, and unexpected "transients", such as slow-slip events or volcanic deformation. Although the number and quality of globally available GNSS time-series have increased dramatically in recent years, detecting and modeling these signals remains challenging because of their elusive nature, masked by the presence of noise. The most popular approach for filtering such noise is the application of common-mode filters (CMFs), which use weighted averages of the residuals obtained after fitting individual time-series with trajectory models. CMFs exploit the spatial coherence of higher-frequency noise in both Precise Point Positioning and network double-difference solutions to systematically reduce the noise of each time-series. However, the application of CMFs is limited where GNSS stations are sparsely distributed. Additionally, CMFs can map local transients into noise when the original trajectory-model fits are suboptimal. Here we propose an alternative to CMFs that exploits Deep Learning (DL) techniques. Our supervised-learning regression method aims to remove the noise present in each GNSS time-series on a station-by-station basis, regardless of its geographical location. Our dataset consists of nine thousand time-series: all GNSS time-series available in the Nevada GNSS repository with at least 4 years of contiguous data. Our approach comprises two subroutines. First, the Greedy Automatic Signal Decomposition (GrAtSiD) algorithm is used to fit each time-series and leave behind a residual. GrAtSiD is a sequential greedy linear algorithm that decomposes the time-series into a minimum number of transient basis functions and some permanent functions. The residual time-series after signal decomposition is defined as noise, without qualifying its nature. Second, once this noise is identified, a DL model is trained to recognize it from the raw time-series, without the need for any trajectory modeling; the supervised-learning regression model predicts what the residual to a trajectory model would be. Although GrAtSiD is very effective at isolating the high-frequency noise of GNSS time-series, its fit depends on the temporal length of the input time-series. Additionally, GrAtSiD requires arbitrary thresholds that control the convergence of the inversion routine to avoid under- or over-fitting the input data. In this context, our DL approach generalizes GrAtSiD solutions by exploiting weakly supervised training based on millions of examples: essentially, the model generalizes an optimal fit, having seen many examples of GrAtSiD trajectory fits. This generalization allows the DL model to preserve apparent transient features of the time-series. A multitude of DL architectures are tested in both sequence-to-single and sequence-to-sequence regression framings. This exploration allows us to identify the best framing, architecture, and related hyperparameters for our method to be successful. With the best-performing models, we demonstrate the effect of DL-based high-frequency noise removal and compare it to the CMF approach.
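To make the terms "trajectory model", "residual", and "common-mode filter" concrete, the sketch below shows one way these pieces fit together. It is a minimal Python/NumPy illustration under assumed conventions (synthetic daily data, a basic offset + secular rate + annual/semi-annual model, equal station weights); it is not the trajectory-model or CMF implementation used in the study described above.

```python
# Illustrative sketch (not the authors' code): fit a basic trajectory model
# (offset + linear rate + annual and semi-annual terms) to daily GNSS
# displacements, then form a common-mode filter (CMF) as a weighted
# average of the residuals across stations.
import numpy as np

def trajectory_residual(t_years, disp_mm):
    """Least-squares fit of a minimal trajectory model; returns the residual."""
    w1 = 2 * np.pi          # annual angular frequency (cycles/yr -> rad/yr)
    w2 = 4 * np.pi          # semi-annual angular frequency
    G = np.column_stack([
        np.ones_like(t_years),                        # static offset
        t_years,                                      # secular (interseismic) rate
        np.sin(w1 * t_years), np.cos(w1 * t_years),   # annual oscillation
        np.sin(w2 * t_years), np.cos(w2 * t_years),   # semi-annual oscillation
    ])
    m, *_ = np.linalg.lstsq(G, disp_mm, rcond=None)
    return disp_mm - G @ m                            # residual treated as "noise"

def common_mode_filter(residuals, weights=None):
    """Stack residuals from many stations into a single common-mode series."""
    residuals = np.asarray(residuals)                 # shape: (n_stations, n_epochs)
    if weights is None:
        weights = np.ones(residuals.shape[0])
    weights = weights / weights.sum()
    return weights @ residuals                        # weighted average per epoch

# Usage: three synthetic co-located stations sharing a common noise source.
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / 365.25)                       # 4 years of daily epochs
common_noise = rng.normal(0, 2, t.size)
stations = [5.0 * t + 3 * np.sin(2 * np.pi * t) + common_noise
            + rng.normal(0, 1, t.size) for _ in range(3)]
residuals = [trajectory_residual(t, d) for d in stations]
cmf = common_mode_filter(residuals)
filtered = [d - cmf for d in stations]                # CMF-corrected time-series
```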
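The sequence-to-sequence regression framing can likewise be sketched as follows. The architecture, window length, and training arrays here are illustrative placeholders (a small Conv1D/BiLSTM Keras model on random data), not the architectures or hyperparameters explored in the study; the point is only that such a network maps a raw displacement window to the per-epoch residual that a GrAtSiD-style trajectory fit would leave behind, so that subtracting the prediction denoises the series.

```python
# Minimal sketch (assumed architecture, not the study's selected model):
# a sequence-to-sequence regressor that maps a window of raw daily
# displacements to the per-epoch residual ("noise") that a GrAtSiD-style
# trajectory fit would leave behind.
import numpy as np
import tensorflow as tf

WINDOW = 61   # assumed window length in days (illustrative only)

def build_seq2seq_denoiser(window=WINDOW):
    inputs = tf.keras.Input(shape=(window, 1))            # raw displacement window
    x = tf.keras.layers.Conv1D(32, 5, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(32, return_sequences=True))(x)
    outputs = tf.keras.layers.Dense(1)(x)                 # predicted noise per epoch
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# Training pairs: (raw window, residual window) produced by the trajectory fit.
model = build_seq2seq_denoiser()
raw = np.random.normal(size=(256, WINDOW, 1)).astype("float32")       # placeholder inputs
residual = np.random.normal(size=(256, WINDOW, 1)).astype("float32")  # placeholder labels
model.fit(raw, residual, epochs=1, batch_size=32, verbose=0)

denoised = raw - model.predict(raw, verbose=0)            # remove the predicted noise
```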