Over the last thirty years, Synthetic Aperture Radar interferometry (InSAR) and the Global Navigation Satellite System (GNSS) have become fundamental space geodetic techniques for mapping surface deformation due to tectonic movements. One major limiting factor for both techniques is the effect of the troposphere, since surface velocities are of the order of a few mm yr⁻¹ and mm-level accuracy is required. The troposphere introduces a path delay in the microwave signal which, in the case of GNSS Precise Point Positioning (PPP), can nowadays be partly removed with the use of specialized mapping functions. Moreover, tropospheric stratification and short-wavelength spatial turbulence add noise to the low-amplitude ground deformations derived by the (multitemporal) InSAR methodology. Atmospheric phase delay corrections are far more challenging for InSAR than for GNSS PPP, owing to the single-pass geometry and the gridded nature of the acquired data. Precise knowledge of the tropospheric parameters along the propagation path is therefore extremely useful for estimating and correcting the atmospheric phase delay. In this context, the PaTrop experiment aims to maximize the potential of a high-resolution Limited-Area Model for calculating and removing tropospheric noise from InSAR data, following a synergistic approach that integrates the latest advances in remote sensing meteorology (GNSS and InSAR) and numerical weather forecasting (WRF). In the first phase of the experiment, presented in this paper, we investigate the extent to which a high-resolution 1 km WRF re-analysis can produce detailed tropospheric delay maps of the required accuracy, by coupling its output (expressed as Zenith Total Delay, ZTD) with the vertical delay component of GNSS measurements. The model is initially run with varying parameterization, with GNSS measurements providing a benchmark of real atmospheric conditions. The final WRF daily re-analysis run then covers an extended period of one year, using the optimum parameterization scheme identified by the parametric analysis. The two datasets (predicted and observed) are compared and statistically evaluated to assess how accurately the model simulates the meteorological parameters that affect ZTD under different weather conditions. Results demonstrate a strong correlation between predicted and observed ZTDs at the 19 GNSS stations throughout the year (R ranging from 0.91 to 0.93), with an average mean bias (MB) of −19.2 mm, indicating that the model tends to slightly underestimate the tropospheric ZTD relative to the GNSS-derived values. With respect to the seasonal component, model performance is best during the autumn period (October–December), followed by spring (April–June). With the acceptable bias range set at ±23 mm (equal to the amplitude of one Sentinel-1 C-band phase cycle projected to the zenith), the model produces satisfactory results, with the percentage of ZTD values within this margin ranging from 57% in summer to 63% in autumn.
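For illustration, the minimal sketch below shows how a model ZTD can be obtained by vertically integrating the standard Thayer/Bevis refractivity formula and how the comparison statistics quoted above (Pearson R, mean bias, and the share of samples within a ±23 mm margin) can be computed. The function names, constants, and synthetic data are assumptions for the example only and do not reproduce the PaTrop processing chain, which is not specified in the abstract.

```python
import numpy as np

# Thayer/Bevis refractivity constants (k1, k2 in K/hPa, k3 in K^2/hPa).
# Illustrative only; the paper's exact ZTD extraction from WRF may differ.
K1, K2, K3 = 77.60, 70.4, 3.739e5


def ztd_from_profile(p_hpa, t_k, e_hpa, z_m):
    """Zenith Total Delay (m) from a vertical profile of pressure (hPa),
    temperature (K) and water-vapour partial pressure (hPa) at heights z_m."""
    n_total = K1 * p_hpa / t_k + K2 * e_hpa / t_k + K3 * e_hpa / t_k**2
    # ZTD = 1e-6 * integral of total refractivity along the zenith path
    return 1e-6 * np.trapz(n_total, z_m)


def compare_ztd(ztd_wrf_mm, ztd_gnss_mm, margin_mm=23.0):
    """Correlation, mean bias and percentage of samples within +/- margin_mm,
    mirroring the statistics reported in the abstract."""
    ztd_wrf_mm = np.asarray(ztd_wrf_mm, dtype=float)
    ztd_gnss_mm = np.asarray(ztd_gnss_mm, dtype=float)
    bias = ztd_wrf_mm - ztd_gnss_mm                    # model minus observation
    r = np.corrcoef(ztd_wrf_mm, ztd_gnss_mm)[0, 1]     # Pearson correlation
    mb = bias.mean()                                   # mean bias (mm)
    within = np.mean(np.abs(bias) <= margin_mm) * 100  # % within margin
    return {"R": r, "MB_mm": mb, "within_margin_%": within}


if __name__ == "__main__":
    # Synthetic illustration only; not data from the study.
    rng = np.random.default_rng(0)
    gnss = 2400.0 + 50.0 * rng.standard_normal(365)       # daily GNSS ZTD (mm)
    wrf = gnss - 19.0 + 20.0 * rng.standard_normal(365)   # biased model ZTD (mm)
    print(compare_ztd(wrf, gnss))
```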