In recent years, phase linking (PL) methods in radar time-series interferometry (TSI) have proven to be powerful tools in geodesy and remote sensing, enabling the precise monitoring of surface displacement and deformation. While these methods are typically designed to operate on a complete network of interferograms, generating such networks is often challenging in practice. For instance, in non-urban or vegetated regions, decorrelation effects introduce significant noise into long-term interferograms, which can degrade the time-series results if included. Additionally, practical issues such as gaps in satellite data, poor acquisitions, or systematic errors during interferogram generation can result in incomplete networks. Furthermore, pre-existing interferogram networks, such as those provided by systems like COMET-LiCSAR, often prioritize short temporal baselines due to the vast volume of data generated by satellites like Sentinel-1. As a result, complete interferogram networks may not always be available. Given these challenges, it is critical to understand the applicability of PL methods to incomplete networks. This study evaluated the performance of two PL methods, eigenvalue decomposition (EVD) and the eigendecomposition-based maximum-likelihood estimator of interferometric phase (EMI), under various network configurations, including short temporal baselines, randomly sparsified networks, and networks from which low-coherence interferograms had been removed. Using two sets of simulated data, the impact of different network structures on the accuracy and quality of the results was assessed. The same network configurations were then applied to real data for further comparison and analysis.
The findings demonstrate that while both methods can be effectively used on short temporal baselines, their performance is highly sensitive to network sparsity and the noise introduced by low-coherence interferograms, requiring careful parameter tuning to achieve optimal results across different study areas.
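To make the EVD approach concrete, the sketch below illustrates the core idea in a minimal form: given a complex sample coherence matrix built from N acquisitions, the wrapped phase series is estimated from the leading eigenvector of that matrix. This is only an illustrative sketch of the general technique, not the implementation evaluated in this study; the function name `evd_phase_linking` and the rank-one test setup are assumptions for demonstration.

```python
import numpy as np

def evd_phase_linking(C):
    """Illustrative EVD phase linking (sketch, not the study's implementation).

    C : (N, N) complex Hermitian sample coherence matrix over N acquisitions.
    Returns the estimated wrapped phase series, referenced to the first
    acquisition.
    """
    # Eigendecomposition of the Hermitian matrix; eigenvalues are returned
    # in ascending order, so the last column is the leading eigenvector.
    _, v = np.linalg.eigh(C)
    v1 = v[:, -1]
    # Remove the arbitrary global phase by referencing the first acquisition.
    return np.angle(v1 * np.conj(v1[0]))

# Noise-free example: a rank-one coherence matrix built from a known phase
# series should be recovered exactly (up to the reference acquisition).
phi = np.array([0.0, 0.3, -0.5, 1.0])   # hypothetical phase series
s = np.exp(1j * phi)
C = np.outer(s, s.conj())
est = evd_phase_linking(C)
```

In an incomplete network, entries of `C` corresponding to missing or discarded low-coherence interferograms would be masked or down-weighted, which is precisely the setting whose effect on estimation quality this study examines.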