ABSTRACT
Terrestrial radar interferometry (TRI) provides accurate observations of displacements in the line-of-sight (LOS) direction and is therefore used in various monitoring applications. However, relating these displacements directly to the 3d world is challenging due to the particular imaging process. To address this, the radar results are projected onto a 3d model of the monitored area, which requires georeferencing of both the 3d model and the radar observations. This georeferencing, however, relies on manual alignment and resource-intensive on-site measurements. Challenges arise from the large disparity in spatial resolution between radar images and 3d models, the absence of common, identifiable natural features, and the fact that the relationship between image and spatial coordinates depends on the topography and the instrument pose. Herein, we propose a method for data-driven, automatic and precise georeferencing of TRI images without the need for manual interaction or in situ installations. Our approach (i) uses the radar amplitudes from the TRI images and the angles of incidence derived from the 3d point cloud to identify matching features in the two datasets, (ii) estimates the best-fitting transformation parameters using Kernel Density Correlation (KDC) and (iii) requires only rough initial approximations of the radar instrument’s pose. Additionally, we present the correct relation between cross-range and azimuth for ground-based radar instruments. We demonstrate the approach on a geomonitoring case using TRI data and a point cloud of a large rock cliff. The results show that positions in the radar image can be localized in the monitored 3d space with a precision of a few metres at distances of over 1 km. This is an improvement of almost one order of magnitude compared to what was achieved using standard approaches and direct observation of the radar instrument’s pose. The proposed method thus contributes to the automation of TRI data processing and to improved localization of small-scale deformation areas detected in radar images.
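The abstract describes the pipeline only at a high level. Purely as an illustrative sketch (not the authors' implementation), the Python snippet below shows how a 3d point cloud might be projected into radar range/azimuth coordinates from an approximate instrument pose and how a per-point angle of incidence could be derived, with a plain cross-correlation grid search standing in for the paper's Kernel Density Correlation step. All function names, array layouts and pose parameters are assumptions introduced for illustration.

```python
# Illustrative sketch only: project a 3d point cloud into radar (range, azimuth)
# coordinates from a rough instrument pose, compute per-point incidence angles,
# and score small range/azimuth offsets against the radar amplitude image.
# The grid-search correlation is a simple stand-in for the paper's KDC step.
import numpy as np

def project_to_radar(points, normals, instrument_pos, heading_rad):
    """Return slant range, azimuth and incidence angle for each 3d point.

    points, normals : (N, 3) arrays in a local frame (x east, y north, z up);
                      normals are assumed to be unit length
    instrument_pos  : (3,) approximate radar position in the same frame
    heading_rad     : approximate boresight azimuth (clockwise from north)
    """
    los = points - instrument_pos                       # line-of-sight vectors
    rng = np.linalg.norm(los, axis=1)                   # slant range per point
    los_unit = los / rng[:, None]

    # Azimuth relative to the instrument heading (ground-based radar,
    # rotation about the vertical axis).
    az_abs = np.arctan2(los[:, 0], los[:, 1])           # azimuth from north
    az = np.angle(np.exp(1j * (az_abs - heading_rad)))  # wrap to (-pi, pi]

    # Angle of incidence: angle between the LOS and the local surface normal.
    cos_inc = np.abs(np.sum(los_unit * normals, axis=1))
    inc = np.arccos(np.clip(cos_inc, 0.0, 1.0))
    return rng, az, inc

def grid_search_offsets(sim_map, amp_map, max_shift=5):
    """Toy stand-in for the transformation estimation: correlate a simulated
    incidence-angle map with the radar amplitude image over small pixel shifts
    and return the best (range, azimuth) offset and its correlation score."""
    best, best_score = (0, 0), -np.inf
    for dr in range(-max_shift, max_shift + 1):
        for da in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(sim_map, dr, axis=0), da, axis=1)
            score = np.corrcoef(shifted.ravel(), amp_map.ravel())[0, 1]
            if score > best_score:
                best, best_score = (dr, da), score
    return best, best_score
```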