Synthetic aperture sonar (SAS) intensity statistics depend on the sensing geometry at the time of capture, which makes estimating bathymetry from acoustic surveys challenging. While several methods have been proposed to estimate seabed relief from intensity, we present the first large-scale study that applies deep learning models to this task. In this work, we pose bathymetric estimation from SAS surveys as a domain translation problem: translating intensity to height. Since no dataset of coregistered seabed relief maps and sonar imagery previously existed to learn this translation, we produce the first large simulated dataset containing coregistered pairs of seabed relief and intensity maps, generated with two distinct sonar data simulation techniques. We apply four models of varying complexity to translate intensity imagery to seabed relief: a shape-from-shading (SFS) approach, a Gaussian Markov random field (GMRF) approach, a conditional generative adversarial network (cGAN), and UNet architectures. Each model is evaluated on datasets containing sand-ripple, rocky, mixed, and flat sea bottoms. Methods are compared against the coregistered simulated ground truth using L1 error. Additionally, we present results on both simulated and real SAS imagery. Our results indicate that the proposed UNet architectures outperform the SFS, GMRF, and pix2pix cGAN models.
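As a rough illustration of the evaluation criterion described above, the sketch below computes the per-pixel L1 error between a predicted seabed relief map and its coregistered ground truth. The array names and shapes are hypothetical placeholders, not taken from the paper's dataset or models.

```python
import numpy as np

def l1_error(predicted_relief: np.ndarray, true_relief: np.ndarray) -> float:
    """Mean absolute (L1) error between a predicted and a ground-truth
    seabed relief map, both given as 2-D height arrays in the same units."""
    assert predicted_relief.shape == true_relief.shape
    return float(np.mean(np.abs(predicted_relief - true_relief)))

# Hypothetical usage: relief maps coregistered with SAS intensity imagery.
pred = np.random.rand(256, 256)   # stand-in for a model's height prediction
truth = np.random.rand(256, 256)  # stand-in for simulated ground-truth relief
print(f"L1 error: {l1_error(pred, truth):.4f}")
```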