Abstract
Acoustic tomography uses integrating measurements, which require inverse methods to resolve the averages into estimates of spatial structure. Statistical inverse methods have been used extensively to solve the reconstruction problem over different tomographic ranges and configurations. These inverses become very difficult to apply in frontal regions like the Gulf Stream (GS) system, where the statistics are acutely inhomogeneous and anisotropic and the mean is not a likely representation of the GS front at any given time. In this paper we propose an alternative inverse that asks for the solution containing a front rather than the smoothest solution. The inverse minimizes the errors in the fit to the data while simultaneously maximizing the sum of the squares of the gradients in the reconstructed section and minimizing an absolute value norm for stability. The inverse is aimed at detecting changes in the GS front; the data are therefore used to estimate perturbations to a previous estimate of the frontal structure, rather than reconstructing the entire front as a perturbation from some average state. This approach is intended to merge well with eventual dynamic updating schemes and can be used with various types of data, given a proper model. Several examples were run comparing the traditional linear least squares inverse (LLSI) with the maximum gradient inverse (MGI), from very idealized cases to a real Gulf Stream section reconstructed from hydrographic data. Different transceiver configurations were also compared, and mid-depth instruments were found to be superior to bottom-mounted instruments. The simplest cases show a significant improvement in the estimate of the Gulf Stream front by the MGI compared to the weighted LLSI. As the cases became more complicated (and more realistic), the differences between the inverse methods became less pronounced, although the strength and location of the perturbation maxima were always determined more accurately by the MGI. The decline is at least partially due to the numerical algorithm, which lumps data misfits and external constraints (the maximum gradient) into a single penalty criterion to be minimized. The most immediate way to overcome this limitation is to break the problem into a two-step procedure: first a least squares inverse to fit the data, and second an iterative, nonlinear optimization maximizing the gradient and minimizing the absolute value norm.
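For illustration only, and as an assumption rather than the paper's exact formulation (the abstract does not state it), the single-penalty criterion described above might take a form such as

\min_{\delta m}\; J(\delta m)
  = \bigl(d - G\,\delta m\bigr)^{\mathsf T} C_n^{-1} \bigl(d - G\,\delta m\bigr)
  \;-\; \alpha \sum_{i} \bigl|\nabla (m_0 + \delta m)\bigr|_i^{2}
  \;+\; \beta \,\lVert \delta m \rVert_1 ,

where \delta m is the perturbation to the prior frontal estimate m_0, d the travel-time data, G a linearized measurement model, C_n the noise covariance, and \alpha, \beta hypothetical trade-off weights; the minus sign on the gradient term rewards, rather than penalizes, sharp frontal structure. In the two-step variant mentioned at the end of the abstract, the least squares misfit term would be minimized first, with the gradient and absolute value terms handled in a subsequent iterative, nonlinear optimization.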