Abstract

Iterated linear inversion theory can often solve nonlinear inverse problems, and linear inversion theory provides convenient error estimates and other interpretive measures. But are these interpretive measures valid for nonlinear problems? We address this question in terms of the joint probability density function (pdf) of the estimated parameters. Briefly, linear inversion theory is valid if the optimal estimate is unique and the observations are linear functions of the parameters within a reasonable (say, 95%) confidence region about that estimate. We use Bayes' rule to show how prior information can improve the uniqueness of the optimal estimate while stabilizing the iterative search for it. We also develop quantitative criteria for the relative importance of prior and observational data and for the effects of nonlinearity. Our method can handle any form of pdf for the observational data and prior information. The calculations are much easier (about the same as those required for the Marquardt method) when both the observational and prior data are Gaussian. We present calculations for some simple one- and two-parameter nonlinear inverse problems. These examples show that the asymptotic statistics (those based on the linear theory) may in some cases be grossly erroneous. In other cases, accurate observations, prior information, or a combination of the two may effectively linearize an otherwise nonlinear problem.
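
To make the Gaussian case concrete, here is a minimal sketch (ours, not the authors' code) of iterated linearized Bayesian inversion with Gaussian observational errors and a Gaussian prior: the prior precision term regularizes a Gauss-Newton search for the maximum a posteriori estimate, much as damping does in the Marquardt method, and the final linearized covariance gives the asymptotic (linear-theory) statistics the abstract warns can fail under strong nonlinearity. The function name `map_estimate` and the toy exponential forward model are assumptions chosen for illustration.

```python
# Minimal sketch: iterated linearized Bayesian inversion, Gaussian data + prior.
# Not the paper's implementation; an illustrative Gauss-Newton/MAP iteration.
import numpy as np

def map_estimate(g, jac, d_obs, C_d, m_prior, C_m, n_iter=20, tol=1e-10):
    """Iterated linearized MAP estimate for Gaussian data and prior.

    Each step solves the normal equations of the linearized posterior:
        m_{k+1} = m_prior + (G^T Cd^-1 G + Cm^-1)^-1 G^T Cd^-1
                  * (d_obs - g(m_k) + G (m_k - m_prior)),   G = jac(m_k).
    The Cm^-1 (prior precision) term stabilizes the iteration in the same
    way Marquardt damping does.
    """
    Cd_inv = np.linalg.inv(C_d)
    Cm_inv = np.linalg.inv(C_m)
    m = m_prior.copy()
    for _ in range(n_iter):
        G = jac(m)
        H = G.T @ Cd_inv @ G + Cm_inv          # linearized posterior precision
        rhs = G.T @ Cd_inv @ (d_obs - g(m) + G @ (m - m_prior))
        m_new = m_prior + np.linalg.solve(H, rhs)
        if np.linalg.norm(m_new - m) < tol:
            m = m_new
            break
        m = m_new
    # Asymptotic (linear-theory) covariance about the optimal estimate.
    C_post = np.linalg.inv(jac(m).T @ Cd_inv @ jac(m) + Cm_inv)
    return m, C_post

# Toy one-parameter nonlinear problem: d(t) = exp(x * t) observed with noise.
t = np.linspace(0.0, 1.0, 5)
g = lambda m: np.exp(m[0] * t)
jac = lambda m: (t * np.exp(m[0] * t)).reshape(-1, 1)
rng = np.random.default_rng(0)
d_obs = g(np.array([1.3])) + 0.01 * rng.standard_normal(t.size)
m_hat, C_post = map_estimate(g, jac, d_obs,
                             C_d=np.diag(np.full(t.size, 1e-4)),
                             m_prior=np.array([1.0]), C_m=np.array([[0.25]]))
print(m_hat, np.sqrt(np.diag(C_post)))
```

With accurate data or a tight prior, the posterior concentrates in a region where the forward model is nearly linear and `C_post` is a good error estimate; with weak data and a diffuse prior on a strongly nonlinear model, it can be grossly misleading, which is the abstract's central point.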
