Abstract

We consider the case where the 'solution' to an inverse problem is an ensemble (e.g. drawn from the conditional probability density function $p(\mathbf{m} \mid \mathbf{d}^{\mathrm{obs}})$ of $M$ model parameters $\mathbf{m}$ given observed data $\mathbf{d}^{\mathrm{obs}}$). Here we presume that the $\mathbf{m}$s have a natural ordering, say in position $x$, so that 'resolution' means the ability of the inverse problem to distinguish physically adjacent model parameters. The trade-off curve for resolution and variance is constructed using the following steps: (1) the single solution $\mathbf{m}^{\mathrm{est}}$ and its covariance $\mathbf{C}_m$ are estimated as the ensemble mean and covariance; (2) the eigenvalue decomposition $\mathbf{C}_m = \mathbf{V} \boldsymbol{\Lambda} \mathbf{V}^{\mathrm{T}}$ is computed, and the submatrix $\boldsymbol{\Lambda}^{(N)}$ of the $N$ smallest eigenvalues and the submatrix $\mathbf{V}^{(N)}$ of the $N$ corresponding eigenvectors are formed; (3) the equation $\boldsymbol{\mu}^{(N)} = \boldsymbol{\Phi}^{(N)} \mathbf{m}$, with $\boldsymbol{\mu}^{(N)} = [\mathbf{V}^{(N)}]^{\mathrm{T}} \mathbf{m}^{\mathrm{est}}$ and $\boldsymbol{\Phi}^{(N)} = [\mathbf{V}^{(N)}]^{\mathrm{T}}$, is formed, as is its covariance $\mathbf{C}_\mu^{(N)} = \boldsymbol{\Lambda}^{(N)}$; (4) the equation is solved to yield a localized average $\langle \mathbf{m} \rangle^{(N)} = \boldsymbol{\Phi}^{-g} \boldsymbol{\mu}^{(N)}$, where $\boldsymbol{\Phi}^{-g}$ is either the minimum-length or the Backus–Gilbert generalized inverse of $\boldsymbol{\Phi}$; (5) the resolution and covariance are computed as $\mathbf{R}^{(N)} = \boldsymbol{\Phi}^{-g} \boldsymbol{\Phi}^{(N)}$ and $\mathbf{C}_m^{(N)} = \boldsymbol{\Phi}^{-g} \mathbf{C}_\mu^{(N)} (\boldsymbol{\Phi}^{-g})^{\mathrm{T}}$; (6) the spread $K^{(N)}$ of resolution and the size $J^{(N)}$ of covariance are computed using either the Dirichlet or the Backus–Gilbert measures; and (7) the process is repeated for $1 \le N \le M$ to build up the trade-off curve $K(J)$. We show that, in the Dirichlet case, $K^{(N)} = M - N$ and $J^{(N)} = \mathrm{tr}(\boldsymbol{\Lambda}^{(N)})$. We also consider the case where the model parameters correspond to spline coefficients and a sequence $y_i(\mathbf{m}, x_i)$ derived from these coefficients possesses a natural ordering. Layered models are an example of such a parametrization. We construct the trade-off curve for $\mathbf{y}$ by converting each member of the ensemble from $\mathbf{m}$ to $\mathbf{y}$ and applying the above procedure to the converted ensemble. We demonstrate the method by applying it to several simple examples.
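The seven steps above can be sketched numerically. The following is a minimal NumPy illustration, not the authors' code: it assumes a synthetic ensemble standing in for draws from $p(\mathbf{m} \mid \mathbf{d}^{\mathrm{obs}})$, the minimum-length generalized inverse (which, because the rows of $\boldsymbol{\Phi}^{(N)}$ are orthonormal eigenvectors, reduces to $\boldsymbol{\Phi}^{-g} = \mathbf{V}^{(N)}$), and the Dirichlet spread and size measures. The names `tradeoff_point`, `m_est`, etc. are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: Ksamp samples of M ordered model parameters,
# a stand-in for draws from p(m | d_obs).
M, Ksamp = 8, 500
ensemble = rng.normal(size=(Ksamp, M)) @ rng.normal(size=(M, M)) * 0.1 \
           + np.linspace(0.0, 1.0, M)

# Step 1: single solution and covariance from the ensemble.
m_est = ensemble.mean(axis=0)
C_m = np.cov(ensemble, rowvar=False)

# Step 2: eigendecomposition C_m = V Lam V^T (eigh sorts ascending,
# so the first N columns belong to the N smallest eigenvalues).
lam, V = np.linalg.eigh(C_m)

def tradeoff_point(N):
    """Steps 3-6 for one N: Dirichlet measures, minimum-length inverse."""
    Vn = V[:, :N]                  # V^(N), eigenvectors of N smallest eigenvalues
    Phi = Vn.T                     # Phi^(N) = [V^(N)]^T
    mu = Phi @ m_est               # mu^(N), with covariance Lam^(N)
    # Minimum-length generalized inverse: rows of Phi are orthonormal,
    # so Phi^{-g} = Phi^T (Phi Phi^T)^{-1} = Vn.
    Phi_g = Vn
    m_avg = Phi_g @ mu             # localized average <m>^(N)
    R = Phi_g @ Phi                # resolution matrix R^(N)
    C_mN = Phi_g @ np.diag(lam[:N]) @ Phi_g.T   # covariance C_m^(N)
    K = np.sum((R - np.eye(M)) ** 2)   # Dirichlet spread of resolution
    J = np.trace(C_mN)                 # Dirichlet size of covariance
    return K, J, m_avg

# Step 7: sweep N to build up the trade-off curve K(J).
curve = [tradeoff_point(N)[:2] for N in range(1, M + 1)]
for N, (K, J) in enumerate(curve, start=1):
    # The identities quoted in the abstract: K = M - N, J = tr(Lam^(N)).
    assert np.isclose(K, M - N)
    assert np.isclose(J, lam[:N].sum())
```

The final loop checks the abstract's Dirichlet-case identities: since $\mathbf{R}^{(N)} = \mathbf{V}^{(N)}[\mathbf{V}^{(N)}]^{\mathrm{T}}$ is a rank-$N$ orthogonal projection, $\|\mathbf{R}^{(N)} - \mathbf{I}\|_F^2 = M - N$, and $\mathrm{tr}(\mathbf{C}_m^{(N)}) = \mathrm{tr}(\boldsymbol{\Lambda}^{(N)})$.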
