Abstract

Convergent methods for ill-posed problems are typically equivalent to applying an operator that depends on a single parameter derived from the noise level and the data (a regularization parameter or a terminal iteration number). When a given problem is discretized for numerical treatment, these methods can be viewed as arising from imposed prior constraints that all carry the same information content. We identify a new convergent method for certain multivariate ill-posed problems that imposes constraints of much lower information content (i.e., much lower bias), based on the operator's dependence on many data-derived parameters. The marked performance improvements this makes possible are illustrated with solution estimates for a Lyapunov equation structured by an ill-conditioned matrix. The methodology can be understood in terms of a Minimax Entropy Principle, which emerges from the Maximum Entropy Principle in some multivariate settings.
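For context, the single-parameter baseline the abstract contrasts against can be made concrete. The sketch below is not the paper's method: it assumes a standard Tikhonov regularization of the vectorized Lyapunov equation A X + X Aᵀ = Q, with the single parameter alpha chosen by Morozov's discrepancy principle. The test matrix, noise level, and discrepancy tolerance are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Assumed test case: a stable, ill-conditioned (graded-eigenvalue) matrix A.
A = -np.diag(np.logspace(0, -6, n)) + 0.1 * np.triu(rng.standard_normal((n, n)), 1)

X_true = rng.standard_normal((n, n))
X_true = X_true + X_true.T            # symmetric "true" solution
Q = A @ X_true + X_true @ A.T         # exact right-hand side of A X + X A^T = Q

# Vectorize the Lyapunov operator: vec(A X + X A^T) = (I (x) A + A (x) I) vec(X).
I = np.eye(n)
L = np.kron(I, A) + np.kron(A, I)
b = Q.ravel()

# Assumed noise level delta; perturb the data by a vector of norm exactly delta.
delta = 1e-6 * np.linalg.norm(b)
noise = rng.standard_normal(b.size)
b_noisy = b + delta * noise / np.linalg.norm(noise)

def tikhonov(alpha):
    """Single-parameter regularized estimate: (L^T L + alpha I)^{-1} L^T b."""
    return np.linalg.solve(L.T @ L + alpha * np.eye(n * n), L.T @ b_noisy)

# Morozov discrepancy principle: largest alpha whose residual matches the noise.
for alpha in np.logspace(2, -14, 60):
    x_alpha = tikhonov(alpha)
    if np.linalg.norm(L @ x_alpha - b_noisy) <= 1.5 * delta:
        break

X_est = x_alpha.reshape(n, n)
print("alpha =", alpha,
      "relative error =", np.linalg.norm(X_est - X_true) / np.linalg.norm(X_true))
```

Because every entry of the estimate is governed by this one data-derived parameter, the imposed prior constraint has fixed information content; the abstract's multi-parameter approach relaxes exactly this restriction.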
