Abstract

This paper derives new exact and approximate algorithms for the computation of modeling and bias errors in linear minimum error variance estimation. The primary difference between the exact algorithms and those previously presented is their form. A result concerning orthogonal projections for suboptimal estimation leads to "delta" error analysis algorithms for the difference between the true error variance and the optimum system error variance. These algorithms often simplify computational problems considerably compared to previously obtained algorithms and adapt easily to sensitivity analyses that remain accurate regardless of the magnitude of the parameter variations. With plant or measurement matrix errors, divergence of certain system states can occur. Using only the first two terms of a Taylor series as an approximation of the error variance does not show this effect and can therefore be in error by orders of magnitude. An alternate approximation, circumventing this problem, is presented. It makes use of a "conditional bias" concept, which views the primary error in systems with small dynamic or observation matrix variations as a bias conditioned on the observation. Examples illustrate the divergence problem and the use of the exact and approximate error analysis algorithms.
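The divergence effect described above can be illustrated numerically. The following Python sketch is a hypothetical scalar example, not taken from the paper; the plant and noise values (a_true, a_model, h, q, r) are assumptions chosen only for illustration. A steady-state filter is designed from a slightly erroneous plant coefficient, and the exact second moments of the true state and the estimation error are then propagated. Because the plant-matrix error couples the (unstable) true state into the estimation error, the true error variance grows without bound even though the filter's own design error variance remains small.

```python
import numpy as np

# Hypothetical scalar example (all values are assumptions, not from the paper):
# the true plant is slightly unstable, but the filter is designed from a
# stable model of it, i.e. there is a plant (dynamics) matrix error.
a_true, a_model = 1.02, 0.98     # true vs. assumed dynamics coefficient
h, q, r = 1.0, 0.01, 1.0         # measurement coefficient, process / measurement noise

# Steady-state gain and error variance of the filter designed with the model.
M = q
for _ in range(500):                          # iterate the design Riccati recursion
    K = M * h / (h * h * M + r)               # filter gain
    P = (1.0 - K * h) * M                     # design (a posteriori) error variance
    M = a_model * a_model * P + q             # predicted variance for the next step
K_design, P_design = K, P

# Exact true error variance of the mismatched filter.  With the wrong plant
# coefficient, the predicted estimation error obeys
#   e_pred = a_model * e + (a_true - a_model) * x + w,
# so its variance depends on E[x^2], which grows without bound when |a_true| > 1.
# Propagate the joint second moments of (x, e) exactly.
S = np.array([[1.0, 0.0],
              [0.0, P_design]])               # initial E[[x, e][x, e]^T]
F = np.array([[a_true, 0.0],
              [a_true - a_model, a_model]])   # prediction of (x, e)
W = q * np.array([[1.0, 1.0],
                  [1.0, 1.0]])                # same process noise w drives x and e
G = np.array([[1.0, 0.0],
              [0.0, 1.0 - K_design * h]])     # measurement update acts on e only

print(f"design error variance (filter's own value): {P_design:.4f}")
for k in range(1, 201):
    S = F @ S @ F.T + W                       # prediction step
    S = G @ S @ G.T                           # update step (deterministic part)
    S[1, 1] += K_design**2 * r                # measurement-noise contribution to e
    if k % 50 == 0:
        print(f"step {k:3d}: true error variance = {S[1, 1]:8.3f}")
```

In this sketch the true error variance eventually exceeds the design value by orders of magnitude, which is the behavior a truncated Taylor-series approximation of the error variance fails to capture.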
