Abstract

The optimization of the output matrix for a discrete-time, single-output, linear stochastic system is approached from two different points of view. First, we investigate the problem of minimizing the steady-state filter error variance with respect to a time-invariant output matrix subject to a norm constraint. Second, we propose a filter algorithm in which the output matrix at time k is chosen so as to maximize the difference at time k+1 between the variance of the prediction error and that of the a posteriori error. For this filter, boundedness of the covariance and asymptotic stability are investigated. Several numerical experiments are reported: they give information about the limiting behavior of the sequence of output matrices generated by the algorithm and of the corresponding error covariance, and they also enable a comparison with the results obtained by solving the first problem.
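To make the second approach concrete, the sketch below illustrates the general idea of re-selecting the output vector at every step of a single-output Kalman filter so as to maximize the drop from the prediction-error covariance to the a posteriori covariance (measured here by the trace), under a unit-norm constraint. The system matrices A and Q, the measurement-noise variance r, the use of the trace as the scalar error measure, and the grid search over a discretized unit circle are all illustrative assumptions; this is a minimal sketch of the selection principle described above, not the paper's algorithm.

```python
import numpy as np

# Assumed linear-Gaussian model: x_{k+1} = A x_k + w_k, y_k = c_k^T x_k + v_k,
# with Var(w_k) = Q and Var(v_k) = r. All numerical values are placeholders.
A = np.array([[0.9, 0.2],
              [0.0, 0.7]])          # assumed state transition matrix
Q = 0.1 * np.eye(2)                 # assumed process-noise covariance
r = 0.5                             # assumed measurement-noise variance

# Candidate output vectors on the unit circle (norm constraint ||c_k|| = 1).
candidates = [np.array([np.cos(t), np.sin(t)])
              for t in np.linspace(0.0, np.pi, 180, endpoint=False)]


def variance_reduction(c, P_pred):
    # Reduction in trace of the error covariance achieved by measuring
    # along c: trace(P_pred) - trace(P_post) = ||P_pred c||^2 / (c^T P_pred c + r).
    Pc = P_pred @ c
    return float(Pc @ Pc) / float(c @ P_pred @ c + r)


P = np.eye(2)                       # initial error covariance
for k in range(50):
    # Prediction step: propagate the error covariance.
    P_pred = A @ P @ A.T + Q

    # Greedy choice of the output vector at time k.
    c = max(candidates, key=lambda c: variance_reduction(c, P_pred))

    # Measurement update with the selected output vector (scalar output).
    K = P_pred @ c / (c @ P_pred @ c + r)      # Kalman gain (n-vector)
    P = P_pred - np.outer(K, c) @ P_pred       # a posteriori covariance

    print(k, c, np.trace(P))                   # track c_k and the error variance
```

Printing the selected c_k and the trace of the a posteriori covariance at each step gives a rough picture, in this toy setting, of the kind of limiting behavior of the output-matrix sequence and of the error covariance that the numerical experiments in the paper examine.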
