Abstract

This paper addresses several issues in a parameter estimation approach that yields estimates consistent with both the data and the given a priori information. The first part of the paper examines the relationships between various noise models and the ‘size’ of the resulting membership set, i.e., the set of parameter estimates consistent with the data and the a priori information. When there is some flexibility in the choice of the noise model, this analysis can guide noise model selection so that the resulting membership set yields a better estimate of the unknown parameter. The second part of the paper presents algorithms for various commonly encountered noise models with the following properties: (a) they are recursive and easy to implement; and (b) after a finite ‘learning period’, the estimates they provide are guaranteed to be in (or very ‘close’ to) the membership set. In general, interpolatory algorithms, which produce an estimate in the membership set, do not possess statistical and worst-case properties comparable to those of classical approaches such as the least mean squares (LMS) and least squares (LS) algorithms. In the third part of the paper, we propose an algorithm that is optimal in a certain worst-case sense yet gives an estimate that is in (or ‘close’ to) the membership set.
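The abstract leaves the membership set implicit. Under the standard unknown-but-bounded noise model (an assumption here, since the abstract does not fix a particular model), observations y_t = phi_t' theta + v_t with |v_t| <= eps give the membership set S_N = {theta : |y_t - phi_t' theta| <= eps, t = 1, ..., N}, the polytope of all parameters consistent with the data. The sketch below is a minimal illustration of an interpolatory estimate, not the paper's algorithm: it finds a point of S_N by cyclic dead-zone projections onto these hyperslabs. All function names and the synthetic-data setup are hypothetical.

```python
import numpy as np

def in_membership_set(theta, Phi, y, eps, tol=1e-9):
    """True if theta satisfies |y_t - phi_t' theta| <= eps for every t
    (tol allows floating-point slack)."""
    return bool(np.all(np.abs(y - Phi @ theta) <= eps + tol))

def interpolatory_estimate(Phi, y, eps, max_sweeps=1000, tol=1e-9):
    """Cyclic dead-zone projection: whenever the current estimate violates
    a data constraint, project it onto the nearest point of the hyperslab
    {theta : |y_t - phi_t' theta| <= eps}.  If the membership set is
    nonempty, the iterates converge into it."""
    theta = np.zeros(Phi.shape[1])
    for _ in range(max_sweeps):
        done = True
        for phi, yt in zip(Phi, y):
            e = yt - phi @ theta            # prediction error for this sample
            den = phi @ phi
            if den > 0 and abs(e) > eps + tol:
                # step just large enough to bring |e| down to eps
                theta += np.sign(e) * (abs(e) - eps) / den * phi
                done = False
        if done:                            # every constraint satisfied
            break
    return theta

# Synthetic example: bounded noise |v_t| <= eps around a true parameter.
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -0.5])
Phi = rng.standard_normal((200, 2))
eps = 0.1
y = Phi @ theta_true + rng.uniform(-eps, eps, size=200)
theta_hat = interpolatory_estimate(Phi, y, eps)
print(in_membership_set(theta_hat, Phi, y, eps))   # expect: True
```

The dead-zone step moves the estimate only when a constraint is violated, and only far enough to satisfy it, which loosely mirrors the ‘learning period’ behaviour described above: once every constraint is met, the estimate stays fixed. The paper's algorithms are recursive (one pass over the data), whereas this sketch sweeps the data repeatedly for simplicity.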
