Abstract

Parameter estimation in structured models is generally considered a difficult problem. For example, the prediction error method (PEM) typically gives a non-convex optimization problem, while it is difficult to incorporate structural information in subspace identification. In this contribution, we revisit the idea of iteratively using the weighted least-squares method to cope with the problem of non-convex optimization. The method consists, essentially, of three steps. First, a high-order least-squares estimate is computed. Next, this model is reduced to a structured estimate using the least-squares method. Finally, the structured estimate is re-estimated, using weighted least-squares, with weights obtained from the first structured estimate. This methodology has a long history and has been applied to a range of signal processing problems. In particular, it forms the basis of iterative quadratic maximum likelihood (IQML) and the Steiglitz-McBride method. Our contributions are as follows. First, for output-error models, we provide statistically optimal weights. We conjecture that the method is asymptotically efficient under mild assumptions and support this claim by simulations. Second, we point to a wide range of structured estimation problems where this technique can be applied. Finally, we relate this type of technique to classical prediction error and subspace methods by showing that it can be interpreted as a link between the two, sharing favorable properties with both domains.
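
Below is a minimal numerical sketch, not the authors' code, of the three-step procedure for an output-error model y(t) = B(q)/F(q) u(t) + e(t), intended only to illustrate the structure described above. The model orders, the high-order FIR order n, the simulated system, and the particular weighting matrix (the inverse covariance of the model-reduction residual, built from the first structured estimate) are illustrative assumptions, not necessarily the exact choices of the paper.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# True output-error system (illustrative choice)
b_true = [0.0, 0.5, 0.2]      # B(q) = 0.5 q^-1 + 0.2 q^-2
f_true = [1.0, -0.8, 0.3]     # F(q) = 1 - 0.8 q^-1 + 0.3 q^-2
N = 5000
u = rng.standard_normal(N)
y = lfilter(b_true, f_true, u) + 0.1 * rng.standard_normal(N)

m = 2    # structured (low) model order
n = 50   # order of the high-order FIR model

# Step 1: high-order least-squares (FIR) estimate of the impulse response.
Phi = np.column_stack(
    [np.concatenate([np.zeros(k), u[:N - k]]) for k in range(1, n + 1)])
g_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
P = np.linalg.inv(Phi.T @ Phi)   # covariance of g_hat, up to the noise variance

# The relation B(q) = F(q)G(q) is linear in theta = [f_1..f_m, b_1..b_m]:
#   g_k = b_k (for k <= m) - sum_j f_j g_{k-j}.
A = np.zeros((n, 2 * m))
for k in range(1, n + 1):
    for j in range(1, min(m, k - 1) + 1):
        A[k - 1, j - 1] = -g_hat[k - j - 1]
    if k <= m:
        A[k - 1, m + k - 1] = 1.0

# Step 2: unweighted least-squares model reduction to a structured estimate.
theta1, *_ = np.linalg.lstsq(A, g_hat, rcond=None)

# Step 3: re-estimate by weighted least squares, with weights obtained from
# the first structured estimate (inverse covariance of the step-2 residual,
# which is the high-order estimation error filtered through F).
f1 = np.concatenate([[1.0], theta1[:m]])
Tf = toeplitz(np.concatenate([f1, np.zeros(n - m - 1)]), np.zeros(n))
W = np.linalg.inv(Tf @ P @ Tf.T)
theta2 = np.linalg.solve(A.T @ W @ A, A.T @ W @ g_hat)

print("f estimate:", theta2[:m], "  b estimate:", theta2[m:])
```

The weighted step can in principle be repeated, rebuilding Tf and W from the latest structured estimate, which mirrors the iterative use of weighted least squares referred to above.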
