Abstract
Arbib and Zeiger's generalization of Ho's algorithm for system identification is presented from an alternative viewpoint called “state characterization”. This is an essentially nonlinear, nonparametric method for designing experiments which will directly determine a state space representation for an unknown black box. The method assumes (1) a finite dimensional state space is possible, (2) the unknown black box can be reset to its initial state at will, so that an arbitrary set of experiments can be performed, and (3) there is no plant noise, so that applying given inputs from the initial state always yields the same state and experiments are therefore repeatable. In the initial, informal discussion the properties of this method of system identification are compared to the properties of the method of parameter estimation via loss function minimization, e.g. least-squares error or maximum likelihood, which is the only general approach to system identification presently available. Loss function minimization will utilize any data, but it is computationally difficult and essentially parametric. State characterization eliminates the computational requirement at the expense of requiring specific data, and it is essentially nonparametric. It is proposed that state characterization may have practical application in determining an approximate, low order description of a complex system about which we have little prior information, for instance in the social sciences and medicine. The formal presentation is limited to finite state automata and discrete time linear systems. The situation is considered in which the order of the unknown black box—the minimum size of state space required to represent it—is not known a priori, so it is necessary to continue experimenting indefinitely, using the results to obtain successively better descriptions of the unknown black box. Detailed algorithms are presented for obtaining either a Moore model or a Mealy model description.
Branching rules for using past data to choose subsequent experiments in order to hasten convergence are presented. Known theorems are used to prove that after some finite time these algorithms will yield a correct description, and new theorems show that the subsequent representations will be invariant after a correct description is obtained—although there is no way for the experimenter to know when he has obtained a correct description.
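The experimental setting the abstract describes—a resettable, noise-free black box probed by input sequences chosen by the experimenter—can be illustrated with a minimal sketch. The code below is not the paper's algorithm; it is an illustrative toy, assuming the black box is a small Moore machine, in which access strings are grouped into states by whether any distinguishing suffix separates their responses. All names (`experiment`, `characterize`, the particular machine) are hypothetical.

```python
# Toy illustration of state characterization on a resettable, noise-free
# black box, here assumed to be a small hidden Moore machine.
# The experimenter only resets the box and applies input strings,
# observing output sequences; from those experiments alone it counts
# the distinct states reached by a set of access strings.

# Hidden Moore machine (unknown to the experimenter): states 0..2,
# inputs 'a'/'b', output 0 or 1 attached to each state.
TRANS = {(0, 'a'): 1, (0, 'b'): 0,
         (1, 'a'): 2, (1, 'b'): 0,
         (2, 'a'): 2, (2, 'b'): 1}
OUT = {0: 0, 1: 1, 2: 0}

def experiment(inputs):
    """Reset the box, apply `inputs`, return the observed output sequence."""
    state = 0                       # assumption (2): reset to initial state
    outputs = [OUT[state]]          # Moore model: output is a function of state
    for sym in inputs:
        state = TRANS[(state, sym)]
        outputs.append(OUT[state])
    return tuple(outputs)           # assumption (3): repeatable, no noise

def characterize(access_strings, suffixes):
    """Group access strings by the outputs their reached states give
    to each distinguishing suffix.  Access strings that agree on every
    suffix are (so far) indistinguishable: one state per group."""
    rows = {}
    for a in access_strings:
        # Keep only the outputs produced after the access string, so the
        # signature depends solely on the state the access string reaches.
        signature = tuple(experiment(a + s)[len(a):] for s in suffixes)
        rows.setdefault(signature, []).append(a)
    return rows

rows = characterize(['', 'a', 'aa', 'b', 'ab'], ['', 'a', 'b', 'aa'])
print(len(rows))   # number of distinct states discovered
```

Running longer access strings and suffixes refines the partition, mirroring the abstract's point that successive descriptions improve and eventually become invariant, even though the experimenter cannot know when the description is correct.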