Abstract

In designing an optimal control system, if the required a priori information is unknown or only incompletely known, one possible approach is to design a controller that estimates the unknown information during operation and determines the optimal control action on the basis of the estimated information. If the estimated information gradually approaches the true information as time proceeds, the controller approaches the optimal controller and the performance of the control system is gradually improved. Because this improvement in performance results from the improvement of the estimated unknown information, this class of control systems has been called learning control systems. Design techniques proposed for learning control systems include: (1) trainable controllers using pattern classifiers, (2) reinforcement learning algorithms, (3) Bayesian estimation, (4) stochastic approximation, and (5) stochastic automata models. A survey of these techniques can be found in [1]. A general formulation using stochastic approximation has been treated extensively in [2, 3]. Practical applications include spacecraft control systems, the control of valve actuators, power systems, and production processes. In addition, several nonlinear learning algorithms have recently been proposed.
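To make the estimate-then-control idea concrete, the sketch below shows a minimal Python example of a controller for a scalar plant with an unknown input gain: the gain is estimated on line with a stochastic-approximation (decreasing-gain) update, and at each step the control is computed as if the current estimate were exact, so closed-loop behaviour improves as the estimate improves. The plant model, numerical values, and the particular update rule are illustrative assumptions for this sketch, not a scheme taken from [1], [2], or [3].

```python
import numpy as np

# Illustrative sketch only: scalar plant x[k+1] = a*x[k] + b*u[k] + noise,
# with the gain b unknown to the controller. The controller maintains an
# estimate b_hat, updated by a stochastic-approximation rule with gain 1/k,
# and applies the certainty-equivalence control u = -a*x / b_hat, which
# would drive the state to zero in one step if b_hat equaled the true b.

rng = np.random.default_rng(0)

a, b_true = 0.9, 2.0      # plant parameters (b_true is hidden from the controller)
b_hat = 0.5               # initial guess of the unknown gain (assumed b > 0)
x = 5.0                   # initial state

for k in range(1, 201):
    # Control action based on the current estimate (certainty equivalence).
    u = -a * x / b_hat

    # Plant response with additive noise.
    x_next = a * x + b_true * u + 0.1 * rng.standard_normal()

    # Prediction error of the model that uses the current estimate b_hat.
    error = x_next - (a * x + b_hat * u)

    # Stochastic-approximation update: decreasing gain 1/k, normalized by
    # (1 + u**2) to keep individual corrections bounded.
    b_hat += (1.0 / k) * error * u / (1.0 + u * u)

    # Keep the estimate away from zero (assumes the sign of b is known).
    b_hat = max(b_hat, 0.1)

    x = x_next

print(f"estimated gain b_hat = {b_hat:.3f} (true value {b_true})")
```

In this sketch the estimate moves toward the true gain while the state is regulated toward zero, illustrating how performance improves as the estimated information improves; the normalization and the lower bound on the estimate are practical safeguards added for the example, not requirements of the general formulation.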
