Abstract

Fundamental developments in feedforward artificial neural networks from the past 30 years are reviewed. The central theme of this article is a description of the history, origination, operating characteristics, and basic theory of several supervised neural network training algorithms, including the Perceptron rule, the LMS algorithm, three Madaline rules, and the backpropagation technique. These methods were developed independently, but with the perspective of history they can all be related to one another. The concept that underlies these algorithms is the "minimal disturbance principle," which suggests that during training it is advisable to inject new information into a network in a manner that disturbs stored information as little as possible. In present-day rule-based expert systems, decision rules must always be known for the application of interest. Sometimes, however, there are no such rules: they are either not explicit or they simply do not exist. For such applications, trainable expert systems might be usable. Rather than working with decision rules, an adaptive expert system might observe the decisions made by a human expert. Looking over the expert's shoulder, an adaptive system can learn to make decisions similar to those of the human. Trainable expert systems have been used in the laboratory for real-time control of a "broom-balancing system." © 1993 John Wiley & Sons, Inc.
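As a concrete illustration of the LMS (Widrow-Hoff) rule named above, the following minimal sketch trains a single linear combiner. The learning rate, epoch count, and toy target are illustrative choices, not taken from the article; the weight update itself is the standard LMS rule, whose error-proportional correction embodies the minimal disturbance principle.

```python
def lms_train(samples, mu=0.1, epochs=200):
    """Train a linear combiner y = w . x with the LMS (Widrow-Hoff) rule.

    For each pattern, the weight change is proportional to the error
    times the input: the smallest correction that reduces the current
    error, disturbing responses stored for other patterns as little
    as possible (the "minimal disturbance principle").
    """
    n = len(samples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, d in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))  # combiner output
            e = d - y                                 # error for this pattern
            for i in range(n):
                w[i] += mu * e * x[i]                 # LMS weight update
    return w

# Toy problem (hypothetical data): learn d = 2*x0 - 1*x1.
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((1.0, 1.0), 1.0)]
w = lms_train(data)
```

Because an exact linear solution exists for this toy data and the learning rate is well inside the stable range, the weights converge close to (2, -1).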
