Abstract

This paper develops the Delta-rule for training min-max neural networks by building a differentiation theory for min-max functions, i.e., functions containing min (∧) and/or max (∨) operations. We first prove that, under certain conditions, all min-max functions are continuously differentiable almost everywhere in the real number field R, and we derive explicit formulas for their derivatives. These results form the basis of the Delta-rule for training min-max neural networks. The convergence of the new Delta-rule is proved theoretically using stochastic theory and demonstrated with a simulation example.
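The key idea can be sketched in code. The example below is an illustrative assumption, not the paper's exact algorithm: a single min-max unit y = min_j max_i (w[j, i] + x[i]) and a Delta-rule-style update. At points where the argmin and argmax are unique, the unit is differentiable, and the almost-everywhere gradient flows only through the single active weight.

```python
import numpy as np

# Hypothetical min-max unit: y = min over j of max over i of (w[j, i] + x[i]).
# Almost everywhere the selecting indices (j*, i*) are unique, so the
# derivative with respect to w is 1 at (j*, i*) and 0 elsewhere.

def forward(w, x):
    s = w + x                      # s[j, i] = w[j, i] + x[i]
    i_star = np.argmax(s, axis=1)  # inner max over i, for each row j
    maxes = s[np.arange(len(w)), i_star]
    j_star = np.argmin(maxes)      # outer min over j
    return maxes[j_star], (j_star, i_star[j_star])

def delta_rule_step(w, x, target, lr=0.1):
    """One Delta-rule step: move only the active weight toward the target."""
    y, (j, i) = forward(w, x)
    err = target - y
    w = w.copy()
    w[j, i] += lr * err            # a.e. gradient: only w[j*, i*] is updated
    return w, err

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 4))
x = rng.normal(size=4)
for _ in range(200):
    w, err = delta_rule_step(w, x, target=1.0)
```

Repeated updates push each momentarily-minimal row up toward the target, so the output converges even though the active index changes between steps; this mirrors the almost-everywhere differentiability argument that justifies the Delta-rule here.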
