Abstract

In this paper, enhanced gradient descent learning algorithms for complex-valued feedforward neural networks are proposed. The best-known such enhanced algorithms for real-valued neural networks are Quickprop, resilient backpropagation (RPROP), delta-bar-delta, and SuperSAB, so it is natural to extend these learning methods to complex-valued neural networks as well. The complex-valued variants of these four algorithms are presented and then evaluated on several function approximation problems, as well as on channel equalization and time series prediction applications. Experimental results show a significant improvement in training and testing error over the classical gradient descent and gradient descent with momentum algorithms.
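For illustration, below is a minimal sketch of how one of the named methods, RPROP, can be extended to complex weights by adapting per-component step sizes for the real and imaginary parts independently. This is an assumption-based sketch, not the authors' formulation: it uses NumPy, omits RPROP's weight-backtracking refinement, and the function and parameter names are hypothetical.

    # Sketch of a complex-valued RPROP step (simplified, no backtracking).
    # Each complex weight is treated as two real parameters whose step
    # sizes adapt independently, following the usual RPROP sign rule.
    import numpy as np

    def complex_rprop_step(w, g, g_prev, step, eta_plus=1.2, eta_minus=0.5,
                           step_min=1e-6, step_max=50.0):
        """One RPROP update on a complex weight array.

        w, g, g_prev : complex arrays (weights, current and previous gradients)
        step         : real array of per-component step sizes, shape w.shape + (2,)
        """
        # Split complex quantities into stacked real/imaginary components.
        g_ri  = np.stack([g.real, g.imag], axis=-1)
        gp_ri = np.stack([g_prev.real, g_prev.imag], axis=-1)

        sign_change = g_ri * gp_ri
        # Grow the step where the gradient sign persisted, shrink where it flipped.
        step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)

        # Sign-based update applied separately to the real and imaginary parts.
        delta = -np.sign(g_ri) * step
        w = w + delta[..., 0] + 1j * delta[..., 1]
        return w, step

The caller would keep the previous gradient and the step array between iterations; the other three methods (Quickprop, delta-bar-delta, SuperSAB) could be extended to the complex case in an analogous component-wise fashion.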
