Abstract

This paper presents a complex-valued version of the back-propagation algorithm (called 'Complex-BP'), which can be applied to multi-layered neural networks whose weights, thresholds, and input and output signals are all complex numbers. Some inherent properties of this new algorithm are studied, and the results may be summarized as follows. The updating rule of the Complex-BP reduces the probability of a "standstill in learning". Its average convergence speed is superior to that of the real-valued back-propagation, while the generalization performance remains unchanged. In addition, the number of weights and thresholds needed is only about half that of the real-valued back-propagation, where a complex-valued parameter z = x + iy (with i = √−1) is counted as two because it consists of a real part x and an imaginary part y. The Complex-BP can learn transformations of geometric figures, e.g. rotation, similarity transformation, and parallel displacement of straight lines, circles, etc., whereas the real-valued back-propagation cannot. Mathematical analysis indicates that a Complex-BP network which has learned a transformation has the ability to generalize that transformation, with an error that is represented by the sine. It is noteworthy that these characteristics emerge simply by extending neural networks to complex numbers.
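To make the setting concrete, the sketch below shows a forward pass through a single complex-valued neuron in Python/NumPy. The abstract does not specify the activation function, so the component-wise sigmoid used here (the real sigmoid applied separately to the real and imaginary parts of the weighted sum) is an assumption for illustration, one common choice in Complex-BP formulations. The example also hints at why such networks can learn geometric transformations: multiplication by a single complex weight is itself a rotation plus a scaling.

    import numpy as np

    def sigmoid(x):
        # Real-valued logistic sigmoid.
        return 1.0 / (1.0 + np.exp(-x))

    def complex_neuron(z, w, theta):
        # z: complex input vector; w: complex weight vector;
        # theta: complex threshold. All parameters are complex numbers,
        # so each counts as two real parameters (real + imaginary part).
        u = np.dot(w, z) + theta
        # Assumed activation: real sigmoid applied component-wise to
        # the real and imaginary parts of the complex weighted sum.
        return sigmoid(u.real) + 1j * sigmoid(u.imag)

    # A single complex weight acts as rotation + scaling:
    # here, scale by 0.5 and rotate by 45 degrees.
    w = np.array([0.5 * np.exp(1j * np.pi / 4)])
    z = np.array([1.0 + 0.0j])
    print(complex_neuron(z, w, 0.0 + 0.0j))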
