Abstract

We study the optimal learning capacity of neural networks with Q-state clock neurons, i.e., the states are complex numbers with magnitude 1 and azimuthal angles n·2π/Q, with n = 0, 1, ..., Q−1. Performing a phase-space analysis, the learning capacity αc for a given stability κ can be expressed as a double integral with a simple geometrical interpretation, which for vanishing κ reduces to αc(Q) = 4Q/(3Q−4) for Q ≥ 3. We then define a training algorithm that generalizes the well-known AdaTron algorithm from Q = 2 to Q ≥ 3 and converges very quickly to the network with optimal stability if the number p of random patterns to be learned is smaller than αc(Q). Finally, in the conclusions we also give hints on applications to image recognition, and in a "note added in proof" we generalize some results to Potts model networks.
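
As a quick illustration of the two objects named in the abstract, here is a minimal Python sketch (not from the paper; the helper names clock_states and alpha_c are ours) that enumerates the Q clock-neuron states and evaluates the zero-stability capacity formula αc(Q) = 4Q/(3Q−4) for a few values of Q:

```python
import numpy as np

def clock_states(Q):
    """The Q clock-neuron states: unit-magnitude complex numbers
    with azimuthal angles n*2*pi/Q, for n = 0, 1, ..., Q-1."""
    n = np.arange(Q)
    return np.exp(2j * np.pi * n / Q)

def alpha_c(Q):
    """Optimal learning capacity at vanishing stability (kappa = 0):
    alpha_c(Q) = 4Q / (3Q - 4), valid for Q >= 3."""
    if Q < 3:
        raise ValueError("the formula holds only for Q >= 3")
    return 4 * Q / (3 * Q - 4)

if __name__ == "__main__":
    for Q in (3, 4, 8, 16):
        print(f"Q = {Q:2d}  alpha_c = {alpha_c(Q):.4f}")
        print(f"  states: {np.round(clock_states(Q), 3)}")
```

Note that αc(3) = 2.4 and that the formula decreases monotonically toward 4/3 as Q grows; for Q = 2 the clock states reduce to ±1 and the classical perceptron result applies instead.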
