Abstract

In this paper, we first analyze the distribution of local fields (DLF) induced by the memory patterns in the Q-Ising model. It is found that the structure of the DLF is closely correlated with the network dynamics and the system performance. However, the design rule adopted in the Q-Ising model, like the other rules adopted for multi-state neural networks with associative memories, cannot be applied to directly control the DLF for a given set of memory patterns, and thus cannot be used to further study the relationships between the structure of the DLF and the dynamics of the network. We then extend a design rule, which was presented recently for designing binary-state neural networks, to make it suitable for designing general multi-state neural networks. This rule is able to control the structure of the DLF as expected. We show that controlling the DLF not only affects the dynamic behaviors of the multi-state neural network for a given set of memory patterns, but also improves the storage capacity. As the DLF changes, the network shows very rich dynamic behaviors, such as the ‘chaos phase’, the ‘memory phase’, and the ‘mixture phase’. These dynamic behaviors are also observed in binary-state neural networks; our results therefore imply that they may be universal behaviors of feedback neural networks.
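The abstract does not define the DLF concretely, but a minimal sketch can illustrate the idea. The snippet below assumes a simple Q-Ising setup (Q equidistant spin values in [-1, 1], Hebbian couplings built from random patterns) rather than the paper's exact design rule; the histogram of the local fields computed here is one example of a "distribution of local fields".

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not necessarily the paper's exact setup):
# Q equidistant spin values in [-1, 1] and a Hebbian coupling matrix.
N, P, Q = 500, 20, 4                 # neurons, stored patterns, spin states
levels = np.linspace(-1.0, 1.0, Q)   # allowed spin values

# Random memory patterns, each component drawn from the Q levels
xi = rng.choice(levels, size=(P, N))

# Hebbian couplings J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling
J = (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)

# Local fields h_i = sum_j J_ij * s_j when the network sits in the first
# stored pattern; the empirical distribution of these h_i is the DLF
h = J @ xi[0]
print("mean field: %.3f, field std: %.3f" % (h.mean(), h.std()))
```

For associative recall, the sign and magnitude of each h_i relative to the stored spin xi[0][i] determines whether the pattern is dynamically stable, which is why the shape of this distribution is so closely tied to the network dynamics described above.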
