Abstract

Learning in bidirectional associative memory (BAM) is typically Hebbian-based. Since Kosko's seminal paper [B. Kosko, 'Bidirectional Associative Memories', IEEE Transactions on Systems, Man, and Cybernetics, 18 (1988), 49–60], many improvements to BAM have been proposed. However, none of these modifications allows BAM to perform complex associative tasks that combine many-to-one with one-to-many associations. Although BAMs are often deemed more biologically plausible, if they cannot solve such mappings they will have difficulty establishing themselves as good models of cognition. This article presents a BAM that can perform complex associations using only covariance matrices. It is shown that a single network can be trained to learn both the 2-bit and 3-bit parity problems, while its extension into a generalised BAM with a hidden layer allows the model to learn even more complex associations with perfect performance. The conditions that yield optimal learning performance within both network frameworks are explored. Results show that, unlike other associative memory network models, the proposed model learns nonlinearly separable tasks perfectly while maintaining the same learning and output functions.
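For readers unfamiliar with the baseline the article builds on, the following is a minimal sketch of a classical Kosko-style BAM: the weight matrix is a sum of outer products of bipolar pattern pairs, and recall iterates between the two layers until a stable pair is reached. This illustrates only the standard Hebbian framework, not the article's covariance-based extension or its hidden-layer generalisation; the function names and the example patterns are illustrative choices, not from the article.

```python
import numpy as np

def train_bam(pairs):
    """Hebbian learning: sum of outer products of bipolar (x, y) pairs."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n, m))
    for x, y in pairs:
        W += np.outer(x, y)
    return W

def bipolar_sign(v, fallback):
    """Threshold to +/-1; keep the previous state where the net input is 0."""
    return np.where(v > 0, 1, np.where(v < 0, -1, fallback))

def recall(W, x, steps=10):
    """Bidirectional recall: iterate x -> y -> x until stable (or steps run out)."""
    y = np.ones(W.shape[1], dtype=int)
    for _ in range(steps):
        y = bipolar_sign(x @ W, y)
        x = bipolar_sign(y @ W.T, x)
    return x, y

# Two orthogonal bipolar pairs are stored and then recalled exactly.
pairs = [
    (np.array([1, -1, 1, -1]), np.array([1, 1, -1])),
    (np.array([1, 1, -1, -1]), np.array([-1, 1, 1])),
]
W = train_bam(pairs)
xr, yr = recall(W, np.array([1, -1, 1, -1]))
# xr -> [1, -1, 1, -1], yr -> [1, 1, -1]
```

Because this learning rule stores patterns as linear superpositions, it fails on nonlinearly separable mappings such as parity, which is precisely the limitation the article's covariance-matrix model addresses.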
