Abstract

In this paper, we present a framework for constructing a general class of recurrent neural networks (RNNs) as associative memories (AMs) for pattern storage and retrieval. Unlike traditional AM models, which treat all elements of a pattern equally, the proposed framework introduces the visual saliency of a target pattern into the AM design process by encoding saliency values into an ellipsoidal basis function (EBF) kernel that computes a weighted distance between the input and target patterns. Network potential fields (NPFs) are then constructed as linear combinations of EBF and radial basis function (RBF) kernels for auto-associative memories (AAMs) and hetero-associative memories (HAMs), respectively. Sparse, dynamic synapses for both the proposed AAMs and HAMs are determined efficiently from the gradients of the NPFs, without the continuity assumptions on the RNN's activation function usually required by traditional AMs. With the proposed method, the target patterns become fixed-point attractors of the network, and the AMs are proven via Lyapunov analysis to converge to one of these attractors. The resulting AAMs and HAMs are shown, through comparative experiments on retrieval and association of 2D color images, to be highly tolerant and robust to a variety of input noise and corruption.
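To make the construction concrete, below is a minimal Python sketch under simplifying assumptions of our own: Gaussian EBF kernels, a potential field defined as the negated sum of kernels over stored patterns, and plain gradient descent standing in for the paper's sparse, dynamic RNN synapses. The helper names (ebf_kernel, npf_gradient) and all parameter values are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def ebf_kernel(x, target, saliency, width):
    """Ellipsoidal basis function: a Gaussian of the saliency-weighted
    squared distance between the input x and a stored target pattern.
    With uniform saliency this reduces to an ordinary RBF kernel."""
    d = x - target
    return np.exp(-np.sum(saliency * d**2) / (2.0 * width**2))

def npf_gradient(x, targets, saliencies, width):
    """Gradient of a network potential field E(x) = -sum_i EBF_i(x);
    descending it drives the state toward the nearest stored pattern,
    which sits at a local minimum (fixed-point attractor) of E."""
    grad = np.zeros_like(x)
    for t, s in zip(targets, saliencies):
        grad += ebf_kernel(x, t, s, width) * s * (x - t) / width**2
    return grad

# Retrieval demo: a noisy probe relaxes back to the stored pattern.
rng = np.random.default_rng(0)
targets = [rng.random(8) for _ in range(3)]      # three stored patterns
saliencies = [np.ones(8) for _ in targets]       # uniform saliency for the demo
x = targets[0] + 0.1 * rng.standard_normal(8)    # corrupted probe
for _ in range(200):
    x = x - 0.05 * npf_gradient(x, targets, saliencies, width=0.3)
print(np.allclose(x, targets[0], atol=1e-2))     # True: pattern recovered
```

Raising the saliency of selected pattern elements narrows the ellipsoid along those axes, so mismatches in salient regions are penalized more heavily during retrieval; the gradient-descent loop here plays the role of the network dynamics in this simplified setting.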
