Abstract

We present a computational model for the analogical mapping of compositional structures that combines two existing ideas: holistic mapping vectors and sparse distributed memory. The model enables integration of structural and semantic constraints when learning mappings of the type \(x_i \rightarrow y_i\) and computing analogies \(x_j \rightarrow y_j\) for novel inputs \(x_j\). The model has a one-shot learning process, is randomly initialized, and has three exogenous parameters: the dimensionality \(\mathcal{D}\) of representations, the memory size \(S\), and the probability \(\chi\) of memory activation. After learning three examples, the model generalizes correctly to novel examples. We find minima in the probability of generalization error for certain values of \(\chi\), \(S\), and the number of different mapping examples learned. These results indicate that the optimal size of the memory scales with the number of different mapping examples learned and that the sparseness of the memory is important. The optimal dimensionality of binary representations is of the order \(10^4\), which is consistent with a known analytical estimate and with the synapse count of most cortical neurons. We demonstrate that the model can learn analogical mappings of generic two-place relationships, and we calculate the error probabilities for recall and generalization.
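The two ingredients named above can be illustrated with a short sketch. The Python code below is an illustrative reconstruction, not the published implementation: it forms a holistic mapping vector by elementwise XOR binding of binary vectors, \(M = x \oplus y\), and stores it in a sparse distributed memory addressed by \(x\). The parameter values, the function names, and the activation rule (selecting the \(\lceil\chi S\rceil\) nearest hard locations by Hamming distance, used here in place of a fixed-radius threshold) are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 10_000   # dimensionality of binary representations (order 10^4)
S = 100      # memory size: number of SDM hard locations (assumed value)
CHI = 0.05   # fraction of locations activated per access (probability chi)

# Random initialization: fixed binary addresses, zeroed integer counters.
addresses = rng.integers(0, 2, size=(S, D), dtype=np.int8)
counters = np.zeros((S, D), dtype=np.int32)

def activate(x):
    """Indices of the ~chi*S locations nearest to x in Hamming distance."""
    dist = np.count_nonzero(addresses != x, axis=1)
    k = max(1, int(CHI * S))
    return np.argsort(dist)[:k]

def learn(x, y):
    """One-shot storage of the holistic mapping vector x XOR y, addressed by x."""
    mapping = x ^ y                                             # XOR binding
    counters[activate(x)] += 2 * mapping.astype(np.int32) - 1   # store as +/-1

def recall(x):
    """Retrieve the superimposed mapping for x and unbind it to obtain y."""
    total = counters[activate(x)].sum(axis=0)
    mapping = (total > 0).astype(np.int8)    # majority vote over counters
    return mapping ^ x                       # XOR unbinding: y = M XOR x

# Usage: learn one mapping example, then recall it.
x1, y1 = rng.integers(0, 2, size=(2, D), dtype=np.int8)
learn(x1, y1)
assert np.count_nonzero(recall(x1) != y1) == 0   # exact recall, one example
```

XOR unbinding is exact for binary spatter codes, since \((x \oplus y) \oplus x = y\), and the majority vote over location counters lets several stored mapping vectors superimpose in the same memory, consistent with the superposition behaviour the abstract's generalization results rest on; the structured compositional encodings needed to reproduce those generalization experiments are beyond this sketch.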
