Abstract

The hippocampus is required to store memories that can be flexibly reconfigured. A hippocampal-like computational model should therefore be able to solve transitive inference (TI) problems. By recasting TI as a sequence-learning problem (stimuli-decision-outcome), a sequence-learning, hippocampal-like neural network solves TI. In the transitive inference problem studied here, a network simulation first learns six pairwise relationships: A>B, B>C, C>D, D>E, E>F, and F>G, where the underlying relationship is the linear string A>B>C>D>E>F>G. The simulation is then tested on the novel pairs B?D, C?E, D?F, B?E, C?F, B?F, and A?G. The simulations reproduce the symbolic distance effect found in animal and human experiments: decodings are stronger for B>F than for B>E or C>F, and decodings for B>E and C>F are stronger than for B>D, C>E, or D>F.
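The training and test structure described above can be sketched in a few lines of Python. This is purely illustrative (not the paper's simulation code): it lists the six premise pairs and seven novel test pairs, and computes the symbolic distance between each test pair's items in the underlying linear order, the quantity that the symbolic distance effect predicts performance should increase with.

```python
# Illustrative sketch of the TI task structure, assuming the linear
# order A>B>C>D>E>F>G given in the abstract.

ORDER = "ABCDEFG"  # underlying linear string, A highest

# Six trained premise pairs (adjacent items in the order)
PREMISE_PAIRS = [("A", "B"), ("B", "C"), ("C", "D"),
                 ("D", "E"), ("E", "F"), ("F", "G")]

# Seven novel pairs presented only at test
TEST_PAIRS = [("B", "D"), ("C", "E"), ("D", "F"),
              ("B", "E"), ("C", "F"), ("B", "F"), ("A", "G")]

def symbolic_distance(x, y):
    """Number of steps separating two items in the linear order."""
    return abs(ORDER.index(x) - ORDER.index(y))

# The symbolic distance effect: larger distance, stronger decoding.
for x, y in TEST_PAIRS:
    print(f"{x}?{y}: symbolic distance {symbolic_distance(x, y)}")
```

Grouping the test pairs by this distance recovers the ordering reported in the abstract: B?F (distance 4) above B?E and C?F (distance 3), which in turn sit above B?D, C?E, and D?F (distance 2).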
