Abstract

By simulating the neural mechanism of cognition, NMLI (Neural Model with Lateral Interaction) was recently proposed for learning tasks. It supports both supervised and unsupervised learning and achieves outstanding results on few-shot learning. Following Hebbian rules and a one-to-one correspondence between elementary neurons and samples, the number of inter-level connections in NMLI equals the number of training samples. This forming mechanism, however, incurs an enormous computational cost when the training set is large. Inspired by the sparse connectivity of neurons in the human brain, as well as related cognitive phenomena, we propose a method for support neuron selection in NMLI. We first evaluate the validity of the inter-level connections. Based on this evaluation, only a small fraction of the elementary neurons are selected as support neurons, corresponding to the typical and special samples in the training set. By considering only the support neurons, NMLI then forms sparse inter-level connections. In this way, both the computational efficiency and the biological plausibility of the model are significantly improved. In addition, the proposed method adjusts neuronal connections according to the prediction error, implying that the back-propagation (BP) mechanism can be realized by synaptic plasticity. Experiments show that the proposed method reduces the test time of NMLI while maintaining accuracy, and it shows potential for improving the efficiency of other neural models.
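The selection-and-sparsification idea in the abstract can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the paper's actual algorithm: the names `select_support_neurons` and `sparsify_connections` and the validity score (sum of absolute connection weights) are illustrative assumptions, since the abstract does not specify the validity measure.

```python
import numpy as np

def select_support_neurons(validity, k):
    """Pick the k elementary neurons whose inter-level connections
    score highest under some validity measure (hypothetical criterion)."""
    order = np.argsort(validity)[::-1]      # indices, most valid first
    return np.sort(order[:k])

def sparsify_connections(weights, support_idx):
    """Zero out inter-level connections of non-support neurons,
    yielding the sparse connection pattern described in the abstract."""
    mask = np.zeros(weights.shape[0], dtype=bool)
    mask[support_idx] = True
    return weights * mask[:, None]

# Toy setting: 6 elementary neurons (one per training sample), 3 outputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(6, 3))
validity = np.abs(W).sum(axis=1)            # stand-in validity score
support = select_support_neurons(validity, k=2)
W_sparse = sparsify_connections(W, support)
```

After sparsification, only the rows belonging to support neurons remain nonzero, so test-time computation scales with the number of support neurons rather than with the full training-set size.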
