Abstract

In this paper, we propose a structural developmental neural network to address the plasticity-stability dilemma, computational inefficiency, and lack of prior knowledge in continual unsupervised learning. The model uses competitive learning rules and dynamic neurons with information saturation to achieve parameter adjustment and adaptive structure development. A dynamic neuron updates its information saturation after winning a competition and uses this parameter to modulate both its parameter adjustment and the timing of its division. By dividing to generate new neurons, the network remains sensitive to novel features and can also subdivide repeatedly learned classes. The dynamic neurons with information saturation, together with the division mechanism, simulate the long- and short-term memory of the human brain, enabling the network to continually learn new samples while retaining previous learning results. The parent-child relationships between neurons that arise from neuronal division allow the network to mimic the human cognitive process of gradually refining the perception of objects. By setting the clustering-layer parameter, users can choose the desired degree of class subdivision. Experimental results on artificial and real-world datasets demonstrate that the proposed model is feasible for unsupervised learning tasks in instance-increment and class-increment scenarios and outperforms prior structural developmental neural networks.
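The mechanism described above (competitive winning, saturation-modulated updates, and saturation-triggered division) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual model: the class names, the `1/(1+saturation)` learning-rate rule, and the division threshold are all assumptions introduced here for illustration.

```python
import numpy as np

# Illustrative sketch of competitive learning with saturation-gated division.
# The update rule and threshold below are assumptions, not the paper's equations.

class DynamicNeuron:
    def __init__(self, weights, parent=None):
        self.w = np.asarray(weights, dtype=float)
        self.saturation = 0.0   # grows each time this neuron wins
        self.parent = parent    # records the parent-child lineage from division

class SDNN:
    def __init__(self, dim, div_threshold=5.0):
        self.neurons = [DynamicNeuron(np.random.rand(dim))]
        self.div_threshold = div_threshold

    def step(self, x):
        x = np.asarray(x, dtype=float)
        # Competition: the nearest neuron wins the input.
        winner = min(self.neurons, key=lambda n: np.linalg.norm(n.w - x))
        # Saturation modulates the update: a saturated neuron moves less,
        # preserving what it has already learned (stability).
        lr = 1.0 / (1.0 + winner.saturation)
        winner.w += lr * (x - winner.w)
        winner.saturation += 1.0
        # Division: once saturated, the winner spawns a child near the input,
        # keeping the network sensitive to novel features (plasticity).
        if winner.saturation >= self.div_threshold:
            self.neurons.append(DynamicNeuron(x.copy(), parent=winner))
            winner.saturation = 0.0
        return winner

net = SDNN(dim=2)
for _ in range(20):
    net.step(np.random.rand(2))
```

After a few inputs the single initial neuron saturates and divides, and the resulting parent-child links form the refinement hierarchy the abstract alludes to.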
