Abstract

Inspired by the dynamic dendritic architecture of biological neurons, we recently proposed the approximate logic neuron model (ALNM). ALNM consists of four layers, namely the synaptic layer, the dendritic layer, the membrane layer, and the cell body. Through a neural pruning function, the model can discard useless synapses and unnecessary dendritic branches after training. In other words, it forms a unique, simplified dendritic structure for each particular classification task. Furthermore, the simplified dendritic structure can be completely replaced by logic circuits, which enables ALNM to be implemented in hardware. However, although ALNM performs satisfactorily on classification problems, it still suffers from drawbacks caused by its learning algorithm, the batch gradient descent (BGD) algorithm. Using all of the training data in every iteration is time-consuming and unsuitable for large-scale problems. In addition, BGD cannot adaptively adjust the learning rate during training, so it converges slowly in the neighborhood of saddle points and oscillates in steep regions of the gradient space. To address these issues, we propose a novel stochastic adaptive gradient descent (SAGD) algorithm, which uses stochastic gradient information and adaptively adjusts the learning rate, to improve the classification performance of ALNM. In our experiments, ALNM trained with the new algorithm is evaluated on three benchmark classification datasets, and the results demonstrate that it significantly outperforms the original model in terms of accuracy and convergence rate.
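
The exact SAGD update rule is defined in the full paper; the minimal Python sketch below only illustrates the general idea described above, i.e. combining mini-batch (stochastic) gradients with a per-parameter adaptive learning rate, using an Adam-style update as a stand-in. The helper names sagd_step, minibatches, and gradient_of_loss are hypothetical placeholders, not the paper's API.

import numpy as np

def sagd_step(w, grad, state, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One adaptive update on a stochastic (mini-batch) gradient.

    Adam-style illustration, not the exact SAGD rule: per-parameter step
    sizes shrink where gradients are steep (damping oscillation) and stay
    larger where they are flat (speeding escape from saddle regions).
    """
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps) # per-parameter adaptive step
    return w, (m, v, t)

# Usage sketch: iterate over shuffled mini-batches instead of the full
# training set, so each step stays cheap on large datasets.
# state = (np.zeros_like(w), np.zeros_like(w), 0)
# for x_batch, y_batch in minibatches(X_train, y_train, batch_size=32):
#     grad = gradient_of_loss(w, x_batch, y_batch)  # hypothetical helper
#     w, state = sagd_step(w, grad, state)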
