Abstract

We propose a biologically motivated, brain-inspired single neuron perceptron (SNP) with universal approximation and XOR computation properties. This computational model extends the input pattern and is based on excitatory and inhibitory learning rules inspired by neural connections in the human brain's nervous system. The resulting SNP architecture can be trained by supervised excitatory and inhibitory online learning rules. The main features of the proposed single-layer perceptron are its universal approximation property and low computational complexity. The method is tested on six UCI (University of California, Irvine) pattern recognition and classification datasets. Comparisons with a multilayer perceptron (MLP) trained by the gradient descent backpropagation (GDBP) learning algorithm indicate the superiority of the approach in terms of higher accuracy, lower time and space complexity, and faster training. Hence, we believe the proposed approach is generally applicable to problems such as pattern recognition and classification.

Highlights

  • In various computer applications such as pattern recognition, classification, and prediction, a learning module can be implemented by several approaches, including statistical, structural, and neural ones

  • We prove that a single neuron perceptron (SNP) can solve the XOR problem and can be a universal approximator

  • These features can be achieved by extending the input pattern and by using a max operator, as illustrated in the sketch below
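The paper's full SNP construction is not reproduced on this page; the following is a minimal Python sketch of the idea behind this highlight, under the assumption that the extension appends max(x1, x2) to each binary input pattern. The weights w and bias b are hand-picked for illustration, not taken from the paper. With the extra feature, the four XOR patterns become linearly separable, so a single threshold neuron can compute XOR:

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    targets = np.array([0, 1, 1, 0])  # XOR truth table

    # Extended pattern: [x1, x2, max(x1, x2)]
    X_ext = np.column_stack([X, X.max(axis=1)])

    # Hand-picked illustrative parameters for the single neuron
    w = np.array([-1.0, -1.0, 2.0])  # weights for x1, x2, max(x1, x2)
    b = -0.5                         # bias

    # Linear threshold unit on the extended pattern
    outputs = (X_ext @ w + b >= 0).astype(int)
    print(outputs)                   # [0 1 1 0] -- matches XOR
    assert np.array_equal(outputs, targets)

For binary inputs, max(x1, x2) coincides with logical OR, which supplies exactly the nonlinear feature a single linear-threshold neuron on (x1, x2) alone lacks.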



Introduction

In various computer applications such as pattern recognition, classification, and prediction, a learning module can be implemented by various approaches, including statistical, structural, and neural approaches. Among these methods, artificial neural networks (ANNs) are inspired by the physiological workings of the brain. In the multilayer perceptron (MLP) architecture, increasing the number of neurons in the input layer, the hidden layer(s), and/or the output layer significantly increases the number of learning parameters and the computational complexity of the learning algorithm. This problem is usually referred to as the curse of dimensionality [3, 4].
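To make this growth concrete, here is a back-of-the-envelope sketch (our illustration, not from the paper): a fully connected MLP with one hidden layer has (n_in + 1) * n_hidden + (n_hidden + 1) * n_out trainable parameters, counting one bias per non-input neuron, so widening any single layer scales the parameter count multiplicatively.

    def mlp_param_count(n_in, n_hidden, n_out):
        """Trainable parameters of a one-hidden-layer fully connected MLP
        (weights plus one bias per hidden and output neuron)."""
        return (n_in + 1) * n_hidden + (n_hidden + 1) * n_out

    # Doubling the hidden layer roughly doubles the parameter count.
    print(mlp_param_count(100, 50, 10))   # 5560
    print(mlp_param_count(100, 100, 10))  # 11110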

