Abstract

Traditional Artificial Neural Networks (ANNs) such as Multi-Layer Perceptrons (MLPs) and Radial Basis Function (RBF) networks were designed to simulate biological neural networks; however, they are only loosely based on biology and provide only a crude model. This in turn yields well-known limitations in performance and robustness. In this paper we address these limitations by introducing a novel feed-forward ANN model, Generalized Operational Perceptrons (GOPs), whose neurons are equipped with distinct linear or non-linear operators, yielding a more general model of the biological neuron and ultimately a superior diversity. We modify the conventional back-propagation (BP) algorithm to train GOPs and, furthermore, propose Progressive Operational Perceptrons (POPs) to achieve self-organized, depth-adaptive GOPs tailored to the learning problem. The most crucial property of POPs is their ability to simultaneously search for the optimal operator set and train each layer individually. The final POP is therefore formed layer by layer, and this ability enables POPs of minimal network depth to attack the most challenging learning problems, including those that cannot be learned by conventional ANNs even with deeper and significantly more complex configurations.
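To make the idea concrete, the sketch below shows one way a generalized neuron could replace the perceptron's fixed multiply-sum-activate pipeline with selectable element-wise ("nodal"), pooling, and activation operators. The operator names and sets here are purely illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of a generalized operational neuron. The specific
# operator dictionaries below are hypothetical examples; the paper's
# actual operator library may differ.
import numpy as np

NODAL = {
    "mul": lambda w, x: w * x,                 # classical perceptron term
    "exp": lambda w, x: np.exp(w * x) - 1.0,
    "sin": lambda w, x: np.sin(w * x),
}
POOL = {
    "sum":    lambda z: z.sum(),               # classical summation
    "max":    lambda z: z.max(),
    "median": lambda z: np.median(z),
}
ACT = {
    "tanh":   np.tanh,
    "lincut": lambda y: np.clip(y, -1.0, 1.0),
}

def gop_neuron(x, w, b, nodal="mul", pool="sum", act="tanh"):
    """Output of one generalized neuron for input vector x."""
    z = NODAL[nodal](w, x)      # element-wise (non-)linear term per input
    y = POOL[pool](z) + b       # aggregate the terms, add bias
    return ACT[act](y)          # squash the aggregated result

rng = np.random.default_rng(0)
x, w = rng.normal(size=8), rng.normal(size=8)
print(gop_neuron(x, w, 0.1))                    # reduces to tanh(w.x + b)
print(gop_neuron(x, w, 0.1, "sin", "median"))   # one GOP variant
```

With the classical choices (multiplication, summation, tanh) the neuron reduces to an ordinary MLP neuron; swapping in other operators per neuron or per layer provides the diversity the abstract describes, and a progressive scheme such as POPs would search over these operator choices while training each layer.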
