Abstract

Operational Neural Networks (ONNs) have recently been proposed to address the well-known limitations and drawbacks of conventional Convolutional Neural Networks (CNNs), such as network homogeneity relying on the sole linear neuron model. ONNs are heterogeneous networks with a generalized neuron model. However, the operator search method in ONNs is not only computationally demanding, but the network heterogeneity is also limited, since the same set of operators is then used for all neurons in each layer. Moreover, the performance of ONNs depends directly on the operator set library used, which introduces a risk of performance degradation, especially when the operator set required for a particular task is missing from the library. To address these issues and achieve the ultimate level of heterogeneity, boosting network diversity along with computational efficiency, in this study we propose Self-organized ONNs (Self-ONNs) with generative neurons that can adapt (optimize) the nodal operator of each connection during training. This ability also obviates the need for a fixed operator set library and for any prior operator search within the library to find the best possible set of operators. We further formulate the training method to back-propagate the error through the operational layers of Self-ONNs. Experimental results over four challenging problems demonstrate the superior learning capability and computational efficiency of Self-ONNs over conventional ONNs and CNNs.

Highlights

  • Multi-Layer Perceptrons (MLPs) and their derivatives, Convolutional Neural Networks (CNNs), have a common drawback: they employ a homogeneous network structure with an identical “linear” neuron model

  • The comparative evaluations are performed with the same experimental setup and over the same challenging problems as in [37]: 1) Image Synthesis, 2) Denoising, 3) Face Segmentation, and 4) Image Transformation, under the same training constraints: i) Low Resolution: 60x60 pixels; ii) Compact/Shallow Models: Inx16x32xOut for CNNs and ONNs, and Inx6x10xOut for Self-Organized Operational Neural Networks (Self-ONNs); iii) Scarce Train Data: 10% of the dataset; iv) Multiple Regressions per network; v) Shallow Training: 240 iterations

  • We have used a Self-ONN configuration, Inx6x10xOut, with Q = 7 in all layers. In this way, all networks have approximately the same number of network parameters (a rough count is sketched after this list). Note that this equivalence results in Self-ONNs having one-third the number of hidden neurons of the CNNs and ONNs, i.e., 16 vs. 48
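
The parameter equivalence noted in the last highlight can be sanity-checked with a rough count. The sketch below assumes 3x3 kernels, single-channel input and output, and no biases (none of these are stated in this excerpt), and treats a Q-th order operational layer as storing Q kernels per connection; it is an illustrative estimate, not the paper's exact accounting.

```python
# Rough parameter-count comparison for the configurations quoted above.
# Assumptions (not stated in this excerpt): 3x3 kernels, single-channel
# input and output, biases ignored. A Self-ONN operational layer with
# Q-th order generative neurons stores Q kernels per connection, so its
# weight count is roughly Q times that of a same-shaped convolutional layer.

def conv_params(channels, k=3):
    """Weights of a plain CNN/ONN pipeline given per-layer channel counts."""
    return sum(c_in * c_out * k * k for c_in, c_out in zip(channels, channels[1:]))

def selfonn_params(channels, q, k=3):
    """Weights of a Self-ONN pipeline: Q kernels per connection."""
    return q * conv_params(channels, k)

cnn = conv_params([1, 16, 32, 1])             # In x 16 x 32 x Out
selfonn = selfonn_params([1, 6, 10, 1], q=7)  # In x 6 x 10 x Out, Q = 7

print(f"CNN/ONN (Inx16x32xOut):      ~{cnn} weights")      # ~5040
print(f"Self-ONN (Inx6x10xOut, Q=7): ~{selfonn} weights")  # ~4788
```

Under these assumptions the two pipelines land within a few percent of each other (~5,040 vs. ~4,788 weights), which is consistent with the "approximately the same number of network parameters" statement, while the Self-ONN indeed uses 16 hidden neurons against 48.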


Summary

INTRODUCTION

Multi-Layer Perceptrons (MLPs) and their derivatives, Convolutional Neural Networks (CNNs), have a common drawback: they employ a homogeneous network structure with an identical “linear” neuron model. Self-ONNs, as the name implies, have the ability to self-organize the network operators during training. They neither need any operator set library in advance, nor require any prior search process to find the optimal nodal operator. It is true that the kernel (weight) parameters change the nodal operator output, e.g., for a “Sinusoid” nodal operator of a particular neuron, the kernel parameters are distinct frequencies. This allows the creation of “any” harmonic function; however, the final nodal operator function after training cannot take any other pattern or form besides a pure sinusoid, even though a “composite” operator, e.g., a linear combination of harmonic, hyperbolic and polynomial terms, or an arbitrary nodal operator function, would perhaps be a better choice for this neuron than pure sinusoids.
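
To make the notion of a learnable, per-connection nodal operator concrete, the sketch below models it as a Q-term power series whose coefficients are trained like ordinary kernel weights, so a single neuron can end up with a composite operator rather than a fixed one. This is a minimal reading of the idea, not the paper's reference implementation: the layer name GenerativeConv2d, the use of PyTorch, the tanh activation and the summation pool are illustrative assumptions.

```python
# Minimal sketch of a generative-neuron layer, assuming the nodal operator
# of each connection is a learnable Q-term power series applied to the
# input before the usual summation (pool) step. PyTorch, the class name,
# and the tanh activation are illustrative choices, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GenerativeConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, q=7):
        super().__init__()
        self.q = q
        # One kernel bank per power term: the q-th bank weights x**q,
        # so each connection learns its own composite nodal function.
        self.weight = nn.Parameter(
            torch.randn(q, out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.pad = kernel_size // 2

    def forward(self, x):
        # Nodal operator: sum_q W_q * x**q (learned per connection);
        # pool operator: summation, carried out by conv2d itself.
        out = self.bias.view(1, -1, 1, 1)
        for q in range(1, self.q + 1):
            out = out + F.conv2d(x.pow(q), self.weight[q - 1], padding=self.pad)
        return torch.tanh(out)  # bounded activation keeps the x**q terms stable

# Usage: a compact Inx6x10xOut Self-ONN-style stack, as in the highlights.
net = nn.Sequential(GenerativeConv2d(1, 6), GenerativeConv2d(6, 10),
                    GenerativeConv2d(10, 1))
print(net(torch.rand(1, 1, 60, 60)).shape)  # torch.Size([1, 1, 60, 60])
```

Because every connection carries its own set of Q coefficient kernels, back-propagation can shape a different nodal function for each connection, which is the per-connection heterogeneity the abstract refers to.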

OPERATIONAL NEURAL NETWORKS
SELF-ORGANIZED OPERATIONAL NEURAL NETWORKS
Generative Neurons
Forward Propagation in Self-ONNs
Discussions
EXPERIMENTAL RESULTS
Learning Performance Evaluations
Computational Complexity Analysis
CONCLUSIONS