Abstract

In deep neural networks (DNNs), activation functions play a vital role. In each neuron, the activation function is responsible for generating the output signal from the given input signals; hence, the activation function is one of the factors that influence the performance of a DNN. A novel activation unit, the Reciprocal Activation Unit (RAU), is proposed in this paper. Most popular activation functions give more importance to positive signals, but the proposed method handles negative and positive inputs equally. The proposed RAU is tested on both multiclass and binary classification datasets; the Iris flower and Wisconsin Breast Cancer datasets are used for the analysis. On the Breast Cancer dataset, RAU achieves 99.25% and 97.08% accuracy on the training and test sets, respectively. On the Iris dataset, RAU achieves 99.05% and 97.78% accuracy on the training and test sets. The same datasets are also analysed with the existing activation functions Sigmoid, RMAF, Swish, Tanh, and ReLU. The results show that RAU performs better than the other activation functions.
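To make the role of the activation function concrete, the following is a minimal sketch of a single neuron applying an activation to a weighted sum of its inputs, together with the standard baseline activations the paper compares against (Sigmoid, Tanh, ReLU, and Swish). The abstract does not give RAU's formula, so only the well-known baselines are shown; the `neuron` helper is an illustrative construction, not code from the paper.

```python
import math

# Standard baseline activations used for comparison in the paper.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    return math.tanh(x)

def relu(x):
    # ReLU zeroes all negative inputs, illustrating the asymmetric
    # treatment of negative signals that RAU is designed to avoid.
    return max(0.0, x)

def swish(x):
    # Swish: x * sigmoid(x)
    return x * sigmoid(x)

# A single neuron: weighted sum of inputs plus bias, passed through
# a chosen activation function to produce the output signal.
def neuron(inputs, weights, bias, activation):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)
```

Note how `relu` discards negative pre-activations entirely, while `tanh` and `swish` pass (attenuated) negative values through; this asymmetry in handling negative inputs is the behaviour the proposed RAU aims to address.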
