Abstract
Activation functions play a vital role in deep neural networks (DNNs). In each neuron, the activation function generates the output signal from the given input signal; it is therefore one of the factors that influence DNN performance. This paper proposes a novel activation unit, the Reciprocal Activation Unit (RAU). Most popular activation functions give more importance to positive signals, whereas the proposed method handles negative and positive inputs equally. RAU was tested on both multiclass and binary classification tasks, using the Iris flower and Wisconsin Breast Cancer datasets. On the Breast Cancer dataset, RAU achieves 99.25% and 97.08% accuracy on the training and test sets, respectively. On the Iris dataset, RAU achieves 99.05% and 97.78% accuracy on the training and test sets. The same datasets were also analysed with existing activation functions: Sigmoid, RMAF, Swish, Tanh, and ReLU. The results show that RAU outperforms these activation functions.
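The baseline activation functions the abstract compares against have standard, well-known definitions; a minimal NumPy sketch of those baselines is given below for reference. RAU itself is defined in the body of the paper and is deliberately not reproduced here, so this sketch covers only the comparison functions (the `beta` parameter of Swish is an assumption; the original Swish paper uses a fixed or learnable scaling).

```python
import numpy as np

def sigmoid(x):
    # Logistic function: squashes inputs into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Hyperbolic tangent: squashes inputs into (-1, 1)
    return np.tanh(x)

def relu(x):
    # Rectified Linear Unit: zeroes out negative inputs
    return np.maximum(0.0, x)

def swish(x, beta=1.0):
    # Swish: x * sigmoid(beta * x); beta=1.0 assumed here
    return x * sigmoid(beta * x)
```

Note that ReLU discards negative inputs entirely, which illustrates the asymmetry between positive and negative signals that the proposed RAU is designed to avoid.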
Published in: International Journal on Recent and Innovation Trends in Computing and Communication