Abstract

Deep learning is a cutting-edge technology that functions similarly to the human nervous system. Neural networks are at the heart of deep learning. A neural network is made up of numerous layers: an input layer that accepts raw data, hidden layers that process that data, and a final output layer that provides the result. Its workflow is comparable to machine learning (Basak et al., Eur. Phys. J. Spec. Top. 230:2221–2251, 2021; Makhija et al., "Separating stars from quasars: Machine learning investigation using photometric data," Astron. Comput. 29:100313, 2019) [2, 11], allowing us to gain hands-on expertise with the technology, speed up our work, and make several attempts without having to develop a basic machine learning algorithm from scratch. In deep learning there are several neural networks to choose from; the majority of deep learning architectures are built on neural networks such as CNNs, RNNs, and others. The development of activation functions for deep neural networks is often guided by set goals and gradual steps toward tackling specific challenges. The primary goal of this study is to examine the performance of innovative activation functions (SBAF parabola (Saha et al., "DiffAct: A Unifying Framework for Activation Functions," International Joint Conference on Neural Networks (IJCNN), 2021, pp. 1–8; Saha et al., "A new activation function for artificial neural net based habitability classification," 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), 2018, pp. 1781–1786) [6, 16], AReLU (Mediratta et al., "LipAReLU: AReLU Networks aided by Lipschitz Acceleration," 2021 International Joint Conference on Neural Networks (IJCNN), 2021, pp. 1–8) [7], Leaky ReLU, and SWISH) on deep learning architectures such as CNN and DenseNet. Our study compares the classification performance of these activation functions on deep learning architectures. It allows the user to run DL architectures on selected computer vision datasets with selected activation functions, displays classification accuracies and other performance parameters, reports validation curves to confirm system performance, and runs the code for multiple loss functions.

Keywords: Convolutional neural network (CNN); DenseNet; SBAF parabola; AReLU; Leaky ReLU; SWISH
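
For reference, the two fixed-form activation functions named above, Leaky ReLU and SWISH, follow standard closed-form definitions; the minimal NumPy sketch below implements them under commonly used default parameters (a negative slope of 0.01 for Leaky ReLU and beta = 1 for SWISH). SBAF parabola and AReLU introduce additional trainable parameters and are specified in the cited papers [6, 7, 16], so they are not reproduced here; the function names and sample inputs are illustrative only.

    import numpy as np

    def leaky_relu(x, alpha=0.01):
        # Leaky ReLU: identity for positive inputs, a small slope
        # alpha for negative inputs instead of zeroing them out.
        return np.where(x > 0, x, alpha * x)

    def swish(x, beta=1.0):
        # SWISH: x * sigmoid(beta * x); smooth and non-monotonic.
        return x / (1.0 + np.exp(-beta * x))

    # Illustrative sample inputs
    x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print(leaky_relu(x))  # [-0.02  -0.005  0.     0.5    2.   ]
    print(swish(x))       # approx [-0.238 -0.189 0. 0.311 1.762]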
