Abstract

Activation functions play a crucial role in enabling neural networks to model complex tasks by introducing non-linearity. Selecting an appropriate activation function becomes even more important in deeper networks, where the objective is to learn more intricate patterns. Among deep learning architectures, Convolutional Neural Networks (CNNs) stand out for their exceptional ability to learn complex visual patterns. In practice, ReLU is commonly employed in the convolutional layers of CNNs, yet other activation functions such as Swish can achieve superior training performance while maintaining good test accuracy across different datasets. This paper presents an optimally refined strategy for deep learning-based image classification that combines CNNs with advanced activation functions and an adjustable layer configuration. A thorough analysis supports the effectiveness of various activation functions when coupled with the softmax loss, rendering them suitable for a stable training process. Results on the CIFAR-10 dataset demonstrate the favorability and stability of the adopted strategy throughout training.
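
The ReLU-versus-Swish comparison raised in the abstract can be illustrated with a minimal sketch; the standard definitions are ReLU(x) = max(0, x) and Swish(x) = x · sigmoid(βx) (with β = 1, Swish coincides with SiLU). The function names below are illustrative, not from the paper itself.

```python
import math

def relu(x: float) -> float:
    """ReLU: max(0, x), the common default in CNN convolutional layers."""
    return max(0.0, x)

def swish(x: float, beta: float = 1.0) -> float:
    """Swish: x * sigmoid(beta * x); with beta = 1 also known as SiLU."""
    return x / (1.0 + math.exp(-beta * x))

# Unlike ReLU, Swish is smooth and non-monotonic: small negative
# inputs yield small negative outputs instead of an exact zero,
# which keeps gradients flowing where ReLU would be flat.
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  relu={relu(x):+.4f}  swish={swish(x):+.4f}")
```

This smoothness around zero is one commonly cited reason Swish can train more stably than ReLU in deeper networks, consistent with the training behavior the paper reports.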
