Abstract

Convolutional Neural Networks (CNNs), a subclass of Artificial Neural Networks, are among the principal tools for image processing. The activation function is a key component of a CNN: it limits the amplitude of a neuron's output and introduces the non-linearity that allows the model to compute complex functions. Different activation functions can be used with a CNN depending on the application, but an effective choice yields better results and improves model performance. This work evaluates ten of the most widely used activation functions in terms of accuracy and training time: Sigmoid, Hyperbolic Tangent (Tanh), Exponential Linear Unit (ELU), Rectified Linear Unit (ReLU), Scaled Exponential Linear Unit (SELU), linear, hard sigmoid, softsign, Parametric Rectified Linear Unit (PReLU), and Leaky Rectified Linear Unit (LReLU). The effects of network architecture, dataset, and processor on training are also analyzed.
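The kind of comparison the abstract describes can be sketched in Keras/TensorFlow, where all ten activations are available by name or as layers. The sketch below is illustrative only, not the paper's actual setup: the MNIST dataset, the small CNN architecture, the optimizer, and the epoch count are assumptions chosen to show how each activation can be swapped in while accuracy and training time are recorded.

```python
# Minimal sketch (not the paper's setup): compare the ten activation
# functions from the abstract on a small, assumed Keras CNN.
import time
import tensorflow as tf

# String names map to built-in Keras activations; PReLU and LeakyReLU
# carry slope parameters, so they are supplied as layer factories.
ACTIVATIONS = {
    "sigmoid": "sigmoid", "tanh": "tanh", "elu": "elu", "relu": "relu",
    "selu": "selu", "linear": "linear", "hard_sigmoid": "hard_sigmoid",
    "softsign": "softsign",
    "prelu": lambda: tf.keras.layers.PReLU(),
    "leaky_relu": lambda: tf.keras.layers.LeakyReLU(),
}

def activation_layer(act):
    """Return a Keras layer applying the requested activation."""
    return act() if callable(act) else tf.keras.layers.Activation(act)

def build_cnn(act):
    """Small CNN whose hidden non-linearities all use the chosen activation."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3), activation_layer(act),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3), activation_layer(act),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128), activation_layer(act),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

# MNIST stands in here for whatever datasets the paper actually used.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

for name, act in ACTIVATIONS.items():
    model = build_cnn(act)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    start = time.perf_counter()
    model.fit(x_train, y_train, epochs=3, batch_size=128, verbose=0)
    elapsed = time.perf_counter() - start
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{name:>12}: accuracy={acc:.4f}  training_time={elapsed:.1f}s")
```

PReLU and LReLU are handled as layers rather than name strings because, unlike the other eight functions, they have a negative-slope parameter (learnable for PReLU, fixed for LReLU); timing with `time.perf_counter` around `fit` captures total training time on whatever processor the script runs on, mirroring the processor comparison the abstract mentions.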
