Abstract

Determining the optimal activation function in artificial neural networks is an important issue because it directly affects the achieved success rates. Unfortunately, there is no way to determine it analytically; the optimal activation function is generally found by trial and error or by tuning. This paper presents a simpler and more effective approach to determining the optimal activation function. In this approach, which can be called a trained activation function, an activation function is trained for each particular neuron by linear regression. This training is performed on a training dataset consisting of the sums of the inputs of each neuron in the hidden layer and the desired outputs. In this way, a different activation function is generated for each neuron in the hidden layer. The approach was employed in a random weight artificial neural network (RWN) and validated on 50 benchmark datasets. The success rates achieved by an RWN using trained activation functions were higher than those achieved by an RWN using traditional activation functions. The results show that the proposed approach is a successful, simple, and effective way to determine the optimal activation function, instead of trial and error or tuning, in both randomized single-layer and multilayer ANNs.
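The idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a toy regression dataset, a polynomial basis for each trained activation function (the abstract does not specify the regression basis), and a standard least-squares readout for the RWN output layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical; the paper validates on 50 benchmarks).
X = rng.uniform(-1, 1, size=(200, 4))
y = np.sin(X.sum(axis=1))

n_hidden = 20
# Random, untrained input weights and biases, as in a random weight network.
W = rng.standard_normal((4, n_hidden))
b = rng.standard_normal(n_hidden)

# Sums of inputs (pre-activations) of each hidden neuron.
S = X @ W + b

# Train one activation function per neuron by linear regression:
# fit an assumed cubic polynomial of the neuron's pre-activation
# to the desired outputs.
degree = 3
H = np.empty_like(S)
for j in range(n_hidden):
    # Design matrix [1, s, s^2, s^3] for neuron j's pre-activations.
    Phi = np.vander(S[:, j], degree + 1, increasing=True)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    H[:, j] = Phi @ c  # trained activation applied to neuron j

# Output weights solved by least squares, the usual RWN readout.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = H @ beta
mse = float(np.mean((pred - y) ** 2))
```

Because each per-neuron regression includes a constant term, every trained activation fits the targets at least as well as predicting the mean, so the combined readout error is bounded by the single best neuron's error.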
