Abstract

The most commonly used activation functions in the field of artificial neural networks belong to the class of sigmoidal functions, which are bounded, continuous, differentiable, and monotonically increasing. However, universal approximation results for feed-forward artificial neural networks allow the activation function to be any arbitrary non-polynomial function. In this paper, the properties of a bounded, continuous, differentiable, and non-monotone function are described. The efficacy and efficiency of using this non-monotone function as an activation function are demonstrated on five benchmark learning tasks, on which it is compared with four commonly used activation functions. Results demonstrate that networks using the non-monotone activation function at the hidden layer nodes outperform the networks using the four sigmoidal activation functions.
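The abstract does not specify the particular non-monotone function studied in the paper. As a minimal sketch of the stated properties, a hypothetical function such as f(x) = x·exp(−x²) is bounded, continuous, differentiable, and non-monotone (it rises to a maximum near x ≈ 0.707 and then decays), in contrast to the monotonically increasing logistic sigmoid:

```python
import numpy as np

def nonmonotone_activation(x):
    """Hypothetical non-monotone activation: f(x) = x * exp(-x**2).
    Bounded in (-0.429, 0.429), continuous, differentiable, and
    non-monotone. This is an illustrative example only; it is not
    necessarily the function proposed in the paper."""
    return x * np.exp(-x**2)

def sigmoid(x):
    """Standard logistic sigmoid: bounded, monotonically increasing."""
    return 1.0 / (1.0 + np.exp(-x))

# Non-monotonicity: the output rises and then falls as x increases.
xs = np.array([0.0, 0.5, 0.707, 2.0, 4.0])
print(nonmonotone_activation(xs))  # [0.     0.389  0.429  0.037  0.   ]
print(sigmoid(xs))                 # strictly increasing
```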
