Abstract
In this paper, taking recognition accuracy and running time as the selection criteria, we study the performance of four activation functions, namely the standard rectified linear unit (ReLU), the exponential linear unit (ELU), the scaled exponential linear unit (SELU), and Softplus, in the visual geometry group (VGG) network. A novel parameterized activation function, which combines the advantages of ReLU and Softplus, is proposed, compared, and tested in the VGG network on a selected dataset. The test results show that the novel activation function effectively improves the recognition accuracy of VGG, by around 3.0% when the parameter k is larger than 1.37.
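For reference, the four baseline activation functions studied here have standard, widely used definitions; a minimal NumPy sketch of those definitions follows (the novel parameterized function itself is not reproduced, since its exact form is given in the body of the paper, not the abstract):

```python
import numpy as np

def relu(x):
    # Standard rectified linear unit: max(0, x)
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    # Exponential linear unit: x for x > 0, alpha * (e^x - 1) otherwise
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    # Scaled ELU with the fixed constants from Klambauer et al. (2017)
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def softplus(x):
    # Smooth approximation of ReLU: ln(1 + e^x)
    return np.log1p(np.exp(x))
```

These are the textbook forms; any of them can be swapped into a VGG-style network as the per-layer nonlinearity.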