Abstract
The activation function introduces nonlinearity into convolutional neural networks, which has greatly advanced computer vision tasks. This paper proposes elastic adaptively parametric compounded units to improve the performance of convolutional neural networks for image recognition. The proposed activation function adopts the structural advantages of two mainstream activation functions as its fundamental architecture. A SENet module is embedded in the activation function to adaptively recalibrate the feature-map weights in each channel, thereby enhancing the function's fitting capability. In addition, the function has an elastic slope in the positive input region, realized by simulating random noise, to improve the generalization capability of the neural network. To prevent the generated noise from producing overly large variations during training, a protection mechanism is adopted. To verify the effectiveness of the activation function, comparative experiments are conducted on the CIFAR-10 and CIFAR-100 image datasets under identical model settings. Experimental results show that the proposed activation function outperforms the other functions compared.
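As a rough illustration only, and not the authors' exact formulation, the sketch below shows in PyTorch how the ingredients named in the abstract could be combined in one activation module: an SE-style channel gate that recalibrates per-channel coefficients, a bounded random "elastic" scaling of the positive branch applied only during training, and a clamp standing in for the protection mechanism. The class name `ElasticAdaptiveActivation`, the ELU-like negative branch, and the `elastic_eps` bound are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class ElasticAdaptiveActivation(nn.Module):
    """Hypothetical sketch of an SE-gated, elastic activation (not the paper's exact definition)."""

    def __init__(self, channels: int, reduction: int = 16, elastic_eps: float = 0.1):
        super().__init__()
        self.elastic_eps = elastic_eps  # assumed bound on the random slope perturbation
        # SE-style sub-network: squeeze (global average pool) + excitation (two FC layers)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-channel coefficients in (0, 1), recalibrated from the input's global statistics.
        a = self.gate(x).view(x.size(0), x.size(1), 1, 1)

        # Elastic slope on the positive region: a random factor near 1, used only
        # during training and clamped to [1 - eps, 1 + eps] so the variation stays small
        # (an analogue of the protection mechanism described in the abstract).
        if self.training:
            k = 1.0 + torch.empty_like(a).uniform_(-self.elastic_eps, self.elastic_eps)
        else:
            k = 1.0

        pos = k * torch.relu(x)                                  # ReLU-like positive branch
        neg = a * (torch.exp(torch.clamp(x, max=0.0)) - 1.0)     # ELU-like negative branch, gated per channel
        return pos + neg
```

In this sketch the module is used as a drop-in replacement for a per-channel activation, e.g. `ElasticAdaptiveActivation(channels=64)` after a 64-channel convolution; at evaluation time the elastic factor collapses to 1 so inference is deterministic.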