Abstract

Convolutional Neural Networks are machine learning models with proven ability across a wide range of tasks, yet these powerful models sometimes suffer from overfitting. This paper proposes a Reinforcement Learning-based method to address this problem. In the proposed approach, the parameters of a target layer in the Convolutional Neural Network serve as the state observed by the Reinforcement Learning agent. The agent then outputs actions that form the parameters of a hyperbolic secant function, whose shape is thereby changed gradually and implicitly during training. The function takes the layer's weights as inputs, and its outputs are multiplied by those same weights to update them. Because the agent's actions lie in a continuous domain, the method is inspired by the Deep Deterministic Policy Gradient model. To demonstrate its effectiveness, the method is evaluated on image classification with Convolutional Neural Networks using seven datasets: MNIST, Extended MNIST, small-notMNIST, Fashion-MNIST, Sign Language MNIST, CIFAR-10, and CIFAR-100.
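The core weight-update mechanism described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes the agent's continuous action consists of two parameters (here called `alpha` and `beta`) that reshape the hyperbolic secant curve, and that the update multiplies each weight by the sech of that weight; the exact parameterization and the DDPG agent itself are omitted.

```python
import numpy as np

def sech(x):
    """Hyperbolic secant: sech(x) = 1 / cosh(x)."""
    return 1.0 / np.cosh(x)

def apply_sech_update(weights, action):
    """Scale each weight by a parameterized sech of itself.

    `action` is assumed to hold two continuous parameters (alpha, beta)
    emitted by the RL agent; this parameterization is illustrative only.
    """
    alpha, beta = action
    return weights * sech(alpha * weights + beta)

# Hypothetical usage: the agent observes the target layer's weights as its
# state and returns a continuous action; the weights are then rescaled.
rng = np.random.default_rng(0)
layer_weights = rng.normal(scale=0.1, size=(64, 3, 3, 3))  # a conv kernel
action = np.array([2.0, 0.0])  # stand-in for the agent's continuous output
layer_weights = apply_sech_update(layer_weights, action)
```

Because sech is bounded in (0, 1] and peaks at its center, this kind of multiplicative update shrinks weights that fall in the tails of the curve, which is consistent with the regularizing, overfitting-reducing intent described in the abstract.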
