Abstract
The convolutional neural network (CNN) has achieved state-of-the-art performance in many computer vision applications, e.g., classification, recognition, and detection. However, the global optimization of CNN training remains an open problem, and fast training and classification play a key role in the development of CNNs. We hypothesize that the more smoothly and the better optimized a CNN's training proceeds, the more efficient the resulting network becomes. Therefore, in this paper, we implement a modified resilient backpropagation (MRPROP) algorithm to improve the convergence and efficiency of CNN training. In particular, a tolerant band is introduced to avoid network overtraining, and it is combined with the global-best concept in the weight-update criterion, allowing the CNN's training algorithm to optimize its weights more swiftly and precisely. For comparison, we present and analyze four other training algorithms for the CNN alongside MRPROP, i.e., resilient backpropagation (RPROP), Levenberg–Marquardt (LM), conjugate gradient (CG), and gradient descent with momentum (GDM). Experimental results showcase the merit of the proposed approach on a public face and skin dataset.
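To make the weight-update criterion concrete, the sketch below illustrates one plausible reading of the two ingredients named above, a tolerant band on the validation error and a global-best weight snapshot. The tolerance width and the helper names (evaluate, rprop_step_fn, TOLERANCE) are hypothetical illustrations, not the paper's exact formulation.

```python
TOLERANCE = 0.02  # hypothetical width of the tolerant band on validation error

def train_with_tolerant_band(weights, rprop_step_fn, evaluate, epochs):
    """Track the globally best weights seen so far; fall back to them
    whenever the validation error drifts outside the tolerant band."""
    g_best_w, g_best_err = weights.copy(), evaluate(weights)
    for _ in range(epochs):
        weights = rprop_step_fn(weights)    # one (M)RPROP weight update
        err = evaluate(weights)
        if err < g_best_err:                # new global best: snapshot it
            g_best_w, g_best_err = weights.copy(), err
        elif err > g_best_err + TOLERANCE:  # left the tolerant band:
            weights = g_best_w.copy()       # restore the global best
    return g_best_w, g_best_err
```

Under this reading, the tolerant band keeps training from drifting into overtrained regions, while the global-best snapshot guarantees the returned weights are never worse than the best ones observed.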
Highlights
The convolutional neural network (CNN) is the current state-of-the-art algorithm due to its vast diversity of applications, found in many areas such as classification, recognition, and detection.
In this article, we focus on five representative training algorithms, namely resilient backpropagation (RPROP) [20], Levenberg–Marquardt (LM) [21], gradient descent with momentum (GDM) [22], conjugate gradient (CG) [23], and the proposed training algorithm, MRPROP.
The optimized weight-selection approach and training parameters presented in this paper reveal the basic tendencies in the convergence speed of the corresponding CNN training algorithms.
Summary
The convolutional neural network (CNN) is the current state-of-the-art algorithm due to its vast diversity of applications, found in many areas such as classification, recognition, and detection. A similar approach is proposed by Orlowska-Kowalska et al. [17], in which a simple proportional-derivative control is attached to the gradient processing. This technique significantly increases the algorithm's efficiency, as the gradient function is not computed directly. In this paper, we bring forward a modified training algorithm, modified resilient backpropagation (MRPROP), for the CNN [19], which helps the system make better weight changes so that the desired outputs are reached efficiently and swiftly. RPROP is an efficient training algorithm that adapts each weight step based on local gradient information. When the partial derivative of the error with respect to a weight wxy changes sign, the algorithm has jumped over a local minimum: the last update was too big, so the update value Δxy is decreased by the factor η− (0 < η− < 1). To avoid punishing the update value a second time, no adaptation takes place in the succeeding step; this is enforced by setting the stored partial derivative of wxy to zero after a sign change.
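A minimal sketch of this sign-change rule, in the iRPROP−-style formulation with the conventional increase/decrease factors η+ = 1.2 and η− = 0.5 (standard defaults, not values taken from the paper):

```python
import numpy as np

ETA_PLUS, ETA_MINUS = 1.2, 0.5   # conventional RPROP increase/decrease factors
DELTA_MAX, DELTA_MIN = 50.0, 1e-6

def rprop_step(w, grad, prev_grad, delta):
    """One RPROP step on a weight array w, given the current and previous
    gradients and the per-weight update values delta (the Δxy above)."""
    sign_change = grad * prev_grad
    # Same sign: the last step was good, so grow the update value.
    delta = np.where(sign_change > 0,
                     np.minimum(delta * ETA_PLUS, DELTA_MAX), delta)
    # Sign flipped: the last step overshot a minimum, so shrink the update value.
    delta = np.where(sign_change < 0,
                     np.maximum(delta * ETA_MINUS, DELTA_MIN), delta)
    # Zero the stored derivative where the sign flipped, so the succeeding
    # iteration performs no adaptation -- the rule described above.
    grad = np.where(sign_change < 0, 0.0, grad)
    # Sign-only update: the gradient magnitude never enters the step.
    w = w - np.sign(grad) * delta
    return w, grad, delta
```

Zeroing the stored gradient after a sign flip makes the next iteration fall into the "no sign change" case, so the freshly shrunk update value is applied once before any further adaptation.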