Abstract

In recent years, Convolutional Neural Networks (CNNs) have been widely used in image recognition due to their aptitude for large-scale image processing. A CNN uses back-propagation (BP) to train its weights and biases, progressively reducing the error. The most common optimizers built on the BP algorithm are Stochastic Gradient Descent (SGD), Adam, and Adadelta. These optimizers, however, have been shown to fall easily into local optima. Little research has been conducted on applying Soft Computing to CNNs to address this problem, and most of the studies that do exist focus on Particle Swarm Optimization (PSO). Among them, the hybrid algorithm combining PSO with SGD proposed by Albeahdili improves image classification accuracy over that achieved by the original CNN. This study proposes combining Improved Simplified Swarm Optimization (iSSO) with SGD; the resulting iSSO-SGD is intended to train CNNs more efficiently, establish a better prediction model, and improve classification accuracy. The performance of the proposed iSSO-SGD is affirmed through comparison with PSO-SGD and with the Adam, Adadelta, RMSprop, and momentum optimizers in terms of their ability to improve image classification accuracy.
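As background for the optimizers the abstract compares, the following is a minimal sketch (not the paper's code) of the basic SGD update rule that BP-based training applies to weights and biases, demonstrated on a toy quadratic loss; the loss function and learning rate are illustrative assumptions only.

```python
# Illustrative only: the plain SGD update w_{t+1} = w_t - lr * dL/dw,
# shown on the toy loss L(w) = (w - 3)^2, whose minimum is at w = 3.
def sgd_step(w, grad, lr=0.1):
    # One gradient-descent step: move against the gradient direction.
    return w - lr * grad

w = 0.0
for _ in range(100):
    grad = 2 * (w - 3)      # dL/dw for L(w) = (w - 3)^2
    w = sgd_step(w, grad)

print(round(w, 4))  # converges toward the minimum at w = 3
```

Because the update follows only the local gradient, SGD can stall in a local optimum of a non-convex loss such as a CNN's, which is the weakness the swarm-based hybrids (PSO-SGD, iSSO-SGD) aim to mitigate by adding a population-based global search.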
