Abstract
A deep network model comprises several processing layers, and deep learning techniques allow data to be represented at multiple levels of abstraction. Given the practical importance and efficiency of machine learning, deep models are optimized with respect to an objective function and its parameters for a particular problem. The present work is an empirical analysis of the performance of stochastic optimization methods, with regard to their hyperparameters, for deep Convolutional Neural Networks (CNNs), and of the rate of convergence of these methods in high-dimensional parameter spaces. Experiments were carried out on a deep CNN model with different optimization methods, viz. SGD, AdaGrad, AdaDelta, and Adam. The empirical results are evaluated on the benchmark CIFAR-10 and CIFAR-100 datasets. At the optimal hyperparameter values obtained, the optimizer Adam shows the best results compared with the other methods, viz. SGD, AdaGrad, and AdaDelta, on both datasets. Further, it is noted that classification accuracy can be increased by choosing the best optimization technique and tuning its hyperparameters to obtain the optimal configuration of the deep CNN model.
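The paper itself does not include code. As a minimal sketch of the kind of comparison the abstract describes, the following assumes TensorFlow/Keras; the CNN architecture, learning rates, epoch count, and batch size are illustrative placeholders, not the tuned hyperparameters reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

# Load CIFAR-10, one of the two benchmark datasets used in the paper,
# and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_cnn(num_classes=10):
    # A small placeholder CNN; the paper's exact architecture is not given here.
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

# The four optimizers compared in the paper; learning rates below are
# common defaults, not the tuned values the paper reports.
candidates = {
    "SGD": optimizers.SGD(learning_rate=0.01),
    "AdaGrad": optimizers.Adagrad(learning_rate=0.01),
    "AdaDelta": optimizers.Adadelta(learning_rate=1.0),
    "Adam": optimizers.Adam(learning_rate=0.001),
}

# Train an identical model with each optimizer and compare test accuracy.
for name, opt in candidates.items():
    model = build_cnn()
    model.compile(optimizer=opt,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=10, batch_size=128,
              validation_data=(x_test, y_test), verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{name}: test accuracy = {acc:.4f}")
```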