Abstract

Deep learning, and especially Convolutional Neural Networks (CNNs), is reshaping the field of image recognition and classification. A number of different CNN architectures can be found in the literature. The performance of any CNN model depends on various factors, such as the size of the dataset, the number of classes, the model weights, the hyperparameters, the choice of optimizer, and so on. Transfer learning, i.e. fine-tuning a pretrained model, has become very common in recent times because of its advantages. Several model optimization techniques have been discussed in the literature; Stochastic Gradient Descent (SGD), Adam, and RMSProp are the optimizers most commonly used for CNN training. This paper focuses on the effect of these three optimizers on two well-known CNN models, ResNet50 and InceptionV3. Each optimizer is used to train the fine-tuned CNN models for 15 epochs on a cat-vs-dog dataset created by hand-picking hundreds of images of cats and dogs from the Kaggle cat-vs-dog dataset. For this experiment, a learning rate of 0.001 is used, and categorical cross-entropy is used to calculate the training and validation loss. A comparative analysis of the optimizers is made by plotting training loss vs. epochs and training accuracy vs. epochs. The results show that the SGD optimizer outperforms the other two for ResNet50, for which a training accuracy of approximately 99% is observed with 500 training and 100 validation images.
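To make the comparison concrete, the update rules of the three optimizers discussed above can be sketched in plain Python on a toy one-dimensional loss. This is an illustration of the standard SGD, RMSProp, and Adam update equations only, not the paper's actual training code; the learning rate of 0.001 matches the experimental setup, while the toy loss, step counts, and helper names (`sgd_step`, `minimize`, etc.) are assumptions made for this sketch.

```python
import math

def sgd_step(w, g, state, lr=0.001):
    # Plain SGD: step directly against the gradient.
    return w - lr * g, state

def rmsprop_step(w, g, state, lr=0.001, rho=0.9, eps=1e-8):
    # RMSProp: scale the step by a running average of squared gradients.
    s = rho * state.get("s", 0.0) + (1 - rho) * g * g
    state["s"] = s
    return w - lr * g / (math.sqrt(s) + eps), state

def adam_step(w, g, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: bias-corrected first and second moment estimates of the gradient.
    t = state.get("t", 0) + 1
    m = b1 * state.get("m", 0.0) + (1 - b1) * g
    v = b2 * state.get("v", 0.0) + (1 - b2) * g * g
    state.update(t=t, m=m, v=v)
    m_hat = m / (1 - b1 ** t)   # bias correction
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (math.sqrt(v_hat) + eps), state

def minimize(step_fn, w0=5.0, steps=2000):
    # Toy loss L(w) = w**2, so the gradient is 2*w.
    w, state = w0, {}
    for _ in range(steps):
        w, state = step_fn(w, 2 * w, state)
    return w
```

Running `minimize` with each `*_step` function shows the characteristic behaviour: SGD's step shrinks with the gradient, while RMSProp and Adam take near-constant-sized steps because they normalize by the gradient magnitude, which is one reason their relative performance can differ across models such as ResNet50 and InceptionV3.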

