Abstract

This article examines the impact of key hyperparameters on image feature extraction with convolutional neural networks (CNNs), focusing on learning rate, dropout rate, batch size, and the number of training epochs. Extensive experiments on the CIFAR-10 dataset were conducted to optimize these parameters, aiming for high accuracy while avoiding overfitting. By analyzing training accuracy, test accuracy, training loss, and test loss across configurations, the experiments show that the model performs best with a learning rate of 0.0001 and a dropout rate of 0.5, and that training for 10 epochs is most effective at avoiding overfitting. Batch size had a comparatively minor effect on overall performance, although a slight improvement was observed at a batch size of 32. These findings underscore the importance of careful hyperparameter selection in balancing training efficiency and model performance.
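The reported configuration (learning rate 0.0001, dropout 0.5, batch size 32, 10 epochs) can be sketched as follows. This is a minimal illustration assuming a simple two-block CNN; the layer sizes and architecture are placeholders, not taken from the article.

```python
import torch
import torch.nn as nn

# Hyperparameters reported as best-performing in the abstract
LEARNING_RATE = 1e-4
DROPOUT_RATE = 0.5
BATCH_SIZE = 32
EPOCHS = 10

class SimpleCNN(nn.Module):
    """Illustrative CNN for 32x32x3 CIFAR-10 images (architecture is assumed)."""
    def __init__(self, num_classes: int = 10, dropout: float = DROPOUT_RATE):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout),                  # dropout rate studied in the article
            nn.Linear(64 * 8 * 8, num_classes),   # 10 CIFAR-10 classes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SimpleCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

# Sanity check: push one CIFAR-10-shaped batch through the network
logits = model(torch.randn(BATCH_SIZE, 3, 32, 32))
print(logits.shape)  # torch.Size([32, 10])
```

In a full experiment, the training loop would iterate `EPOCHS` times over a CIFAR-10 `DataLoader` with `batch_size=BATCH_SIZE`, tracking the four metrics the abstract mentions.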
