Abstract
Introduction: This work examines how linear and non-linear optimizers affect the performance of machine learning (ML) and deep learning (DL) models on the CIFAR-10 image classification task, exploring across multiple datasets how optimizer choice influences model behavior and efficiency.

Objectives: The main purpose is to evaluate the performance of linear optimizers (gradient descent, SGD) and non-linear optimizers (Adam, RMSProp) on SVM and CNN models, using five primary assessment metrics: accuracy, precision, recall, F1-score, and AUC.

Methods: A consistent evaluation strategy ensures a fair and complete comparison. The experimental setting focuses on how each optimizer influences model convergence rate, computational cost, and training stability. The optimizers are applied to ML and DL models trained on CIFAR-10, and performance is recorded on all metrics.

Results: The results show that non-linear optimizers, especially Adam, substantially improve CNN performance through faster convergence, higher classification accuracy, and greater training stability. Linear optimizers such as SGD, while useful for simpler models, exhibit slower convergence and limited adaptability on more complex data.

Conclusions: This study offers guidance on selecting appropriate optimizers based on model complexity, task requirements, and computational constraints, along with practical insight into optimizing model training for image classification and other domains.
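To make the comparison described in the Methods section concrete, the following is a minimal sketch (not the authors' code) of training the same small CNN on CIFAR-10 once with a linear-style optimizer (SGD with momentum) and once with an adaptive non-linear one (Adam), then comparing held-out accuracy. The architecture, learning rates, batch sizes, and epoch count are illustrative assumptions; the abstract does not specify them.

```python
# Illustrative sketch: compare SGD vs. Adam on CIFAR-10 with one small CNN.
# All hyperparameters below are assumptions for demonstration only.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"

transform = T.Compose([T.ToTensor(),
                       T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True,
                                         transform=transform)
test_set = torchvision.datasets.CIFAR10("data", train=False, download=True,
                                        transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=256)

def make_cnn():
    # Small CNN; the paper's exact architecture is not given in the abstract.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
        nn.Linear(128, 10),
    ).to(device)

def train_and_eval(opt_name, epochs=5):
    model = make_cnn()
    if opt_name == "sgd":
        opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    else:
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # Evaluate classification accuracy on the held-out test split.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.size(0)
    return correct / total

for name in ["sgd", "adam"]:
    print(f"{name}: test accuracy = {train_and_eval(name):.3f}")
```

The same loop could additionally log per-epoch loss (to compare convergence speed) and compute precision, recall, F1-score, and AUC from the predictions, e.g. with scikit-learn's metrics module, to reproduce the full metric set named in the Objectives.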