The immense popularity of convolutional neural network (CNN) models has sparked growing interest in optimizing their hyperparameters. Finding hyperparameter values that yield optimal CNN training is a complex and time-consuming task, often requiring repeated numerical experiments. Consequently, considerable attention is being devoted to methods that tailor hyperparameters to specific CNN models and classification tasks. While existing optimization methods often yield favorable image classification results, they offer no guidance on which hyperparameters are worth optimizing, what value ranges are appropriate for them, or whether it is reasonable to run the optimization on a subset of the training data. This work focuses on hyperparameter optimization during transfer learning, with the goal of investigating how different optimization methods and hyperparameter selections affect the performance of fine-tuned models. In our experiments, we assessed the importance of individual hyperparameters and identified the ranges within which optimal CNN training can be achieved. We also compared four hyperparameter optimization methods: grid search, random search, Bayesian optimization, and the Asynchronous Successive Halving Algorithm (ASHA). In addition, we explored the feasibility of tuning hyperparameters on a subset of the training data. Optimizing the hyperparameters improved CNN classification accuracy by up to 6%. Furthermore, we found that a balanced class distribution within the data subset used for optimization is crucial for establishing the optimal set of hyperparameters for CNN training. Our results demonstrate that hyperparameter optimization is highly dependent on the specific task and dataset at hand.
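To make the distinction between two of the compared search strategies concrete, the following minimal sketch contrasts grid search and random search over a learning rate and a batch size. The `val_accuracy` function is a hypothetical surrogate standing in for an actual fine-tuning run; its shape, its optimum, and the candidate ranges are illustrative assumptions, not the setup used in this work.

```python
import math
import random

def val_accuracy(lr, batch_size):
    # Hypothetical validation-accuracy surrogate for one fine-tuning run;
    # the assumed peak near lr = 1e-3, batch_size = 32 is an illustration.
    return 0.9 - abs(math.log10(lr) + 3) * 0.02 - abs(batch_size - 32) / 1000

# Grid search: exhaustively evaluate a fixed lattice of candidate values.
grid = [(lr, bs) for lr in (1e-4, 1e-3, 1e-2) for bs in (16, 32, 64)]
best_grid = max(grid, key=lambda p: val_accuracy(*p))

# Random search: spend the same evaluation budget on randomly sampled
# configurations, drawing the learning rate log-uniformly.
random.seed(0)
samples = [(10 ** random.uniform(-4, -2), random.choice((16, 32, 64)))
           for _ in range(len(grid))]
best_rand = max(samples, key=lambda p: val_accuracy(*p))

print("grid best:", best_grid, "random best:", best_rand)
```

Bayesian optimization and ASHA refine this picture further: the former chooses each new configuration based on the results observed so far, while the latter terminates poorly performing trials early to reallocate the training budget.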