This research proposes a series of novel Gaussian Process Regression (GPR)-based learning rate optimization (GLRO) algorithms, in two versions, for Adaptive Moment Estimation (Adam), a common optimizer in Convolutional Neural Networks (CNNs). The optimizer plays an important role in CNNs, governing training efficiency and prediction accuracy by controlling the convergence progress. However, optimizers such as Adam are not hyperparameter-free and are highly sensitive to their hyperparameter settings. The learning rate, a hyperparameter that sets the step size of each update, has the most significant influence on prediction accuracy, so tuning it is an effective way to improve accuracy. In this research, the proposed GLRO algorithms increase classification accuracy by studying the relationship between the learning rate and the corresponding accuracy: a GPR model built from previously tried learning rates and their accuracies predicts a promising next learning rate. Two strategies for selecting the input learning rate are tested separately. AlexNet, a classic CNN that is widely used in healthcare as a medical image classification framework, serves as the framework to evaluate the proposed algorithms. Stimulated Raman Scattering (SRS) images of human brain tumors are used to classify cells and non-cells. The proposed GLRO algorithms are compared with the conventional learning rate annealing algorithm and the constant learning rate algorithm, and their classification of SRS images is evaluated in terms of accuracy, sensitivity, specificity, and precision. To further validate GLRO, multiple benchmark medical image datasets and CNN frameworks are also tested.
The experimental results show that the proposed GLRO algorithms outperform the baseline algorithms, achieving 96% classification accuracy on SRS images and promising classification results on the other datasets and CNN frameworks.
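The GPR-based selection loop described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function name, the kernel choice, the log-scale search space, and the upper-confidence-bound acquisition rule are all assumptions made for the sketch; the paper tests its own two input-selection strategies.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def propose_learning_rate(tried_lrs, accuracies, kappa=1.0, n_candidates=200):
    """Hypothetical GLRO-style step: fit a GPR model on previously tried
    learning rates and their validation accuracies, then return the
    candidate learning rate maximizing an upper confidence bound."""
    # Search in log10 space, since learning rates span orders of magnitude.
    X = np.log10(np.asarray(tried_lrs, dtype=float)).reshape(-1, 1)
    y = np.asarray(accuracies, dtype=float)
    gp = GaussianProcessRegressor(
        kernel=ConstantKernel(1.0) * RBF(length_scale=1.0),
        normalize_y=True,
    )
    gp.fit(X, y)
    # Evaluate the GP over a candidate grid spanning the explored range.
    cand = np.linspace(X.min(), X.max(), n_candidates).reshape(-1, 1)
    mean, std = gp.predict(cand, return_std=True)
    best = np.argmax(mean + kappa * std)  # upper confidence bound
    return 10.0 ** cand[best, 0]

# Toy history: accuracy peaks between 1e-4 and 1e-2 (illustrative values only).
lrs = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]
accs = [0.70, 0.85, 0.93, 0.88, 0.60]
next_lr = propose_learning_rate(lrs, accs)
```

In an actual training loop, the CNN would be retrained with `next_lr`, the resulting accuracy appended to the history, and the procedure repeated until accuracy stops improving.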