Stochastic proximal gradient descent (Prox-SGD) is a standard optimization algorithm for solving stochastic composite optimization problems in machine learning. However, existing convergence-rate analyses of Prox-SGD focus mostly on convergence in expectation. To address this gap, we provide an almost-sure convergence-rate analysis of Prox-SGD in two settings, convexity and strong convexity, showing that any single run of the stochastic algorithm converges with probability one. Specifically, we show that in the convex setting the almost-sure convergence rate of the function values at the average iterates of Prox-SGD is arbitrarily close to the optimal rate. For strongly convex objective functions, we obtain the almost-sure convergence rate of the iterate sequence, which is likewise arbitrarily close to the optimal rate. Last but not least, we establish the more challenging last-iterate convergence rate of the function values for convex problems, in contrast to existing results that hold only for a weighted average of the iterates.
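To fix notation, the algorithm under study alternates a stochastic gradient step on the smooth part of the objective with a proximal step on the nonsmooth part, and the average iterate is the running mean of the sequence. A minimal sketch for an ℓ1-regularized least-squares instance follows; the function names, the 0.1/√k step-size schedule, and the single-row sampling scheme are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd(A, b, lam=0.1, steps=500, eta0=0.1, seed=0):
    # Prox-SGD sketch for the composite problem
    #   min_x (1/2n) ||A x - b||^2 + lam * ||x||_1,
    # using one random row of A per iteration as the stochastic
    # gradient of the smooth part. Returns the last iterate and
    # the running average iterate.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    avg = np.zeros(d)
    for k in range(1, steps + 1):
        i = rng.integers(n)
        grad = (A[i] @ x - b[i]) * A[i]          # stochastic gradient
        eta = eta0 / np.sqrt(k)                  # diminishing step size
        x = soft_threshold(x - eta * grad, eta * lam)  # proximal step
        avg += (x - avg) / k                     # running average iterate
    return x, avg
```

The average iterate is the object whose function values are shown to converge almost surely in the convex setting; the last iterate `x` corresponds to the last-iterate guarantee.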