The canonical solution methodology for finite constrained Markov decision processes (CMDPs), in which the objective is to maximize the expected infinite-horizon discounted reward subject to constraints on the expected infinite-horizon discounted costs, is based on linear programming (LP). In this brief, we first prove that the optimization objective in the dual linear program of a finite CMDP is a piecewise linear convex (PWLC) function of the Lagrange penalty multipliers. We then propose a novel, provably optimal, two-level gradient-aware search (GAS) algorithm that exploits this PWLC structure to find the optimal state-value function and Lagrange penalty multipliers of a finite CMDP. The proposed algorithm is applied to two constrained stochastic control problems for performance comparison with binary search (BS), Lagrangian primal-dual optimization (PDO), and LP. Compared with these benchmark algorithms, the proposed GAS algorithm is shown to converge to the optimal solution quickly without any hyperparameter tuning. In addition, its convergence speed is insensitive to the initialization of the Lagrange multipliers.
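The PWLC claim can be illustrated numerically: for a finite CMDP, the Lagrangian dual is a pointwise maximum of functions affine in the multiplier, hence piecewise linear and convex. The sketch below is not from the brief; it evaluates a hypothetical two-state, two-action CMDP (all transition probabilities, rewards, costs, and the budget are invented for illustration), enumerates the deterministic stationary policies, and checks convexity of the sampled dual via second differences.

```python
import numpy as np

# Hypothetical toy CMDP (illustrative only, not the brief's examples):
# 2 states, 2 actions, discount factor gamma.
gamma = 0.9
P = [np.array([[0.8, 0.2], [0.3, 0.7]]),   # P[a][s, s'] transition matrices
     np.array([[0.1, 0.9], [0.6, 0.4]])]
R = np.array([[1.0, 0.0], [0.5, 2.0]])      # R[s, a] reward
C = np.array([[0.2, 1.0], [0.8, 0.1]])      # C[s, a] cost
c_bar = 5.0                                  # cost budget (assumed value)
mu0 = np.array([0.5, 0.5])                   # initial state distribution

def evaluate(policy):
    """Expected discounted reward and cost of a deterministic policy
    (a tuple mapping each state to an action), via policy evaluation."""
    Ppi = np.array([P[policy[s]][s] for s in range(2)])
    r = np.array([R[s, policy[s]] for s in range(2)])
    c = np.array([C[s, policy[s]] for s in range(2)])
    A = np.eye(2) - gamma * Ppi
    return mu0 @ np.linalg.solve(A, r), mu0 @ np.linalg.solve(A, c)

# Each deterministic policy contributes one line in the multiplier lam.
policies = [(a0, a1) for a0 in range(2) for a1 in range(2)]
lines = [evaluate(p) for p in policies]      # (reward, cost) per policy

def dual(lam):
    # Lagrangian dual: pointwise maximum over policies of an affine
    # function of lam, hence piecewise linear and convex in lam.
    return max(r - lam * (c - c_bar) for r, c in lines)

lams = np.linspace(0.0, 5.0, 201)
g = np.array([dual(l) for l in lams])
# Convexity check: discrete second differences are nonnegative
# (up to floating-point tolerance).
assert np.all(np.diff(g, 2) >= -1e-9)
```

A gradient-aware search over such a dual can exploit the fact that each linear piece corresponds to one optimal policy, so subgradients identify which piece is active at a given multiplier.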