Over the past decades, considerable effort has been devoted to understanding the convergence rates of gradient-based methods for both constrained and unconstrained optimization. The strongly convex and weakly convex cases of the payoff function have been extensively studied and are by now well understood. Despite the impressive advances made in the convex setting, nonlinear non-convex optimization problems remain far less explored. In this paper, we are concerned with a nonlinear, non-convex optimization problem subject to system-dynamics constraints. We apply our analysis to parameter identification for systems governed by general nonlinear differential equations, formulating the inverse problem with optimal control tools. The optimization is carried out with the Fletcher-Reeves nonlinear conjugate gradient method using an inexact line search satisfying the strong Wolfe conditions. We rigorously establish a convergence analysis of the method and report a new linear convergence rate, which constitutes the main contribution of this work. The theoretical result requires the second derivative of the payoff functional to be continuous and bounded. Numerical evidence on a selection of popular nonlinear models, presented as a direct application of parameter identification, supports the theoretical findings.
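The Fletcher-Reeves iteration with a strong-Wolfe inexact line search mentioned above can be sketched as follows. This is a minimal illustration, not the paper's method: SciPy's `line_search` (which enforces the strong Wolfe conditions) stands in for the line search, and the quadratic test problem at the bottom is a hypothetical placeholder rather than the identification functional studied in the paper.

```python
import numpy as np
from scipy.optimize import line_search

def fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=200):
    """Fletcher-Reeves nonlinear CG with a strong-Wolfe inexact line search.

    Illustrative sketch only; the stopping rule and restart strategy are
    simple defaults, not the scheme analyzed in the paper.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # SciPy's line_search enforces the strong Wolfe conditions;
        # c2 < 1/2 is the standard choice for Fletcher-Reeves so that
        # the search direction remains a descent direction.
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:
            # Line search failed: restart along steepest descent.
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0] or 1e-8
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

# Hypothetical test problem (not from the paper): a simple quadratic
# with minimizer (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
x_star = fletcher_reeves(f, grad, np.zeros(2))
```

The choice c2 = 0.1 reflects the well-known requirement that, for Fletcher-Reeves with strong Wolfe conditions, the curvature parameter be taken below 1/2 to guarantee descent directions.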